Technical Program - Agenda

 8:15- 8:30  Welcome and introductions
 8:30- 9:30  Invited Talk
             - Adjudicating Binary Challenges, by Steven W. Teppler
 9:30-10:00  Paper presentation
             - Drafting and Modeling of Regulations: Is It Being Done Backwards?
10:00-10:30  Break
10:30-12:00  Paper presentations
             - Measurement-Oriented Comparison of Multiple Regulations with GRL
             - Software Licenses, Coverage, and Subsumption
             - Assessing Identification of Compliance Requirements from Privacy Policies
12:00-13:30  Lunch
13:30-15:00  Paper presentations
             - Extracting Meaningful Entities from Regulatory Text: Towards Automating Regulatory Compliance
             - Mining Contracts for Normative Requirements
             - Defining and Retrieving Themes in Nuclear Regulations
15:00-15:30  Break
15:30-16:30  Paper presentations
             - Towards Successful Subcontracting for Software in Small to Medium-Sized Enterprises
             - Licensing Security
16:30-17:00  Discussion, outcomes, and future work
Invited Talk
Adjudicating Binary Challenges
Steven W. Teppler
(Kirk-Pinkerton, USA)
A federal appeals court judge stated in a 1976
opinion:
“Although the computer has tremendous potential for
improving our system of justice by generating more
meaningful evidence than was previously available,
it presents a real danger of being the vehicle of
introducing erroneous, misleading, or unreliable
evidence. The possibility of an undetected error in
computer-generated evidence is a function of many
factors: the underlying data may be hearsay; errors
may be introduced in any one of several stages of
processing; the computer might be erroneously
programmed, programmed to permit an error to go
undetected, or programmed to introduce error into
the data; and the computer may inaccurately display
the data or display it in a biased manner. Because
of the complexities of examining the creation of
computer-generated evidence and the deceptively neat
package in which the computer can display its work
product, courts and practitioners must exercise more
care with computer-generated evidence than with
evidence generated by more traditional means.”
Perma Research &
Development v The Singer Company, 542 F.2d 111 (2d
Cir. 1976)
In 2006, a mere 30 years later, the Federal Rules
of Civil Procedure were amended to incorporate the
discovery of computer-generated evidence (also known
as electronically stored information) in federal court
litigation.
A veritable landslide of interpretive decisional
authority has issued in the intervening six years.
Steven will discuss where we stand (and in what
direction we appear to be headed) in connection with
issues relating to computer-generated evidence
acquisition, preservation, and provenance.
Biography:
Steven W. Teppler chairs Kirk-Pinkerton’s
information governance and electronic discovery
practice. He has practiced law since 1981, is admitted
to the bars of New York, the District of Columbia,
Florida, and Illinois and advises private and public
sector clients about risk, liability, and compliance
issues unique to information governance (i.e., from
instantiation through management, preservation and
disposition). Steven is an adjunct professor at Ave
Maria Law School, teaching electronic discovery,
and also lectures nationwide on evolving theories of
information governance and electronic discovery.
Steven holds six patents in the field of content
authentication and is the founder and CEO of a
content authentication provider. He is also the Co-Chair
of the eDiscovery and Digital Evidence Committee of
the American Bar Association, a member of the Seventh
Circuit Court of Appeals Electronic Discovery Pilot
Program, a founder and co-program chair of the
American Bar Association’s Electronic Discovery and
Information Governance Institute, and a contributing
author of the ANSI X9F4 trusted timestamp guideline
standards for the financial industry. Steven’s Florida
Bar activities include membership in the Florida Bar’s
Federal Court Practice Committee, membership in
(2005-2011) and past chair of (2010-2011) the Florida
Bar Professional Ethics Committee, where he
contributed to the Florida Bar Ethics Advisory
Opinions 06-02 (Metadata Mining), 07-2 (Off-Shoring),
and 10-2 (Storage Media Sanitization).
Steven’s recent publications include:
- “Digital Evidence as Hearsay”, Digital Evidence and
  Electronic Signature Law Review, Volume 6, October 2009;
- “The HIPAA Technology Challenge: Protecting the
  Integrity of Health Care Information”, California
  Health Law News, Volume XXVI, Issue 1, Winter 2007/2008;
- “Spoliation in the Digital Universe”, The SciTech
  Lawyer, Science and Technology Law Section of the
  American Bar Association, Fall 2007;
- “Life After Sarbanes-Oxley – The Merger of Information
  Security and Accountability” (co-author), 45
  Jurimetrics J. 379 (2005);
- “Digital Signatures Are Not Enough” (co-author),
  Information Systems Security Association, January 2006;
- “State of Connecticut v. Swinton: A Discussion of the
  Basics of Digital Evidence Admissibility” (co-author),
  Georgia Bar Newsletter Technology Law Section, Spring 2005;
- “The Digital Signature Paradox” (co-author), IETF
  Information Workshop (The West Point Workshop), June 2005;
- “Observations on Electronic Service of Process in the
  South Carolina Court System”, efiling Report, June 2005.
Steven is also a
contributing author of the book “Foundations of
Digital Evidence” (American Bar Association, July
2008) and of “Testable Reliability: A Modern Approach
to Digital Evidence Admissibility” (Ave Maria Law
Review, exp. Winter 2013).
Steven received his
Bachelor of Arts in Political Science Summa Cum Laude
from the City College of New York, Phi Beta Kappa, and
received his Juris Doctor from the Benjamin N. Cardozo
School of Law in New York City.
Technical Program - Paper Abstracts
Drafting and
Modeling of Regulations: Is It Being Done Backwards?
Edna Braun, Nick Cartwright, Azalia Shamsaei,
Saeed Ahmadi Behnam, Greg Richards, Gunter Mussbacher,
Mohammad Alhaj, and Rasha Tawhid
(Transport Canada, Canada; University of
Ottawa, Canada; Carleton University, Canada)
The performance modeling of regulations is
a relatively recent innovation. However, as
regulators in many domains increasingly look to
move from prescriptive regulations towards more
outcome-based regulations, the use of performance
modeling will become more commonplace. The major
difference between outcome-based and prescriptive
regulations is that the main interest
lies in specifying clear objectives of the
regulations and measuring whether regulated
parties achieve these objectives, while leaving
much freedom to the regulated party on how to meet
these objectives. Recently, we have found that the
use of performance modeling provides benefits such
as revealing inconsistencies and lack of clarity
in existing regulatory language. In this paper, we
report on these experiences, summarize guidelines
for the modeling of regulations, and examine
whether the current drafting processes for
regulations are optimized to take advantage of
these additional benefits. We explore the
advantages and disadvantages of various ways of
augmenting the current approach with goal-oriented
modeling of regulations. Based on our experience
with Aviation Security regulations, we believe it
is time for modeling to play a new role in helping
to guide the drafting of regulations.
Measurement-Oriented
Comparison of Multiple Regulations with GRL
Andre Rifaut and Sepideh Ghanavati
(CRP Henri Tudor, Luxembourg; University
of Ottawa, Canada)
In recent years, intentional models have
been adapted to capture and analyze compliance
needs and requirements. Furthermore, intentional
models have been used to identify the impact of
regulations on organizational goals by helping to
elicit different alternatives about the business
operations supported by compliant business
processes and services. In other works,
intentional models based on measurement-frameworks
have provided well-structured models of
regulations and compliance alternatives. This
paper integrates Goal-Oriented Requirements
Language (GRL)-based methodologies with
measurement-based methodologies to improve support
for comparing regulations that share the same
concerns, using the objectivity that measurement
provides.
Software
Licenses, Coverage, and Subsumption
Thomas Alspaugh, Walt Scacchi, and Rihoko Kawai
(UC Irvine, USA; Saitama Institute of
Technology, Japan)
Software licensing issues for a system design,
instantiation, or configuration are often complex and
difficult to evaluate, and mistakes can be costly.
Automated assistance requires a formal representation
of the significant features of the software licenses
involved. We present results from an analysis directed
toward a formal representation capable of covering an
entire license. The key to such a representation is to
identify the license's actions, and relate them to the
actions for exclusive rights defined in law and to the
actions defined in other licenses. Parameterizing each
action by the object(s) acted on, the instrumental
entities through which the action is performed, and
similar contextual variables enables a subsumption
relation among the actions. The resulting formalism is
lightweight, flexible enough to support the scope of
legal interpretations, and extensible to a wide range
of software licenses. We discuss the application of
our approach to the Lesser General Public License
(LGPL) version 2.1.
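The parameterized actions and subsumption relation the abstract describes can be illustrated with a small sketch. All names, parameters, and the toy generalization hierarchy below are hypothetical illustrations, not the authors' formalism: one action subsumes another when it has the same verb and each of its parameters is equal to, or more general than, the other's.

```python
# Illustrative sketch of parameterized license actions and a subsumption
# relation; the hierarchy and names are invented, not the paper's formalism.
from dataclasses import dataclass

# Toy generalization hierarchy over parameter values: child -> parent.
GENERALIZES = {
    "object-code": "work",   # object code is a kind of work
    "source-code": "work",
    "modified-copy": "work",
}

def is_general_or_equal(general: str, specific: str) -> bool:
    """True if `general` equals `specific` or is an ancestor of it."""
    while True:
        if general == specific:
            return True
        if specific not in GENERALIZES:
            return False
        specific = GENERALIZES[specific]

@dataclass(frozen=True)
class Action:
    verb: str        # e.g. "distribute", "modify"
    obj: str         # object acted on
    instrument: str  # instrumental entity, "*" if unconstrained

    def subsumes(self, other: "Action") -> bool:
        """Same verb, and each parameter equal to or more general
        than the corresponding parameter of `other`."""
        return (self.verb == other.verb
                and is_general_or_equal(self.obj, other.obj)
                and (self.instrument == "*"
                     or self.instrument == other.instrument))

broad = Action("distribute", "work", "*")
narrow = Action("distribute", "object-code", "physical-media")
print(broad.subsumes(narrow))   # True: "work" generalizes "object-code"
print(narrow.subsumes(broad))   # False
```

A real representation would also need the exclusive rights defined in law as the roots of the hierarchy, so that license actions can be related to them as the abstract describes.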
Assessing
Identification of Compliance Requirements from Privacy
Policies
Jessica Young Schmidt, Annie I.
Anton, and Julia B. Earp
(North Carolina State University, USA)
In the
United States, organizations can be held liable by
the Federal Trade Commission for the statements
they make in their privacy policies. Thus,
organizations must include their privacy policies
as a source of requirements in order to build
systems that are policy-compliant. In this paper,
we describe an empirical user study in which we
measure the ability of requirements engineers to
effectively extract compliance requirements from a
privacy policy using one of three analysis
approaches—CPR (commitment, privilege, and right)
analysis, goal-based analysis, and
non-method-assisted (control) analysis. The
results of these three approaches were then
compared to an expert-produced set of expected
compliance requirements. The requirements
extracted by the CPR subjects reflected a higher
percentage of requirements that were expected
compliance requirements as well as a higher
percentage of the total expected compliance
requirements. In contrast, the goal-based and
control subjects produced a higher number of
synthesized requirements (requirements not
directly derived from the policy) than the CPR
subjects. This larger number of synthesized
requirements may be attributed to the fact that
these two subject groups employed more
inquiry-driven approaches than the CPR subjects
who relied primarily on focused and direct
extraction of compliance requirements.
Extracting
Meaningful Entities from Regulatory Text: Towards
Automating Regulatory Compliance
Krishna Sapkota, Arantza Aldea, David Duce,
Muhammad Younas, and Rene Banares-Alcantara
(Oxford Brookes University, UK; Oxford
University, UK)
Extracting essential meaning from regulatory
text helps automate the Compliance Management (CM)
process. CM is the process by which organizations
ensure that their processes are run according to
requirements and expectations. However, extracting
meaningful text from regulatory guidelines poses
many research challenges, such as dealing with
differing document formats, implicit document
structure, and textual ambiguity and complexity. In
this paper, the extended version of the Semantic-ART
framework is described, which focuses on tackling
the challenges of document-structure identification
and regulatory-entity extraction. Initial results
are encouraging compared to the previous version of
the framework.
Mining Contracts
for Normative Requirements
Xibin Gao and Munindar P. Singh
(North Carolina State University, USA)
This paper considers requirements as they pertain to
interactions among autonomous parties, such as arise
in cross-organizational settings including business
service engagements and license agreements. Autonomous
parties often enter into business contracts that
express their expectations about their interactions in
high-level terms. This paper models such requirements
in terms of normative relationships of five main
types, namely, commitments (both practical and
dialectical), authorizations, prohibitions, powers,
and sanctions. These relationships can have legal
import, and we claim their modeling is essential for
extending software engineering to open systems.
This paper describes an automated
approach for mining contracts for normative
requirements. This approach combines natural language
processing with contract-specific heuristics and
lexicons and machine learning. An evaluation (ten-fold
cross-validation) over 500 sentences randomly drawn
from a corpus of real-life contracts (and manually
labeled) yields promising results. Specifically, it
shows average F-measures of 85% and 81% for practical
and dialectical commitments, which make up nearly 400
of the sentences. The results for the other types are
not as strong, possibly because of the paucity of
training data.
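The combination of lexicons, heuristics, and cross-validated evaluation described above can be sketched in miniature. The cue lexicon, sentences, and labels below are invented examples, not the paper's corpus or pipeline; the sketch shows the shape of a lexicon-driven labeler scored by F-measure over ten held-out folds.

```python
# Toy lexicon-based commitment detector with ten-fold cross-validation.
# Lexicon, sentences, and labels are invented for illustration only.
from statistics import mean

COMMITMENT_CUES = {"shall", "agrees", "will", "undertakes"}

def predict(sentence: str) -> bool:
    """Heuristic: a sentence expresses a commitment if it contains a cue."""
    words = sentence.lower().split()
    return any(cue in words for cue in COMMITMENT_CUES)

def f_measure(gold, pred) -> float:
    """Harmonic mean of precision and recall over boolean labels."""
    tp = sum(g and p for g, p in zip(gold, pred))
    fp = sum((not g) and p for g, p in zip(gold, pred))
    fn = sum(g and (not p) for g, p in zip(gold, pred))
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# (sentence, is_commitment) pairs, repeated so each fold is non-empty.
DATA = [
    ("The supplier shall deliver the goods by March 1.", True),
    ("The client agrees to pay within 30 days.", True),
    ("This agreement is governed by the laws of New York.", False),
    ("Either party may terminate with notice.", False),
] * 5

def ten_fold_scores(data):
    """F-measure of the heuristic on each of ten held-out folds."""
    scores = []
    for k in range(10):
        fold = data[k::10]  # every 10th example forms one fold
        gold = [label for _, label in fold]
        pred = [predict(sent) for sent, _ in fold]
        scores.append(f_measure(gold, pred))
    return scores

print(round(mean(ten_fold_scores(DATA)), 2))
```

The actual approach additionally uses natural language processing and machine learning rather than a fixed cue list, which is what lets it distinguish the five normative types.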
Defining and
Retrieving Themes in Nuclear Regulations
Nicolas Sannier and Benoit Baudry
(EDF, France; INRIA, France)
Safety systems in nuclear industry must conform to an
increasing set of regulatory requirements. These
requirements are scattered throughout multiple
documents expressing different levels of requirements
or different kinds of requirements. Consequently, when
licensees want to extract the set of regulations
related to a specific concern, they lack explicit
traces between all regulation documents and mostly get
lost while attempting to compare two different
regulatory corpora. This paper presents the regulatory
landscape in the context of digital Instrumentation
and Control systems in nuclear power plants. To cope
with this complexity, we define and discuss challenges
toward an approach based on information retrieval
techniques to first narrow the regulatory research
space into themes and then assist the recovery of
these traceability links.
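The information-retrieval step described, narrowing a regulatory corpus to passages relevant to a theme, can be sketched with plain TF-IDF ranking. The passages and query below are invented examples, and this is a generic retrieval sketch, not the authors' tooling.

```python
# Minimal TF-IDF retrieval sketch: rank regulation passages against a
# theme query by cosine similarity. Passages are invented examples.
import math
from collections import Counter

PASSAGES = [
    "safety systems shall be protected against cyber attacks",
    "instrumentation and control systems require periodic testing",
    "operators shall maintain records of maintenance activities",
]

N = len(PASSAGES)
# Document frequency: number of passages containing each term.
DF = Counter(t for p in PASSAGES for t in set(p.split()))

def idf(term: str) -> float:
    # Smoothed IDF so query terms unseen in the corpus still get a weight.
    return math.log((1 + N) / (1 + DF.get(term, 0))) + 1.0

def vectorize(text: str) -> dict:
    """TF-IDF vector as a sparse dict: term -> weight."""
    tf = Counter(text.split())
    return {t: c * idf(t) for t, c in tf.items()}

def cosine(u: dict, v: dict) -> float:
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = lambda vec: math.sqrt(sum(w * w for w in vec.values()))
    return dot / (norm(u) * norm(v))

def retrieve(query: str) -> str:
    """Return the passage most similar to the theme query."""
    qvec = vectorize(query)
    return max(PASSAGES, key=lambda p: cosine(qvec, vectorize(p)))

print(retrieve("cyber security of safety systems"))
```

Grouping the top-ranked passages per theme query is what narrows the search space before the finer-grained traceability recovery the paper targets.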
Towards Successful
Subcontracting for Software in Small to Medium-Sized
Enterprises
Bernd Westphal, Daniel Dietsch, Sergio Feo-Arenis,
Andreas Podelski, Louis Pahlow, Jochen Morsbach,
Barbara Sommer, Anke Fuchs, and Christine
Meierhöfer
(University of Freiburg, Germany;
Saarland University, Germany; University of
Mannheim, Germany)
Many small to medium-sized enterprises
(SMEs) that specialise in electrical or
communications engineering are challenged by the
increasing importance of software in their
products. Although they have a strong interest in
subcontracting competent partners for software
development tasks, they tend to refrain from doing
so. In this paper we identify three main reasons
for this situation, propose an approach to
overcome some of them and state remaining
challenges. Those reasons are situated in the
intersection of software engineering and
jurisprudence and therefore need to be addressed
in an integrated and multidisciplinary fashion.
Licensing Security
Thomas Alspaugh and Walt Scacchi
(Georgetown University, USA; UC Irvine,
USA)
There exist legal structures defining the
exclusive rights of authors, and means for
licensing portions of them to others in exchange
for appropriate obligations. We propose an
analogous approach for security, in which portions
of exclusive security rights owned by system
stakeholders may be licensed as needed to others,
in exchange for appropriate security obligations.
Copyright defines exclusive rights to reproduce,
distribute, and produce derivative works, among
others. We envision exclusive security rights that
might include the right to access a system, the
right to run specific programs, and the right to
update specific programs or data, among others.
Such an approach uses the existing legal
structures of licenses and contracts to manage
security, as copyright licenses are used to manage
copyrights. At present there is no law of
“security right” as there is a law of copyright,
but with the increasing prevalence and prominence
of security attacks and abuses, of which Stuxnet
and Flame are merely the best known recent
examples, such legislation is not implausible. We
discuss kinds of security rights and obligations
that might produce fruitful results, and how a
license structure and approach might prove more
effective than security policies.
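The proposed exchange, security rights granted in return for security obligations, can be sketched as a data structure. Every right, obligation, and party name below is a hypothetical example, not a structure from the paper.

```python
# Sketch of a security license: rights granted in exchange for obligations,
# with a right exercisable only while the obligations are fulfilled.
# All rights, obligations, and parties are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class SecurityLicense:
    licensor: str
    licensee: str
    rights: set = field(default_factory=set)       # e.g. {"access-system"}
    obligations: set = field(default_factory=set)  # e.g. {"report-incidents"}
    fulfilled: set = field(default_factory=set)    # obligations met so far

    def permits(self, right: str) -> bool:
        """A right is exercisable only if it was granted and every
        obligation attached to the license is currently fulfilled."""
        return right in self.rights and self.obligations <= self.fulfilled

lic = SecurityLicense(
    licensor="system-owner",
    licensee="contractor",
    rights={"access-system", "run-program:audit-tool"},
    obligations={"report-incidents"},
)
print(lic.permits("access-system"))          # False: obligation unmet
lic.fulfilled.add("report-incidents")
print(lic.permits("access-system"))          # True
print(lic.permits("update-program:payroll")) # False: right not granted
```

The point of the structure is the one the abstract makes: enforcement piggybacks on existing license and contract law rather than on ad hoc security policies.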