Privacy Risk Analysis
Sourya Joyee De and Daniel Le Métayer
A Summary of Categories and Attributes of the Components of a Privacy Risk Analysis
B Definitions of Personal Data Across Regulations and Standards
C Definitions of Stakeholders Across Regulations and Standards
D Privacy Risk Analysis Components in Existing Frameworks
Preface
Risk analysis and risk management are common approaches in areas as varied as environmental protection, public health and computer security. In some sense, one may also argue that the original purpose of data protection laws was to reduce the risks to privacy posed by the development of new technologies [58]. In Europe, however, the current Data Protection Directive [47] does not rely heavily on privacy risk analysis or Privacy Impact Assessment (PIA).1 The situation is going to change dramatically with the new General Data Protection Regulation (GDPR) [48], which shall apply from May 25, 2018.
The GDPR represents a fundamental shift from an administrative process based on a priori controls to a risk-based accountability approach in which PIAs2 play a key role. The virtues of the risk-based approach to privacy have been praised by many authors and stakeholders [26]. The main practical benefit expected from the approach is increased effectiveness in terms of privacy protection: risk assessment makes it possible to focus on the most significant problems and to calibrate measures based on the estimated risks. Organizations also appreciate the fact that legal requirements can be implemented with greater flexibility. Another argument in favor of the risk-based approach is the observation that it is increasingly difficult to draw a clear line between anonymous data and personal data, or between sensitive data and non-sensitive data. For this reason, there is a growing view that the only way forward is to move beyond such binary distinctions and to rely on assessments of actual risks rather than fixed definitions and obligations [120, 128].
However, more nuanced views have also been expressed on this topic. For example, the Article 29 Working Party [6] stresses that the risk-based approach should never lead to a weakening of the rights of individuals: the rights granted to the data subject should be respected regardless of the level of risk (right of access, erasure, objection, etc.). The fundamental principles applicable to data controllers should also remain the same (legitimacy, data minimization, purpose limitation, transparency, data integrity, etc.), even if they can be scalable (i.e., adjusted based on the results of a risk assessment). In addition, the risk-based approach should consider not only harms to individuals but also general societal impacts.
Some privacy advocates also fear that the flexibility provided by the risk-based approach may be abused by some organizations and that risk assessment may be perverted into a self-legitimation exercise [57]. To avoid this drift and ensure that the risk-based approach really contributes to improving privacy, a number of conditions have to be met. First and foremost, the analysis has to be rigorous, from both the technical and the procedural points of view. The methodology used for the analysis should be clearly defined, as should the assumptions about the context and the potential privacy impacts. This is a key requirement to ensure that the results of a privacy risk analysis are trustworthy and can be subject to independent checks.
However, while existing PIA frameworks and guidelines [160, 161, 163] provide a good deal of detail on organizational aspects (including budget allocation, resource allocation, stakeholder consultation, etc.), they are much vaguer on the technical part, in particular on the actual risk assessment task.
A key step toward better convergence between PIA frameworks geared toward legal and organizational issues on the one hand and technical approaches to privacy risk analysis on the other is to agree on a common terminology and a set of basic notions. It is also necessary to characterize the main tasks to be carried out in a privacy risk analysis, along with their inputs and outputs.
The above objectives are precisely the subject of this book. The intended audience includes both computer scientists looking for an introductory survey on privacy risk analysis and stakeholders involved in a PIA process who wish to address its technical aspects in a rigorous way. We hope that the reader will have as much pleasure in reading this book as we had in putting it together.
Sourya Joyee De and Daniel Le Métayer
August 2016
1The notion is not even referred to explicitly in the text of the Directive.
2More precisely, the GDPR uses the wording “Data Protection Impact Assessment.”
Acknowledgments
We thank our colleagues of the PRIVATICS research group in Grenoble and Lyon, in particular Gergely Ács and Claude Castelluccia for their comments on an earlier draft of this book and many fruitful discussions on privacy risk analysis. This work has been partially funded by the French ANR-12-INSE-0013 project BIOPRIV and the Inria Project Lab CAPPRIS.
Sourya Joyee De and Daniel Le Métayer
August 2016
CHAPTER 1
Introduction
Considering that the deployment of new information technologies can lead to substantial privacy risks for individuals, there is a growing recognition that a privacy impact assessment (PIA) should be conducted before designing a product that collects or processes personal data. De facto, PIAs have become more and more popular during the last decade. Several countries such as Australia, New Zealand, Canada, the U.S. and the United Kingdom [164] have played a leading role in this movement. Europe has also promoted PIAs in areas such as RFID [9, 107] and smart grids [11, 12] and is putting strong emphasis on privacy and data protection risk analysis in its new General Data Protection Regulation (GDPR)1 [48]. However, while existing PIA frameworks and guidelines provide a good deal of detail on organizational aspects (including budget allocation, resource allocation, stakeholder consultation, etc.), they are much vaguer on the technical part (what we call "Privacy Risk Analysis" or "PRA" in this book), in particular on the actual risk assessment task. Some tools have also been proposed to help with the management of organizational aspects [3, 118, 144], but no support currently exists to perform the technical analysis. For PIAs to live up to their promise and really play a decisive role in enhancing privacy protection, they should be more precise with regard to these technical aspects. This is a key requirement to ensure that their results are trustworthy and can be subject to independent checks. However, this is also a challenge because privacy is a multifaceted notion involving a wide variety of factors that may be difficult to assess.
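As a rough illustration of the kind of technical task at stake here (and not a description of the methodology developed in this book), the short Python sketch below combines likelihood and severity scales into a coarse risk level for a list of privacy harms. All scale values, thresholds and harm descriptions are assumptions made purely for the example.

```python
# Minimal sketch (assumed scales and thresholds, not the book's method):
# rate each privacy harm on ordinal likelihood and severity scales and
# derive a coarse risk level from their combination.
from dataclasses import dataclass

# Hypothetical four-point ordinal scales (1 = lowest, 4 = highest).
LIKELIHOOD = {"negligible": 1, "limited": 2, "significant": 3, "maximum": 4}
SEVERITY = {"negligible": 1, "limited": 2, "significant": 3, "maximum": 4}

@dataclass
class Harm:
    name: str
    likelihood: str  # key into LIKELIHOOD
    severity: str    # key into SEVERITY

def risk_level(harm: Harm) -> str:
    """Map a (likelihood, severity) pair to a coarse risk level."""
    score = LIKELIHOOD[harm.likelihood] * SEVERITY[harm.severity]
    if score >= 12:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

if __name__ == "__main__":
    # Hypothetical harms for a system processing personal data.
    harms = [
        Harm("unauthorized disclosure of location data", "significant", "maximum"),
        Harm("re-identification from pseudonymized records", "limited", "significant"),
    ]
    for h in harms:
        print(f"{h.name}: {risk_level(h)} risk")
```

A real analysis would of course rest on a precise characterization of the components of a privacy risk analysis and their attributes rather than on such ad hoc scores; making these notions and the associated tasks explicit is the purpose of this book.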