Privacy in Mobile and Pervasive Computing. Florian Schaub
In both the Safeway and the Migros cases, all customers who had bought the suspicious item in question (fire starters and a contractor's tool, respectively) instantly became suspects in a criminal investigation. All were ultimately acquitted of the charges against them, although, particularly in the case of firefighter Lyons, the tarnished reputation that comes with such a suspicion is hard to rebuild. News stories tend to focus on suspects rather than on less exciting acquittals—the fact that one's name is eventually cleared might not get the same attention as the initial suspicion. It is also often much easier to become listed in a police database as a suspect than to have such an entry removed again after an acquittal. For example, until recently, the federal police in Switzerland would only allow the deletion of such an entry if the suspect brought forward clear evidence of their innocence. If, however, a suspect was acquitted simply for lack of evidence to the contrary—as in the case of the Migros tool—the entry would remain [Rehmann, 2014].
The three cases described above are examples of privacy violations, even though none of the data disclosures (Vons' access of Robert Rivera's shopping records, or the police access of the shopping records in the U.S. or in Switzerland) were illegal. In all three cases, data collected for one purpose ("receiving store discounts") was used for another purpose (as a perceived threat to tarnish one's reputation, or as an investigative tool to identify potential suspects). The supermarket customers in these cases thought nothing of using their loyalty cards to record their purchases—after all, what could be so secret about buying liquor (perfectly legal if you are over 21 in the U.S.), fire starters (sold in the millions to start BBQs all around the world), or work tools? None of the customers involved had done anything wrong, yet the data recorded about them put them on the defensive until they could prove their innocence.
A lot has happened since Rivera and Lyons were "caught" in their own data shadow—the personal information unwittingly collected about them in companies' databases. In the 10–15 years since, technology has continued to evolve rapidly. Today, Rivera might use his Android phone to pay for all his purchases, letting not only Vons but also Google track his shopping behavior. Lyons might instead use an Amazon Echo to ask Alexa, Amazon's voice assistant, to order his groceries from the comfort of his home—giving police yet another shopping record to investigate. In fact, voice activation is becoming ubiquitous: many smartphones already feature "always-on" voice commands, which means they effectively listen in on all our conversations in order to identify a particular activation keyword. Any spoken commands (or queries) are sent to a cloud server for analysis and are often stored indefinitely. Many other household devices, such as TVs, game consoles, home appliances, and cars, will soon do the same.
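To make the privacy boundary of such "always-on" systems concrete, here is a minimal, hypothetical sketch (in Python) of the data flow just described. It is not any vendor's actual implementation: the wake word ("hey_assistant"), the frame counts, and the stand-in functions detect_wake_word and send_to_cloud are all invented for illustration. The device continuously buffers audio locally, but once the keyword fires, the buffered snippet plus the subsequent utterance leave the device for cloud analysis.

```python
import collections
from typing import Iterable

WAKE_WORD = "hey_assistant"   # invented activation keyword
PRE_ROLL = 8                  # frames kept from just before the trigger
UTTERANCE_FRAMES = 32         # frames recorded after the trigger

def detect_wake_word(frame: str) -> bool:
    """Stand-in for an on-device keyword spotter (in practice a small
    neural network continuously scanning buffered audio)."""
    return WAKE_WORD in frame

def send_to_cloud(frames: list) -> None:
    """Stand-in for the upload step; in deployed systems this audio is
    analyzed server-side and may be retained indefinitely."""
    print(f"[cloud] received {len(frames)} audio frames")

def always_on_loop(mic: Iterable) -> None:
    # The device is always listening, but only a short ring buffer is
    # held locally until the wake word is detected.
    buffer = collections.deque(maxlen=PRE_ROLL)
    mic_iter = iter(mic)
    for frame in mic_iter:
        buffer.append(frame)
        if detect_wake_word(frame):
            # Everything from just before the trigger plus the spoken
            # command leaves the device: this is the privacy boundary.
            utterance = list(buffer)
            for _ in range(UTTERANCE_FRAMES):
                try:
                    utterance.append(next(mic_iter))
                except StopIteration:
                    break
            send_to_cloud(utterance)
            buffer.clear()

if __name__ == "__main__":
    # Simulated microphone input: background audio, the wake word,
    # then a spoken command.
    frames = ["..."] * 20 + ["hey_assistant", "order", "fire", "starters"]
    always_on_loop(frames)
```

The sketch highlights the asymmetry that makes these devices privacy-relevant: the keyword check stays on the device, but everything spoken after it (and a little before) does not, and its retention is then governed by the server rather than by the user.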
It is easy to imagine that a future populated with an ever-increasing number of mobile and pervasive devices that record our minute goings and doings will significantly expand the amount of information collected, stored, processed, and shared about us by both corporations and governments. The vast majority of this data is likely to benefit us greatly—making our lives more convenient, efficient, and safe through custom-tailored services that anticipate what we need, where we need it, and when we need it. But beneath all this convenience, efficiency, and safety lurks the risk of losing control and awareness of what is known about us in the many different contexts of our lives. Eventually, we may find ourselves in a situation like Rivera's or Lyons's, where something we said or did is misinterpreted and held against us, even if our activities were perfectly innocuous at the time. Even more concerning, while the privacy implications in the examples above manifested as explicit harms, privacy harms more often manifest as an absence of opportunity, which may go unnoticed even though it can substantially impact our lives.
1.1 LECTURE GOALS AND OVERVIEW
In this book we dissect and discuss the privacy implications of mobile and pervasive computing technology. To this end, we look not only at how mobile and pervasive computing technology affects our expectations of—and ability to enjoy—privacy, but also at what constitutes "privacy" in the first place, and why we should care about maintaining it.
A core question is: what do we actually mean when we talk about "privacy"? Privacy is a term that is intuitively understood by everyone, yet its actual meaning can differ quite substantially—among different individuals, but also for the same individual in different situations [Acquisti et al., 2015]. In the examples above, the problems superficially hinged on the interpretation or misinterpretation of facts (Robert Rivera allegedly being an alcoholic and Philip Lyons being wrongfully accused of arson, based on their respective shopping records), but ultimately the real issue was the use of personal information for purposes not originally foreseen (nor authorized). In those examples, privacy was thus about being "in control"—or, more accurately, about the loss of control—of one's data, as well as of the particular selection of facts known about oneself. However, other—often more subtle—issues exist that may rightfully be considered "privacy issues" as well. Thus, in this Synthesis Lecture we first closely examine the two constituents of the problem—privacy (Chapter 2) and mobile and pervasive computing technology (Chapter 3)—before discussing their intersection and illustrating the resulting challenges (Chapter 4). Finally, we discuss how those privacy challenges can potentially be addressed in the design of mobile and pervasive computing technologies (Chapter 5), and conclude with a summary of our main points (Chapter 6).
1.2 WHO SHOULD READ THIS
When one of the authors of this lecture was a Ph.D. student (some 15 years ago), he received a grant to visit several European research projects working in the context of a large EU initiative on pervasive computing—the "Disappearing Computer Initiative" [Lahlou et al., 2005]. The goal of the grant was to harness the collective experience of dozens of internationally renowned researchers who spearheaded European research in the area, in order to draft a set of "best practices" for creating future pervasive services with privacy in mind. In this respect, the visits were a failure: almost none of the half-dozen projects visited had any suggestions for building privacy-friendly pervasive systems. However, the visits surfaced an intriguing set of excuses for why privacy was of no concern to the computer scientists and engineers working in the area.
1. Some researchers thought it best if privacy concerns (and their solutions) were regulated socially, not technically: "It's maybe about letting [users of pervasive technology] find their own ways of cheating."
2. A large majority of researchers found that others were much more qualified (and required) to think about privacy: "For [my colleague] it is more appropriate to think about [security and privacy] issues. It's not really the case in my case."
3. Another large number of researchers thought of privacy issues simply as a problem that could (in the end) be solved trivially: "All you need is really good firewalls."
4. Several researchers preferred not to think about privacy at all, as this would interfere with them building interesting systems: “I think you can’t think of privacy… it’s impossible, because if I do it, I have troubles with finding