Trust in Computer Systems and the Cloud. Mike Bursell
This is a good statement of how I view the relationship from me to my brother, but what can we gain with more detail? Let us use the corollaries to move us to a better description of the relationship.
First Corollary “The medical aid is within an area of practice in which he has trained or with which he is familiar”.
Second Corollary “My brother will only undertake procedures for which his training is still sufficiently recent that he feels confident that he can perform them without further detriment to my health”.
Third Corollary “My brother does not expect me to provide him with emergency medical aid”.
This may seem like an immense amount of unpacking to do on what was originally presented as a simple statement. But when we move over to the world of computing systems, we need to consider exactly this level of detail, if not an even greater level.
Let us begin moving into the world of computing and see what happens when we start to apply some of these concepts there. We will begin with the concept of a trusted platform: something that is often a requirement for any computation that involves sensitive data or algorithms. Immediately, questions present themselves. When we talk about a trusted platform, what does that mean? It must surely mean that the platform is trusted by an entity (the workload?) to perform particular actions (provide processing time and memory?) whilst meeting particular expectations (not inspecting program memory? maintaining the integrity of data?). But the context of what we mean for a trusted platform is likely to be very different between a mobile phone, a military installation, and an Internet of Things (IoT) gateway. That trust may also erode over time (are patches applied? Is it more likely that an attacker has compromised the platform a day, a month, or a year after the workload was provisioned to it?). We should also never simply say, following the third corollary (on the lack of trust symmetry), that “these entities trust each other” without further qualification, even if we are referring to the relationships between one trusted system and another trusted system.
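The elements enumerated above (trustor, trustee, actions, expectations, context, and time) can be made concrete in a minimal sketch. This is purely illustrative: the type and field names are invented for this example, not drawn from any real trust framework.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class TrustRelationship:
    """A one-way trust relationship: a trustor's assurance about a trustee.

    Deliberately directional: trust from A to B says nothing about trust
    from B to A (the third corollary, on the lack of trust symmetry).
    """
    trustor: str            # entity holding the assurance, e.g. a workload
    trustee: str            # entity being trusted, e.g. a platform
    actions: list[str]      # actions the trustee is trusted to perform
    expectations: list[str] # expectations the trustee is trusted to meet
    context: str            # mobile phone, military installation, IoT gateway...
    established: datetime   # trust may erode as this recedes into the past

# A workload's trust in its platform; any reverse relationship would be a
# separate TrustRelationship instance with quite different contents.
workload_trust = TrustRelationship(
    trustor="workload",
    trustee="platform",
    actions=["provide processing time", "provide memory"],
    expectations=["do not inspect program memory", "maintain data integrity"],
    context="IoT gateway",
    established=datetime(2021, 1, 1),
)
```

Note that nothing in this record is mutual: modelling the platform's trust in the workload, if any exists, requires a second, independent record.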
One concrete example that we can use to examine some of these questions is when we connect to a web server using a browser to purchase a product or service. Once they connect, the web server and the browser may establish trust relationships, but these are definitely not symmetrical. The browser has probably established that the web server represents the provider of particular products and services with sufficient assurance for the person operating it to give up credit card details. The web server has probably established that the browser currently has permission to access the account of the user operating it. However, we already see some possible confusion arising about what the entities are: what is the web server, exactly? The unique instance of the server's software, the virtual machine in which it runs (if, in fact, it is running in a virtual machine), a broader and more complex computer system, or something entirely different? And what ability can the browser have to establish that the person operating it can perform particular actions?
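The asymmetry is visible even at the level of standard TLS configuration. As a hedged sketch using Python's standard `ssl` module: the client-side context verifies the server, while the default server-side context requests no client certificate at all, leaving the browser-to-account link to application-level credentials.

```python
import ssl

# Client side: "the browser trusts the web server" largely reduces to the
# client verifying the server's certificate chain and hostname before it
# sends anything sensitive, such as credit card details.
client_ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
# client_ctx.verify_mode is CERT_REQUIRED and check_hostname is True.

# Server side: the server's trust in the browser is established quite
# differently. By default it requests no client certificate, relying
# instead on application-level credentials (a login session) to tie the
# browser to a user's account.
server_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
# server_ctx.verify_mode is CERT_NONE: no TLS-level trust in the client.
```

The two contexts are built from the same library with the same call, yet encode entirely different trust relationships, which is the point: the relationship in each direction must be described separately.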
These questions—about which entity is being trusted, and to do what—are related to agency and will also help us consider some of the questions that arose in the earlier examples about banks and their IT systems.
What Is Agency?
When you write a computer program that prints out “Hello, world!”, who is “saying” those words: you or the computer? This may sound like an idle philosophical question, but it is more than that: we need to be able to talk about entities as part of our definition of trust, and in order to do that, we need to know what entity we are discussing.
What exactly, then, does agency mean? It means acting for someone: being their agent—think of what actors' agents do, for example. When we engage a lawyer or a builder or an accountant to do something for us, we set very clear boundaries about what they will be doing on our behalf. This is to protect both us and the agent from unintended consequences. There exists a huge legal corpus around defining, in different fields, exactly the scope of work to be carried out by a person or a company who is acting as an agent for another person or organisation. There are contracts and agreed restitutions—basically, punishments—for when things go wrong. Say that my accountant buys 500 shares in a bank with my money, and then I turn around and say that they never had the authority to do so: if we have set up the relationship correctly, it should be entirely clear whether or not the accountant had that authority and whose responsibility it is to deal with any fallout from that purchase.
The situation is not so clear when we start talking about computer systems and agents. To think a little more about this question, here are two scenarios:
In the classic film WarGames, David Lightman (Matthew Broderick's character) has a computer that goes through a list of telephone numbers, dialling them and then recording the number for later investigation if they are answered by another machine that attempts to perform a handshake. Do we consider that the automatic dialling Lightman's computer performs is carried out as an act with agency? Or is it when the computer connects to another machine? Or when it records the details of that machine? I suspect that most people would not argue that the computer is acting with agency once Lightman gets it to complete a connection and interact with the other machine—that seems very intentional on his part, and he has taken control—but what about before?
Google used to run automated programs against messages received as part of the Gmail service.5 The programs were looking for information and phrases that Google could use to serve ads. The company were absolutely adamant that they, Google, were not doing the reading: it was just the computer programs.6 Quite apart from the ethical concerns that might be raised, many people would (and did) argue that Google, or at least the company's employees, had imbued these automated programs with agency so that philosophically—and probably legally—the programs were performing actions on behalf of Google. The fact that there was no real-time involvement by any employee is arguably unimportant, at least in some contexts.
This all matters because in order to understand trust, we need to identify an entity to trust. One current example of this is self-driving cars: whose fault is it when one goes wrong and injures or kills someone? Equally, when the software in certain Boeing 737 MAX 8 aircraft malfunctioned,7 pilots—who can be said to have trusted the software—and passengers—who equally can be said to have trusted the pilots and their ability to fly the aircraft correctly—lost their lives. What exactly was the entity to which they had a trust relationship, and how was that trust managed?
Another example may help us to consider the question of context. Consider a hypothetical automated defence system for a military base in a war zone. Let us say that, upon identifying intruders via its cameras, the system is programmed to play a recording over loudspeakers, warning them to move away; and, in the case that they do not leave within 30 seconds of a warning, to use physical means up to and including lethal force to stop them proceeding any further. The base commander trusts the system to perform its job and stop intruders: a trust relationship exists between the base commander and the automated defence system. Thus, in the language of our definition of trust:
“The base commander holds an assurance that the automated defence system will identify, warn, and then stop intruders who enter the area within its camera and weapon range”.
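The warn-then-escalate policy described above can be sketched as a simple decision function, purely for illustration (the function name, parameters, and return strings are all invented for this example):

```python
from typing import Optional

def defence_response(intruder_present: bool,
                     seconds_since_warning: Optional[float]) -> str:
    """Hypothetical policy for the automated defence system.

    seconds_since_warning is None if no warning has yet been played.
    """
    if not intruder_present:
        return "stand down"
    if seconds_since_warning is None:
        return "play warning recording"
    if seconds_since_warning <= 30:
        return "wait"  # the intruder still has time to move away
    return "escalate"  # physical means, up to and including lethal force
```

The base commander's assurance attaches to this whole pipeline—cameras, identification, timing, and escalation—behaving as specified, not to any one line of it.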
We have a fair amount of context already embedded within this example. We stated up front that the base is in a war zone, and we have mentioned the range of the cameras and weapons. A problem arises, however,