The question of whether computers can—or may one day be able to—think was one that exercised early practitioners of the field of artificial intelligence (AI): specifically, hard AI. Coming at the issue from a different point of view, Rossi48 writes about concerns that humans have about AI. She notes issues such as explainability (how humans can know why AI systems make particular decisions), responsibility, and accountability in humans trusting AI. Her interests seem to be mainly about humans failing to trust AI systems—she does not define the term specifically—whereas there is a concomitant but opposite concern: that sometimes humans may have too much trust in (that is, have an unjustified trust relationship to) AI systems.
Over the past few years, AI/ML systems49 have become increasingly good at mimicking humans for specific interactions. These are not general-purpose systems but in most cases are aimed at participating in specific fields of interaction, such as telephone answering services. Targeted systems like this have been around since the 1960s: a famous program—what we would call a bot now—known as ELIZA mimicked a therapist. Interacting with the program—there are many online implementations still available, based on the original version—quickly becomes unconvincing, and it would be difficult for any human to consider that it is truly “thinking”. The same can be said for many systems aimed at specific interactions, but humans can be quite trusting of such systems even if they do not seem to be completely human. In fact, there is a strange but well-documented effect called the uncanny valley. This is the effect whereby humans feel an increasing affinity for—and presumably, an increased propensity to trust—entities the more human they look, but only up to a certain point. Past that point, the uncanny valley kicks in, and humans become less happy with the entity with which they are interacting. There is evidence that this effect is not restricted to visual cues but also exists for other senses, such as hearing and audio-based interactions.50 The uncanny valley seems to be an example of a cognitive bias that may provide us with real protection in the digital world, restricting the trust we might extend towards non-human trustees that are attempting to appear human. Our ability to realise that they are non-human, however, may not always be sufficient to allow it to kick in. Deep fakes, a common term for the output of specialised ML tools that generate convincing, but ultimately falsified, images, audio, or even full video footage of people, are a growing concern for many: not least social media sites, which have identified the trend as a form of potentially damaging misinformation, and those who have believed that what they saw was real. Even without these techniques, it appears that media such as Twitter have been used to put messages out—typically around elections—that are not from real people but that, without skilled analysis and correlation with other messages from other accounts, are almost impossible to discredit.
Anthropomorphism is the term used to describe how humans often attribute human characteristics to non-human entities: in our case, computer systems. We may do this for a number of reasons:
Maybe because humans have a propensity towards anthropomorphism that allows them better to understand the systems with which they interact, even though they are not consciously aware that the system is non-human
Because humans are interacting with a system that they know to be non-human, but they find it easier to interact with it as if it had at least some human characteristics
Because humans have been deceived by intentionally applied techniques into believing that the system is human
By this stage, we have maybe stretched the standard use of the term anthropomorphism beyond its normal boundaries: normal usage would apply to humans ascribing human characteristics to obviously non-human entities. The danger we are addressing here goes beyond that, as we are also concerned with the possibility that humans may form trust relationships to non-human entities exactly because they believe them to be human: they just do not have the ability (easily) to discriminate between the real and the generated.
Identifying the Real Trustee
When security measures are put in place, who puts them there, and for what reason? This might seem like a simple question, but often it is not. In fact, more important than asking “for what” security measures are put in place is the question “for whom are they put in place?” Ross Anderson and Tyler Moore are strong proponents of the study of security economics,51 arguing that microeconomics and game theory are vital studies for those involved in IT security.52 They are interested in questions such as the one we have just examined: where security measures—which will lead to what we termed behaviours—are put in place to benefit not the user interacting with the system but somebody else.
One example is Digital Rights Management (DRM). Much downloadable music or video media is “protected” from unauthorised use through the application of security technologies. The outcome of this is that people who download media that are DRM protected cannot copy them or play them on unapproved platforms or systems. This means, for example, that even if I have paid for access to a music track, I am unable to play it on a new laptop unless that laptop has approved software on it. What is more, the supplier from which I obtained the track can stop my previously authorised access to that track at any time (as long as I am online). How does this help me, the person interacting with the music via the application? The answer is that it does not help me at all but rather inconveniences me: the “protection” is for the provider of the music and/or the application. As Richard Harper points out, “trusting” a DRM system means trusting behaviour that enforces properties of the entity that commissioned it.53 Do I at least get this extra protection, which is basically against me in that it reduces my ease of use, at no cost? Of course not: I, and other users of the service, will end up absorbing this cost through my subscription, a one-off purchase price, or my watching of advertisements as part of the service. This is security economics, where the entity benefiting from the security is not the one paying for it.
When considering a DRM system, it may be fairly clear what actions it is performing. In this case, these may include the following (see the sketch after this list):
Decrypting media ready for playing
Playing the media
Logging your usage
Reporting your usage
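To make these behaviours concrete, here is a minimal sketch, in Python, of how a hypothetical DRM playback client might bundle them together. Nothing here corresponds to any real DRM product or API: the class, its methods, and parameters such as licence_server_url are invented purely for illustration, and the “decryption” is a trivial stand-in. The point is simply that the logging and reporting behaviours serve the provider that commissioned the software rather than the user running it.

import json
import urllib.request
from datetime import datetime, timezone


class DRMClient:
    """Hypothetical DRM playback client: decrypts, plays, logs usage, reports usage."""

    def __init__(self, licence_key: bytes, licence_server_url: str):
        self.licence_key = licence_key
        self.licence_server_url = licence_server_url
        self.usage_log = []  # entries accumulated on the provider's behalf

    def decrypt(self, encrypted_media: bytes) -> bytes:
        # Stand-in for real decryption: a simple XOR with the licence key.
        key = self.licence_key
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(encrypted_media))

    def play(self, track_id: str, encrypted_media: bytes) -> None:
        media = self.decrypt(encrypted_media)
        # A real client would hand `media` to an approved playback engine here.
        self._log(track_id, action="play", size=len(media))

    def _log(self, track_id: str, action: str, size: int) -> None:
        # Logging usage: recorded locally, for the provider rather than the user.
        self.usage_log.append({
            "track": track_id,
            "action": action,
            "bytes": size,
            "time": datetime.now(timezone.utc).isoformat(),
        })

    def report_usage(self) -> None:
        # Reporting usage: sent back to the provider whenever the client is online;
        # this is also the point at which the provider could revoke access.
        request = urllib.request.Request(
            self.licence_server_url,
            data=json.dumps(self.usage_log).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(request)
        self.usage_log.clear()

Even in this toy version, the behaviour I want (playback of the media) is bundled with behaviours performed on behalf of somebody else entirely.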
According to our definition, we might still say that I have a trust relationship to the DRM software, and some of the actions it is performing are in my best interests—I do, after all, want to watch or listen to the media. If we think about assurances, then the trust relationship I have can still meet our definition: I have assurances of particular behaviours, and whether they are in my best interests or not, I know (let us say) what they are.
The