Trust in Computer Systems and the Cloud. Mike Bursell

      8 See Rescorla 2000 for a definition of the HTTP protocol, the core component of the communication.

      9 Wikipedia, “Morris Worm”, 2020.

      10 Clark, 2014, p. 22.

      11 Or attempt to do so: humans are quite good at seeking out those who do not want to interact with them, and bothering them anyway, as any tired parent of young children will tell you.

      That is what could fairly be labelled, within the literature, as mistrust.

      First, we need to admit that the field of study regarding trust is both active and wide: there are a lot of definitions of human-to-human trust, many of which are not easily reconcilable. Most of the definitions, understandably, focus on social elements, and, as noted by Harper, there is a strong overtone of mistrust. Here are some examples supplied by other noted authors ruminating on the notion of trust:

       Trust in social interactions is “the willingness to be vulnerable based on positive expectation about the behaviour of others”.2 Cheshire notes that Baier's definition3 “depends on the possibility of betrayal by another person”.

       For Hardin, when considering interpersonal trust, “my trust in you is encapsulated in your interest in fulfilling the trust”.4 Cheshire distinguishes trustworthiness from trust and discusses how risk-taking can act as a signal that one party considers another trustworthy.5 Dasgupta6 has seven starting points for establishing trust, of which three are related directly to punishment, one to choice, one to perspective, one to context, and one to monitoring.

      All of these examples may be helpful when considering human-to-human trust relationships—though even there, they generally seem a little vague in terms of definition—but if we are to consider trust relationships involving computers and system-based entities, they are all insufficient, because all of them relate to human emotions, intentions, or objectives. Applying questions around emotions to, say, a mobile phone's connection to a social media site is clearly not a sensible endeavour, though we will examine later how intention and objectives may have some relevance in discussions about trust within the computer-to-computer realm.

       trust (or, symmetrically, distrust) is a particular level of the subjective probability with which an agent assesses that another agent or group of agents will perform a particular action, both before he [sic] can monitor such action (or independently of his capacity ever to be able to monitor it) and in a context in which it affects his own action.7

      There are some interesting points here. First, Gambetta discusses agents, though the usage is somewhat different to that which we employed in Chapter 1. We used agents to describe an entity acting for another entity, whereas he is using a different definition, where an agent is an actor that takes an active role in an interaction. Confusingly, the usage within computing sometimes falls between these two definitions. A software agent is considered to have the ability to act autonomously in a particular situation—the term autonomous agent is sometimes used equivalently—but that is not necessarily the same as acting as a person or an organisation. However, in the absence of artificial general intelligence (AGI), it would seem that software agents must be acting on behalf of humans or human organisations even if the intention is to “set them free” to act autonomously or even learn behaviour on their own.

      The second important point that Gambetta makes is that a trust relationship—he is specifically discussing human trust relationships—is partly defined by expectations before any actions are performed. This resonates closely with the points we made earlier about the importance of collecting information to allow us to form assurances. His third point is related to the second, in that he discusses the possible inability of the trustor to monitor the actions in which they are interested. Given such a lack of assuring information, any evaluation of whether trust is warranted must be based on the same data: that presented beforehand.

      For his fourth point, however, Gambetta also identifies that there are contexts in which actions can be monitored, though he seems to tie such actions to actions the trustor will take. This seems too restrictive on the trustor, as there may be actions taken by the trustee that do not lead to corresponding actions by the trustor—unless the very lack of such actions is considered action in itself. More important, however, is the implicit assumption (implied by the explicit negative in the previous statement) that monitoring should take place.
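      Gambetta's definition is concrete enough that its components—a trustor, a trustee, an action, a context, a subjective probability, and the (possible) ability to monitor—can be sketched as a simple data structure. The following Python is purely illustrative: the field names, the threshold, and the example values are my assumptions for the sketch, not anything proposed by Gambetta or elsewhere in this book.

```python
from dataclasses import dataclass

@dataclass
class TrustRelationship:
    """Illustrative encoding of Gambetta's definition: trust as a
    subjective probability, held by a trustor, that a trustee will
    perform a particular action in a particular context, possibly
    without any ability to monitor that action."""
    trustor: str
    trustee: str
    action: str
    context: str
    subjective_probability: float  # trustor's assessed likelihood, 0.0-1.0
    can_monitor: bool              # whether the trustor can observe the action

    def is_trusted(self, threshold: float = 0.5) -> bool:
        # Trust (versus distrust) modelled as the subjective probability
        # exceeding some context-dependent threshold -- the threshold
        # value here is an arbitrary assumption for illustration.
        return self.subjective_probability >= threshold

# Example from the text: a mobile phone's connection to a social media site.
rel = TrustRelationship(
    trustor="mobile_phone",
    trustee="social_media_site",
    action="serve authentic content",
    context="TLS session",
    subjective_probability=0.9,
    can_monitor=False,
)
print(rel.is_trusted())  # True with the default threshold
```

      Note that such an encoding captures only the shape of the definition; choosing the probability and the threshold is exactly the hard part that the rest of this discussion concerns, and `can_monitor` records Gambetta's third and fourth points without resolving them.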