Trust in Computer Systems and the Cloud. Mike Bursell


neutral. Clearly, such manipulation of data will alter risk calculations, as it will alter the probability assigned to particular events, skewing the results.

       Misconceptions of Regression Regression to the mean suggests that an extreme sample, whether well above or well below the mean, is likely to be followed by one closer to the mean. This can lead to the misconception that punishing a bad outcome is effective, because it is followed by a (supposedly causal) improvement, while rewarding a good outcome is counterproductive, because it is followed by a (supposedly causal) deterioration. Failure to understand this effect tends to lead people to “overestimate the effectiveness of punishment and to underestimate the effectiveness of reward”. This feels like a particularly relevant piece of knowledge to take into a series of games like the Prisoner's Dilemma, as one is most likely to be rewarded for punishing others, yet most likely to be punished for rewarding them.39 More generally, in a system where trust is important and needs to be encouraged, trying to avoid this bias may be a core goal in the design of the system.
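A small simulation may make the effect concrete. This sketch is not from the book, and the skill and noise values are purely illustrative assumptions: each trial's outcome is fixed skill plus random noise, and nothing whatsoever is done between trials, yet most bad outcomes are followed by an apparent improvement, which an observer could easily credit to punishment.

```python
import random

random.seed(42)

# Illustrative model: outcome = fixed "skill" + random noise.
# No punishment or reward ever influences the next trial.
def trial(skill: float = 0.0, noise: float = 1.0) -> float:
    return skill + random.gauss(0.0, noise)

improved_after_bad = 0
bad_trials = 0
for _ in range(100_000):
    first, second = trial(), trial()
    if first < -1.0:           # a well below-average ("bad") outcome
        bad_trials += 1
        if second > first:     # the next outcome looks like an improvement
            improved_after_bad += 1

# The apparent "improvement" rate is high purely through
# regression to the mean, with no intervention at all.
print(improved_after_bad / bad_trials)
```

Run with these assumptions, well over 80% of bad outcomes are followed by a better one, which is exactly the pattern that tempts an observer to overestimate the effectiveness of punishment.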

       Biases on the Evaluation of Conjunctive and Disjunctive Events People tend to overestimate the probability of conjunctive events, where many individually likely sub-events must all occur, and to underestimate the probability of disjunctive events, where any one of many individually unlikely failures is enough to cause a bad outcome. This is important to us when we consider chains of trust, which we will examine in Chapter 3, “Trust Operations and Alternatives”: even where every link in a chain enjoys a high probability of trustworthiness, the chance that at least one link is broken, and hence that the chain as a whole fails, may in reality be fairly high.
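A quick sketch, with purely illustrative figures rather than anything from the book, shows why intuition fails here: if each link in a chain independently holds with probability p, the whole chain holds with probability p raised to the number of links, and the chance of at least one broken link grows quickly.

```python
# Probability that a chain of trust holds when each link
# independently holds with probability p (illustrative values only).
def chain_holds(p: float, links: int) -> float:
    return p ** links

def chain_breaks(p: float, links: int) -> float:
    # Probability that at least one link fails: a disjunctive event.
    return 1 - chain_holds(p, links)

# Even with links that are each 99% trustworthy, longer chains
# become surprisingly likely to contain at least one broken link.
for links in (1, 10, 30, 100):
    print(f"{links:>3} links: P(chain breaks) = {chain_breaks(0.99, links):.3f}")
```

With 99%-trustworthy links, a 30-link chain fails with probability of roughly a quarter, and a 100-link chain fails more often than not, which is far higher than most people's intuition for a chain built entirely of "highly trustworthy" links.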

      How relevant is this to us? For any trust relationships where humans are involved as the trustors, it is immediately clear that we need to be careful. There are multiple ways in which the trustor may misunderstand what is really going on or simply be fooled by their cognitive biases. We have talked several times in this chapter about humans' continued problems with making rational choices in, for instance, game-theoretical situations; the same goes for economic or purchasing decisions and a wide variety of other spheres. An understanding of, or at least an awareness of, cognitive biases can go a long way towards helping humans make more rational decisions. Sadly, while many of us involved with computing, IT security, and related fields would like to think of ourselves as fully rational and immune to cognitive biases, the truth is that we are as prone to them as all other humans, as noted in our examples of normalcy bias and the observer-expectancy effect. We need to remember that when we consider the systems we are designing to be trustors and trustees, our unconscious biases are bound to come into play. A typical example is that we tend to assume that a system we design will be more secure than a system that someone else designs.

      Trying to apply our definition of trust to ourselves is probably a step too far, as we are likely to find ourselves delving into questions of the conscious, subconscious, and unconscious, which are not only hotly contested after well over a century of study in the West, and over several millennia in the East, but are also outside the scope of this book. However, all of the preceding points are excellent reasons for being as explicit as possible about the definition and management of trust relationships and using our definition to specify all of the entities, assurances, contexts, etc. Even if we cannot be sure exactly how our brain is acting, the very act of definition may help us to consider what cognitive biases are at play; and the act of having others review the definition may uncover further biases, allowing for a stronger—more rational—definition. In other words, the act of observing our own thoughts with as much impartiality as possible allows us, over time, to lessen the power of our cognitive biases, though particularly strong biases may require more direct approaches to remedy or even recognise.

      Trusting Others

      Having considered the vexing question of whether we can trust ourselves, we should now turn our attention to trusting others. In this context, we are still talking about humans rather than institutions or computers, and we will be applying these lessons to computers and systems. What is more, as we noted when discussing cognitive bias, our assumptions about others—and the systems they build—will have an impact on how we design and operate systems involved with trust. Given the huge corpus of literature in this area, we will not attempt to go over much of it, but it is worth considering if there are any points we have come across already that may be useful to us or any related work that might cause us to sit back and look at our specific set of interests in a different light.
