Trust in Computer Systems and the Cloud. Mike Bursell
Both prisoners stay silent, in which case they are both sentenced to one year in prison.
One prisoner stays silent, but their colleague betrays them, in which case the betrayer goes free but the silent prisoner receives a sentence of three years in prison.
Both prisoners betray the other, in which case they both end up in prison for two years.
The rational position for each prisoner to take is to betray the other because betrayal provides a better reward than staying silent. Three interesting facts fall out of this game and the mountains of theoretical and experimental data associated with it:
If the prisoners play repeatedly but know the number of games in advance, then the most rational strategy is to punish the other player's bad behaviour by betraying, and to keep betraying.
If they do not know the number of repetitions, then the most rational strategy is to stay silent.
In reality, humans tend to a more cooperative strategy when playing variants of this game, working together rather than betraying each other.
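The payoffs and repeated-play strategies described above can be sketched in a few lines of code. The following is a minimal, illustrative simulation (not from the book): it uses the prison sentences given earlier as payoffs, where lower totals are better, and compares an always-betray strategy against tit-for-tat, a well-known cooperative strategy that stays silent first and then mirrors the opponent's previous move. The strategy names and function signatures are my own assumptions for illustration.

```python
SILENT, BETRAY = "silent", "betray"

# (my move, their move) -> my sentence in years (lower is better)
SENTENCES = {
    (SILENT, SILENT): 1,   # both stay silent: one year each
    (SILENT, BETRAY): 3,   # I stay silent, they betray: three years for me
    (BETRAY, SILENT): 0,   # I betray, they stay silent: I go free
    (BETRAY, BETRAY): 2,   # both betray: two years each
}

def always_betray(opponent_history):
    return BETRAY

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's previous move.
    return opponent_history[-1] if opponent_history else SILENT

def play(strategy_a, strategy_b, rounds=10):
    """Return total prison years for each player over repeated games."""
    seen_by_a, seen_by_b = [], []   # each records the opponent's past moves
    years_a = years_b = 0
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        years_a += SENTENCES[(move_a, move_b)]
        years_b += SENTENCES[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return years_a, years_b
```

Over ten rounds, two tit-for-tat players cooperate throughout and serve ten years each, while an always-betrayer facing tit-for-tat wins the first round but is punished in every round thereafter, leaving both players worse off than mutual cooperation would have.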
I have attended events where groups of people—unaware of the theory—have participated in multiple rounds of this game or a modified version of the Prisoner's Dilemma (sometimes, for instance, the payoffs are adjusted and counts kept of notional money won and lost). It is fascinating to watch people trying out strategies whilst reacting to past rounds and also being locked into a history of their own behaviour that they cannot change. Much of the foundational modern work around the Prisoner's Dilemma—and broader game theory—was done by Robert Axelrod.11 He noted the same points and posited that cooperation—in such games or more broadly—is a positive evolutionary trait. It encourages behaviours that are likely to benefit the survival of the species adopting them. He also suggested ways to encourage cooperation, based on computer models contributed by various academic institutions:
Enlarge the shadow of the future (make players more aware of future games and less bound into their—and their fellow player's—histories).
Ensure that the payoffs are immediate, clear, and motivating.
Teach players to care about each other.
Teach reciprocity (rewarding positive actions—typically cooperation—and punishing negative actions—typically betrayal).
Improve recognition abilities (being able to recognise what the other party's strategy is).
Although proposed as a thought experiment, it turns out that the Prisoner's Dilemma can be used to model rather closely a number of different situations. Game theory arguably suffered, in the last years of the twentieth century and the early years of the twenty-first, from being the blockchain of its day: it was seen by some as the future foundation for all rules and types of societal interaction. As Heap and Varoufakis12 noted, people's motivations are typically more complex than the somewhat simplistic models provided by game theory and are affected by what they called people's social location: the cultures and societies in which they live and their relative positions in terms of wealth, power, etc.
This does not mean, however, that game theory has nothing to offer us. What Axelrod's tactics to encourage cooperation seem to be promoting are ways to build some sort of trust between the two parties. Let us revisit our definition of trust and apply it to game theory:
“Trust is the assurance that one entity holds that another will perform particular actions according to a specific expectation”.
We need, of course, to consider our corollaries as well: how do they apply in this case?
First Corollary “Trust is always contextual”.
Second Corollary “One of the contexts for trust is always time”.
Third Corollary “Trust relationships are not symmetrical”.
The two entities are easily identified in this case: the two participants in the game. The context here is the game—one way of understanding the concerns of Heap and Varoufakis is that trying to extend the context beyond the game means we are extending the context too far. It is clear that time is a vital component of this relationship, given the impact of multiple games. And the final corollary is that although the trust relationships from each player to the other in this example are not necessarily symmetrical, the best outcome is achieved when they are. Most important, as the games proceed, each party is building an assurance that the other will perform certain actions—staying silent or betraying—when asked. It is interesting to note that in our definition of trust, there is no value associated with whether the outcome is positive or negative: each party can have an assurance that the other party will perform particular actions (always staying silent; alternatively betraying and then staying silent; staying silent in response to the first party's previous silence) without the outcome necessarily being positive.
Reputation and Generalised Trust
The Prisoner's Dilemma is not the only type of game covered in the field of game theory. There are many others; most are described as two-player games, though many can be extended to multiple participants (with no theoretical limit). The two-player games serve to give an example of how assurances about future behaviour—what we are referring to as trust relationships—can be formed between two participants.
What about the case for multiple participants? When I set about forming a trust relationship “from scratch”—with no prior interactions—to someone (let us call her Alice), then I do so based on my expectations, biases, and interactions over time. If, on the other hand, somebody (we will call her Carol) asks me for information on Alice in order for her to form an initial opinion, and then asks multiple other people who have also formed a trust relationship to Alice for the same or similar information, then something else is happening: Carol is finding out information not first-hand, but based on information from others.
The standard term for this is reputation, and it does not map directly from a trust relationship that Carol has to Alice but is a second-order construct. Carol cannot map my views on my trust relationship to Alice, alongside the views of others on their trust relationships to Alice, directly onto her own trust relationship to Alice: rather, she derives enough information to describe a reputation that she can associate with Alice and use to decide how best to form a trust relationship.
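The second-order nature of reputation can be made concrete with a small sketch. This is purely illustrative and not from the book: the names, scores, aggregation rule (a simple average), and the discount Carol applies are all assumptions chosen to show that the reputation she computes is distinct from any individual's trust relationship to Alice, and distinct again from the trust relationship she then chooses to form.

```python
def reputation(reports):
    """Combine third-party trust scores (0.0 to 1.0) into a single
    reputation value -- here, a simple unweighted average."""
    return sum(reports.values()) / len(reports)

# Scores that others report for their own trust relationships to Alice
# (hypothetical values for illustration).
reports_on_alice = {"me": 0.8, "bob": 0.6, "dave": 0.7}

alice_reputation = reputation(reports_on_alice)

# Carol then forms her *own* initial trust level from the reputation --
# for instance, discounting it because the evidence is second-hand.
carols_initial_trust = 0.5 * alice_reputation
```

Note that no individual report is copied into Carol's trust relationship: the reputation is an intermediate construct, and how heavily Carol discounts it is her own decision.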
In the Prisoner's Dilemma example, we discussed the best strategic approach but also noted that many humans often end up taking a much more positive approach than theoretical analysis would predict. One reason for this may be that humans do not always act rationally—that is, in ways that suggest informed self-interest. One alternative to a self-interested approach is known as generalised trust. Rather than assuming that all trust relationships need to be formed from an initial position of distrust, generalised trust suggests that the default should be to trust, in the absence of any evidence to suggest it would be wise to do the contrary.13 Given our interest in trust for security within computing, this approach may not be a very sensible one: it is much easier to assess risk by starting from a position of no trust and building up a trust relationship based on known precepts than it is to start from full trust and reduce it. Further, as Brian Rathburn points out,