Trust, But Verify
Without wanting to focus too much on mistrust, we should not assume good intent when interacting with other humans. Humans do not always do what they say they will do, as we all know from personal experience. In other words, they are not always trustworthy, which means our trust relationships to them will not always yield positive outcomes. Moreover, even if we take our broader view of trust relationships, where we say that the action need not be positive as long as it is what we expect, humans are not always consistent, so even those broader expectations will not always be met.
There is a well-known Russian proverb, popularised in English by President Ronald Reagan in the 1980s as “trust, but verify”. He was using it in the context of nuclear disarmament talks with the Soviet Union, but it has been widely adopted by the IT security community. The idea is that while trust is useful—and important—verification is equally so. Of course, one can only verify the actions—or, equally, inactions—associated with a trust relationship over time: it makes no sense to talk about verifying something that has not happened. We will consider in later chapters how this aspect of time is relevant to our discussions of trust; but Nan Russell, writing for Psychology Today about trust for those in positions of leadership within organisations,44 suggests that “trust, but verify” is the best strategy only when the outcome—in our definition, the actions that the trustor has assurances will be performed by the trustee—is more important than the relationship itself. Russell's view is that continuous verification is likely to signal to the trustee that the trustor distrusts them, leading to a negative feedback loop in which the trustee fails to perform as expected, confirming the trustor's distrust. What this exposes is that the trust relationship to which Russell refers (from the leader to the person being verified) actually exists alongside another relationship (from the person being verified to the leader), and that actions related to one may impact the other. This is another example of how important it is to define trust relationships carefully, particularly in situations between humans.
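In computer systems, “trust, but verify” often becomes a concrete operational pattern: grant trust on the basis of an initial decision, but keep checking over time that the trustee still behaves as expected. The following is a minimal sketch in Python, assuming a hypothetical component whose on-disk binary can be re-hashed and compared against a digest recorded when it was first trusted; the function names, digest, and interval are all illustrative, not drawn from any particular product.

    import hashlib
    import time
    from pathlib import Path

    # Digest recorded when the component was first trusted (placeholder value;
    # in practice it might come from a signed manifest or attestation service).
    EXPECTED_SHA256 = "0" * 64

    def verify_component(path: Path, expected: str) -> bool:
        """Re-check that a trusted component still matches its recorded digest."""
        actual = hashlib.sha256(path.read_bytes()).hexdigest()
        return actual == expected

    def trust_but_verify(path: Path, expected: str, interval_s: float = 3600) -> None:
        """Keep relying on the component, but re-verify it at every interval."""
        while True:
            if not verify_component(path, expected):
                # Verification has failed: the basis for the trust relationship
                # is gone, so stop relying on the component and raise the alarm.
                raise RuntimeError(f"integrity check failed for {path}")
            time.sleep(interval_s)

Note that, as the proverb implies, each check can only confirm what the component has (or has not) already done: verification is inherently an activity carried out over time.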
Attacks from Within
To return to the point about not necessarily trusting other humans, there is often an assumption that all members of an organisation or institution will have intentions broadly aligned with each other, with the institution, or with the institution's aims. This leads to trust relationships between members of the same organisation based solely on their membership of that organisation, and not on any other information. This, we might expect, would be acceptable and, indeed, sensible, as long as the context for expected actions is restricted to activities associated with the organisation. If, say, I join a netball club, another member of the club might well form a trust relationship to me that expects me to lobby our local government officers for funding for a new netball court, particularly if one of the club's stated aims is to expand by getting one or more new courts.
But what if I have daughters who play rugby and who train at an adjacent rugby club that also wishes to expand? I may have joined the netball club with no intention of lobbying for increased resources for netball, but with the plan of lobbying the local government with an alternative proposal for resources, directed instead towards my daughters' rugby club. This might seem like an underhanded trick, but it is a real one, and it can go even further than external actions, extending to plans to change the stated aims or rules of the organisation itself. If I can get enough other members of the rugby club to join the netball club, then the constitution of the netball club, if not sufficiently robust, may be vulnerable to a general vote to change the club's goals: to stay with its existing resources, or even to reduce the number of courts, ceding them to the adjacent rugby club.
Something a little like this began to happen in the UK around 2015, when animal rights campaigners demanded that the National Trust, a charity that owns large tracts of land, ban all hunting with dogs on its land. Hunting wild animals with dogs was banned in England and Wales (where the National Trust holds much of its land) in 2004, but some animal rights campaigners complain that trail hunting—an alternative where a previously laid scent is followed instead—can be used as a cover for illegal hunting or lead to accidental fox chases. The policy of the National Trust—at the time of writing—is that trail hunting is permitted on its land, given the appropriate licences, “where it is consistent with our conservation aims and is legally pursued”.45 Two years later, in 2017, the League Against Cruel Sports supported46 a campaign by those opposed to any type of hunting to join the National Trust as members, with the aim of forcing a vote that would change the policy of the organisation and lead to a ban on any hunting on its land. This takes the concept of “revolt from within” in a different direction because the idea is to try to recruit enough members who are at odds with at least one of the organisation's policies to effect a change.
This is different to a single person working from within an organisation to try to subvert its aims or policies. Assuming that the employee was hired in good faith, the organisation expects their actions to be aligned with its policies. If the employee instead works to subvert those policies, then they are performing actions at odds with the trust relationship the organisation has with them: this assumes that we are modelling the contract between an organisation and an employee as a trust relationship from the former to the latter, an issue to which we will return in Chapter 8, “Systems and Trust”. In the case of “packing” the membership with those opposed to a particular policy or set of policies, by contrast, those joining are doing so with the express and stated aim of subverting that policy, so there is no break in expectations at the individual level.
It may seem that we have moved a long way from our core interest in security and computer systems, but attacks similar to those outlined above are very relevant, even if the trust models may be slightly different. Consider the single attacker who is subverting an organisation from within. This is how we might model the case where a component that is part of a larger system is compromised by a malicious actor—whether part of the organisation or not—and serves as a “pivot point” from which to attack other components or systems. Designing systems to be resilient to these types of attacks is a core part of the practice of IT or cybersecurity, and one of our tasks later in this book will be to consider how we can use models of trust to help that practice.

The packing of members to subvert an organisation is extremely close, in terms of the mechanism used, to an attack on certain blockchains and crypto-currencies known as a 51% attack. At a simplistic level, a blockchain operates one of a variety of consensus mechanisms to decide what should count as a valid transaction and be recorded as part of its true history. Some of these consensus mechanisms are vulnerable to an attack where enough active contributors (miners) to the blockchain can overrule the true history and force an alternative that suits their ends, in a similar way to that in which enough members of an organisation can vote in a policy that is at odds with the organisation's stated aims. For at least some of these consensus mechanisms, the percentage required is a simple majority: hence the figure of 51%. We will return to blockchains later in this book, as their trust models are interesting, and their operation turns out to be much more complex than widely held assumptions suggest.
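To make the voting arithmetic concrete, here is a minimal sketch—a toy model of simple-majority voting, not any real blockchain's consensus protocol—in which whichever history a strict majority of miners backs becomes the accepted one. The miner counts and history labels are purely illustrative.

    from collections import Counter

    def consensus(votes):
        """Simple-majority consensus: the history backed by a strict
        majority of miners becomes the accepted history."""
        winner, count = Counter(votes).most_common(1)[0]
        if count * 2 <= len(votes):
            raise ValueError("no strict majority: no consensus reached")
        return winner

    # Ten miners: six honest, four colluding attackers.
    votes = ["true history"] * 6 + ["forged history"] * 4
    print(consensus(votes))   # -> true history

    # The attackers win over two of the honest miners: they now control
    # 6 of 10 votes, a simple majority, and can force their alternative.
    votes = ["true history"] * 4 + ["forged history"] * 6
    print(consensus(votes))   # -> forged history

The parallel with the membership-packing attack is direct: nothing in the mechanism itself is broken; the attackers simply acquire enough legitimate votes to redefine what counts as the organisation's—or the blockchain's—true position.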
The Dangers of Anthropomorphism