Trust in Computer Systems and the Cloud. Mike Bursell
There is a growing corpus of research and writing on how humans build trust relationships with each other and with organisations, and this is beginning to be applied to how humans and organisations trust computer systems. What is often missing is the realisation that interactions between computer systems themselves—case four in our earlier examples—are frequently modelled in terms of trust relationships. But because these models lack the rigour and theoretical underpinnings that would allow strong statements to be made about what is really going on, we are left unable to discuss risk and risk mitigation in any detail.
Why does this matter, though? The first answer is that when you are running a business, you need to know that all the pieces are correct and doing the correct thing in relation to each other. This set of behaviours and relationships makes up a system, and the pieces are its components, a subject to which we will return in Chapter 5: The Importance of Systems. We can think of this as similar to ensuring that your car is made up of the correct parts, placed in the correct locations. If you have the wrong brake cable, then you may find that when you press the relevant pedal, your brakes don't engage. In the same way, if you have multiple computer systems trying to talk to each other, and your database is not correctly integrated with the other components, you may find that when somebody places an order with your company, the order is not processed. When discussing these relationships and integrations in computing and IT, we sometimes talk about one component having a contract that it offers to other components. This contract describes the expected behaviour of the component in terms of the inputs it receives and the output it provides. Providers of such components generally try to keep these contracts as firm and unchanging across versions as possible because any changes can significantly impact the behaviour of the rest of the system.
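To make the idea of a contract concrete, here is a minimal sketch in Python. All the names (OrderStore, save_order, and so on) are hypothetical, invented purely for illustration; the point is that the contract states what callers may expect, independently of how any particular component implements it.

```python
from typing import Protocol


class OrderStore(Protocol):
    """The 'contract' a storage component offers to the rest of the system:
    expected inputs and outputs, stated independently of implementation."""

    def save_order(self, order_id: str, amount_usd: float) -> bool: ...
    def get_order(self, order_id: str) -> float: ...


class InMemoryOrderStore:
    """One component that honours the contract; callers need not know
    whether orders live in memory, a database, or a remote service."""

    def __init__(self) -> None:
        self._orders: dict[str, float] = {}

    def save_order(self, order_id: str, amount_usd: float) -> bool:
        self._orders[order_id] = amount_usd
        return True

    def get_order(self, order_id: str) -> float:
        return self._orders[order_id]


# Any component satisfying the contract can be swapped in without
# changing the code that depends on it.
store: OrderStore = InMemoryOrderStore()
store.save_order("A123", 99.50)
assert store.get_order("A123") == 99.50
```

So long as every implementation keeps to the stated inputs and outputs, the components integrating with it can be written, tested, and replaced independently.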
Think, for instance, of a component that is calculating the risk associated with an event. It takes as input a probability in a range from 0 to 1 and a dollar amount and then outputs the product of the two according to our formula. What would happen if a new version was released that, instead of taking the probability as a range from 0 to 1, expected a percentage (in the range from 0 to 100)? This would be a change to the contract, and any components integrated with this one—either for input or output—would need to be informed of the change and possibly updated in order for the system to work as expected.
To return to our definition:
“Trust is the assurance that one entity holds that another will perform particular actions according to a specific expectation”.
The contract is the “specific expectation” in this case. The contract is usually defined with an application programming interface (API), either expressed using one of a common set of descriptive languages or specific to the particular language in which the component is written. The first reason that being able to discuss risk management and mitigation is important, then, is to allow us to construct a business by integrating various systems along the lines of the contracts they provide. The second reason for its importance is security.
Defining Correctness in System Behaviour
Earlier, we skirted slightly around the idea of correctness in terms of components and their behaviours, but one way of thinking about security is that it is concerned with maintaining the correctness of the behaviour of your systems in the face of attempts by malicious actors to make them act differently. Whether these malicious actors wish to use your systems to further their own ends—to mine crypto-currency, exfiltrate user data, attack other targets, or host their own content—or to disrupt your business, the outcome is the same: they want to use your systems in ways you did not intend.
To guard against this, you need to know:
How the systems should act
How to recognise when they do not act as expected
How to fix any problem that arises
The first goal is an expression of our trust definition, and the second is about monitoring to ensure that the trust is correctly held. The third—fixing the problem—is about remediation. All three of these goals may seem very obvious, but it is easy to miss that many security breakdowns arise precisely because trust is not explicitly stated and monitored. The key thesis of this book is that without a good understanding of the trust relationships between systems in the contexts in which they operate or might operate, it is impossible to understand the possibilities available for malicious compromise (and, indeed, unintentional malfunction). Many attacks involve taking systems and using them in ways—in contexts—not considered by those who designed and/or operate them. A full understanding of trust relationships allows better threat modelling, stronger defences, closer monitoring, and easier remediation when things go wrong, partly because defining the contexts in which behaviours are defined allows for better consideration of where and how systems should be deployed.
We can state our three aims differently. To keep our systems working the way we expect, we need to know:
What trust relationships exist
How to recognise when a trust relationship has broken
How to re-establish trust
There is, of course, another thing we need to know: how to defend our systems in the first place and design them so that they can be defended. These are topics we will address later in the book.
Notes
1 I sympathise with anyone tasked with translating this book: “trust” is a concept that is very culturally and linguistically situated.
2 This book is not a work of literary criticism, and we will generally be steering clear of Derrida, Foucault, deconstructionism, post-structuralism, and other post-modernist agendas.
3 Or at least what appears to be a human—a topic to which we will return in a later chapter.
4 Gambetta, 1988.
5 Hern, 2017.
6 There is an interesting point about grammar here. In British English, collective nouns or nouns representing an organisation, such as Google, can often take either a singular or a plural verb form. In the US, they almost always take the singular. So, saying “The company were adamant that they…”, an easy way to show that there are multiple actors possibly being represented here, works in British English but not in US English. Thus British English speakers may be more likely than US readers to consider an organisation as a group of individuals rather than as a monolithic corporate whole.
7 Wikipedia, “Boeing 737 MAX groundings”, 2021.