Security Engineering. Ross Anderson
The meanings of authenticity and integrity can also vary subtly. In the academic literature on security protocols, authenticity means integrity plus freshness: you have established that you are speaking to a genuine principal, not a replay of previous messages. We have a similar idea in banking protocols. If local banking laws state that checks are no longer valid after six months, a seven-month-old uncashed check has integrity (assuming it hasn't been altered) but is no longer valid. However, there are some strange edge cases. For example, a police crime scene officer will preserve the integrity of a forged check – by placing it in an evidence bag. (The meaning of integrity has changed in the new context to include not just the signature but any fingerprints.)
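The distinction between integrity and authenticity can be sketched in code. The following is a minimal illustration, not a protocol from the book: it pairs a MAC (integrity) with a timestamp check (freshness), and the 30-second window and helper names are my own assumptions.

```python
import hmac
import hashlib
import time

FRESHNESS_WINDOW = 30  # seconds a message counts as "fresh" (illustrative choice)

def make_message(key: bytes, payload: bytes):
    """Sender attaches a timestamp and a MAC computed over payload + timestamp."""
    ts = time.time()
    tag = hmac.new(key, payload + str(ts).encode(), hashlib.sha256).digest()
    return payload, ts, tag

def is_authentic(key: bytes, payload: bytes, ts: float, tag: bytes) -> bool:
    """Authenticity = integrity (the MAC verifies) + freshness (recent timestamp).
    A replayed message passes the integrity check but fails freshness."""
    expected = hmac.new(key, payload + str(ts).encode(), hashlib.sha256).digest()
    integrity = hmac.compare_digest(tag, expected)
    freshness = (time.time() - ts) < FRESHNESS_WINDOW
    return integrity and freshness
```

Like the stale check in the banking example, a replayed message here may have perfect integrity yet still be rejected, because authenticity demands freshness as well.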
The things we don't want are often described as hacking. I'll follow Bruce Schneier and define a hack as something a system's rules permit, but which was unanticipated and unwanted by its designers [1682]. For example, tax attorneys study the tax code to find loopholes which they develop into tax avoidance strategies; in exactly the same way, black hats study software code to find loopholes which they develop into exploits. Hacks can target not just the tax system and computer systems, but the market economy, our systems for electing leaders and even our cognitive systems. They can happen at multiple layers: lawyers can hack the tax code, or move up the stack and hack the legislature, or even the media. In the same way, you might try to hack a cryptosystem by finding a mathematical weakness in the encryption algorithm, or you can go down a level and measure the power drawn by a device that implements it in order to work out the key, or up a level and deceive the device's custodian into using it when they shouldn't. This book contains many examples. In the broader context, hacking is sometimes a source of significant innovation. If a hack becomes popular, the rules may be changed to stop it; but it may also become normalised (examples range from libraries through the filibuster to search engines and social media).
The last matter I'll clarify here is the terminology that describes what we're trying to achieve. A vulnerability is a property of a system or its environment which, in conjunction with an internal or external threat, can lead to a security failure, which is a breach of the system's security policy. By security policy I will mean a succinct statement of a system's protection strategy (for example, “in each transaction, sums of credits and debits are equal, and all transactions over $1,000,000 must be authorized by two managers”). A security target is a more detailed specification which sets out the means by which a security policy will be implemented in a particular product – encryption and digital signature mechanisms, access controls, audit logs and so on – and which will be used as the yardstick to evaluate whether the engineers have done a proper job. Between these two levels you may find a protection profile which is like a security target, except written in a sufficiently device-independent way to allow comparative evaluations among different products and different versions of the same product. I'll elaborate on security policies, security targets and protection profiles in Part 3. In general, the word protection will mean a property such as confidentiality or integrity, defined in a sufficiently abstract way for us to reason about it in the context of general systems rather than specific implementations.
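The example policy quoted above is concrete enough to express directly as a check. The sketch below is mine, not the book's: the function name, the use of `Decimal`, and the representation of approvals as a list of manager identifiers are all illustrative assumptions.

```python
from decimal import Decimal

TWO_MANAGER_THRESHOLD = Decimal("1000000")  # "$1,000,000" from the example policy

def satisfies_policy(credits, debits, approvals) -> bool:
    """Check the example policy: in each transaction the sums of credits
    and debits are equal, and any transaction over the threshold carries
    approvals from at least two distinct managers."""
    total_credits = sum(credits, Decimal(0))
    total_debits = sum(debits, Decimal(0))
    if total_credits != total_debits:
        return False  # books don't balance
    if total_credits > TWO_MANAGER_THRESHOLD and len(set(approvals)) < 2:
        return False  # large transaction lacks dual authorization
    return True
```

A security target would then go further, specifying *how* such a check is enforced: which access controls prevent a single manager from forging the second approval, and which audit logs record the decision.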
This somewhat mirrors the terminology we use for safety-critical systems, and as we are going to have to engineer security and safety together in ever more applications it is useful to keep thinking of the two side by side.
In the safety world, a critical system or component is one whose failure could lead to an accident, given a hazard – a set of internal conditions or external circumstances. Danger is the probability that a hazard will lead to an accident, and risk is the overall probability of an accident. Risk is thus hazard level combined with danger and latency – the hazard exposure and duration. Uncertainty is where the risk is not quantifiable, while safety is freedom from accidents. We then have a safety policy which gives us a succinct statement of how risks will be kept below an acceptable threshold (and this might range from succinct, such as “don't put explosives and detonators in the same truck”, to the much more complex policies used in medicine and aviation); at the next level down, we might find a safety case having to be made for a particular component such as an aircraft, an aircraft engine or even the control software for an aircraft engine.
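The definitions above are qualitative, but one crude way to see how hazard, danger and latency combine is a toy multiplicative model. This is purely illustrative and is my assumption, not a formula from the text: real safety cases use far richer models.

```python
def accident_risk(p_hazard: float, danger: float,
                  exposure_hours: float, horizon_hours: float) -> float:
    """Toy model: overall risk = probability the hazard arises (hazard level)
    x probability a hazard leads to an accident (danger)
    x fraction of the period during which the system is exposed (latency).
    Exposure is capped at the full horizon."""
    latency = min(exposure_hours / horizon_hours, 1.0)
    return p_hazard * danger * latency
```

On this reading, halving the exposure time halves the risk even if the hazard and danger are unchanged, which is why safety policies often attack latency (e.g. not transporting explosives and detonators together) rather than the hazard itself.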
1.8 Summary
‘Security’ is a terribly overloaded word, which often means quite incompatible things to different people. To a corporation, it might mean the ability to monitor all employees' email and web browsing; to the employees, it might mean being able to use email and the web without being monitored.
As time goes on, and security mechanisms are used more and more by the people who control a system's design to gain some commercial advantage over the other people who use it, we can expect conflicts, confusion and the deceptive use of language to increase.
One is reminded of a passage from Lewis Carroll:
“When I use a word,” Humpty Dumpty said, in a rather scornful tone, “it means just what I choose it to mean – neither more nor less.”

“The question is,” said Alice, “whether you can make words mean so many different things.”

“The question is,” said Humpty Dumpty, “which is to be master – that's all.”
The security engineer must be sensitive to the different nuances of meaning that words acquire in different applications, and be able to formalize what the security policy and target actually are. That may sometimes be inconvenient for clients who wish to get away with something, but, in general, robust security design requires that the protection goals are made explicit.
Note
1 The law around companies may come in handy when we start having to develop rules around AI. A company, like a robot, may be immortal and have some functional intelligence – but without consciousness. You can't jail a company but you can fine it.
CHAPTER 2 Who Is the Opponent?
Going all the way back to early time-sharing systems we systems people regarded the users, and any code they wrote, as the mortal enemies of us and each other. We were like the police force in a violent slum.
– ROGER NEEDHAM
False face must hide what the false heart doth know.
– MACBETH
2.1 Introduction
Ideologues may deal with the world as they would wish it to be, but engineers deal with the world as it is. If you're going to defend systems against attack, you first need to know who your enemies are.
In the early days of computing, we mostly didn't have real enemies; while banks and the military