You CAN Stop Stupid. Ira Winkler
In an ideal world, governance documents should cover how people are to do their jobs in a way that does not make them susceptible to attacks and in a way that their work processes do not result in losses. This includes how specific actions are to be taken and how specific decisions are to be made in performing job functions.
That ideal world represents the embodiment of a system. A good example of this is McDonald's. Generally, McDonald's expects to hire minimally qualified people to deliver a consistent product anywhere in the world. This involves specifying a process and using technology to consistently implement that process. Although people may be involved in performing a function, such as cooking and food preparation, technology is now driving those processes. A person might put the hamburgers on a grill, but the grill is automated to cook the hamburgers for a specific time at a given temperature. The same is true for french fries. Even the amount of ketchup that goes on a hamburger is controlled by a device. Robots control the drink preparation. McDonald's is now distributing kiosks to potentially eliminate cashiers. Although a fast-food restaurant might not seem to be technology-related, the entire restaurant has become a system, driven by governance that is implemented almost completely through technology.
We Propose a Strategy, Not Tactics
We described in the book's introduction how the scuba and loss prevention industries treat mitigating loss as a comprehensive strategy. When organizations fail to do this, they implement scattered tactics that are neither cohesive nor mutually supporting. For example, if you believe that users create loss because of an awareness failing, and that the solution is therefore better awareness, you are focusing on a single countermeasure. This approach will fail.
A comprehensive strategy is required to mitigate damage resulting from user actions. This book provides such a strategy. This strategy is something that should be applied to all business functions, at all levels of the organization. Wherever there can be a loss resulting from user actions or inactions, you need to proactively determine whether that loss is worth mitigating and then how to mitigate it.
NOTE Implementing the strategy across the entire business at all levels doesn't mean that every user needs to actively know and apply the depth and the breadth of the entire strategy. (The fry cook doesn't need to know how the accounting department works, and vice versa.) The team that implements the strategy coordinates its efforts in a way that informs, directs, and empowers every user to accomplish the strategy in whichever ways are most relevant for their role.
In an ideal world, you will always look at any user-involved process, determine what damage the user can initiate, and remove the opportunity to cause damage as completely as possible. If the opportunity for damage cannot be completely removed, you will then specify for the user how to make the right decisions and take the appropriate actions to manage the possibility of damage. You must then accept that some users will inevitably act in ways that lead to damage, so you consider how to detect the damaging actions and mitigate the potential for resulting loss as quickly as possible.
Minimally, when you come across a situation where a user creates damage, you should no longer think, “Just another stupid user.” You should immediately consider why the user was in a position to create damage and why the organization wasn't more effective in preventing it.
2 Users Are Part of the System
Users inevitably make mistakes. That is a given. At the same time, within an environment that supports good user behavior, users behave reasonably well. The same weakest link who creates security problems and damages systems can also be an effective countermeasure that proactively detects, reports, and stops attacks.
While the previous statements are paradoxically true, the reality is that users are inconsistent. They are not computers that can be expected to perform the same function identically from one occurrence to the next. More important, not all users are alike. There is a continuum across which you can expect a range of user behaviors.
Understanding Users' Role in the System
It is a business fact that users are part of the system. Some users might be data entry workers, accountants, factory workers, help desk responders, team members performing functions in a complex process, or other types of employees. Other users might be outside the organization, such as customers on the Internet or vendors performing data entry. Whatever the case, any person who accesses the system must be considered a part of the system.
Clearly, you have varying degrees of authority and responsibility for each type of user, but users remain autonomous, and you never have complete authority over them. Therefore, considering users to be anything other than part of the system overlooks their capacity to introduce errors and cause security breaches, and thus leads to failure. The security and technology teams must treat users as one more part of the system that needs to be facilitated and secured. However, because you lack absolute authority over them, from a business perspective you must never treat users as a resource that can be consistently relied upon.
It is especially critical to note that the technology and security teams rarely have any control over the hiring of users. Depending upon the environment, the end users might not be employees, but potentially customers and vendors over whom there is relatively little control. The technology and security teams have to account for every possible end user of any ability.
Given the limited control that technology and security teams have over users, it is not uncommon for some of these professionals to think of users as the weakest link in the system. However, doing so is one of the biggest cop-outs in security, if not in technology management as a whole.
Users are not a “necessary evil.” They are not an annoyance to be endured when they have questions. Looking down upon users ignores the fact that they are a critical part of the system that security and technology teams are responsible for. In some cases, they might be the reason that these teams have a job in the first place.
It is your job to ensure that you proactively address any expected areas of loss in the system, including users. Users can only be your weakest link if you fail to mitigate expected user-related issues such as user error and malfeasance.
Perhaps one of the more notable examples is that of the B-17 bomber. Clearly, a pilot is a critical part of flying the airplane, not just a "user" in the most limited sense of the term. When the B-17 underwent its first test flights in 1935, it was the most complex airplane of its time, and the test pilots chosen were among the top pilots in the country. Yet these top test pilots crashed the plane. The reason was that they failed to disengage a locking mechanism on the flight controls.
It was determined that the pilots were overwhelmed by the complexity and made a simple mistake. As the pilots were a critical part of the system, removing them was not an option. They were highly experienced and trained professionals, so the problem was not that they were poorly trained. The government could have sent the pilots for additional training, but retraining top pilots in the basics of how to fly the plane was not going to be an efficient approach. Instead, they recognized that the problem was that the complexity of the airplane was overwhelming.
The solution was the implementation of a checklist to detail every basic step a pilot had to take to ensure the proper functioning of the airplane. Similar problems have since been solved for astronauts and surgeons, among countless other critical “pieces of the system.”
Users Aren't Perfect
Users can be both a blessing and a curse. For the