You CAN Stop Stupid. Ira Winkler

rest of the system is designed appropriately, users will behave accordingly. At the same time, you must understand and anticipate that despite your best efforts, users sometimes do the wrong thing.

      For example, in one case that is unfortunately not isolated, author Ira Winkler ran a phishing simulation against a large company. Employees were sent a message that contained a link to a résumé. The sender claimed that one of the company's owners suggested they contact the recipient for help in referring them to potential jobs. If the employees clicked on the fake résumé, they received a message explaining that the email was fake and how they should have recognized it as such. In at least one case, a user, still believing the message was real, replied to say there was a problem with the attached résumé. This sort of phishing training exercise can improve some user behaviors, but it is certainly far from making users a foolproof part of the system.

      Anticipating how people will behave helps you design better systems to capitalize on predictable behaviors, leading to better security. Even though people make mistakes, good systems should anticipate that and not break when they do.

      When we use the term users throughout the book, it might seem that we are implying end users or low-level workers. The reality is that we mean anyone with any function. This can be managers, software developers, system administrators, accountants, security team members, and so on.

      Anyone who has a job function or access that can result in damage is technically a user. Administrators can accidentally delete data and cause system outages. Security guards can leave doors open. Managers can leave sensitive documents in public places. Auditors can make accounting errors. Everyone is a user at some level.

      Cloud services and remote workers create additional concerns, because you potentially lose control over your information and users. For example, if a user goes into Starbucks and uses the free WiFi to connect to your network, that scenario creates a whole new class of user, increasing the risk profile. Cloud services likewise change the profile of your users, given that access control methods change to allow someone to theoretically log in from anywhere in the world. The risk can be mitigated, but you have to plan for it.

      Perhaps some of the more overlooked groups of users are the people who are responsible for mitigating risk. They tend to look at the errors caused by others and believe that they themselves would have never caused the errors. This causes two types of problems.

      The first is that if they don't conceive that an error can occur, they cannot proactively mitigate it. We have served on software test teams, found problems with potential uses of the software, and reported them to the developers. The developers have often responded that “nobody would ever do that” and fought us on implementing the fixes.

      The second issue is that risk mitigation teams, such as information security, IT, physical security, and operations, don't perceive themselves as a potential source of errors. They do not believe they will make mistakes. Yet they can have tremendous privileges and access, which means their errors can create more damage than any normal user's.

      Although the natural assumption is that user-initiated loss happens through ignorance or carelessness, a great deal of damage is caused by malicious users. The 2018 Verizon Data Breach Investigations Report (DBIR) found that 28 percent of incidents result from malicious insiders who have clear intent to either steal something of value or create other forms of damage. That is a staggering number.

      When malice is involved, awareness efforts can sometimes even work against an organization. Awareness efforts typically educate people about how malicious actors accomplish their goals. This provides your malicious employees with information about how they too can commit those types of crimes. It also gives them ideas about how and where you allocate defensive resources and what countermeasures they need to bypass. Clever malicious insiders use this information to improve their own attacks.

      As a percentage of overall users, the number who will launch malicious attacks, let alone succeed at them, is fortunately small. Even so, the reality is that malicious users exist, so you must account for them. Various studies have shown that a small percentage of users cause most of the damage. This is intuitively obvious. Such users will always exist. The best you can do is acknowledge this reality and prepare for them.

      Users need to perform their jobs properly in a fundamentally safe and secure manner. You need to ensure that security is embedded in job functions and that people know how to perform those functions properly. This should be well defined, and just like any other job function, you should set the expectation for those users to follow those definitions. We would love to say that you should also expect users to be fundamentally aware of security concerns beyond what is specifically defined, but that will not likely happen on a consistent basis.

      Therefore, businesses should factor the users' limited awareness into their risk management calculations and plans. You should provide awareness training and opportunities to further reduce risk. Although we don't want organizations to rely too strongly on awareness, it is a critical component of any security program to reduce risk.

      Although user ignorance can be partially improved with training, carelessness is another matter. Assuming you have properly instructed users in how they should perform their functions, if some users still consistently violate policies and cause damage, you may need to take disciplinary action against them.

      It is important to follow our recommended strategies to ensure that your systems reduce the opportunities for users to make errors or cause malicious damage and then mitigate any remaining potential harm. Then regardless of whether the harmful actions are due to malice, ignorance, or carelessness, your environment should be far more powerfully positioned to minimize or even stop the resulting damage.

      Users are expected to, and do, make mistakes, and some attempt to maliciously cause damage. However, those actions do not have to result in damage. There is a tendency to place all of the blame for mistakes on users. Instead, a better approach is to recognize the relationship between users and loss and work to improve the system in which they exist.

      For this reason, we will use the term user-initiated loss (UIL), which we define as loss, in some form, that results from user action or inaction. As Chapter 2, “Users Are Part of the System,” discussed, users are not just employees but anyone who interacts with and can have an effect on your system. These actions can be a mistake, or they can be a deliberate, malicious act. Obviously, sometimes the system is attacked by an external entity, so the attack itself is not user-initiated.
