To build on the previous pharmaceutical outsourcing example, imagine applying a method that pharmaceutical companies already know well from the clinical testing of drugs. Suppose that nearly all of the major health products companies (makers of drugs, medical instruments, hospital supplies, etc.) are recruited for a major risk management experiment. Let's say that a hundred different product lines that will be outsourced to China are given one particular risk management method to use. Another hundred product lines, again from various companies, implement a different risk management method. For a period of five years, each product line uses its new method to assess the risks of various outsourcing strategies. Over this period, the first group experiences a total of twelve events resulting in adverse health effects traced to problems with the overseas source. During the same period, the second group has only four such events, without showing a substantial increase in manufacturing costs.
Of course, it would seem unethical to subject consumers to an experiment with potentially dangerous health effects just to test different risk management methods. (Patients in drug trials are at least volunteers.) But if you could conduct a study like the one just described, the results would be fairly good evidence that one risk management method was much better than the other. If we did the math (which I describe in more detail later, with an example on the website www.howtomeasureanything.com/riskmanagement), we would find that this result would be unlikely to occur by pure chance if the underlying probabilities of such events were, in fact, no different between the two methods. In both groups, there were companies that experienced unfortunate events and companies that did not, so we can infer something about the performance of the methods only by looking at the aggregate of all their experiences.
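To make the "did the math" step concrete, here is a minimal sketch of one way such a result could be tested. It treats each product line as a binary outcome (at least one adverse event over the five years, or none) and applies Fisher's exact test to the resulting two-by-two table. The counts match the hypothetical example above; the binary framing and the choice of test are my assumptions for illustration, not a procedure prescribed in this book.

```python
# A minimal significance check for the hypothetical experiment above.
# Assumption: of the 100 product lines per group, 12 lines (method A)
# and 4 lines (method B) had at least one adverse-health event.
from scipy.stats import fisher_exact

table = [
    [12, 100 - 12],  # method A: lines with events, lines without
    [4, 100 - 4],    # method B: lines with events, lines without
]

# One-sided test: is method A's event rate greater than method B's?
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, p-value = {p_value:.3f}")
# A small p-value (a few percent under these assumptions) suggests the
# difference is unlikely to be pure chance at conventional thresholds.
```

Fisher's exact test is just one reasonable choice here; a two-proportion test or a comparison of Poisson event rates would make the same qualitative point.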
Although this particular study might be unethical, there have been some large studies similar to this that investigated business practices. For example, in July 2003, Harvard Business Review published the results of a study of 160 organizations that measured the effectiveness of more than two hundred popular management tools, such as TQM, ERP, and so on.15 Independent external reviews of the degree to which the various management tools had been implemented were then compared to shareholder return over a five-year period. In an article titled “What Really Works,” the researchers concluded, to their surprise, that “most of the management tools and techniques we studied had no direct causal relationship to superior business performance.” That would be good to know if your organization were about to make a major investment in one of these methods.
Another study, based on older but more relevant data, did look at alternative methods of risk management among insurance companies. It was a detailed analysis of the performance of insurance companies in mid-nineteenth-century Great Britain, when actuarial science was just emerging. Between 1844 and 1853, insurance companies were starting up and failing at a rate more familiar to Silicon Valley than to the insurance industry: 149 insurance companies formed during this period, and only 59 of them survived it. The study determined that the insurance companies that used statistical methods were more likely to stay in business (more on this study later).16 Actuarial methods that were at first considered a competitive advantage became the norm.
Again, this is the hard way to measure risk management methods. The best case for an organization is to rely on research done by others instead of conducting its own studies, assuming a relevant study can be found. Or, as with the insurance industry study, the data may all be historical and already available, if you have the will to dig them up. Fortunately, there are alternative methods of measurement.
Direct Evidence of Cause and Effect
Of course, a giant experiment is not usually very practical, at least for individual companies to conduct by themselves. Fortunately, we have some other ways to answer this question without necessarily conducting our own massive controlled experiments. For example, there are situations in which a risk management method caught what obviously would have been a disaster, such as detecting a bomb in a suitcase only because a new plastic explosives–sniffing device had been put in place. Another example would be an IT security audit that uncovered an elaborate embezzling scheme. In such cases, we know it would have been extremely unlikely to discover—and address—the risk without that particular tool or procedure. Likewise, there are examples of disastrous events that obviously would have been avoided if some prudent amount of risk management had been applied. For example, if a bank was overexposed on bad debts and reasonable procedures would never have allowed such an overexposure, then we can confidently blame the risk management procedures (or lack thereof) for the problem.
But direct evidence of cause and effect is not as straightforward as it might at first seem. There are times when a risk management effort appears to avert one risk but exacerbates another that is harder to detect. For example, the FAA currently allows parents traveling with a child under the age of two to purchase a ticket only for the adult, who holds the child on his or her lap. Suppose the FAA were considering requiring parents to purchase seats for each child, regardless of age. If we looked at a crash in which every separately seated toddler survived, would that be evidence that the new policy reduced risk? Actually, no, even if we assume it is clear that particular children are alive because of the new rule. A study already completed by the FAA found that changing the “lap children fly free” rule would increase total fares for the traveling families by an average of $185, causing one-fifth of them to drive instead of fly. When the much higher fatality rate of driving is taken into account, it turns out that changing the rule would cost more lives than it saves. So we still need to check even the apparently obvious instances of cause and effect against some other, independent measure of overall risk. The danger of this approach is that even when a cause-effect relationship seems clear, it may amount to no more than anecdotal evidence. We still need other ways to check our conclusions about the effectiveness of risk management methods.
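To see how the FAA's conclusion can come about, here is a rough back-of-the-envelope sketch of the trade-off. Only the $185 fare increase and the one-fifth diversion rate come from the study as described above; every other number (trip volume, trip distance, fatality rates, lives saved by child seats) is an illustrative placeholder invented for this sketch, not an FAA figure.

```python
# A hypothetical comparison of the two policies. All rates below are
# illustrative placeholders, not FAA data.

lap_child_trips_per_year = 6_000_000   # assumed number of lap-child trips
diversion_rate = 0.20                  # from the study: 1 in 5 families drive instead
avg_trip_miles = 800                   # assumed average trip distance

# Assumed fatality rates per passenger-mile (driving is roughly two
# orders of magnitude more dangerous per mile in most published data)
driving_fatality_rate = 0.7 / 100_000_000
flying_fatality_rate = 0.007 / 100_000_000

# Assumed infant lives saved per year by requiring child seats on aircraft
lives_saved_by_seats = 0.4             # placeholder

diverted_miles = lap_child_trips_per_year * diversion_rate * avg_trip_miles
added_road_deaths = diverted_miles * driving_fatality_rate
avoided_air_deaths = diverted_miles * flying_fatality_rate

net_change = added_road_deaths - avoided_air_deaths - lives_saved_by_seats
print(f"Added road deaths/yr:  {added_road_deaths:.2f}")
print(f"Net change in deaths:  {net_change:+.2f}  (positive = rule costs lives)")
```

Under these made-up rates, the added road deaths dwarf the lives saved in the air, which is the qualitative point of the FAA study; the actual analysis, of course, used the agency's own data and models.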
Component Testing
Lacking large controlled experiments or obvious instances of cause and effect, we still have ways of evaluating the validity of a risk management method. The component testing approach looks at the gears of risk management instead of the entire machine. If the entire method has not been scientifically tested, we can at least look at how specific components of the method have fared under controlled experiments. Even if the data are from different industries or from laboratory settings, consistent findings from several sources should give us some information about the problem.
As a matter of fact, quite a lot of individual components of larger risk management methods have been tested exhaustively. In some cases, it can be conclusively shown that a component adds error to the risk assessment or at least doesn't improve anything. We can also show that other components have strong theoretical backing and have been tested repeatedly with objective, scientific measures. Here are a few examples of component-level research that are already available:
The synthesis of data: One key component of risk management is how we synthesize historical experience. Where we rely on experts to synthesize data and draw conclusions, we should look at research into the relative performance of expert opinion versus statistical models.
Known human errors and biases: If we rely on expert opinion to assess probabilities, we should be interested in reviewing the research on how well experts do at assessing the likelihood of events, their level of inconsistency, and common biases. We should consider research into how hidden or explicit incentives or irrelevant factors affect judgment. And we should know how estimates can be improved by accounting for these known errors and biases; the sketch below illustrates one simple way such performance can be scored.
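As one concrete illustration of "how well experts do at assessing the likelihood of events," here is a minimal sketch that scores a set of probability estimates against actual outcomes using the Brier score (the mean squared error of the forecast probabilities). The estimates and outcomes are invented for the example; the Brier score is a standard measure in forecasting research, though by no means the only one.

```python
# A minimal sketch of scoring probability estimates against outcomes.
# The estimates and outcomes below are invented for illustration.

def brier_score(probabilities, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes.
    Lower is better; constant 50% forecasts score 0.25."""
    return sum((p - o) ** 2 for p, o in zip(probabilities, outcomes)) / len(outcomes)

# An expert's stated probabilities that each of ten events would occur...
expert_probs = [0.9, 0.9, 0.8, 0.8, 0.9, 0.7, 0.95, 0.8, 0.9, 0.85]
# ...and what actually happened (1 = occurred, 0 = did not).
actual = [1, 0, 1, 0, 1, 0, 1, 0, 0, 1]

print(f"Expert Brier score:   {brier_score(expert_probs, actual):.3f}")
print(f"50/50 baseline score: {brier_score([0.5] * 10, actual):.3f}")
# This expert assigned about 85% on average to events that occurred only
# half the time, so the overconfident forecasts score worse (0.348) than
# the uninformative 50/50 baseline (0.250).
```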