The Failure of Risk Management. Douglas W. Hubbard
Baxter International, Inc. was receiving reports of dangerous adverse reactions to its Chinese-manufactured blood-thinning drug called heparin. To its credit, by mid-January 2008, Baxter had voluntarily recalled some lots of the multidose vials of the drug. By then, the FDA was considering a mandatory recall but had not yet ordered one because it believed other suppliers might not be able to meet demand for this critical drug. The FDA reasoned that the risk a shortage would pose to patients requiring heparin therapy would be higher than the risk posed by the suspect lots. (I have no idea how much risk analysis went into that decision.)
By February, the FDA had determined that the supply of heparin by other manufacturers was adequate and that Baxter should proceed with the recall of various types of heparin products. At the beginning of the recall in February, the FDA had linked four deaths to the Chinese-manufactured heparin and by March the number had grown to nineteen deaths. By May 2008, the FDA had “clearly linked” a total of eighty-one deaths and 785 severe allergic reactions to the drug.
The risks of outsourcing drug production to China were always high, and the fact that some firms were at least attempting to develop a risk management method—regardless of its effectiveness—indicates that the industry was at least aware of the risk. The FDA is entrusted with inspecting the operations of any drug manufacturer selling products in the United States, including foreign-based factories, but by March 2008 the FDA had inspected just 16 of the 566 Chinese drug manufacturers. Most drugs used in the United States are now produced overseas, and most of those come from China. The scale of the problem easily justifies the very best risk analysis available.
Obviously, we can't be certain with only this information that the industry's lack of more sophisticated risk management for overseas drug manufacturing was the direct cause of the heparin incident. Even if the industry had used more sophisticated methods, such as those it already uses for stop-gate analysis, we could not be certain that a similar problem would not still have occurred. And, because the entire industry was unsophisticated in this area of risk management, there is certainly no reason to single out Baxter as a particularly bad example. This anecdote, by definition, is merely a single sample of the types of events that can occur and, by itself, is not sufficient to draw scientifically justified conclusions.
For any risk management method used in the pharmaceutical industry or any other industry, we must ask, again, “How do we know it works?” If we can't answer that question, then our most important risk management strategy should be to find a way to answer it and adopt a risk assessment and risk mitigation method that does work.
WHY IT'S HARD TO KNOW WHAT WORKS
One reason we should be skeptical of the perceived effectiveness of any decision-making method (not just in risk management) is that we may be susceptible to a kind of “analysis placebo” effect. You are probably familiar with how placebos are used in the pharmaceutical industry. To test the effectiveness of a new drug, researchers don't simply ask whether patients, or even the doctors, feel the drug is working. To determine that the new drug is really working, patients taking the real drug have to do measurably better than those taking the placebo (which may be a sugar pill). Which patients get the placebo is even hidden from the doctors so that their diagnoses are not biased.
An analysis placebo produces the feeling that some analytical method has improved decisions and estimates even when it has not. Placebo means “to please,” and, no doubt, the mere appearance of structure and formality in risk management is pleasing to some. In fact, the analogy to a placebo is going a bit too easy on risk management. In medical research, a placebo can actually produce a positive physiological effect beyond the mere perception of benefit. But when we use the term in the context of risk management, we mean there is literally no benefit other than the perception of benefit. Several studies in very different domains show how any of us can be susceptible to this effect:
Sports picks: A 2008 study at the University of Chicago tracked the probabilities that participants assigned to the outcomes of sporting events when given varying amounts of information about the teams, without being told the names of the teams or players. As the fans were given more information about the teams in a given game, their confidence that they were picking a winner increased, even though their actual chance of picking the winner stayed nearly flat no matter how much information they were given.2 In another study, sports fans were asked to collaborate with others to improve predictions. Again, confidence went up after collaboration but actual performance did not. Indeed, the participants rarely even changed their views from before the discussions. The net effect of collaboration was to seek confirmation of what participants had already decided.3
Psychological diagnosis: Another study showed how practicing clinical psychologists became more confident in their diagnoses and prognoses for various risky behaviors as they gathered more information about patients, and yet, again, the agreement of those judgments with observed outcomes did not actually improve.4
Investments: A psychology researcher at MIT, Paul Andreassen, conducted several experiments in the 1980s showing that gathering more information about the stocks in investment portfolios improved investors' confidence without improving portfolio returns. In one study, he showed that people tend to overreact to news and assume the additional information is informative even though, on average, returns were not improved by acting on it.5
Trivia estimates: Another study investigating the benefits of collaboration asked subjects for estimates of trivia from an almanac. It considered multiple forms of interaction including the Delphi technique, free-form discussion, and other methods of collaboration. Although interaction did not improve estimates over simple averaging of individual estimates, the subjects did feel more satisfied with the results.6
Lie detection: A 1999 study measured the ability of subjects to detect lies in controlled tests involving videotaped mock interrogations of “suspects.” The suspects were actors who were incentivized to conceal certain facts about staged crimes, creating real nervousness about being discovered. Some of the subjects reviewing the videos received training in lie detection and some did not. The trained subjects were more confident in their judgments even though they were actually worse than the untrained subjects at detecting lies.7
And these are just a few of many similar studies showing that training, information gathering, and collaboration can improve confidence without improving actual performance. We have no reason to believe that fundamental psychology observed in so many different fields does not also apply to risk management in business or government. The fact that an analysis placebo exists in some areas means it could exist in others unless the data show otherwise.
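To make the distinction concrete, here is a minimal sketch, in Python, of how one might test for an analysis placebo in a setting like the sports-picks study: compare average stated confidence with the actual hit rate before and after extra information (or training, or collaboration) is provided. The variable names and numbers below are hypothetical and purely illustrative; they are not data from any of the studies cited above.

# Minimal sketch: detect an "analysis placebo" by comparing stated confidence
# with actual accuracy before and after extra information is provided.
# All values below are made up for illustration only.

def hit_rate(picks, outcomes):
    # Fraction of picks that matched the actual outcome.
    return sum(p == o for p, o in zip(picks, outcomes)) / len(picks)

def mean(values):
    return sum(values) / len(values)

# Hypothetical game outcomes (1 = team A won, 0 = team B won),
# the participant's picks, and the stated confidence in each pick,
# before and after receiving more information about the teams.
outcomes     = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
picks_before = [1, 0, 0, 1, 1, 1, 0, 1, 0, 0]
conf_before  = [0.55, 0.60, 0.55, 0.65, 0.60, 0.55, 0.60, 0.55, 0.65, 0.60]
picks_after  = [1, 1, 1, 1, 0, 0, 0, 1, 0, 0]
conf_after   = [0.75, 0.80, 0.70, 0.85, 0.80, 0.75, 0.80, 0.70, 0.85, 0.75]

print("Before: confidence %.2f, hit rate %.2f" %
      (mean(conf_before), hit_rate(picks_before, outcomes)))
print("After:  confidence %.2f, hit rate %.2f" %
      (mean(conf_after), hit_rate(picks_after, outcomes)))
# An analysis placebo shows up as confidence rising while the hit rate stays flat.

In this made-up example, average confidence rises from 0.59 to about 0.78 while the hit rate stays at 0.60; the extra information was pleasing but not informative. The same comparison, with real recorded forecasts and outcomes, is what separates measured improvement from the mere perception of it.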
The placebo effect might not be as persistent if it were easier to learn from our experience in risk management. But learning is not a given in any environment. Two prolific psychologists, Daniel Kahneman and Gary Klein, once wrote an article about what it takes to learn from experience.
Kahneman and Klein came from what are thought of as opposing views of how experts make judgments. Klein comes from the “naturalistic decision-making” school, in which experts in fields such as firefighting are seen as having remarkably good intuitive judgment in complex situations.