The Failure of Risk Management. Douglas W. Hubbard

where all sorts of errors in human judgment are analyzed. They wrote a joint paper in an effort to compare these apparently competing views. But what they found was that in important respects, they did not disagree, so they decided to name the paper, “Conditions for Intuitive Expertise: A Failure to Disagree.”8

      They found they agreed that developing expert intuition in any field is not an automatic outcome of experience. Experts need “high-validity” feedback so that they can learn from the outcomes of their estimates and decisions. That feedback should be consistent (we get feedback most, if not all, of the time), quick (we don't have to wait long for it), and unambiguous.

      Risk management simply does not provide the consistent, immediate, and clear feedback that Kahneman and Klein argue we need as a basis for learning. Risk managers make estimates, decisions, or recommendations without knowing their effects for some time, if ever. If risk went down after the implementation of a new policy, how would you know? How long would it take to confirm that the outcome was related to the action taken? How would you determine whether the outcome was just due to luck?
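      To see why luck is so hard to rule out, consider a minimal sketch (my hypothetical numbers, not the book's) that simulates yearly loss-event counts when a new policy has no real effect at all. Even then, ordinary chance often produces what looks like a convincing drop in risk:

```python
# Minimal sketch (hypothetical rates, not from the book): if loss events
# arrive at an unchanged average rate, how often does pure luck make a
# new policy look like it cut risk by 20 percent or more?
import math
import random

YEARS = 3          # years observed before and after the policy
RATE = 3.0         # assumed true loss events per year (policy changes nothing)
TRIALS = 100_000   # simulated organizations with no real improvement

def poisson(lam: float) -> int:
    """Draw one Poisson-distributed count (Knuth's method)."""
    limit, count, product = math.exp(-lam), 0, 1.0
    while True:
        product *= random.random()
        if product <= limit:
            return count
        count += 1

lucky = 0
for _ in range(TRIALS):
    before = sum(poisson(RATE) for _ in range(YEARS))
    after = sum(poisson(RATE) for _ in range(YEARS))
    if after <= 0.8 * before:  # looks like a 20%+ reduction
        lucky += 1

print(f"Apparent 20%+ risk reduction by luck alone: {lucky / TRIALS:.0%}")
```

      With event rates this low, a seemingly large improvement appears by chance alone in a sizable share of trials, which is exactly why the feedback risk managers receive is so ambiguous.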

      To measure the performance of various methods, we will not rely on the proclamations of any expert, regardless of his or her claimed level of knowledge or vociferousness. So even though I may finally have some credibility in claiming experience after thirty years in quantitative management consulting, I will not appeal to my own authority regarding what works and what does not. I will instead rely on published research from large experiments. Anecdotes or quotes from “thought leaders” will be used only to illustrate a point, never to prove it.

      The potential existence of an analysis placebo, the difficulty of learning from experience alone in risk management, and the general lack of objective performance measurements in this field mean that we should be wary of self-assessments. We should bear in mind one particular statement in the previously mentioned article by Daniel Kahneman and Gary Klein:

      True experts, it is said, know when they don't know. However, nonexperts (whether or not they think they are) certainly do not know when they don't know. Subjective confidence is therefore an unreliable indication of the validity of intuitive judgments and decisions. (p. 524)

      There is no reason to believe risk management avoids these same problems of self-assessment. As we saw in the surveys, any attempt to measure risk management at all is rare. Without measurements, self-assessments of the effectiveness of risk management are unreliable, given the analysis placebo effect, the low-validity problem described by Kahneman and Klein, and the Dunning-Kruger effect.

      There is an old management adage that says, “You can't manage what you can't measure.” (It is often misattributed to W. E. Deming but is a truism nonetheless.) Management guru Peter Drucker considered measurement to be the “fourth basic element in the work of the manager.” Because the key objective of risk management (risk reduction, or at least minimized risk for a given opportunity) is not obvious to the naked eye, only deliberate measurement can detect it. The only way organizations could be justified in believing they are “very effective” at risk management is if they have measured it.

      Risk professionals from Protiviti and Aon (two of the firms that conducted the surveys in chapter 2) also have their suspicions about the self-assessments in surveys. Jim DeLoach, a Protiviti managing director, states that “the number of organizations saying they were ‘very effective’ at managing risks was much higher than we expected.” Recall that in the Protiviti survey 57 percent of respondents said they quantify risks “to the fullest extent possible” (it was slightly higher, 63 percent, for those that rated themselves “very effective” at risk management). Yet this is not what DeLoach observes first-hand when he examines risk management in various organizations: “Our experience is that most firms aren't quantifying risks … I just have a hard time believing they are quantifying risks as they reported.”

      My own experience also agrees more with the personal observations of DeLoach and Bohn than with the results of the self-assessment surveys. I treat the results of the HDR/KPMG survey as perhaps an upper bound on the adoption of quantitative methods. Whenever I give a speech about risk management to a large group of managers, I ask those who have a defined approach for managing risks to raise their hands. A lot of hands go up, maybe half on average. I then ask them to keep their hands up only if they measure risks. Many of the hands go down. Then I ask them to keep their hands up only if probabilities are used in their measurements of risks (note how essential this is, given the definition of risk we stated). More hands go down and, maybe, one or two remain up. Then I ask them to keep their hands up if they think their measures of probabilities and losses are in any way based on statistical analysis or methods used in actuarial science. After that, all the hands are down. It's not that the methods I'm proposing are impractical; I have used them routinely on a variety of problems. (I'll argue in more detail later against the myth that such methods aren't practical.)
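      To make concrete what those final raised hands would represent, here is a minimal sketch (all inputs are hypothetical assumptions, not figures from the book) of a probability-based risk measurement: a Monte Carlo simulation that turns an event probability and a calibrated loss range into an expected annual loss and a chance of exceeding a loss tolerance:

```python
# Minimal Monte Carlo sketch (hypothetical inputs, not a model from the
# book): measure a risk as an event probability plus an uncertain loss.
import math
import random

TRIALS = 100_000
P_EVENT = 0.10                   # assumed annual probability of the event
LOSS_LOW, LOSS_HIGH = 1e5, 2e6   # assumed 90% interval for the loss size
TOLERANCE = 1_000_000            # annual loss the organization wants to avoid

# Calibrate a lognormal so ~90% of loss sizes fall in [LOSS_LOW, LOSS_HIGH].
MU = (math.log(LOSS_LOW) + math.log(LOSS_HIGH)) / 2
SIGMA = (math.log(LOSS_HIGH) - math.log(LOSS_LOW)) / (2 * 1.645)

def one_year() -> float:
    """Simulate one year: no event means no loss; otherwise draw a loss."""
    if random.random() >= P_EVENT:
        return 0.0
    return math.exp(random.gauss(MU, SIGMA))

losses = [one_year() for _ in range(TRIALS)]
expected_loss = sum(losses) / TRIALS
p_exceed = sum(loss > TOLERANCE for loss in losses) / TRIALS

print(f"Expected annual loss: ${expected_loss:,.0f}")
print(f"Chance of exceeding ${TOLERANCE:,}: {p_exceed:.1%}")
```

      Nothing in this sketch is exotic or computationally demanding, which is part of the argument, developed later, against the myth that such methods are impractical.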

      Of
