Inside the Crystal Ball – Maury Harris

we see, it's rare to be highly successful in addressing all of these challenges. The sometimes-famous forecasters who nail the big one are often neither accurate nor even directionally correct most of the time. The most reliable forecasters, on the other hand, are less likely to forecast rare and very important events.

       One-Hit Wonders

      Reputations often are based on an entrepreneur, marketer, or forecaster “being really right when it counted most.” Our society lauds and rewards such individuals. They may attain guru status, with hordes of people seeking and following their advice after their “home run.” However, an impressive body of research suggests that these one-hit wonders are usually unreliable sources of advice and forecasts. In other words, they strike out a lot. This phenomenon has much to teach about how to make and evaluate forecasts.

      In the decade since its publication in 2005, Philip E. Tetlock's book Expert Political Judgment: How Good Is It? How Can We Know? has become a classic in the development of standards for evaluating expert political judgment.17 In assessing predictions from experts in different fields, Tetlock draws important conclusions for successful business and economic forecasting and for selecting appropriate decision-making and forecasting inputs. For instance:

       “Experts” successfully predicting rare events were often wrong both before and after their highly visible success. Tetlock reports that “When we pit experts against minimalist performance benchmarks – dilettantes, dart-throwing chimps, and assorted extrapolation algorithms – we find few signs that expertise translates into greater ability to make either ‘well-calibrated’ or ‘discriminating’ forecasts.”

       The one-hit wonders can be like broken clocks. They were more likely than most forecasters to occasionally predict extreme events, but only because they made extreme forecasts more frequently; a minimal simulation after this list illustrates the mechanics.

       Tetlock's “hedgehogs” (generally inaccurate forecasters who manage to correctly forecast some hard-to-forecast rare event) took a very different approach to reasoning than his more reliable “foxes.” For example, hedgehogs often used one big idea or theme to explain a variety of occurrences. However, “the more eclectic foxes knew many little things and were content to improvise ad hoc solutions to keep pace with a rapidly changing world.”

       While hedgehogs are less reliable as forecasters, foxes may be less stimulating analysts. The former encourage out-of-the-box thinking; the latter tend to be less decisive, two-handed economists.
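
      The mechanics behind the broken-clock pattern are easy to reproduce. The following minimal Python simulation is my own construction, not anything from Tetlock's data; the 30 percent extreme-call rate, the forecast distributions, and the “close enough to claim credit” threshold are all illustrative assumptions. A hypothetical hedgehog who regularly issues extreme calls catches far more of the rare tail outcomes than a near-consensus fox, while posting a much worse average error.

```python
import random

random.seed(42)

def simulate(bold, n=100_000):
    """Return (mean absolute error, tail events 'called', tail events seen)."""
    total_err, hits, tails = 0.0, 0, 0
    for _ in range(n):
        outcome = random.gauss(0.0, 1.0)           # true outcome
        if bold and random.random() < 0.3:
            forecast = random.choice([-3.0, 3.0])  # extreme, headline-grabbing call
        else:
            forecast = random.gauss(0.0, 0.5)      # near-consensus call
        total_err += abs(forecast - outcome)
        if abs(outcome) > 2.5:                     # rare, important event
            tails += 1
            hits += abs(forecast - outcome) < 1.0  # close enough to claim credit
    return total_err / n, hits, tails

print("fox (consensus):", simulate(bold=False))
print("hedgehog (bold):", simulate(bold=True))
```

      The fox posts the smaller average error, yet the hedgehog “calls” a meaningful share of the tail events while the fox almost never does: a broken clock that is right twice a day.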

      Tetlock's findings about political forecasts also apply to business and economic forecasts. Jerker Denrell and Christina Fang provide such illustrations in their 2010 Management Science article, “Predicting the Next Big Thing: Success as a Signal of Poor Judgment.”18 They conclude that “accurate predictions of an extreme event are likely to be an indication of poor overall forecasting ability, when judgment or forecasting ability is defined as the average level of forecast accuracy over a wide range of forecasts.”

      Denrell and Fang assessed the accuracy of professional forecasters participating in the Wall Street Journal's semi-annual forecasting surveys between July 2002 and July 2005. (Every six months, at the start of January and July, around 50 economists and analysts provided six-month-ahead forecasts of key economic variables such as GNP, inflation, unemployment, interest rates, and exchange rates.) The study focused on the overall accuracy of forecasters projecting extreme events, defined as outcomes at least 20 percent above or below the average forecast. For each forecaster, they compared overall accuracy across all of the forecast variables with accuracy on those extreme events.

       Forecasters who were more accurate than the average forecaster in predicting extreme outcomes were less accurate in predicting all outcomes. Moreover, the prognosticators who were comparatively more accurate in predicting extreme outcomes also made extreme forecasts at a higher rate. In the authors' assessment, “Forecasting ability should be based on all predictions, not only a selected subset of extreme predictions.”
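
      For concreteness, here is a stylized Python sketch of that scoring exercise. The panel data below are made up (this is not the actual Wall Street Journal survey), and the extremeness test, a deviation of more than 20 percent from the panel's mean forecast, is a loose adaptation of the paper's definition.

```python
from statistics import mean

# Hypothetical panel: one forecast per forecaster per semi-annual round,
# plus the realized value for each round. The last round is an extreme outcome.
forecasts = {
    "hugger": [2.5, 2.9, 2.2, 2.6],   # stays very close to consensus
    "steady": [2.4, 3.0, 2.1, 2.5],
    "bold":   [3.4, 1.6, 3.0, 3.6],   # habitually far from consensus
}
actuals = [2.5, 2.8, 2.3, 3.7]

def scores(forecasts, actuals, band=0.20):
    """Mean absolute error over all calls and over 'extreme' calls only,
    an extreme call deviating from the round's mean forecast by > band."""
    rounds = range(len(actuals))
    panel_mean = [mean(f[r] for f in forecasts.values()) for r in rounds]
    result = {}
    for name, preds in forecasts.items():
        errs = [abs(preds[r] - actuals[r]) for r in rounds]
        extreme = [errs[r] for r in rounds
                   if abs(preds[r] - panel_mean[r]) > band * panel_mean[r]]
        result[name] = (mean(errs), mean(extreme) if extreme else None)
    return result

for name, (overall, ext) in scores(forecasts, actuals).items():
    ext_str = "n/a (no extreme calls)" if ext is None else f"{ext:.3f}"
    print(f"{name:>6}: overall MAE {overall:.3f}, extreme-call MAE {ext_str}")
```

      In this toy panel, only “bold” ever makes extreme calls; bold nails the extreme final-round outcome (an error of just 0.1) yet posts the worst overall error, which is the Denrell-Fang pattern in miniature.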

      What do these results tell us about the characteristics of forecasters with these two different types of success? Denrell and Fang offer the following four observations:

      1. Extreme outcomes are, by definition, rare. They are therefore more likely to be successfully predicted by forecasters who rely on intuition, or who emphasize a single determinant, than by those extrapolating from a more comprehensive reading of history.

      2. Forecasters who happen to be correct about extreme outcomes can become overconfident in their judgment. The authors cite research indicating that securities analysts who were relatively more accurate in predicting earnings over the previous four quarters tended to be less accurate than their peers in subsequent quarters.19

      3. Forecasters can be motivated by their past successes and failures. Those whose bold past forecasts have missed may be tempted to repair their tarnished reputations with still bolder forecasts. (Denrell and Fang cite research by Chevalier and Ellison and by Leone and Wu.20,21) Conversely, successful forecasters might subsequently move closer to consensus expectations to shield their reputations and track records from the risk of an incorrect bold call. (The authors cite research by Prendergast and Stole.22) In other words, in forecasting, past success and past failure each shape the boldness of subsequent predictions.

      4. Some forecasters may be motivated to go out on a limb if the rewards for being either relatively right or highly visible outweigh the reputational risk of accumulating a bad track record from many long shots that don't pan out. This may be especially so if forecasters perceive that the attention garnered by comparatively unique projections will make them more commercially successful. However, Denrell and Fang cite research suggesting that securities analysts are more likely to be terminated if they make bold and inaccurate forecasts.23
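
      A back-of-the-envelope expected-value comparison makes the incentive in this fourth point concrete. Every number below is my own illustrative assumption, not anything drawn from the research cited:

```python
# Illustrative payoffs only; none of these numbers come from the studies cited.
p_extreme        = 0.05  # chance the rare, extreme outcome actually occurs
payoff_hit       = 60.0  # publicity and career value of nailing the big one
payoff_miss      = -1.0  # modest reputational cost of one more wrong bold call
payoff_consensus = 1.0   # small, steady credit for a safe, roughly right call

ev_bold = p_extreme * payoff_hit + (1 - p_extreme) * payoff_miss
print(f"E[bold] = {ev_bold:.2f} vs E[consensus] = {payoff_consensus:.2f}")
# Prints E[bold] = 2.05 vs E[consensus] = 1.00: going out on a limb wins in
# expectation here, even though the bold call misses 95 percent of the time.
```

      Raise the cost of a conspicuous miss, as the termination risk documented by Hong, Kubik, and Solomon effectively does, and the calculus flips back toward the consensus.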

       Who Is More Likely to Go Out on a Limb?

      It can be dangerous for a forecaster to go out on a limb too often, especially if the proverbial limb usually gets sawed off. But why do some forecasters make that choice? Do we know who they are, so that we can consider the source when hearing their forecasts?

      Researchers have examined popular, well-publicized forecast surveys to identify who is most likely to go against the grain and what the consequences of doing so are. Among the surveys studied are those appearing in Blue Chip Economic Indicators, the Wall Street Journal, and Business Week.

      Karlyn Mitchell and Douglas Pearce have examined six-month-ahead interest rate and foreign exchange rate forecasts appearing in past Wall Street Journal forecaster surveys.24 They asked whether a forecaster's employment influenced various forecast characteristics. Specifically, they compared forecasters employed by banks, securities firms, finance departments of corporations, econometric modelers, and independent firms.

      Their research indicated that economists at firms bearing their own names deviated more from the consensus interest rate and foreign exchange rate forecasts than other forecasters did. In the authors' view, such behavior could have been motivated by a desire for publicity. (Note: Large sell-side firms employing economists and analysts have the financial means for large advertising budgets and are presumably less hungry for free publicity.)

      While Mitchell and Pearce studied the Wall Street Journal



17. Philip E. Tetlock, Expert Political Judgment: How Good Is It? How Can We Know? (Princeton, NJ: Princeton University Press, 2005).

18. Jerker Denrell and Christina Fang, “Predicting the Next Big Thing: Success as a Signal of Poor Judgment,” Management Science 56, no. 10 (2010): 1653–1667.

19. G. Hilary and L. Menzly, “Does Past Success Lead Analysts to Become Overconfident?” Management Science 52, no. 4 (2006): 489–500.

20. J. Chevalier and G. Ellison, “Risk Taking by Mutual Funds in Response to Incentives,” Journal of Political Economy 105, no. 6 (1997): 1167–1200.

21. Leone and Wu, “What Does It Take?”

22. Canice Prendergast and Lars Stole, “Impetuous Youngsters and Jaded Old-Timers: Acquiring a Reputation for Learning,” Journal of Political Economy 104, no. 6 (1996): 1105–1134.

23. H. Hong, J. Kubik, and A. Solomon, “Securities Analysts' Career Concerns and Herding of Earnings Forecasts,” Rand Journal of Economics 31 (2000): 122–144.

24. Karlyn Mitchell and Douglas K. Pearce, “Professional Forecasts of Interest Rates and Exchange Rates: Evidence from the Wall Street Journal's Panels of Economists,” North Carolina State University Working Paper 004, March 2005.