Social Psychology. Daniel W. Barrett
In one study, participants were presented with a personality profile of Steve, randomly drawn from a set of 100 profiles, 70 of which described engineers and 30 of which described librarians. Steve was described as “very shy and withdrawn, invariably helpful, but with little interest in people, or in the world of reality. A meek and tidy soul, he has a need for order and structure, and a passion for detail.” Is Steve an engineer or a librarian? Because Steve resembles the stereotypical librarian, most people guess librarian. In doing so, however, they ignore the base rate probabilities that Steve is one or the other (Jasper & Ortner, 2014; Tversky & Kahneman, 1974). Note that a profile randomly drawn from that set is much more likely to describe an engineer, based solely on the 70/30 split. In the Steve example, we fail to take into account the base rate at which the phenomenon occurs.
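The arithmetic behind the Steve judgment can be made concrete with Bayes' rule. The 70/30 base rates come from the study itself; the two likelihoods below are hypothetical, chosen only to illustrate that even when the description fits a librarian twice as well as an engineer, the base rate can still make "engineer" the more probable category:

```python
# Base rates from the study: 70 engineers, 30 librarians out of 100 profiles.
prior_engineer = 0.70
prior_librarian = 0.30

# Hypothetical likelihoods (not from the study): suppose the meek,
# detail-loving description fits librarians twice as well as engineers.
p_desc_given_engineer = 0.25
p_desc_given_librarian = 0.50

# Bayes' rule: P(engineer | description)
numerator = prior_engineer * p_desc_given_engineer
evidence = numerator + prior_librarian * p_desc_given_librarian
posterior_engineer = numerator / evidence

print(round(posterior_engineer, 3))  # 0.538 -- engineer is still the better bet
```

Judging by representativeness alone amounts to comparing only the two likelihoods and concluding "librarian"; the base rate fallacy is dropping the priors from the calculation.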
A second common mistake has to do with the way people appeal to personal anecdotes or limited observation to reject the findings of a particular social psychological study. On many occasions, I have presented research results in class and a student has raised a hand and said something to the effect that “I was in a situation like that and I (or someone I know) didn’t act the way the people in the study did, and therefore the study isn’t valid.” For instance, we may be discussing how people tend to affiliate with winning teams—often by donning that team’s shirts and caps—and dissociate from losers (Cialdini et al., 1976), a trend that is especially noticeable after a major sporting event like the Super Bowl or World Cup. A student may say that she wore her Denver Broncos T-shirt the day after they lost to the Seattle Seahawks in the 2014 Super Bowl and consequently conclude that the study is incorrect. However, both she and you need to be cognizant of the fact that social psychology is not able to predict the behavior of any particular person or of every person—nor does it attempt to. Rather, we describe, explain, and predict what most people are likely to think, feel, and do in specific situations. Very rarely—if ever—does each and every one of the participants in a study behave in exactly the same way. Therefore, identifying an apparent exception or counterexample for a given phenomenon and arguing that this disproves the finding represents an overreliance on anecdotal information and a neglect of base rate information.
This base rate fallacy occurs when we ignore underlying probabilities—the base rate or frequency of an event—and instead focus on unusual or atypical instances (Bar-Hillel, 1980). We are particularly likely to do this when we reject the validity of abstract information in favor of concrete, vivid examples, such as anecdotes (Schwarz, Strack, Hilton, & Naderer, 1991; Taylor & Thompson, 1982). In our football example, the fact that most people behave in a specific way (an abstract statement) is not undermined by an instance of one person acting differently (a concrete example). The base rate is the frequency with which members of various categories occur in the corresponding population (Bar-Hillel, 1990; Kahneman & Tversky, 1972). Given that there are more debaters than wrestlers in the university population, it is much more likely that Jose is a debater. Likewise for Steve being an engineer rather than a librarian.
Relying on representativeness can clearly lead to incorrect decisions. But is it ever useful? Of course it is. We categorize people (and things) for a reason: Categories allow us to simplify our world and make it cognitively manageable. Category members, by definition, share particular traits or characteristics. The traits that make up one category differ from those that make up another, although some characteristics may overlap. Therefore, it is not only natural but also desirable that we use similarity as a basis for categorizing people. The problem arises when we ignore other relevant information (like base rates) and rely on representativeness alone (Fiedler, Brinkmann, Betsch, & Wild, 2000).
Representativeness Heuristic: Mental shortcut in which people categorize a particular instance based on how similar the instance is to a typical member of that category
Base Rate Fallacy: Judging how likely an event is to occur, based on unusual or atypical instances, while ignoring its actual base rate or probability of occurrence
Base Rate: Frequency at which a given phenomenon occurs
Anchoring and Adjustment
Before reading further, write down whether the Mississippi River is longer or shorter than 800 miles. How many miles long is it? Did you guess 775 or 900 or another number in the rough vicinity of 800? What if I were to instead ask if the Mississippi River is longer or shorter than 2,100 miles, and then you were to estimate its length? Would you have given a different estimate? If you are like most of my social psychology students, then you would have guessed a much larger number after being asked the latter question. Why? The reason is that you assume that the number I inserted into the question—either 800 or 2,100—is relevant to the answer and reasonably close to the river’s actual length. You expect that the number was presented for a reason and therefore that you should use it as an informational guide for your answer (Chapman & Johnson, 2002; Morrow, 2002). You start with the given number—you anchor your estimate on it—and then adjust it either up or down (Tversky & Kahneman, 1974). This human tendency to rely on readily available information as the basis for an estimate, and then to adjust that estimate up or down, is another mental shortcut, called the anchoring and adjustment heuristic (Tversky & Kahneman, 1974). We use this heuristic in order to simplify the estimation process and conserve our mental resources. It often serves us well, providing a generally correct answer that can then be tweaked to produce an even better one. Interestingly, people adjust less from a precise anchor than from a rounded one. For instance, participants in one study made smaller adjustments to the precise anchor of $4,998 than to the rounded anchor of $5,000 (see Research Box 3.1) (Janiszewski & Uy, 2008).
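One account of the precise-versus-rounded anchor finding is that people adjust in units matching the anchor's precision: a round $5,000 invites adjustment in hundreds, while $4,998 invites adjustment in single dollars. The toy sketch below is only an illustration of that idea; the step sizes and step count are hypothetical, chosen to make the mechanism visible rather than to model the actual data:

```python
# Toy illustration of granularity-based adjustment from an anchor.
# Step sizes and step count are hypothetical, not taken from the study.

def adjust(anchor: float, step: float, n_steps: int, direction: int = -1) -> float:
    """Move away from the anchor in fixed-size steps (default: downward)."""
    return anchor + direction * step * n_steps

# A rounded anchor suggests coarse adjustment units (hundreds)...
estimate_rounded = adjust(anchor=5000, step=100, n_steps=3)  # 4700
# ...while a precise anchor suggests fine units (single dollars).
estimate_precise = adjust(anchor=4998, step=1, n_steps=3)    # 4995

# Same number of mental adjustment steps, but the final estimate ends up
# much farther from the rounded anchor than from the precise one.
print(abs(5000 - estimate_rounded), abs(4998 - estimate_precise))  # 300 3
```

With identical effort (three steps), the precise anchor holds the final estimate close, which is the pattern Janiszewski and Uy reported.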
Why do we make this error? We do so because we assume that the information we are provided is relevant to the answer requested (Chapman & Johnson, 2002). The assumption makes perfect sense—much of the time. Typically, when we are faced with a problem or puzzle, whatever information is provided is relevant to the problem being solved. Again, this heuristic often works, but like availability and representativeness, it can sometimes lead us into error. A real-world illustration of anchoring and adjustment can be seen in the way that juries determine awards in liability cases based on whatever numbers are presented to them, regardless of the origin of or justification for those numbers (Chapman & Bornstein, 1996). This is the reason attorneys for the plaintiff often initially ask for unreasonably large dollar amounts in a legal settlement. Similar effects have been found with regard to the valuation of homes in real estate (Northcraft & Neale, 1987), estimating how long Gandhi lived (Strack & Mussweiler, 1997), and guessing the year in which George Washington was elected U.S. president (Epley & Gilovich, 2001).
Anchoring and Adjustment Heuristic: Mental shortcut in which people use readily available information on which to base estimation and then adjust that estimate up or down to arrive at a final judgment
Research Box 3.1
Anchoring and Adjustment
Hypothesis: Estimates of the value of goods would diverge more from the anchor when participants were given imprecise (rounded) anchors than when given precise anchors.
Research Method: Participants were randomly assigned to receive either imprecise or precise anchor values for a variety of consumer goods, including a plasma TV, a beverage, and a chunk of cheese. For example,