Damned Lies and Statistics. Joel Best


they deserve. This is not simply because an innumerate public is being manipulated by advocates who cynically promote inaccurate statistics. Often, statistics about social problems originate with sincere, well-meaning people who are themselves innumerate; they may not grasp the full implications of what they are saying. Similarly, the media are not immune to innumeracy; reporters commonly repeat the figures their sources give them without bothering to think critically about them.

      The result can be a social comedy. Activists want to draw attention to a problem—prostitution, homelessness, or whatever. The press asks the activists for statistics—How many prostitutes? How many homeless? Knowing that big numbers indicate big problems and knowing that it will be hard to get action unless people can be convinced a big problem exists (and sincerely believing that there is a big problem), the activists produce a big estimate, and the press, having no good way to check the number, simply publicizes it. The general public—most of us suffering from at least a mild case of innumeracy—tends to accept the figure without question. After all, it’s a big number, and there’s no real difference among big numbers.

      ORGANIZATIONAL PRACTICES AND OFFICIAL STATISTICS

      One reason we tend to accept statistics uncritically is that we assume that numbers come from experts who know what they’re doing. Often these experts work for government agencies, such as the U.S. Bureau of the Census, and producing statistics is part of their job. Data that come from the government—crime rates, unemployment rates, poverty rates—are official statistics.10 There is a natural tendency to treat these figures as straightforward facts that cannot be questioned.

      This ignores the way statistics are produced. All statistics, even the most authoritative, are created by people. This does not mean that they are inevitably flawed or wrong, but it does mean that we ought to ask ourselves just how the statistics we encounter were created.

      Let’s say a couple decides to get married. This requires going to a government office, taking out a marriage license, and having whoever conducts the marriage ceremony sign and file the license. Periodically, officials add up the number of marriage licenses filed and issue a report on the number of marriages. This is a relatively straightforward bit of recordkeeping, but notice that the accuracy of marriage statistics depends on couples’ willingness to cooperate with the procedures. For example, imagine a couple who decide to “get married” without taking out a license; they might even have a wedding ceremony, yet their marriage will not be counted in the official record. Or consider couples that cohabit—live together—without getting married; there is no official record of their living arrangement. And there is the added problem of recordkeeping: is the system for filing, recording, and generally keeping track of marriages accurate, or do mistakes occur? These examples remind us that the official number of marriages reflects certain bureaucratic decisions about what will be counted and how to do the counting.

      Now consider a more complicated example: statistics on suicide. Typically, a coroner decides which deaths are suicides. This can be relatively straightforward: perhaps the dead individual left behind a note clearly stating an intent to commit suicide. But often there is no note, and the coroner must gather evidence that points to suicide—perhaps the deceased is known to have been depressed, the death occurred in a locked house, the cause of death was an apparently self-inflicted gunshot to the head, and so on. There are two potential mistakes here. The first is that the coroner may label a death a “suicide” when, in fact, there was another cause (in mystery novels, at least, murder often is disguised as suicide). The second possibility for error is that the coroner may assign another cause of death to what was, in fact, a suicide. This is probably a greater risk, because some people who kill themselves want to conceal that fact (for example, some single-car automobile fatalities are suicides designed to look like accidents so that the individual’s family can avoid embarrassment or collect life insurance benefits). In addition, surviving family members may be ashamed by a relative’s suicide, and they may press the coroner to assign another cause of death, such as accident.

      In other words, official records of suicide reflect coroners’ judgments about the causes of death in what can be ambiguous circumstances. The act of suicide tends to be secretive—it usually occurs in private—and the motives of the dead cannot always be known. Labeling some deaths as “suicides” and others as “homicides,” “accidents,” or whatever will sometimes be wrong, although we cannot know exactly how often. Note, too, that individual coroners may assess cases differently; we might imagine one coroner who is relatively willing to label deaths suicides, and another who is very reluctant to do so. Presented with the same set of cases, the first coroner might find many more suicides than the second.11

      It is important to appreciate that coroners view their task as classifying individual deaths, as giving each one an appropriate label, rather than as compiling statistics for suicide rates. Whatever statistical reports come out of coroners’ offices (say, total number of suicides in the jurisdiction during the past year) are by-products of their real work (classifying individual deaths). That is, coroners are probably more concerned with being able to justify their decisions in individual cases than they are with whatever overall statistics emerge from those decisions.

      The example of suicide records reveals that all official statistics are products—and often by-products—of decisions by various officials: not just coroners, but also the humble clerks who fill out and file forms, the exalted supervisors who prepare summary reports, and so on. These people make choices (and sometimes errors) that shape whatever statistics finally emerge from their organization or agency, and the organization provides a context for those choices. For example, the law requires coroners to choose among a specified set of causes for death: homicide, suicide, accident, natural causes, and so on. That list of causes reflects our culture. Thus, our laws do not allow coroners to list “witchcraft” as a cause of death, although that might be considered a reasonable choice in other societies. We can imagine different laws that would give coroners different arrays of choices: perhaps there might be no category for suicide; perhaps people who kill themselves might be considered ill, and their deaths listed as occurring from natural causes; or perhaps suicides might be grouped with homicides in a single category of deaths caused by humans. In other words, official statistics reflect what sociologists call organizational practices—the organization’s culture and structure shape officials’ actions, and those actions determine whatever statistics finally emerge.

      Now consider an even more complicated example. Police officers have a complex job; they must maintain order, enforce the law, and assist citizens in a variety of ways. Unlike the coroner who faces a relatively short list of choices in assigning cause of death, the police have to make all sorts of decisions. For example, police responding to a call about a domestic dispute (say, a fight between husband and wife) have several, relatively ill-defined options. Perhaps they should arrest someone; perhaps the wife wants her husband arrested—or perhaps she says she does not want that to happen; perhaps the officers ought to encourage the couple to separate for the night; perhaps they ought to offer to take the wife to a women’s shelter; perhaps they ought to try talking to the couple to calm them down; perhaps they find that talking doesn’t work, and then pick arrest or a shelter as a second choice; perhaps they decide that the dispute has already been settled, or that there is really nothing wrong. Police must make decisions about how to respond in such cases, and some—but probably not all—of those choices will be reflected in official statistics. If officers make an arrest, the incident will be recorded in arrest statistics, but if the officers decide to deal with the incident informally (by talking with the couple until they calm down), there may be no statistical record of what happens. The choices officers make depend on many factors. If the domestic dispute call comes near the end of the officers’ shift, they may favor quick solutions. If their department has a new policy to crack down on domestic disputes, officers will be more likely to make arrests. All these decisions, each shaped by various considerations, will affect whatever statistics eventually summarize the officers’ actions.12

      Like our earlier examples of marriage records and coroners labeling
