to pay very close attention to bosses or those with higher status,” Surowiecki says. “That can be very damaging, from my perspective, because one of the great things about the wisdom of crowds, or whatever you want to call it, is that it recognizes that people may have useful things to contribute who aren’t necessarily at the top. They may not be the ones everyone automatically looks to. And that goes by the wayside when people imitate those at the top too closely.”

      Diversity. Independence. Combinations of perspectives. These principles should sound familiar. They’re versions of the lessons we learned from the honeybees: Seek a diversity of knowledge. Encourage a friendly competition of ideas. Use an effective mechanism to narrow your choices. What was smart for the honeybees is smart for groups of people, too.

      It’s not so easy, after all, to make decisions as efficiently as honeybees do. With millions of years of evolution behind them, they’ve fashioned an elegant system that fits their needs and abilities perfectly. If we could do as well—if we could harness our diversity to overcome our bad habits—then perhaps people wouldn’t say that we’re still thinking with caveman brains.

      

      Caveman Brains

      

      Imagine this scenario: Intelligence agencies have turned up evidence of a plot by at least three individuals to carry out a terrorist attack in Boston. Exactly what kind of attack is not known, but it might be related to a religious conference being held in the city. Possible targets include the Episcopal Church of St. Paul, Harvard’s Center for World Religion, One Financial Plaza, and the Federal Reserve Bank. Security cameras at each building have captured blurry images of ten different individuals acting suspiciously during the past week, though none have been positively identified as terrorists. Intercepted e-mail between suspects appears to include simple code words, such as “crabs” for explosives and “bug dust” for diversions. Time’s running out to crack the plot.

      This was the fictional situation presented to fifty-one teams of college students during a CIA-funded experiment at Harvard not long ago. Each four-person team was simulating a counterterrorism task force. Their assignment: sort through the evidence to identify the terrorists, figure out what they were planning to do, and determine which building was their target. They were given an hour to complete the task.

      The experiment was organized by Richard Hackman and Anita Woolley, a pair of social psychologists, with collaborators Margaret Gerbasi and Stephen Kosslyn. A few weeks earlier, they’d given the students a battery of tests to find out who was good at remembering code words (verbal working memory) and who was good at identifying faces from a large set of photos (face-recognition ability), skills that tap separate functions of the brain. They used the results of these tests to assign students to teams, arranging it so that some teams had two experts (students who scored unusually high on either verbal or visual skills) and two generalists (students who scored average on both skills), and some teams had all generalists. This was important, because they wanted to find out whether a team’s cognitive diversity affected its performance as strongly as its level of skill did.

      The researchers had another goal. They wanted to see if a group’s performance might be improved if its members took time to explicitly sort out who was good at what, put each person to work on an appropriate task—such as decoding e-mails or studying images—and then talked about the information they turned up. Would it enable them, in other words, to exploit not only their diversity of knowledge but also their diversity of abilities? To find out, they told all of the teams how each member had scored on the skills tests, but they coached only half of the teams on how to make task assignments. They left the other half on their own.

      The researchers had hired a mystery writer to dream up the terrorist scenario. The solution was that a fictional anti-Semitic group was planning to spray a deadly virus in the vault at the Federal Reserve Bank where Israel stores its gold, thereby making it unavailable for months and supposedly bankrupting that nation. “We made it a little bit ridiculous because we didn’t want to scare anybody,” Woolley says.

      Who did the best job of solving the puzzle? Not surprisingly, the most successful teams—the ones that correctly identified the target, the terrorists, and the plot details—were those whose experts applied their skills appropriately and actively collaborated with one another. What no one expected, however, was that the teams whose experts made little effort to coordinate their work would do so poorly. They did even worse, in fact, than teams that had no experts at all.

      “We filmed all the teams and watched them several times,” Woolley says. “What seems to happen is that, when two of the people are experts and two are not, there’s a status thing that goes on. The two that aren’t experts defer to the two that are, when in fact you really need information from all four to answer the problem correctly.”

      Why was this disturbing? Because that’s how many analytic teams function in real life, Woolley says, whether they’re composed of intelligence agents interpreting data, medical personnel making a diagnosis, or financial teams considering an investment. Smart people with special skills are often put together to make important decisions, but they’re frequently left on their own to figure out how to apply those skills as a group. Because they’re good at what they do, many talented people don’t feel it’s necessary to collaborate. They don’t see themselves as a group. As a result, they often fail to make the most of their collective talents and end up making a poor decision.

      “We’ve done a bunch of field research in the intelligence community and I can tell you that no agency, not the Defense Department, not the CIA, not the FBI, not the state police, not the Coast Guard, not drug enforcement, has everything they need to figure out what’s going on,” Hackman told a workshop on collective intelligence at MIT. “That means that most antiterrorism work is done by teams from multiple organizations with their own strong cultures and their own ways of doing things. And the stereotypes can be awful. You see the intelligence people looking at the people from law enforcement saying, You guys are not very smart, all you care about is your badge and your gun. We know how to do this work, okay? And the law enforcement people saying, You guys wouldn’t recognize a chain of evidence if you tripped over it. All you can do is write summa cum laude essays in political science at Princeton. That’s the level of stereotyping. And they don’t get over it, so they flounder.”

      Personal prejudice is a poor guide to decision making, of course. But it’s only one in a long list of biases and bad habits that routinely hinder our judgment. During the past fifty years, psychologists have identified numerous “hidden traps” that subvert good decisions, whether they’re made by business executives, political leaders, or consumers at the mall. Many can be traced to the sort of mental shortcuts we use every day to manage life’s challenges—the rules of thumb we apply unconsciously because our brains, unlike those of ants or bees, weren’t designed to tackle problems collectively.

      Consider the trap known as “anchoring,” which results from our tendency to give too much weight to the first thing we hear. Suppose someone asks you the following questions:

      Is the population of Chicago greater than 3 million?

      What’s your best estimate of Chicago’s population?

      Chances are, when you answer the second question, you’ll be basing it on the first. You can’t help it. That’s the way your brain is hardwired. If the number in the first question were 10 million, your answer to the second would be significantly higher. Late-night TV commercials exploit this kind of anchoring. “How much would you pay for this slicer-dicer?” the announcer asks. “A hundred dollars? Two hundred? Call now and pay only nineteen ninety-five.”

      Then there’s the “status quo” trap, which stems from our preference not to rock the boat. All things being equal, we prefer options that keep
