He added one of his own—one that randomly cooperates and defects—and pitched all of them against each other in a round-robin tournament. Success was easy to measure. The winner would be the strategy that received the highest number of points after having played all other strategies in the computer over two hundred moves. During the entire tournament, Axelrod explored 120,000 moves and 240,000 choices.
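      To make the mechanics concrete, here is a minimal sketch in Python of how such a round-robin scoring could be set up. The payoff values (3 points each for mutual cooperation, 1 each for mutual defection, 5 for a lone defector, 0 for a lone cooperator) are the ones Axelrod used; everything else, including the two placeholder strategies, is only an illustration, not code from the actual tournament.

```python
import random

# Standard payoffs for one round of the Prisoner's Dilemma:
# 'C' = cooperate, 'D' = defect.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

# A strategy sees its own history and the opponent's history
# and returns its next move. These two are placeholders.
def random_strategy(own_history, other_history):
    return random.choice(['C', 'D'])

def always_defect(own_history, other_history):
    return 'D'

def play_match(strategy_a, strategy_b, rounds=200):
    """Play one repeated game and return the two total scores."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def round_robin(strategies, rounds=200):
    """Pit every strategy against every other; return total points."""
    totals = {name: 0 for name in strategies}
    names = list(strategies)
    for i, name_a in enumerate(names):
        for name_b in names[i + 1:]:
            a, b = play_match(strategies[name_a], strategies[name_b], rounds)
            totals[name_a] += a
            totals[name_b] += b
    return totals
```

      Feeding a dictionary of strategy functions to round_robin and sorting the totals reproduces, in miniature, how the champion was picked: the entry with the highest accumulated score wins.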

      Because the computers allowed for limitless complexity of the programs entered into the tournament, one might expect that the biggest—and thus “smartest”—program would win. But size is not everything. In fact, the simplest contestant won hands down, much to the surprise of the theorists. The champion turned out to be a measly little four-line computer program devised by none other than Anatol Rapoport.

      Called Tit for Tat, the strategy starts with a cooperative move and then always repeats the co-player’s previous move. A player always starts by keeping faith with his partner but from then on mimics the last move of his opponent, betraying only when his partner betrays him. This is more forgiving than Grim, where a single defection triggers an eternity of defection.
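      In the notation of the sketch above, the difference between the two strategies comes down to a single test of the opponent's history (again an illustrative rendering, not Rapoport's original program):

```python
def tit_for_tat(own_history, other_history):
    # Cooperate on the first move, then simply copy whatever the
    # opponent did in the previous round.
    if not other_history:
        return 'C'
    return other_history[-1]

def grim(own_history, other_history):
    # Cooperate until the opponent defects once, then defect forever.
    return 'D' if 'D' in other_history else 'C'
```

      Tit for Tat punishes a defection exactly once and then lets the matter drop; Grim never relents.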

      Standing back from the Prisoner’s Dilemma, it is easy to see the advantage of adopting a simple strategy. If you are too clever, your opponent may find it hard to read your intentions. If you appear too unresponsive or obscure or enigmatic, your adversary has no incentive to cooperate with you. Equally, if a program (or a person for that matter) acts clearly and sends out a signal that it cannot be pushed around, it does make sense to cooperate.

      What was also striking was that this discovery was old hat. The contestants in the computer Prisoner’s Dilemma tournament already knew about this powerful strategy. Work published at the start of that decade had shown that Tit for Tat does well. Indeed, the strategy carries echoes of the one that the nuclear powers had adopted during the cold war, each promising not to use its stockpiles of A- and H-bombs so long as the other side also refrained. Many of the contestants tried to improve on this basic recipe. “The striking fact is that none of the more complex programs submitted was able to perform as well as the original, simple Tit for Tat,” observed Axelrod.

      When he looked in detail at the high-ranking and low-ranking strategies to tease out the secret of success, Axelrod found that one property in particular appeared to be important. “This is the property of being nice, which is to say never being the first to defect.” Tit for Tat is also interesting because it does not bear a grudge beyond the immediate retaliation, thereby perpetually offering the opportunity to establish “trust” between opponents: if the opponent is conciliatory, both reap the rewards of cooperation.

      Axelrod went on to organize a second tournament, this time attracting sixty-three entries from six countries, ranging from a ten-year-old computer obsessive to a gaggle of professors of various persuasions. One entry arrived from the British biologist John Maynard Smith, whom we will learn much more about later. Maynard Smith submitted Tit for Two Tats, a strategy that cooperates unless the opponent has defected twice in a row. Maynard Smith, a revered figure in his field, limped in at twenty-fourth place.

      Rapoport, however, followed the maxim of British soccer leagues: “Never change a winning team.” Once more, he fielded the Tit-for-Tat strategy, and once again it won: it really did pay to follow this simple strategy. This was the very tournament that had inspired Karl Sigmund to focus on the Dilemma and that, in turn, inspired me when he gave me that sermon on the mountain. Robert Axelrod’s book The Evolution of Cooperation is now regarded as a classic in the field, and deservedly so.

      But did Axelrod’s computer tournament have anything to say about the real world? Yes. A real-life example of such a contest was reported in 1987, when Manfred Milinski, now the director of the Max Planck Institute for Evolutionary Biology in Ploen, Germany, studied the behavior of stickleback fish. When a big predator such as a pike looms, one or more of a school of sticklebacks will approach to see how dangerous it is. This “predator inspection” is risky for these scouts, but the information can benefit them as well as the rest of the school—if the interloper is not a predator, or if it has just fed and is not hungry, the smaller fish don’t need to move away. Risking an approach to find out whether flight is really necessary may seem foolish, but it matters because in their natural habitat there are many pike and other predators swimming about, so moving away is not always a good strategy: one can jump out of the way of one snapping predator into the jaws of another.

      Milinski found that stickleback fish rely on the Tit-for-Tat strategy during this risky maneuver. If a pike shows up in the neighborhood, two sticklebacks often swim together in short spurts toward the open mouth of the predator to size him up. Each spurt can be thought of as a single round of the Dilemma. Cooperating in this game of chicken is best for both fish, since it cuts the risk of being eaten. This is due to the “predator confusion” effect: pike can waste valuable time when they have to decide which of two or more prey to strike first, a real-life version of the paradox of Buridan’s ass, the hypothetical situation in which a donkey cannot choose between two stacks of hay and so dies of hunger. Yet each little fish has an understandable incentive to hang back a little and let the other stickleback soak up more of the risk.

      To investigate what was going through their little fishy heads, Milinski made ingenious use of a mirror. When held in the right place, it could create the illusion that a single stickleback was accompanied by another scout. By tilting the looking glass, Milinski could make it seem to a stickleback scout that his mirror-image “companion” was either cooperating by swimming alongside or falling behind and defecting, like the officer leading the charge who slowly slips behind his troops and out of harm’s way. The lead scout would often react to the apparent defection of its mirror fish by slowing down or turning tail, without completing its scouting mission. If the mirror image kept pace with the scout, the latter usually approached the predator more closely than it would if swimming alone.

      NOISE

      So far, so satisfyingly straightforward. But there is a problem with Tit for Tat, one that is not immediately obvious when using computer programs that interact flawlessly. Humans and other animals make mistakes. Sometimes their wires get crossed. Sometimes the players become distracted. They suffer mood swings. Or they simply have a bad day. Nobody’s perfect, after all. One type of mistake is due to a “trembling hand”: I would like to cooperate but I slip up and fail to do so. Another is caused by a “fuzzy mind”: I am convinced that this person was mean to me and defected in the last round, when in fact he did not. Perhaps I was confusing him with someone else. Trembling hands and fuzzy minds lead to what I call “noisy” interactions.
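      The two kinds of error can be written down just as simply. In this illustrative sketch (the 5 percent error rate is an arbitrary choice of mine) a trembling hand corrupts a player's own intended move, while a fuzzy mind corrupts its perception of the opponent's last move:

```python
import random

def trembling_hand(intended_move, error_rate=0.05):
    # The player means to make one move but occasionally slips and
    # plays the opposite one.
    if random.random() < error_rate:
        return 'D' if intended_move == 'C' else 'C'
    return intended_move

def fuzzy_mind(observed_move, error_rate=0.05):
    # The player occasionally misremembers or misreads what the
    # opponent actually did in the last round.
    if random.random() < error_rate:
        return 'D' if observed_move == 'C' else 'C'
    return observed_move
```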

      The significant role of noise in the evolution of cooperation was first pointed out in a paper in the journal Nature by Robert May of Oxford University, a brilliant former physicist who would come to exert a profound influence on theoretical biology. Bob (being Australian, he prefers “Bob”) is best known for the great strides he made in putting ecology on a mathematical basis. In his short essay he argued that evolutionary biologists should study the influence of mistakes on the repeated Prisoner’s Dilemma. He realized that the conclusions from a game that is perfectly played, as was the case in Axelrod’s tournaments, are not necessarily robust or realistic.

      This is an important point. Even infrequent mistakes can have devastating consequences. When pitched against another player adopting the same approach, the Tit-for-Tat strategy can trigger endless cycles of retaliation. Since all it knows how to do is strike back at defectors, one scrambled signal or slipup can send Tit for Tat spiraling ever downward into vendettas that overshadow those seen in Romeo and Juliet, between the Hatfields and McCoys, or anything witnessed in Corsica, for that matter. The obvious way to end this bloody spiral of retaliation is to let bygones be bygones: for example, only to demand revenge now and again, or to decide it by the throw of a die. Inspired by this important insight, I would extend Axelrod’s pioneering work and incorporate the effects of noise to make it more true to life.
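      A small simulation makes both the problem and the remedy visible. In this sketch (my own illustration, with arbitrary error and forgiveness rates, not a reconstruction of any published model) two Tit-for-Tat players suffer occasional trembling-hand slips. With no forgiveness, a single slip locks them into rounds of alternating retaliation; a modest chance of overlooking the opponent's last defection lets them drift back to mutual cooperation.

```python
import random

def respond(opponent_last, error_rate, forgiveness):
    # Tit for Tat with noise: copy the opponent's last move, sometimes
    # forgive a defection, and sometimes slip when trying to cooperate.
    intended = opponent_last
    if intended == 'D' and random.random() < forgiveness:
        intended = 'C'                       # let bygones be bygones
    if intended == 'C' and random.random() < error_rate:
        intended = 'D'                       # trembling hand
    return intended

def noisy_match(rounds=30, error_rate=0.05, forgiveness=0.0):
    # Two noisy Tit-for-Tat players; returns the sequence of joint moves.
    last_a, last_b = 'C', 'C'
    history = []
    for _ in range(rounds):
        move_a = respond(last_b, error_rate, forgiveness)
        move_b = respond(last_a, error_rate, forgiveness)
        last_a, last_b = move_a, move_b
        history.append(move_a + move_b)
    return history

# Strict Tit for Tat: one slip sets off rounds of alternating retaliation.
print(noisy_match(forgiveness=0.0))
# Occasional forgiveness lets the pair return to mutual cooperation.
print(noisy_match(forgiveness=0.3))
```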

      TAKE
