SuperCooperators. Roger Highfield


      After all, natural selection puts a premium on passing genes to future generations, and how can it shape a behavior that is “altruistic” in the long term when defection offers such tempting short-term rewards? In modern society, a hefty apparatus of law and order ensures that this temptation to cheat will remain, in general, resistible. But how can direct reciprocity work in the absence of authoritarian institutions? Why, in the case of cleaning stations on the reef, do clients refrain from eating their helpful cleaners after the little fish have discharged their duties?

      This issue has been discussed for decades but, from the perspective of my field, was first framed the right way in a paper by Robert Trivers, an American evolutionary biologist. A fascinating character, Trivers, who suffers from bipolar disorder, became steeped in controversy because of his friendship with the leader of the Black Panther Party, Huey Newton. Today, at Rutgers, the State University of New Jersey, he specializes in the study of symmetry in human beings, “especially Jamaican.” Steven Pinker hails Trivers as one of the greats in western intellectual history.

      One of the reasons Pinker rates him so highly is a milestone paper that Trivers published in The Quarterly Review of Biology in 1971, inspired by a visit to Africa, where he had studied baboons. In “The Evolution of Reciprocal Altruism” Trivers highlighted the conundrum of cheats by borrowing a well-known metaphor from game theory. He showed how the conflict between what is beneficial from an individual’s point of view and what is beneficial from the collective’s point of view can be encapsulated in the Prisoner’s Dilemma. As I explained in the last chapter, it is a powerful mathematical metaphor to sum up how defection can undermine cooperation.

      At that time, Trivers did not refer to direct reciprocity but used the term “reciprocal altruism,” where altruism is an unselfish concern for the welfare of others. Although altruism is the opposite of the “selfish” behavior that underpins the more traditional view of evolution, it comes loaded with baggage when it comes to underlying motive. Over the course of this book I hope it will become clear that, although it seems paradoxical, “altruistic” behavior can emerge as a direct consequence of the “selfish” motives of a rational player.

      Among the mechanisms to escape from the clutches of the Prisoner’s Dilemma, the most obvious one, as I have already hinted, is simply to repeat the game. That is why cooperation by direct reciprocity works best within a long-lived community. In many sorts of society, the same two individuals have an opportunity to interact not once but frequently in the village pub, workplace, or indeed the coral reef. A person will think twice about defecting if it makes his co-player decide to defect on the next occasion, and vice versa. The same goes for a fish.

      Trivers was the first to establish the importance of the repeated—also known as the iterated—Prisoner’s Dilemma for biology, so that in a series of encounters between animals, cooperation is able to emerge. He cited examples such as the cleaner fish and the warning cries of birds. What is remarkable is that Trivers went further than this. He discussed how “each individual human is seen as possessing altruistic and cheating tendencies,” from sympathy and trust to dishonesty and hypocrisy.

      Trivers went on to suggest that a large proportion of human emotion and experience—such as gratitude, sympathy, guilt, trust, friendship, and moral outrage—grew out of the same sort of simple reciprocal tit-for-tat logic that governed the daily interactions between big fish and the smaller marine life that scrubbed their gills. These efforts built on earlier attempts to explain how reciprocity drives social behavior. In the Nicomachean Ethics, Aristotle discusses how the best form of friendship involves a relationship between equals—one in which a genuinely reciprocal relationship is possible. In Plato’s Crito, Socrates considers whether citizens might have a duty of gratitude to obey the laws of the state, in much the way they have duties of gratitude to their parents for their existence, sustenance, and education. Overall, one fact shines through: reciprocity rules.

      THE ITERATED DILEMMA

      Since the Prisoner’s Dilemma was first formulated in 1950, it has been expressed in many shapes, forms, and guises. The game had been played in a repeated form before, but Trivers made a new advance when he introduced the repeated game to an analysis of animal behavior. This iterated Prisoner’s Dilemma is possible in a colony of vampire bats and at the cleaning stations used by fish on a reef, which were the subject of Trivers’s paper.

      However, the implications of what happens when the Prisoner’s Dilemma is played over and over again were first described in 1965, before Trivers’s analysis, by a smart double act: Albert Chammah, who had emigrated from Syria to the United States to study industrial engineering, and Anatol Rapoport, a remarkable Russian-born mathematician-psychologist who used game theory to explore the limits of purely rational thinking and would come to dedicate himself to the cause of global peace. In their book, Prisoner’s Dilemma, they gave an account of the many experiments in which the game had been played.

      Around the time that Trivers made his contribution, another key insight into the game had come from the Israeli mathematician Robert J. Aumann, who had advised on cold war arms control negotiations in the 1960s and would go on to share the Nobel Prize in Economics in 2005. Aumann had analyzed the outcome of repeated encounters and demonstrated the prerequisites for cooperation in various situations—for instance, where there are many participants, when there is infrequent interaction, and when participants’ actions lack transparency.

      In the single-shot game, the one that I analyzed earlier in the discussion of the payoff matrix of the Prisoner’s Dilemma, it was logical to defect. But Aumann showed that peaceful cooperation can emerge in a repeated game, even when the players have strong short-term conflicting interests. One player will collaborate with another because he knows that if he is cheated today, he can go on to punish the cheat tomorrow. It seemed that the prospect of vengeful retaliation paves the way for amicable cooperation. By this view, cooperation can emerge out of nothing more than the rational calculation of self-interest. Aumann named this insight the “folk theorem”—one that had circulated by word of mouth and, like so many folk songs, has no original author and has been embellished by many people. In 1959, he generalized it to games between many players, some of whom might gang up on the rest.
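      The logic of “punish the cheat tomorrow” can be made concrete with a small calculation. The sketch below is a simplified illustration of the idea rather than Aumann’s formal argument: it uses the commonly cited payoff values T = 5, R = 3, P = 1, S = 0, a probability w that another round follows each one, and an opponent assumed to retaliate forever once cheated.

```python
# Expected payoffs in the repeated Prisoner's Dilemma against a
# permanent retaliator, with probability w of another round each time.
# Standard payoff values: T(emptation)=5, R(eward)=3, P(unishment)=1, S(ucker)=0.
T, R, P, S = 5, 3, 1, 0

def cooperate_forever(w):
    # R every round: R * (1 + w + w^2 + ...) = R / (1 - w)
    return R / (1 - w)

def defect_once_then_punished(w):
    # T now, then P in every later round once the opponent retaliates.
    return T + P * w / (1 - w)

# With a short shadow of the future, defection wins...
print(cooperate_forever(0.1), defect_once_then_punished(0.1))
# ...but when continued play is likely, cooperation pays more.
print(cooperate_forever(0.9), defect_once_then_punished(0.9))
```

With these payoff values the crossover comes at w = 0.5: only when the game is likely enough to continue does the threat of tomorrow’s punishment outweigh today’s temptation.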

      This theorem, though powerful, does not tell you how to play the game when it is repeated. The folk theorem says there is a strategy that can induce a rational opponent to cooperate, but it does not say what is a good strategy and what is a bad one. So, for example, it could show that cooperation is a good response to the Grim strategy. That strategy says that I will cooperate as long as you cooperate, but if you defect once then I will permanently switch to defection. In reality, such strategies are far from being the best way to stimulate cooperation in long-drawn-out games.
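      As a concrete illustration, the Grim strategy takes only a line of code to state. This is a minimal sketch in Python; the move encoding and function name are mine, not from the text.

```python
# Grim (trigger) strategy: cooperate until the opponent defects once,
# then defect in every subsequent round, forever.
def grim(opponent_history):
    """Return 'C' or 'D' given the list of the opponent's moves so far."""
    return 'D' if 'D' in opponent_history else 'C'

# Against a steady cooperator, Grim keeps cooperating...
print([grim(['C'] * n) for n in range(3)])               # ['C', 'C', 'C']
# ...but a single defection flips it to permanent defection.
print([grim(['C', 'D'] + ['C'] * n) for n in range(3)])  # ['D', 'D', 'D']
```

The unforgiving trigger is exactly what makes Grim a poor stimulant of cooperation in long games: one mistake, and all future gains from mutual cooperation are thrown away.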

      To find out how to play the game, thinkers in the field had to wait for a novel kind of tournament, one that would shed light on all the nuances of the repeated Prisoner’s Dilemma. This was developed by Robert Axelrod, a political scientist at the University of Michigan, who turned the results into a remarkable book, The Evolution of Cooperation, which opens with the arresting line “Under what conditions will cooperation emerge in a world of egoists without central authority?” In his direct prose, Axelrod clearly described how he had devised a brilliant new way to tease out the intricacies of the Dilemma.

      He organized an unusual experiment, a virtual tournament in a computer. The “contestants” were programs submitted by scientists so they could be pitted against each other in repeated round-robin Prisoner’s Dilemma tournaments. This was the late 1970s and at that time the idea was breathtakingly novel. To put his tournaments in context—commercial, coin-operated video games had only appeared that same decade. But Axelrod’s idea was no arcade gimmick. Unlike humans, who get bored, computers can tirelessly play these strategies against each other and unbendingly stick to the rules.
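      The flavor of such a tournament can be captured in a few lines of Python. The strategies below (always cooperate, always defect, and tit for tat) are illustrative stand-ins rather than Axelrod’s actual entries, and the payoffs are the standard T = 5, R = 3, P = 1, S = 0 values.

```python
# Miniature round-robin iterated Prisoner's Dilemma tournament.
# PAYOFF maps (my move, opponent's move) to (my points, opponent's points).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def all_c(mine, theirs):
    return 'C'                              # always cooperate

def all_d(mine, theirs):
    return 'D'                              # always defect

def tit_for_tat(mine, theirs):
    return theirs[-1] if theirs else 'C'    # copy the opponent's last move

def match(s1, s2, rounds=200):
    """Play two strategies against each other; return their total scores."""
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        score1 += p1; score2 += p2
    return score1, score2

strategies = {'ALLC': all_c, 'ALLD': all_d, 'TFT': tit_for_tat}
totals = {name: 0 for name in strategies}
for n1, s1 in strategies.items():           # round-robin: each pair meets once
    for n2, s2 in strategies.items():
        if n1 < n2:
            a, b = match(s1, s2)
            totals[n1] += a; totals[n2] += b
print(totals)
```

In this tiny field the exploitable always-cooperator hands the defector a high score; Axelrod’s tournaments drew on a far richer ecology of submitted strategies, and it was in that setting that reciprocating strategies such as tit for tat came out on top.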

      Researchers
