
Can Game Theory Combat Discrimination?




When and why do people cooperate or compete? Researchers at the RAND Corporation studied this question in the 1950s using what was then a new decision science called game theory. Game theory was developed during World War II by the Hungarian-born mathematician and leading Manhattan Project contributor John von Neumann and the Austrian economist Oskar Morgenstern. Operations researchers immediately applied it to military logistics and to developing a science of military decision making. By the late 1950s it was being applied to nuclear deterrence, with the future Nobel laureate economist Thomas Schelling publishing The Strategy of Conflict in 1960.

RAND Corporation researchers developed one particular game called the Prisoner’s Dilemma. In this scenario, two coconspirators have been jailed by a clever district attorney who offers each of them a choice: confess to the crime or remain silent. If both remain silent, they will both go free. If both confess, they will both receive five-year sentences. However, if one confesses and the other stays silent, then the confessor will go free and receive a handsome reward while the other receives a 10-year sentence. For those not schooled in strategic logic, it may seem obvious that both should stay silent. And yet, according to game theory, it is rational for each to confess. Hence both end up with a worse outcome than if the two had cooperated.
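To see why, one can write out the payoffs and check that confessing is a best response to whichever choice the partner makes. The following Python sketch is only an illustrative encoding of the story above (prison years as negative utilities, plus a small bonus for the rewarded lone confessor); the numbers are my assumptions, not a canonical parameterization.

```python
# Illustrative payoffs for the district attorney's offer described above.
# Years in prison are encoded as negative utilities; the "handsome reward"
# for a lone confessor is a small positive bonus on top of going free.
REWARD_BONUS = 1
payoffs = {
    ("silent", "silent"): (0, 0),                # both go free
    ("silent", "confess"): (-10, REWARD_BONUS),  # I am the sucker
    ("confess", "silent"): (REWARD_BONUS, -10),  # my partner is the sucker
    ("confess", "confess"): (-5, -5),            # mutual confession
}

def best_response(partner_move):
    """Return my best move, assuming the partner's move is fixed."""
    return max(("silent", "confess"),
               key=lambda my_move: payoffs[(my_move, partner_move)][0])

# Confessing is the best reply to either of the partner's moves (a dominant
# strategy), yet mutual confession leaves both worse off than mutual silence.
assert best_response("silent") == "confess"
assert best_response("confess") == "confess"
print(payoffs[("confess", "confess")], "vs", payoffs[("silent", "silent")])
```

The same dominance check goes through for any numbers that preserve the ordering of the four outcomes, which is what makes the dilemma so general.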

RAND researchers were fascinated with the Prisoner’s Dilemma game, which social scientists would subsequently apply to everything from nuclear deterrence and international-relations anarchy to the social contract, free riding, and public-goods distribution. One of the people involved in developing the Prisoner’s Dilemma was Anatol Rapoport, who wrote about it in his book of the same name. The RAND researchers helped to establish game theory’s orthodox position that mutual defection from cooperation is the rational standard of behavior in the Prisoner’s Dilemma. Rapoport disagreed. He thought it obvious that mutual cooperation is a superior outcome, and that it is rational for each individual to cooperate in order that the two may jointly achieve a good result. He thought that the two approaches to the Prisoner’s Dilemma, one upholding the rationality of strategic self-interest and the other looking toward mutualistic cooperation, represented the two vying global ideologies of capitalism and communism.

As documented in his coauthored book Prisoner’s Dilemma, Rapoport conducted experiments on undergraduates at the University of Michigan to investigate how pairs would play repeated encounters with the Prisoner’s Dilemma structure. Working as a mathematical psychologist, he sought to understand how individuals would choose to either cooperate or defect. These experiments showed that two players typically cooperated at first but then tried defecting, which resulted in a mutually destructive outcome. After this, players tended to recover by learning to cooperate. Rapoport had an idiosyncratic interpretation of game theory. He argued that the orthodox position holding it rational to end up with a mutually destructive outcome is a paradox that “forces a reexamination of our concept of rational decision.” He argued that strategic rationality is a limited paradigm that shows we need a richer concept of rational action. What would the world be like if we all accepted the orthodox game-theory perspective that it is fundamentally rational to be antisocial in choosing one’s own gain, only to end up in a position of mutual compromise? To be fair to mainstream political scientists and economists, it seemed wise to build society on a platform of egoistic individualism rather than assume everyone’s angelic better nature. However, as I have argued throughout my work, assuming selfish behavior and building institutions to accommodate self-serving workers and clients can become a self-fulfilling prophecy.

I discovered game theory and the Prisoner’s Dilemma as a graduate student in the 1990s. I read William Poundstone’s book, also called Prisoner’s Dilemma, which treats this game as a prism through which to understand the Cold War. Whereas Rapoport contrasted the individualistic and collectivist solutions to the dilemma as the basis for the ideologies of capitalism and communism, Poundstone viewed the game as the Rosetta Stone for resolving numerous perplexing problems confronting humankind. Given the orthodox conclusion that it is rational for everyone to defect, the challenge was to identify means of averting the bad outcome in which all individuals defect. As a journalist, Poundstone reported on the numerous dilemmas social scientists identified as Prisoner’s Dilemmas.

For example, consider nuclear brinkmanship and the Cold War nuclear arms race. Researchers modeled both of these problems as a Prisoner’s Dilemma. When it comes to the reciprocal fear of surprise attack, many American game theorists argued that both the US and the USSR would most prefer to launch a devastating first strike on the other nation before it could retaliate. Using the Prisoner’s Dilemma, Thomas Schelling argued that if both nations achieved a secure second-strike capability, they would balance each other’s power. This stance came to be referred to as Mutual Assured Destruction (MAD). Game theorists reasoned that both the US and the USSR would most prefer to win the arms race, leaving the other nation in a submissive position. MAD could also resolve this security dilemma if, beyond a certain number of thermonuclear warheads, any more would be superfluous for maintaining secure second-strike deterrence. The Prisoner’s Dilemma was also used to model Thomas Hobbes’s problem of achieving social order from anarchy in a state of nature. Here the idea was different: theorists argued that if a government were strong enough to punish individuals for failing to cooperate, then that society would be able to achieve social order.

As a graduate student, I studied the arguments that used the Prisoner’s Dilemma to identify solutions to social problems. Rapoport’s numerous books on game theory opposed the mainstream work in social science. Where the mainstream strove to identify solutions consistent with game theory, Rapoport consistently argued that a better future lay in overcoming the limitations inherent in game theory.

In the 1960s, Rapoport engaged Thomas Schelling in this debate. Schelling popularized game theory as a means of thinking about military strategy, and about human decision-making in everyday interactions more broadly. His book The Strategy of Conflict led Rapoport to respond with Strategy and Conscience. It struck me that Rapoport was making an important point: orthodox strategic rationality is only a limited perspective on rational decision-making. Rapoport argued that game theory, with its relentless promotion of strategic self-interest, undermines civil institutions and social practices built out of individuals’ voluntary cooperation. Schelling understood this but was unmoved. He found Rapoport naive because he “wants to redefine rationality so that it is collectively rational to make cooperative choices even though individual incentives go the other way.”1

Rapoport believes that people can act out of conscience, which would take into consideration the impact of their actions on others. Schelling counters that it would be foolish to assume that others will act out of conscience. To this point, the two theorists agree. But where they disagree, and where Schelling does not have a compelling counterargument, is that Rapoport worries that game theory teaches individuals to be detached and to act without moral scruples or conscience. Schelling quotes Rapoport’s contention that game theory can provoke “actions which in themselves can induce a reorientation in thinking about international relations, that is, actions which bring about changes in the political climate.”2 This disagreement between Rapoport and Schelling has been fundamental to my research for both my books, Rationalizing Capitalist Democracy: Cold War Origins of Rational Choice Liberalism (2003) and Prisoners of Reason: Game Theory and Neoliberal Political Economy (2016). Rapoport understands that learning game theory leads people to accept that it is rational to act without considerations of morality or conscience. Schelling counters that we cannot assume that people will act morally or scrupulously. However, he does not confront Rapoport’s main point: game theory teaches that it is irrational to act with moral conscience. There is a big difference between urging citizens and diplomats not to act on the naive belief that others will be generous, kind, and cooperative, and insisting that it is rational to be narrowly self-interested and uncooperative.

Rapoport was brilliant in articulating how game theorists constructed a limited view of rationality and then limited their understanding of human motivation to be consistent with this perspective. His experiments on the Prisoner’s Dilemma showed people’s reluctance to be narrowly self-interested and their tendency to learn to cooperate. Based on this research, he crafted a proverbial middle-way solution to the Prisoner’s Dilemma. Calling it tit for tat, he argued that people can learn to be conditionally altruistic. This means that they cooperate with others and continue to do so unless the other person fails to cooperate, at which point they also withdraw cooperation.
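Stated as a rule for the repeated game, the strategy is tiny. A minimal sketch, assuming moves are recorded as the strings "cooperate" and "defect":

```python
def tit_for_tat(my_history, opponent_history):
    """Cooperate on the first move; afterwards, mirror the opponent's last move."""
    if not opponent_history:
        return "cooperate"
    return opponent_history[-1]
```

A single defection is punished with a single defection, and cooperation resumes as soon as the other player returns to it.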

Rapoport’s tit for tat solution is famous as a recognized way out of situations in which everyone may be tempted to cheat or free ride but can learn to reciprocate support of a cooperative project. He developed this solution as a computer program that he entered into a computerized tournament in which varying strategies were paired off against each other to see which would win (see the sketch below). Tit for tat was the tournament winner and is acknowledged in Robert Axelrod’s well-known book The Evolution of Cooperation. Game theory has been used to model all types of social interactions, from marriage and family planning to bargaining and public policies. As though opening a new chapter in the debate between Thomas Schelling and Anatol Rapoport, I used the wisdom of both of my predecessors to address a puzzle: why, despite the numerous applications of game theory, is there still no benchmark model of systemic discrimination?
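The round-robin logic of such a tournament can be reconstructed schematically. The sketch below is not Axelrod’s actual field of entries: it uses the conventional iterated Prisoner’s Dilemma point values (3 for mutual cooperation, 1 for mutual defection, 5 for a lone defector, 0 for a lone cooperator) and only four illustrative strategies, so it echoes rather than reproduces his result. With this small field, the “nice but retaliatory” strategies, tit for tat and grim trigger, finish ahead of always defecting and always cooperating.

```python
# Conventional iterated Prisoner's Dilemma point values (to the row player):
# mutual cooperation 3, mutual defection 1, lone defector 5, lone cooperator 0.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(mine, theirs):
    return "C" if not theirs else theirs[-1]

def grim_trigger(mine, theirs):
    return "D" if "D" in theirs else "C"      # never forgives a defection

def always_defect(mine, theirs):
    return "D"

def always_cooperate(mine, theirs):
    return "C"

def play_match(strategy_a, strategy_b, rounds=200):
    """Play one iterated match and return the two total scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

entries = [("tit_for_tat", tit_for_tat), ("grim_trigger", grim_trigger),
           ("always_defect", always_defect), ("always_cooperate", always_cooperate)]
totals = {name: 0 for name, _ in entries}
for i, (name_a, strat_a) in enumerate(entries):
    for name_b, strat_b in entries[i:]:       # round robin, including self-play
        score_a, score_b = play_match(strat_a, strat_b)
        totals[name_a] += score_a
        if name_b != name_a:
            totals[name_b] += score_b

for name, total in sorted(totals.items(), key=lambda item: -item[1]):
    print(name, total)
```

Tit for tat scores well not by beating any single opponent head-to-head but by eliciting long runs of mutual cooperation, which is exactly the property Axelrod highlighted.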

Gender-based and ethnic discrimination have presented challenges to the fair distribution of resources within Western institutions since at least the Enlightenment and its legacy of capitalist democracy. Knowing Rapoport’s work well, and how his tit for tat strategy laid the groundwork for finding a path toward cooperation, I built a computerized simulation of a slightly different game from the Prisoner’s Dilemma. In the Prisoner’s Dilemma, everyone prefers to withdraw from cooperation rather than be the only person who cooperates. In the model I created, called Hawk Dove Binary, mutual confrontation when both parties refuse to cooperate is worse than simply being the sucker whom others cheat and free ride upon.

Hawk Dove invites a constructive solution, much like the Prisoner’s Dilemma. It is even amenable to the same type of solution, such as tit for tat, provided that all actors are alike. However, in the particular case in which a marker makes each individual a member of a group, such as being white or nonwhite, or male or female, the Hawk Dove game has a peculiar property. It shows that, without anyone intending to be sexist or racist, when individuals play the game out of narrow self-interest over rounds of repeated play in a population that regularly faces new partners, a global pattern of discrimination emerges: one type of actor always submits to and is dominated by the other type of actor.
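To make the mechanism concrete, here is a toy simulation in the same spirit. It is my own illustrative reconstruction, not the Hawk Dove Binary model itself: the payoff values, population size, and simple reinforcement-style learning rule are all assumptions. Agents carry a group tag, are repeatedly matched with random partners, and learn which action has paid off better against each tag. In typical runs, the cross-group encounters drift into an asymmetric convention in which one group mostly plays Hawk (dominates) while the other mostly plays Dove (submits); which group ends up on top is an accident of the random seed, not of anyone’s intention.

```python
import random
from collections import defaultdict

# Illustrative Hawk-Dove payoffs: V is the value of the contested resource,
# C the cost of a Hawk-Hawk fight, with C > V so mutual confrontation is
# worse than backing down and being the "sucker."
V, C = 2.0, 6.0
HAWK, DOVE = "H", "D"

def payoff(my_move, other_move):
    """Payoff to the first player in a single Hawk-Dove encounter."""
    if my_move == HAWK and other_move == HAWK:
        return (V - C) / 2      # costly mutual confrontation
    if my_move == HAWK and other_move == DOVE:
        return V                # the Hawk takes everything
    if my_move == DOVE and other_move == HAWK:
        return 0.0              # the Dove submits but avoids the fight
    return V / 2                # two Doves share

class Agent:
    def __init__(self, tag):
        self.tag = tag          # group marker, e.g. "A" or "B"
        # estimated payoff of each action, conditioned on the opponent's tag
        self.value = defaultdict(lambda: {HAWK: 0.0, DOVE: 0.0})

    def act(self, opponent_tag, explore=0.05):
        if random.random() < explore:          # occasional experimentation
            return random.choice([HAWK, DOVE])
        estimates = self.value[opponent_tag]
        return HAWK if estimates[HAWK] > estimates[DOVE] else DOVE

    def learn(self, opponent_tag, action, reward, step=0.1):
        # recency-weighted average of past payoffs for this action and tag
        self.value[opponent_tag][action] += step * (reward - self.value[opponent_tag][action])

def simulate(n_agents=100, rounds=20000, seed=0):
    random.seed(seed)
    agents = [Agent("A" if i < n_agents // 2 else "B") for i in range(n_agents)]
    for _ in range(rounds):
        x, y = random.sample(agents, 2)        # fresh random pairing each round
        move_x, move_y = x.act(y.tag), y.act(x.tag)
        x.learn(y.tag, move_x, payoff(move_x, move_y))
        y.learn(x.tag, move_y, payoff(move_y, move_x))
    # Report how each group has learned to treat the other group.
    for group, other in (("A", "B"), ("B", "A")):
        hawks = sum(1 for a in agents if a.tag == group and a.act(other, explore=0) == HAWK)
        print(f"group {group} vs group {other}: {hawks}/{n_agents // 2} play Hawk")

if __name__ == "__main__":
    simulate()
```

The group tag does all the work here: it gives self-interested learners something to condition on, and once a correlated pattern of one group pressing and the other deferring takes hold, it becomes individually costly for anyone to deviate from it. Without the tag, there is nothing for such a group-level convention to latch onto.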

This challenge of averting unfair outcomes resulting from discriminatory treatment is as urgent as the problem of cooperation faced by game theorists in the 1960s and 1970s. I hope that my knowledge of Rapoport’s contributions, and of this type of dilemma more generally, will contribute to a potential remedy. Rapoport’s tit for tat approach, as a means of building institutions that bring out individuals’ cooperative sides, may offer some insights into how to overcome persistent discriminatory laws and social norms.

This article was commissioned by Caitlin Zaloom.

  1. Thomas C. Schelling, “Strategy and Conscience by Anatol Rapoport,” American Economic Review, vol. 54, no. 6 (1964), p. 1087.

  2. Schelling, “Strategy and Conscience by Anatol Rapoport,” p. 1088.

