As will hold for the following tables, the most preferred outcome is indicated with a 4, and the least preferred outcome is indicated with a 1. Actor A's preference order: DC > CC > DD > CD; Actor B's preference order: CD > CC > DD > DC. From that moment on, the tenuous bonds keeping together the larger band of weary, untrusting hunters will break and the stag will be lost. Does a more optimistic or pessimistic perception of an actor's own or its opponent's capabilities affect which game model they adopt? [32] Paul Mozur, Beijing Wants A.I. [51] An analogous scenario in the context of the AI Coordination Problem could be if both international actors have developed, but not yet unleashed, an ASI, where knowledge of whether the technology will be beneficial or harmful is still uncertain. As new technological developments bring us closer and closer to ASI[27] and the beneficial returns to AI become more tangible and lucrative, a race-like competition between key players to develop advanced AI will become acute, with potentially severe consequences for safety. Downs et al.[56] look at three different types of strategies governments can take to reduce the level of arms competition with a rival: (1) a unilateral strategy, where an actor's individual actions affect race dynamics (for example, by shifting to defensive weapons[57]); (2) a tacit bargaining strategy that ties defensive expenditures to those of a rival; and (3) a negotiation strategy composed of formal arms talks. At the same time, a growing literature has illuminated the risk that developing AI has of leading to global catastrophe[4] and further pointed out the effect that racing dynamics have on exacerbating this risk. Actor A's preference order: DC > DD > CC > CD; Actor B's preference order: CD > DD > CC > DC.
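The ordinal preference orderings used in these tables can be encoded directly. The sketch below (Python, purely illustrative; the helper name `ordinal_payoffs` and the dictionary encoding are ours, not the essay's) builds the Prisoner's Dilemma matrix from the two orderings given above:

```python
# Ordinal encoding used in the tables: 4 = most preferred, 1 = least preferred.
# Outcome labels are from Actor A's perspective: "DC" = A defects, B cooperates.

def ordinal_payoffs(pref_order):
    """Map a preference order like ['DC', 'CC', 'DD', 'CD'] to ordinal payoffs."""
    return {outcome: 4 - rank for rank, outcome in enumerate(pref_order)}

# Prisoner's Dilemma orderings quoted above:
a = ordinal_payoffs(["DC", "CC", "DD", "CD"])   # Actor A: DC > CC > DD > CD
b = ordinal_payoffs(["CD", "CC", "DD", "DC"])   # Actor B: CD > CC > DD > DC

# Joint payoff matrix: rows are A's move, columns are B's move.
matrix = {(ma, mb): (a[ma + mb], b[ma + mb]) for ma in "CD" for mb in "CD"}
print(matrix[("D", "C")])  # (4, 1): A's temptation payoff against a cooperator
```

Substituting the other orderings listed in this section (for example DC > DD > CC > CD) yields that game's matrix in exactly the same way.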
In the US, the military and intelligence communities have a long-standing history of supporting transformative technological advancements such as nuclear weapons, aerospace technology, cyber technology and the Internet, and biotechnology. [14] IBM, Deep Blue, Icons of Progress, http://www-03.ibm.com/ibm/history/ibm100/us/en/icons/deepblue/. See Carl Shulman, Arms Control and Intelligence Explosions, 7th European Conference on Computing and Philosophy, Bellaterra, Spain, July 24, 2009: 6. As stated before, achieving a scenario where both actors perceive themselves to be in a Stag Hunt is the most desirable situation for maximizing safety from an AI catastrophe, since both actors are primed to cooperate and will maximize their benefits from doing so. Especially as prospects of coordinating are continuous, this can be a promising strategy to pursue, with the support of further landscape research to more accurately assess payoff variables and what might cause them to change. Beding (2008), but also in international relations (Jervis 1978) and macroeconomics (Bryant 1994). Here, both actors demonstrate varying uncertainty about whether they will develop a beneficial or harmful AI alone, but they both equally perceive the potential benefits of AI to be greater than the potential harms. This table contains an ordinal representation of a payoff matrix for a Prisoner's Dilemma game. The story is briefly told by Rousseau in A Discourse on Inequality: "If it was a matter of hunting a deer, everyone well realized that he must remain faithful to his post; but if a hare happened to pass within reach…" Here, if they all act together they can successfully reproduce, but success depends on the cooperation of many individual protozoa. Downs et al. For example, suppose we have a prisoner's dilemma as pictured in Figure 3. The academic example is the Stag Hunt. However, a hare is seen by all hunters moving along the path.
In times of stress, individual unicellular protists will aggregate to form one large body. Payoff matrix for simulated Stag Hunt. The coincident timing of high-profile talks with a leaked report that President Trump seeks to reduce troop levels by half has already triggered a political frenzy in Kabul. Combining both countries' economic and technical ecosystems with government pressures to develop AI, it is reasonable to conceive of an AI race primarily dominated by these two international actors. If one side cooperates with and one side defects from the AI Coordination Regime, we can expect their payoffs to be expressed as follows (here we assume Actor A defects while Actor B cooperates): for the defector (here, Actor A), the benefit from an AI Coordination Regime consists of the probability that they believe such a regime would achieve a beneficial AI times Actor A's perceived benefit of receiving AI with distributional considerations [P_(b|A)(A∨B) * b_A * d_A]. But cooperation is not easy. The corresponding payoff matrix is displayed as Table 14. Using game theory as a way of modeling strategically motivated decisions has direct implications for understanding basic international relations issues. The remainder of this subsection briefly examines each of these models and its relationship with the AI Coordination Problem. They can cheat on the agreement and hope to gain more than the first nation, but if they both cheat, they both do very poorly. In testing the game's effectiveness, I found that students who played the game scored higher on the exam than students who did not play. For instance, if the expected punishment is 2, then the imposition of this punishment turns the above prisoner's dilemma into the stag hunt given at the introduction.
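The punishment claim at the end of this passage can be checked mechanically. The sketch below uses illustrative cardinal payoffs (T, R, P, S) = (4, 3, 2.5, 0); these are assumptions of ours, not the numbers in the text's Figure 3. It subtracts an expected punishment from every defection payoff and recomputes the pure-strategy Nash equilibria:

```python
# Hypothetical cardinal Prisoner's Dilemma payoffs (T, R, P, S); illustrative
# stand-ins, not the numbers from the text's Figure 3.
T, R, P, S = 4, 3, 2.5, 0

def game(punishment=0.0):
    """Symmetric 2x2 game; `punishment` is the expected fine for defecting."""
    return {
        ("C", "C"): (R, R),
        ("C", "D"): (S, T - punishment),
        ("D", "C"): (T - punishment, S),
        ("D", "D"): (P - punishment, P - punishment),
    }

def pure_nash(g):
    """Profiles where neither player gains from a unilateral deviation."""
    eq = []
    for a in "CD":
        for b in "CD":
            a_best = g[(a, b)][0] >= max(g[(x, b)][0] for x in "CD")
            b_best = g[(a, b)][1] >= max(g[(a, y)][1] for y in "CD")
            if a_best and b_best:
                eq.append((a, b))
    return eq

print(pure_nash(game(0)))  # [('D', 'D')]: the Prisoner's Dilemma equilibrium
print(pure_nash(game(2)))  # [('C', 'C'), ('D', 'D')]: two equilibria, as in a stag hunt
```

With the punishment in place, mutual cooperation becomes a second equilibrium alongside mutual defection, which is the assurance-game structure the text describes.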
The 18th century political philosopher Jean-Jacques Rousseau famously described a dilemma that arises when a group of hunters sets out in search of a stag: to catch the prized male deer, they must cooperate, waiting quietly in the woods for its arrival. [54] In a bilateral AI development scenario, the distribution variable can be described as an actor's likelihood of winning * percent of benefits gained by the winner (this would be reflected in the terms of the Coordination Regime). Course blog for INFO 2040/CS 2850/Econ 2040/SOC 2090, Link: http://www.socsci.uci.edu/~bskyrms/bio/papers/StagHunt.pdf. Although the development of AI at present has not yet led to a clear and convincing military arms race (although this has been suggested to be the case[43]), the elements of the arms race literature described above suggest that AI's broad and wide-encompassing capacity can lead actors to see AI development as a threatening technological shock worth responding to with reinforcements or augmentations of one's own security, perhaps through bolstering one's own AI development program. The closest approximation of this in international relations is universal treaties, like the Kyoto Protocol environmental treaty. For Rousseau, in his famous parable of the stag hunt, war is inevitable because of the security dilemma and the lack of trust between states. This variant of the game may end with the trust rewarded, and it may result with the trusting party alone receiving the full penalty, thus leading to a new game of revenge. This situation is often seen as a useful analogy for many kinds of social cooperation, such as international agreements on climate change.[1]
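The distribution variable in note [54] is a simple product and can be written out explicitly. The function name and all numbers below are invented for illustration; they are not estimates from the essay:

```python
# Distribution variable from note [54]: an actor's expected share of AI's
# benefits is (likelihood of winning) * (share of benefits the winner keeps).
# All numbers are invented illustrations.

def distribution(p_win, winner_share):
    """d_A = P(actor wins the race) * fraction of benefits the winner captures."""
    return p_win * winner_share

d_A = distribution(p_win=0.5, winner_share=0.6)  # an even race; winner keeps 60%
print(d_A)  # 0.3
```

Under a Coordination Regime, the `winner_share` term would be set by the regime's terms, which is why the note says the variable is reflected there.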
For example, in a scenario where the United States and Russia are competing to be the first to land on the moon, the stag hunt would allow the two countries to work together to achieve this goal when they would otherwise have gone their separate ways and attempted the lunar landing on their own. We can see through studying the Stag Hunt game that, even though we are selfish, we are ironically still aiming for mutual benefit, and thus we tend to follow such a social contract. On the other hand, Glaser[46] argues that rational actors under certain conditions might opt for cooperative policies. Structural conflict prevention refers to long-term interventions that aim to transform key socioeconomic, political, and institutional factors that could lead to conflict. This is visually represented in Table 2, with each actor's preference order explicitly outlined. Assurance game is a generic name for the game more commonly known as Stag Hunt. The French philosopher Jean-Jacques Rousseau presented the following scenario. Is human security a useful approach to security? [11] McKinsey Global Institute, Artificial Intelligence: The Next Digital Frontier, June 2017, https://www.mckinsey.com/~/media/McKinsey/Industries/Advanced%20Electronics/Our%20Insights/How%20artificial%20intelligence%20can%20deliver%20real%20value%20to%20companies/MGI-Artificial-Intelligence-Discussion-paper.ashx: 5 (estimating major tech companies in 2016 spent $20-30 billion on AI development and acquisitions). [8] Elsa Kania, Beyond CFIUS: The Strategic Challenge of China's Rise in Artificial Intelligence, Lawfare, June 20, 2017, https://www.lawfareblog.com/beyond-cfius-strategic-challenge-chinas-rise-artificial-intelligence (highlighting legislation considered that would limit Chinese investments in U.S.
artificial intelligence companies and other emerging technologies considered crucial to U.S. national security interests). [29] There is a scenario where a private actor might develop AI in secret from the government, but this is unlikely to be the case as government surveillance capabilities improve. A day passes. [4] Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford University Press, 2014). [10] AI expert Andrew Ng says AI is the new electricity | Disrupt SF 2017, TechCrunch Disrupt SF 2017, TechCrunch, September 20, 2017, https://www.youtube.com/watch?v=uSCka8vXaJc. You note that the temptation to cheat creates tension between the two trading nations, but you could phrase this much more strongly: theoretically, both players SHOULD cheat. One significant limitation of this theory is that it assumes that the AI Coordination Problem will involve two key actors. Actor A's preference order: DC > CC > CD > DD; Actor B's preference order: CD > CC > DC > DD. To begin exploring this, I now look to the literature on arms control and coordination. Prisoner's Dilemma, Stag Hunt, Battle of the Sexes, and Chicken are discussed in our text. The original stag hunt dilemma is as follows: a group of hunters have tracked a large stag and found it to follow a certain path. Advanced AI technologies have the potential to provide transformative social and economic benefits like preventing deaths in auto collisions,[17] drastically improving healthcare,[18] reducing poverty through economic bounty,[19] and potentially even finding solutions to some of our most menacing problems like climate change.[20] Payoff variables for simulated Prisoner's Dilemma. If they are discovered, or do not cooperate, the stag will flee, and all will go hungry.
A sudden drop in current troop levels will likely trigger a series of responses that undermine the very peace and stability the United States hopes to achieve. Meanwhile, the escalation of an arms race where neither side halts or slows progress is less desirable for each actor's safety than both fully entering the agreement. The remainder of this subsection looks at numerical simulations that result in each of the four models and discusses potential real-world hypotheticals these simulations might reflect. Additionally, Koubi[42] develops a model of military technological races that suggests the level of spending on research and development varies with changes in an actor's relative position in a race. For example, Jervis highlights the distinguishability of offensive-defensive postures as a factor in stability. [13] And impressive victories over humans in chess by AI programs[14] are being dwarfed by AI's ability to compete with and beat humans at exponentially more difficult strategic endeavors like the games of Go[15] and StarCraft. Due to the potential global harms developing AI can cause, it would be reasonable to assume that government actors would try to impose safety measures and regulations on actors developing AI, and perhaps even coordinate on an international scale to ensure that all actors developing AI might cooperate under an AI Coordination Regime[35] that sets, monitors, and enforces standards to maximize safety. Formally, a stag hunt is a game with two pure strategy Nash equilibria: one that is risk dominant and another that is payoff dominant. On the other hand, real-life examples of poorly designed compensation structures that create organizational inefficiencies and hinder success are not uncommon. Finally, in the game of chicken, two sides race toward collision in the hopes that the other swerves from the path first.
Although most authors focus on the prisoner's dilemma as the game that best represents the problem of social cooperation, some authors believe that the stag hunt represents an equally (or more) interesting context in which to study cooperation and its problems (for an overview see Skyrms 2004). [31] Executive Office of the President National Science and Technology Council: Committee on Technology, Preparing for the Future of Artificial Intelligence, Executive Office of the President of the United States (October 2016), https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf; Artificial Intelligence, Automation, and the Economy, Executive Office of the President of the United States (December 2016), https://obamawhitehouse.archives.gov/sites/whitehouse.gov/files/documents/Artificial-Intelligence-Automation-Economy.PDF. Furthermore, a unilateral strategy could be employed under a Prisoner's Dilemma in order to effect cooperation. Half a stag is better than a brace of rabbits, but the stag will only be brought down with a… [21] Jackie Snow, Algorithms are making American inequality worse, MIT Technology Review, January 26, 2018, https://www.technologyreview.com/s/610026/algorithms-are-making-american-inequality-worse/; The Boston Consulting Group & Sutton Trust, The State of Social Mobility in the UK, July 2017, https://www.suttontrust.com/wp-content/uploads/2017/07/BCGSocial-Mobility-report-full-version_WEB_FINAL-1.pdf. But for the argument to be effective against a fool, he must believe that the others with whom he interacts are not Always-Defect fools. [12] Apple Inc., Siri, https://www.apple.com/ios/siri/. [25] For more on the existential risks of superintelligence, see Bostrom (2014) at Chapters 6 and 8. This table contains a representation of a payoff matrix. When there is a strong leader present, players are likely to hunt the animal the leader chooses.
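The formal definition quoted in this passage (two pure-strategy Nash equilibria, one risk dominant and one payoff dominant) can be illustrated with a concrete symmetric stag hunt. The payoffs below are our own illustration, not taken from the text:

```python
# A symmetric stag hunt with illustrative payoffs (ours, not the text's):
# stag together -> 4 each; hare -> 3 regardless; stag alone -> 1.
payoff = {("S", "S"): 4, ("S", "H"): 1, ("H", "S"): 3, ("H", "H"): 3}

# Payoff dominance: (S, S) Pareto-dominates (H, H) because 4 > 3.
payoff_dominant = max([("S", "S"), ("H", "H")], key=lambda p: payoff[p])

# Risk dominance (Harsanyi-Selten): compare the products of deviation losses;
# the equilibrium with the larger product risk-dominates the other.
loss_at_stag = (payoff[("S", "S")] - payoff[("H", "S")]) ** 2  # (4 - 3)^2 = 1
loss_at_hare = (payoff[("H", "H")] - payoff[("S", "H")]) ** 2  # (3 - 1)^2 = 4
risk_dominant = ("S", "S") if loss_at_stag > loss_at_hare else ("H", "H")

print(payoff_dominant, risk_dominant)  # ('S', 'S') ('H', 'H')
```

Hunting the stag is payoff dominant, while hunting the hare is risk dominant: the safe choice costs little when the partner defects, which is exactly the tension the parable describes.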
This is what I will refer to as the AI Coordination Problem. In these abstractions, we assume two utility-maximizing actors with perfect information about each other's preferences and behaviors. [56] Downs et al., Arms Races and Cooperation. [57] This is additionally explored in Jervis, Cooperation Under the Security Dilemma. Moreover, the AI Coordination Regime is arranged such that Actor B is more likely to gain a higher distribution of AI's benefits. [46] Charles Glaser, Realists as Optimists: Cooperation as Self-Help, International Security 19, 3 (1994): 50-90. The 'inherent' right to individual and collective self-defence recognized by Article 51 of the Charter and enforcement measures involving the use of force sanctioned by the Security Council under Chapter VII thereof. As a result, a rational actor should expect to cooperate. As such, Chicken scenarios are unlikely to greatly affect AI coordination strategies but are still important to consider as a possibility. In Just War Theory, what is the doctrine of double effect? This is taken to be an important analogy for social cooperation. The stag is the reason the United States and its NATO allies grew concerned with Afghanistan's internal political affairs in the first place, and they remain invested in preventing networks, such as al-Qaeda and the Islamic State, from employing Afghan territory as a base. Depending on which model is present, we can get a better sense of the likelihood of cooperation or defection, which can in turn inform research and policy agendas to address this. The United States is in the hunt, too. Under the assumption that actors have a combination of both competing and common interests, those actors may cooperate when those common interests compel such action.
For example, Stag Hunts are likely to occur when the perceived harm of developing a harmful AI is significantly greater than the perceived benefit that comes from a beneficial AI. No payoffs (that satisfy the above conditions including risk dominance) can generate a mixed strategy equilibrium where Stag is played with a probability higher than one half. An individual can get a hare by himself, but a hare is worth less than a stag. Human security is an emerging paradigm for understanding global vulnerabilities whose proponents challenge the traditional notion of national security by arguing that the proper referent for security should be the individual rather than the state. Finally, in a historical survey of international negotiations, Garcia and Herz[48] propose that international actors might take preventative, multilateral action in scenarios under the commonly perceived global dimension of future potential harm (for example the ban on laser weapons or the dedication of Antarctica and outer space solely for peaceful purposes). David Hume provides a series of examples that are stag hunts. As the infighting continues, the impulse to forego the elusive stag in favor of the rabbits on offer will grow stronger by the day. In short, the theory suggests that the variables that affect the payoff structure of cooperating or defecting from an AI Coordination Regime determine which model of coordination we see arise between the two actors (modeled after normal-form game setups). The complex machinations required to create a lasting peace may well be under way, but any viable agreement, and the eventual withdrawal of U.S. forces that would entail, requires an Afghan government capable of holding its ground on behalf of its citizens and in the ongoing struggle against violent extremism. Here, we have the formation of a modest social contract. This essay first appeared in the Acheson Prize 2018 Issue of the Yale Review of International Studies.
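The mixed-strategy equilibrium mentioned above comes from an indifference condition, which can be computed directly. A sketch using the generic symmetric labels a > b ≥ d > c (a = stag together, b = hare against stag, c = stag against hare, d = hare together); the particular numbers are our own illustration:

```python
# Mixed-strategy equilibrium of a symmetric stag hunt via indifference.
# Generic payoffs a > b >= d > c: a = stag/stag, b = hare against stag,
# c = stag against hare, d = hare/hare. Numbers below are illustrative.

def mixed_stag_prob(a, b, c, d):
    """Probability of Stag that leaves the opponent indifferent between
    Stag, worth a*p + c*(1 - p), and Hare, worth b*p + d*(1 - p)."""
    return (d - c) / ((a - b) + (d - c))

print(mixed_stag_prob(a=2, b=1, c=0, d=1))  # 0.5
```

With these payoffs each player mixes Stag with probability exactly one half, the boundary case of the bound the text states.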
In the Prisoner's Dilemma, in contrast, despite the fact that both players cooperating is Pareto efficient, the only pure Nash equilibrium is when both players choose to defect. The paper proceeds as follows. Several animal behaviors have been described as stag hunts. Stag Hunt is a game in which the players must cooperate in order to hunt larger game, and with higher participation, they are able to get a better dinner. A common example of the Prisoner's Dilemma in IR is trade agreements. Together, the likelihood of winning and the likelihood of lagging = 1. Your application of the Prisoner's Dilemma (PD) game to international trade agreements raises a few very interesting and important questions for the application of game theory to real-life strategic situations. Table 13. Similar strategic analyses can be done on variables and variable relationships outlined in this model. On the face of it, it seems that the players can then 'agree' to play (c,c); though the agreement is not enforceable, it removes each player's doubt about the other one playing c. Another proposed principle of rationality ("maximin") suggests that I ought to consider the worst payoff I could obtain under any course of action, and choose that action that maximizes it. In this section, I briefly argue that state governments are likely to eventually control the development of AI (either through direct development or intense monitoring and regulation of state-friendly companies)[29], and that the current landscape suggests two states in particular, China and the United States, are most likely to reach development of an advanced AI system first. Table 1. Individuals, factions, and coalitions previously on the same pro-government side have begun to trade accusations with one another.
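The maximin rule described here is easy to state in code. A sketch with illustrative stag hunt payoffs (the numbers are ours, not the text's):

```python
# Maximin: find the worst payoff each action can yield, then choose the
# action whose worst case is best. Payoffs are illustrative stag hunt numbers.
row_payoffs = {
    "Stag": {"Stag": 4, "Hare": 1},  # stag pays 4 if the partner joins, 1 if alone
    "Hare": {"Stag": 3, "Hare": 3},  # hare pays 3 regardless of the partner
}

def maximin(payoffs):
    """Return the action that maximizes the minimum payoff."""
    return max(payoffs, key=lambda action: min(payoffs[action].values()))

print(maximin(row_payoffs))  # 'Hare': the risk-averse recommendation
```

Maximin thus recommends the hare even though mutual stag hunting pays more, which is why the principle is associated with the risk dominant rather than the payoff dominant equilibrium.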
Meanwhile, both actors can still expect to receive the anticipated harm that arises from a Coordination Regime [P_(h|A∨B)(A∨B) * h_(A∨B)]. [6] Aumann proposed: "Let us now change the scenario by permitting pre-play communication." [20] Will Knight, Could AI Solve the World's Biggest Problems?, MIT Technology Review, January 12, 2016, https://www.technologyreview.com/s/545416/could-ai-solve-the-worlds-biggest-problems/. There is no certainty that the stag will arrive; the hare is present. This is visually represented in Table 3, with each actor's preference order explicitly outlined. They will be tempted to use the prospect of negotiations with the Taliban and the upcoming election season to score quick points at their rivals' expense, foregoing the kinds of political cooperation that have held the country together until now. Most events in IR are not mutually beneficial, like in the Battle of the Sexes. Robert J. Aumann, "Nash Equilibria are not Self-Enforcing," in Economic Decision Making: Games, Econometrics and Optimisation (Essays in Honor of Jacques Dreze), edited by J. J. Gabszewicz, J.-F. Richard, and L. Wolsey, Elsevier Science Publishers, Amsterdam, 1990, pp. As a result, it is important to consider deadlock as a potential model that might explain the landscape of AI coordination.
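The benefit and harm terms that recur in this model can be combined into a single expected-payoff sketch. The functional form and every number below are our own illustration of the bookkeeping, not the essay's calibrated model:

```python
# Expected-payoff bookkeeping implied by the model: a benefit term
# P(beneficial AI) * perceived benefit * distributional share, minus a harm
# term P(harmful AI) * perceived harm. All numbers are invented illustrations.

def expected_payoff(p_beneficial, benefit, share, p_harmful, harm):
    """E[payoff] = P(beneficial) * b * d - P(harmful) * h."""
    return p_beneficial * benefit * share - p_harmful * harm

# A hypothetical cooperating actor under a Coordination Regime:
print(expected_payoff(p_beneficial=0.8, benefit=10, share=0.5,
                      p_harmful=0.2, harm=5))  # 4.0 - 1.0 = 3.0
```

Varying the probabilities and the distributional share in a sketch like this is one way to see which of the four game models (Stag Hunt, Prisoner's Dilemma, Chicken, Deadlock) a given payoff landscape produces.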