Carol M. Rose argues that the stag hunt theory is useful in 'law and humanities' theory. We find that individuals under the time pressure treatment are more likely to play stag (vs. hare) than individuals in the control group: under time constraints, 62.85% of players are stag-hunters. [11] McKinsey Global Institute, Artificial Intelligence: The Next Digital Frontier, June 2017, https://www.mckinsey.com/~/media/McKinsey/Industries/Advanced%20Electronics/Our%20Insights/How%20artificial%20intelligence%20can%20deliver%20real%20value%20to%20companies/MGI-Artificial-Intelligence-Discussion-paper.ashx: 5 (estimating that major tech companies spent $20-30 billion on AI development and acquisitions in 2016). Structural conflict prevention refers to long-term interventions that aim to transform the key socioeconomic, political, and institutional factors that could lead to conflict. Perhaps most alarming, however, is the global catastrophic risk that the unchecked development of AI presents.

Table 13.

[5] As a result, it is becoming increasingly vital to understand and develop strategies to manage the human process of developing AI.

Payoff matrix for simulated Prisoner's Dilemma.

Last resort, legitimate authority, just cause, high probability of success, right intention, proportionality, casualties.

Payoff matrix for simulated Chicken game.

One example payoff structure that results in a Prisoner's Dilemma is outlined in Table 7. International relations, where people make decisions on behalf of entire states, is a prime arena for games such as the Stag Hunt and Chicken. [19] UN News, UN artificial intelligence summit aims to tackle poverty, humanity's grand challenges, United Nations, June 7, 2017, https://news.un.org/en/story/2017/06/558962-un-artificial-intelligence-summit-aims-tackle-poverty-humanitys-grand. A focus on absolute gains lets states engage in comparative advantage and expand the overall economy, while a focus on relative gains concerns how much one state gains compared to another. [22] Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner, Machine Bias, ProPublica, May 23, 2016, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

If all the hunters work together, they can kill the stag and all eat. In game theory, the stag hunt, sometimes referred to as the assurance game, trust dilemma, or common interest game, describes a conflict between safety and social cooperation. As new technological developments bring us closer and closer to ASI[27] and the beneficial returns to AI become more tangible and lucrative, a race-like competition between key players to develop advanced AI will become acute, with potentially severe consequences for safety. I introduce the example of the Stag Hunt Game, a short, effective, and easy-to-use activity that simulates Jean-Jacques Rousseau's political philosophy. As such, it will be useful to consider each model using a traditional normal-form game setup, as seen in Table 1.

Photo Credit: NATO photo by Capt.

[11] In our everyday lives, we store AI technology as voice assistants in our pockets[12] and as vehicle controllers in our garages. [28] Armstrong et al., Racing to the precipice: a model of artificial intelligence development. Actor A's preference order: DC > DD > CC > CD; Actor B's preference order: CD > DD > CC > DC.
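Those two preference orderings can be turned into a concrete normal-form check. The sketch below is a minimal illustration in plain Python: the numeric ranks 4 > 3 > 2 > 1 are stand-ins of my own choosing for the orderings above (not values from any of the paper's tables), and the script reports which outcomes are pure-strategy Nash equilibria.

```python
# Ordinal ranks: 4 = most preferred ... 1 = least preferred (illustrative values only).
# Outcomes are written from Actor A's perspective: "DC" = A defects, B cooperates.
ranks_A = {"DC": 4, "DD": 3, "CC": 2, "CD": 1}   # A: DC > DD > CC > CD
ranks_B = {"CD": 4, "DD": 3, "CC": 2, "DC": 1}   # B: CD > DD > CC > DC

def payoff(a_move, b_move):
    """Return (A's rank, B's rank) for a pair of moves ('C' or 'D')."""
    outcome = a_move + b_move          # e.g. 'D' + 'C' -> "DC"
    return ranks_A[outcome], ranks_B[outcome]

def pure_nash_equilibria():
    """An outcome is a Nash equilibrium if neither actor gains by deviating alone."""
    equilibria = []
    for a in "CD":
        for b in "CD":
            a_now, b_now = payoff(a, b)
            a_dev, _ = payoff("C" if a == "D" else "D", b)
            _, b_dev = payoff(a, "C" if b == "D" else "D")
            if a_now >= a_dev and b_now >= b_dev:
                equilibria.append(a + b)
    return equilibria

print(pure_nash_equilibria())   # -> ['DD']: mutual defection is the only pure equilibrium
```

With these particular orderings, mutual defection (DD) is the unique pure-strategy equilibrium, which matches the Deadlock structure discussed elsewhere in the text.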
The current landscape suggests that AI development is being led by two main international actors: China and the United States.

Table 1.

I thank my advisor, Professor Allan Dafoe, for his time, support, and introduction to this paper's subject matter in his Global Politics of AI seminar. [51] An analogous scenario in the context of the AI Coordination Problem could be one in which both international actors have developed, but not yet unleashed, an ASI, where knowledge of whether the technology will be beneficial or harmful is still uncertain. But, at various critical junctures, including the country's highly contentious presidential elections in 2009 and 2014, rivals have ultimately opted to stick with the state rather than contest it. The stag is the reason the United States and its NATO allies grew concerned with Afghanistan's internal political affairs in the first place, and they remain invested in preventing networks, such as al-Qaeda and the Islamic State, from employing Afghan territory as a base.

However, both hunters know the only way to successfully hunt a stag is with the other's help. Any individual move to capture a rabbit will guarantee a small meal for the defector but ensure the loss of the bigger, shared bounty. The following subsection further examines these relationships and simulates scenarios in which each coordination model would be most likely. Additionally, Koubi[42] develops a model of military technological races that suggests the level of spending on research and development varies with changes in an actor's relative position in a race. [18] Deena Zaidi, The 3 most valuable applications of AI in health care, VentureBeat, April 22, 2018, https://venturebeat.com/2018/04/22/the-3-most-valuable-applications-of-ai-in-health-care/. [5] Stuart Armstrong, Nick Bostrom, & Carl Shulman, Racing to the precipice: a model of artificial intelligence development, AI and Society 31, 2 (2016): 201-206.

A day passes. Both actors see the potential harms from developing AI as significantly greater than the potential benefits, but expect that cooperating to develop AI could still result in a positive benefit for both parties. Instead, each hunter should separately choose the more ambitious and far more rewarding goal of getting the stag, thereby giving up some autonomy in exchange for the other hunter's cooperation and added might. But what is even more interesting (even despairing) is that, when the situation is more localized, with a smaller network of acquainted people, most players still choose to hunt the hare rather than work together to hunt the stag. On the face of it, it seems that the players can then 'agree' to play (c,c); though the agreement is not enforceable, it removes each player's doubt about the other one playing c. Here, I also examine the main agenda of this paper: to better understand and begin outlining strategies to maximize coordination in AI development, despite relevant actors' varying and uncertain preferences for coordination. In the US, the military and intelligence communities have a long-standing history of supporting transformative technological advancements such as nuclear weapons, aerospace technology, cyber technology and the Internet, and biotechnology.
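The two-hunter choice described above can be made concrete with a short sketch. The payoff numbers below are my own illustrative choices (a shared stag worth 4 to each hunter, a hare worth 1, hunting the stag alone worth 0), not values from the paper; the point is the threshold of trust above which hunting stag becomes the better gamble.

```python
# Illustrative payoffs (not from the paper): a shared stag is worth 4 to each hunter,
# a hare is worth 1 regardless of what the other hunter does, and hunting stag alone yields 0.
STAG_BOTH, STAG_ALONE, HARE = 4.0, 0.0, 1.0

def expected_stag(p_partner_stag):
    """Expected payoff from hunting stag, given a belief about the partner."""
    return p_partner_stag * STAG_BOTH + (1 - p_partner_stag) * STAG_ALONE

def expected_hare(p_partner_stag):
    """Hare pays the same no matter what the partner does (parameter kept for symmetry)."""
    return HARE

# Belief threshold at which stag becomes the better choice:
# p * STAG_BOTH >= HARE  =>  p >= HARE / STAG_BOTH
threshold = HARE / STAG_BOTH
print(f"Hunt stag only if you believe the partner will with probability >= {threshold:.2f}")
for p in (0.1, 0.25, 0.5, 0.9):
    print(p, expected_stag(p), expected_hare(p))
```

Below that threshold the guaranteed hare is the safer bet, which is exactly why the defector's small meal can unravel the joint hunt.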
If the regime allows for multilateral development, for example, the actors might agree that whoever reaches AI first receives 60% of the benefit, while the other actor receives 40% of the benefit. A relevant strategy to this insight would be to focus strategic resources on shifting public or elite opinion to recognize the catastrophic risks of AI. For example, one prisoner may seemingly betray the other, but without losing the other's trust. While there is certainly theoretical value in creating a single model that can account for all factors and answer all questions inherent to the AI Coordination Problem, this is likely not tractable or useful to attempt, at least with human hands and minds alone. To reiterate, the primary function of this theory is to lay out a structure for identifying what game models best represent the AI Coordination Problem and, as a result, what strategies should be applied to encourage coordination and stability. For example, Jervis highlights the distinguishability of offensive-defensive postures as a factor in stability. Here, both actors demonstrate varying uncertainty about whether they will develop a beneficial or harmful AI alone, but they both equally perceive the potential benefits of AI to be greater than the potential harms. Using game theory as a way of modeling strategically motivated decisions has direct implications for understanding basic international relations issues. [6] See infra at Section 2.2 Relevant Actors.

Type of game model and prospect of coordination.

The reason is that the traditional PD game does not fully capture the strategic options and considerations available to each player. [45] Colin S. Gray, House of Cards: Why Arms Control Must Fail (Cornell Univ. Press, 1992). Here, we have the formation of a modest social contract. Schelling and Halperin[44] offer a broad definition of arms control as all forms of military cooperation between potential enemies in the interest of reducing the likelihood of war, its scope and violence if it occurs, and the political and economic costs of being prepared for it.

Image: The Intelligence, Surveillance and Reconnaissance Division at the Combined Air Operations Center at Al Udeid Air Base, Qatar.

The remainder of this subsection looks at numerical simulations that result in each of the four models and discusses potential real-world hypotheticals these simulations might reflect. One example payoff structure that results in a Chicken game is outlined in Table 11. Additionally, the defector can expect to receive the additional expected benefit of defecting and covertly pursuing AI development outside of the Coordination Regime. Catching the stag (the peace and stability required to keep Afghanistan from becoming a haven for violent extremism) would bring political, economic, and social dividends for all of them. As discussed, there are both great benefits and harms to developing AI, and because of the relevance AI development has to national security, it is likely that governments will take over this development (specifically the US and China).
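Here is a hedged sketch of what the 60/40 split discussed above means in expected-value terms, using my reading of the paper's notation (P_b(AB) for an actor's perceived probability that joint development under the regime turns out beneficial, b for the perceived benefit, d for the agreed distribution share); all numeric values are hypothetical.

```python
# Hypothetical numbers, not taken from the paper's tables.
# P_b_AB: each actor's perceived probability that AI developed jointly under the
#         regime turns out beneficial; b: perceived size of that benefit;
# d: the agreed distribution share (the 60/40 split from the text).
P_b_AB = {"A": 0.8, "B": 0.8}
b      = {"A": 10.0, "B": 10.0}
d      = {"A": 0.6, "B": 0.4}

def expected_benefit(actor):
    """E[benefit | both cooperate] = P_b(AB) * b * d for the given actor."""
    return P_b_AB[actor] * b[actor] * d[actor]

for actor in ("A", "B"):
    print(actor, expected_benefit(actor))
# A receives 0.8 * 10 * 0.6 = 4.8; B receives 0.8 * 10 * 0.4 = 3.2
```

How d is set clearly shapes how attractive joining the regime looks to each side, which is why distributional arrangements recur throughout the theory.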
An approximation of a Stag Hunt in international relations would be an international treaty such as the Paris Climate Accords, where the protective benefits of environmental regulation against the harms of climate change (in theory) outweigh the economic gains from defecting. But who can we expect to open the Box? Despite the large number of variables addressed in this paper, this is at its core a simple theory, with the aim of motivating additional analysis and research to branch off from it. [12] Apple Inc., Siri, https://www.apple.com/ios/siri/. Although Section 2 describes to some extent why this might be a likely event with the U.S. and China, it is still conceivable that an additional international actor could move into the fray and complicate coordination efforts. An hour goes by, with no sign of the stag. The matrix above provides one example.

THE STAG HUNT

The Stag Hunt is a story that became a game. For example, Stag Hunts are likely to occur when the perceived harm of developing a harmful AI is significantly greater than the perceived benefit that comes from a beneficial AI. Advanced AI technologies have the potential to provide transformative social and economic benefits like preventing deaths in auto collisions,[17] drastically improving healthcare,[18] reducing poverty through economic bounty,[19] and potentially even finding solutions to some of our most menacing problems like climate change.[20] In order to assess the likelihood of such a Coordination Regime's success, one would have to take into account the two actors' expected payoffs from cooperating with or defecting from the regime. These remain real temptations for a political elite that has survived decades of war by making deals based on short time horizons and low expectations for peace. Under the assumption that actors have a combination of both competing and common interests, those actors may cooperate when those common interests compel such action. Gardner's vision, the removal of the inferior, Christina Dejong, Christopher E. Smith, George F. Cole. This is why international trade negotiations are often tense and difficult.

Table 7.

I discuss in this final section the relevant policy and strategic implications this theory has for achieving international AI coordination, and assess the strengths and limitations of the theory outlined above in practice. What should Franks do? Stag hunt definition: a hunt carried out to find and kill stags. In recent years, artificial intelligence has grown notably in its technical capacity and in its prominence in our society. Moreover, they also argue that pursuing all strategies at once would be suboptimal (or even impossible due to mutual exclusivity), making it even more important to know what sort of game you're playing before pursuing a strategy.[59] But for the argument to be effective against a fool, he must believe that the others with whom he interacts are not always fools. In Exercises 25 through 32, f(x) is a probability density function for a particular random variable X. In this example, each player has a dominant strategy. Name four key thinkers of the theory of non-violent resistance: Gandhi, Martin Luther King, Malcolm X, Cesar Chavez.
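The Prisoner's Dilemma remark above, that each player has a dominant strategy, can be verified mechanically. The sketch below uses illustrative Prisoner's Dilemma payoffs of my own choosing (not the values from the paper's Table 7) and checks whether the row player has a strictly dominant move.

```python
# Illustrative Prisoner's Dilemma payoffs for the row player (not the paper's table values):
# payoff[my_move][their_move], with "C" = cooperate and "D" = defect.
payoff = {
    "C": {"C": 3, "D": 0},
    "D": {"C": 5, "D": 1},
}

def strictly_dominant_strategy(payoff):
    """Return a strictly dominant strategy for the row player, if one exists."""
    moves = list(payoff)
    for mine in moves:
        others = [m for m in moves if m != mine]
        if all(payoff[mine][theirs] > payoff[alt][theirs]
               for alt in others for theirs in moves):
            return mine
    return None

print(strictly_dominant_strategy(payoff))  # -> 'D': defecting is better whatever the other player does
```

Because both players reason this way, the game lands on mutual defection even though mutual cooperation would leave both better off.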
In this section, I briefly argue that state governments are likely to eventually control the development of AI (either through direct development or intense monitoring and regulation of state-friendly companies)[29], and that the current landscape suggests two states in particular, China and the United States, are most likely to reach development of an advanced AI system first. [2] Examples of states include the United States, Germany, China, India, Bolivia, South Africa, Brazil, Saudi Arabia, and Vietnam. [32] Paul Mozur, Beijing Wants A.I. to Be Made in China by 2030, The New York Times, July 20, 2017, https://www.nytimes.com/2017/07/20/business/china-artificial-intelligence.html.

If they both work to drain it they will be successful, but if either fails to do his part the meadow will not be drained. Such a Coordination Regime could also exist in either a unilateral scenario, where one team consisting of representatives from multiple states develops AI together, or a multilateral scenario, where multiple teams simultaneously develop AI on their own while agreeing to set standards and regulations (and potentially distributive arrangements) in advance. Not wanting to miss out on the high geopolitical drama, Moscow invited Afghanistan's former president, Hamid Karzai, and a cohort of powerful elites (among them rivals of the current president) to sit down with a Taliban delegation last week. The best response correspondences are pictured here. They can cheat on the agreement and hope to gain more than the first nation, but if they both cheat, they both do very poorly. [5] They can, for example, work together to improve good corporate governance. However, a hare is seen by all hunters moving along the path.

This table contains an ordinal representation of a payoff matrix for a Chicken game.

[58] Downs et al., Arms Races and Cooperation, 143-144. The article states that the only difference between the two scenarios is that the localized group decided to hunt hares more quickly.

This table contains an ordinal representation of a payoff matrix for a game in Deadlock.

Subject terms: game theory, brinkmanship, Stag Hunt, Taiwan Strait issue, Cuban Missile Crisis, international relations. Moreover, each actor is more confident in their own capability to develop a beneficial AI than their opponents'. Despite this, there still might be cases where the expected benefits of pursuing AI development alone outweigh (in the perception of the actor) the potential harms that might arise. Stag Hunt is a game in which the players must cooperate in order to hunt larger game, and with higher participation, they are able to get a better dinner. A major terrorist attack launched from Afghanistan would represent a kind of equal-opportunity disaster and should make a commitment to establishing and preserving a capable state of ultimate value to all involved. How does the Just War Tradition position itself in relation to both Realism and Pacifism? This section defines the suggested payoff variables that impact the theory and simulates the theory for each representative model based on a series of hypothetical scenarios. When there is a strong leader present, players are likely to hunt the animal the leader chooses. [16] On one hand, these developments outline a bright future.
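Since several of the tables mentioned here are ordinal representations, it may help to see how an ordering of the four outcomes pins down which game one is in. The sketch below uses the textbook ordinal definitions of the four models named in this paper (these may differ in edge cases from the paper's own tables), with hypothetical payoffs in which the harm from a failed race dwarfs the benefit of winning it.

```python
def classify_2x2(dc, cc, dd, cd):
    """Classify a symmetric 2x2 game from one actor's payoffs for the four outcomes
    (DC = I defect / you cooperate, CC, DD, CD), using textbook ordinal definitions."""
    if cc > dc >= dd > cd:
        return "Stag Hunt"          # cooperation is best, but defection is the safer bet
    if dc > cc > dd > cd:
        return "Prisoner's Dilemma" # defection dominates; mutual defection is second worst
    if dc > cc > cd > dd:
        return "Chicken"            # mutual defection is the catastrophe to avoid
    if dc > dd > cc > cd:
        return "Deadlock"           # both actually prefer mutual defection to cooperation
    return "Other / mixed"

# Hypothetical payoffs where the harm from a bad AI dwarfs the benefit of winning a race:
# going it alone (DC) is barely better than mutual restraint failing (DD), and far worse
# than coordinated development (CC) -- the structure the text associates with a Stag Hunt.
print(classify_2x2(dc=2.0, cc=5.0, dd=1.0, cd=-10.0))   # -> Stag Hunt
```

Swapping the relative sizes of those four numbers is all it takes to move between the Chicken, Deadlock, and Prisoner's Dilemma tables referenced above.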
Therefore, if it is likely that both actors perceive themselves to be in a state of Prisoner's Dilemma when deciding whether to agree on AI, strategic resources should be especially allocated to addressing this vulnerability.

Table 4.

Examples of the stag hunt. The original stag hunt dilemma is as follows: a group of hunters have tracked a large stag and found it to follow a certain path. [21] Moreover, racist algorithms[22] and lethal autonomous weapons systems[23] force us to grapple with difficult ethical questions as we apply AI to more societal realms. If they are discovered, or do not cooperate, the stag will flee, and all will go hungry. For the cooperator (here, Actor B), the benefit they can expect to receive from cooperating would be the same as if both actors cooperated: P_(b|B)(AB) * b_B * d_B. In international relations, examples of Chicken have included the Cuban Missile Crisis and the concept of Mutually Assured Destruction in nuclear arms development. The 'inherent' right to individual and collective self-defence recognized by Article 51 of the Charter, and enforcement measures involving the use of force sanctioned by the Security Council under Chapter VII thereof. Beding (2008), but also in international relations (Jervis 1978) and macroeconomics (Bryant 1994). Combining both countries' economic and technical ecosystems with government pressures to develop AI, it is reasonable to conceive of an AI race primarily dominated by these two international actors. At the same time, a growing literature has illuminated the risk that developing AI carries of leading to global catastrophe[4] and has further pointed out the effect that racing dynamics have on exacerbating this risk. The payoff matrix would need adjusting if players who defect against cooperators might be punished for their defection. In this paper, I develop a simple theory to explain whether two international actors are likely to cooperate or compete in developing AI and analyze what variables factor into this assessment. The complex machinations required to create a lasting peace may well be under way, but any viable agreement (and the eventual withdrawal of U.S. forces that it would entail) requires an Afghan government capable of holding its ground on behalf of its citizens and in the ongoing struggle against violent extremism. [4] Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford University Press, 2014). Genocide, crimes against humanity, war crimes, and ethnic cleansing. (E.g., including games such as Chicken and Stag Hunt.)

The Stag Hunt represents an example of compensation structure in theory.

Payoff variables for simulated Prisoner's Dilemma.

Continuous coordination through negotiation in a Prisoner's Dilemma is somewhat promising, although a cooperating actor runs the risk of a rival defecting if there is not an effective way to ensure and enforce cooperation in an AI Coordination Regime. The corresponding payoff matrix is displayed as Table 10. Using the payoff matrix in Table 6, we can simulate scenarios for AI coordination by assigning numerical values to the payoff variables. A person's choice to bind himself to a social contract depends entirely on his beliefs about the other person's or people's choice. Still, predicting these values and forecasting probabilities based on the information we do have is valuable and should not be ignored solely because it is not perfect information.
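In that spirit, here is a minimal sketch of such a simulation. All numbers are hypothetical, and only the two expected-benefit expressions that appear in the text are used: P_b(AB) * b * d for cooperating within the regime, and P_b(alone) * b for developing alone.

```python
# A toy version of such a simulation (all numbers hypothetical). Only the two
# expected-benefit expressions that appear in the text are used:
#   cooperate inside the regime:  P_b(AB) * b * d
#   develop alone (defect):       P_b(alone) * b
scenarios = {
    "optimistic about going it alone":  {"P_b_AB": 0.9, "P_b_alone": 0.8},
    "pessimistic about going it alone": {"P_b_AB": 0.9, "P_b_alone": 0.3},
}
b, d = 10.0, 0.5   # perceived benefit of AI and an even distribution share

for name, s in scenarios.items():
    cooperate = s["P_b_AB"] * b * d
    defect = s["P_b_alone"] * b
    choice = "cooperate" if cooperate > defect else "defect"
    print(f"{name}: cooperate={cooperate:.1f}, defect={defect:.1f} -> prefers to {choice}")
```

Even this crude version shows the lever the theory cares about: the more confident an actor is that it can safely develop a beneficial AI on its own, the less attractive the Coordination Regime's shared benefit becomes.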
This makes the risk twofold: the risk that the stag does not appear, and the risk that another hunter takes the kill. Hunting stag is successful only if both hunters hunt stag, while each hunter can catch a less valuable hare on his own. [8] If truly present, a racing dynamic[9] between these two actors is a cause for alarm and should inspire strategies to develop an AI Coordination Regime between them. One example addresses two individuals who must row a boat. If participation is not universal, they cannot surround the stag and it escapes, leaving everyone that hunted stag hungry. 'The "liberal democratic peace" thesis puts the nail into the coffin of Kenneth Waltz's claim that wars are principally caused by the anarchical nature of the international system.' Discuss. Gray[36] defines an arms race as two or more parties perceiving themselves to be in an adversary relationship, who are increasing or improving their armaments at a rapid rate and structuring their respective military postures with a general attention to the past, current, and anticipated military and political behaviour of the other parties. This is what I will refer to as the AI Coordination Problem. The stag hunters are likely to interact with other stag hunters to seek mutual benefit, while hare hunters rarely care with whom they interact, since they would rather not depend on others for success. Finally, there are a plethora of other assuredly relevant factors that this theory does not account for or fully consider, such as multiple iterations of game playing, degrees of perfect information, or how other diplomacy-affecting spheres (economic policy, ideology, political institutional setup, etc.) might complicate coordination efforts.

So far, the readings discussed have commented on the unique qualities of technological or qualitative arms races. As of 2017, there were 193 member-states of the international system as recognized by the United Nations. [9] That is, the extent to which competitors prioritize speed of development over safety (Bostrom 2014: 767). The stag may not pass every day, but the hunters are reasonably certain that it will come. Understanding the Stag Hunt Game: How Deer Hunting Explains Why People are Socially Late. The authors of [47] look at different policy responses to arms race de-escalation and find that the model or game that underlies an arms race can affect the success of policies or strategies to mitigate or end the race. [30] Greg Allen and Taniel Chan, Artificial Intelligence and National Security, Report for Harvard Kennedy School: Belfer Center for Science and International Affairs, July 2017, https://www.belfercenter.org/sites/default/files/files/publication/AI%20NatSec%20-%20final.pdf: 71-110. But the moral is not quite so bleak. It is not clear whether the errors were deliberate or accidental. We have recently seen an increase in media acknowledgement of the benefits of artificial intelligence (AI), as well as the negative social implications that can arise from its development. These talks involve a wide range of Afghanistan's political elites, many of whom are often painted as a motley crew of corrupt warlords engaged in tribalized opportunism at the expense of a capable government and their own countrymen.
Finally, if both sides defect or effectively choose not to enter an AI Coordination Regime, we can expect their payoffs to be expressed as follows: the benefit that each actor can expect to receive from this scenario is solely the probability that they achieve a beneficial AI times each actor's perceived benefit of receiving AI (without distributional considerations): P_(b|A)(A) * b_A for Actor A and P_(b|B)(B) * b_B for Actor B. For example, can the structure of distribution impact an actor's perception of the game as cooperation or defection dominated (and if so, should we focus strategic resources on developing accountability strategies that can effectively enforce distribution)? [52] Stefan Persson, Deadlocks in International Negotiation, Cooperation and Conflict 29, 3 (1994): 211-244. Finally, Table 13 outlines an example payoff structure that results in a Stag Hunt. Game Theory 101: The Complete William Spaniel shows how to solve the Stag Hunt using pure strategy Nash equilibrium. Whoever becomes the leader in this sphere will become the ruler of the world. China, Russia, soon all countries with strong computer science. Meanwhile, the escalation of an arms race where neither side halts or slows progress is less desirable for each actor's safety than both fully entering the agreement. [7] E.g. An example of the payoff matrix for the stag hunt is pictured in Figure 2. Some observers argue that a precipitous American retreat will leave the country (and even the capital, Kabul) vulnerable to an emboldened, undeterred Taliban, given the limited capabilities of Afghanistan's national security forces. Evidence from AI Experts (2017: 11-21), retrieved from http://arxiv.org/abs/1705.08807. Here, both actors demonstrate high uncertainty about whether they will develop a beneficial or harmful AI alone (both actors see the likelihood as a 50/50 split), but they perceive the potential benefits of AI to be slightly greater than the potential harms. Most events in IR are not mutually beneficial, like in the Battle of the Sexes.

[43] Edward Moore Geist, It's already too late to stop the AI arms race - We must manage it instead, Bulletin of the Atomic Scientists 72, 5 (2016): 318-321. [1] Kelly Song, Jack Ma: Artificial intelligence could set off WWIII, but humans will win, CNBC, June 21, 2017, https://www.cnbc.com/2017/06/21/jack-ma-artificial-intelligence-could-set-off-a-third-world-war-but-humans-will-win.html. First-move advantage will be decisive in determining the winner of the race due to the expected exponential growth in capabilities of an AI system and the resulting difficulty of other parties to catch up. The theory outlined in this paper looks at just this and will be expanded upon in the following subsection. Formally, a stag hunt is a game with two pure strategy Nash equilibria: one that is risk dominant and another that is payoff dominant. The 18th century political philosopher Jean-Jacques Rousseau famously described a dilemma that arises when a group of hunters sets out in search of a stag: to catch the prized male deer, they must cooperate, waiting quietly in the woods for its arrival. In this article, we employ a class of symmetric, ordinal 2 x 2 games, including the frequently studied Prisoner's Dilemma, Chicken, and Stag Hunt, to model the stability of the social contract in the face of catastrophic changes in social relations.
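To make that formal definition concrete, here is a minimal sketch with illustrative payoffs of my own choosing (not the values in Table 13 or Figure 2). It checks which of the Stag Hunt's two pure-strategy equilibria is payoff dominant and which is risk dominant, using the standard Harsanyi-Selten comparison of deviation losses.

```python
# Symmetric Stag Hunt payoffs for the row player (illustrative numbers, not from the paper):
#   a = both hunt stag, b = I hunt stag / partner hunts hare,
#   c = I hunt hare / partner hunts stag, d = both hunt hare.
a, b, c, d = 4.0, 0.0, 3.0, 3.0

assert a > c and d > b, "both (stag,stag) and (hare,hare) must be pure Nash equilibria"

payoff_dominant = "stag" if a > d else "hare"

# Harsanyi-Selten risk dominance for a symmetric 2x2 game: the equilibrium with the
# larger unilateral deviation loss (equivalently, the larger basin of attraction over
# the partner's possible mixtures) is risk dominant.
risk_dominant = "stag" if (a - c) > (d - b) else "hare"

print(f"payoff-dominant equilibrium: both hunt {payoff_dominant}")
print(f"risk-dominant equilibrium:   both hunt {risk_dominant}")
# With these numbers stag is payoff dominant (4 > 3) but hare is risk dominant
# (deviation loss 4-3=1 versus 3-0=3), which is exactly the safety-versus-cooperation
# tension the stag hunt is meant to capture.
```

Solving the game by pure-strategy Nash equilibrium alone, as the Game Theory 101 reference does, finds both equilibria; the dominance comparison is what explains why cautious players so often end up at the inferior one.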
For example, most land disputes, like the ongoing Chinese and Japanese dispute over the Senkaku Islands, must be resolved by compromising in other areas of policy in order to achieve the goal. Author James Cambias describes a solution to the game as the basis for an extraterrestrial civilization in his 2014 science fiction book A Darkling Sea. Namely, the probability of developing a harmful AI is greatest in a scenario where both actors defect, while the probability of developing a harmful AI is lowest in a scenario where both actors cooperate. The corresponding payoff matrix is displayed as Table 8. Moreover, the AI Coordination Regime is arranged such that Actor B is more likely to gain a higher distribution of AI's benefits.

Table 4.

[39] D. S. Sorenson, Modeling the Nuclear Arms Race: A Search for Stability, Journal of Peace Science 4 (1980): 169-85.
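As a rough numerical illustration of the claim just made, and of a regime that hands Actor B the larger distribution share, the sketch below uses hypothetical numbers and an assumed payoff form (expected benefit minus expected harm), which is my own simplification rather than the paper's Table 8; only the ordering of the harm probabilities, highest under mutual defection and lowest under mutual cooperation, is taken from the text.

```python
# Hypothetical numbers. Only the ordering of the harm probabilities is taken from the
# text: the chance of a harmful AI is greatest when both actors defect and lowest when
# both cooperate. The payoff form below (expected benefit minus expected harm) is an
# assumed simplification for illustration, not the paper's own formula.
P_harm = {"both cooperate": 0.05, "one defects": 0.25, "both defect": 0.40}

b, h = 10.0, 30.0            # perceived benefit of AI and (larger) perceived harm
d = {"A": 0.4, "B": 0.6}     # regime arranged so Actor B receives the larger share

def regime_payoff(actor):
    """Expected payoff if both actors cooperate inside the Coordination Regime."""
    p = P_harm["both cooperate"]
    return (1 - p) * b * d[actor] - p * h

def race_payoff():
    """Expected payoff for an actor that wins an uncoordinated race (both defect)."""
    p = P_harm["both defect"]
    return (1 - p) * b - p * h

print("A in regime:", regime_payoff("A"))   # 0.95*10*0.4 - 0.05*30 = 2.3
print("B in regime:", regime_payoff("B"))   # 0.95*10*0.6 - 0.05*30 = 4.2
print("race winner:", race_payoff())        # 0.60*10     - 0.40*30 = -6.0
```

With a harm term this large, even the race winner's expected payoff turns negative, which is the kind of structure under which the theory expects a Stag Hunt rather than a Prisoner's Dilemma.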