Killer Robots: The Future Of Lethal Autonomous Weapons

by Akhil Deo

Introduction

In March 2016, an artificial intelligence (AI) programme defeated a professional player at the complex board game Go.[1] Such a feat had been considered years away, and the victory was hailed as an important milestone in AI technology. AI is widely slated to be the next big step in technological advancement: technology giants such as Google, Apple, Uber, and Amazon are developing AI in fields as diverse as automobiles, personal computing, drones, and medical technology.

As with all technologies, governments around the world have begun to take note of the potential AI holds for armed conflict. In 2014, the International Committee of the Red Cross (ICRC) communicated to the United Nations that critical functions in weapon systems had already begun to operate autonomously.[2] Many countries have begun charting detailed plans to further equip their militaries with AI; the United States, for example, officially plans to bring autonomous systems into service by 2040.[3]

This trend has also met with strong criticism. In July 2015, thousands of prominent AI and robotics experts, as well as other scientists, including the likes of Stephen Hawking, Elon Musk and Noam Chomsky, endorsed an “Open Letter” on autonomous weapons, arguing that “[t]he key question for humanity today is whether to start a global AI arms race or to prevent it from starting.”

This essay introduces the concept of an autonomous weapon, the basic international law regime that might govern it, and the position India has taken on autonomous weapons in international forums.

What do autonomous weapons look like, and how are they defined?

Some level of autonomy is already present in the weapon systems of major world powers. Notably, this includes Israel’s ‘Iron Dome’, which has received immense attention from arms importers around the world for its ability to stop incoming missiles.[4] Similarly, the Phalanx close-in weapon system developed by the United States Navy can sense incoming anti-ship missiles and “autonomously [perform] its own search, detect, evaluation, track, engage and kill assessment functions.”[5] In 2013, the Russian developer MiG reportedly signed an agreement to develop an unmanned combat air vehicle (UCAV) called Skat, capable of navigating in autonomous modes.[6] The Swiss GDF anti-aircraft gun is capable, without further human intervention, of operating its radar to identify targets, attacking them, and reloading.[7]

In 2012, a U.S. Department of Defense directive defined an autonomous weapon system as “a weapon system that, once activated, can select and engage targets without further intervention by a human operator. This includes human-supervised autonomous weapon systems that are designed to allow human operators to override operation of the weapon system, but can select and engage targets without further human input after activation.”[8] UN Special Rapporteur Christof Heyns describes autonomous weapon systems as robots that gather information about their surroundings by means of sensors, process that information so that a decision can be taken, and finally execute the decision through their installed components (for example, weapons or means of transport).[9]

There are several ways that scholars, governments and international organizations have chosen to define autonomy in weapon systems. Some have used a sliding-scale test, ranging from full human control over a weapon system to no human control at all. The key element is the ability to execute a task without any supervision or intervention from a human, a loop sketched below. It is clear that as the technology develops, these definitions will have to change to keep pace.
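To make Heyns’s sense-process-decide-act description and the sliding scale of human control more concrete, the following toy sketch (in Python) shows one way the loop and the commonly discussed ‘in the loop’, ‘on the loop’ and ‘out of the loop’ modes could be expressed. Everything here, from the interfaces to the mode names, is a hypothetical illustration, not a description of any real system.

from enum import Enum

class ControlMode(Enum):
    HUMAN_IN_THE_LOOP = 1      # a human must approve every engagement
    HUMAN_ON_THE_LOOP = 2      # the system acts alone, but a human can veto
    HUMAN_OUT_OF_THE_LOOP = 3  # no human supervision or intervention at all

def control_cycle(sensors, decide, actuators, mode, operator=None):
    """One pass of the loop Heyns describes: gather information via
    sensors, process it into a decision, then execute (or not)."""
    observation = sensors.read()           # gather data about surroundings
    proposed_action = decide(observation)  # process it into a decision
    if mode is ControlMode.HUMAN_IN_THE_LOOP:
        if operator is None or not operator.approves(proposed_action):
            return None                    # nothing happens without approval
    elif mode is ControlMode.HUMAN_ON_THE_LOOP:
        if operator is not None and operator.vetoes(proposed_action):
            return None                    # human override before execution
    return actuators.execute(proposed_action)  # installed components act

On this sliding scale, the definitional debate is essentially about how far down the list of modes a state may lawfully go.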

International law framework governing autonomous weapons[10]


The Geneva Conventions and their Additional Protocols (AP) form the bulk of what constitutes international humanitarian law and the law of armed conflict. However, customary international law and other treaties also form part of public international law relating to war.

Based on these authorities, two questions arise with respect to autonomous weapon systems and their conformity with international law. First, are autonomous weapons prohibited per se, i.e. because of their autonomous nature? Second, if lawful as such, can they conform to the principles of international humanitarian law governing armed hostilities?

Are autonomous weapons prohibited per se?

Since autonomous weapons are still under development, Article 36 of AP I requires contracting parties to determine whether or not the employment of a new weapon would be prohibited by international law.

There are two fundamental rules that dictate whether a new method or instrument of warfare is prohibited:

(a) The method or weapon is of a nature to cause superfluous injury or unnecessary suffering: Article 35(2) of AP I states that “[i]t is prohibited to employ weapons, projectiles and material and methods of warfare of a nature to cause superfluous injury or unnecessary suffering.”

(b) The weapon or method is by nature indiscriminate: Article 51(4) of AP I prohibits indiscriminate attacks, defined as including attacks “which employ a method or means of combat which cannot be directed at a specific military objective; or … which employ a method or means of combat the effects of which cannot be limited” as required by AP I, and which consequently are of a nature to strike military objectives and civilians or civilian objects without distinction.

In its commentary on Article 36, the ICRC highlighted that “The use of long distance, remote control weapons, or weapons connected to sensors positioned in the field, leads to the automation of the battlefield in which the soldier plays an increasingly less important role…. [A]ll predictions agree that if man does not master technology, but allows it to master him, he will be destroyed by technology.” Human Rights Watch warns in its report Losing Humanity that “[a]n initial evaluation of fully autonomous weapons shows . . . such robots would appear to be incapable of abiding by key principles of international humanitarian law.”

However, Michael Schmitt, a professor of international law at the United States Naval War College, argues in the Harvard National Security Journal that “autonomy is unlikely to present unnecessary suffering and superfluous injury issues since the rule addresses a weapon system’s effect on the targeted individual, not the manner of engagement (autonomous).” In relation to the principle of distinction, he states that “The prohibition on weapon systems that are indiscriminate because they cannot be aimed at a lawful target should not be confused with the ban on use of discriminate weapons in an indiscriminate fashion.”

Are autonomous weapons prohibited in their use?

If autonomous weapons are to be deployed in armed conflict, they would additionally have to comply with the rules governing armed hostilities, the most basic of which are the rules of distinction, proportionality and precaution.

(a) The rule of distinction: Article 48 of AP I requires that parties “shall at all times distinguish between the civilian population and combatants and between civilian objects and military objectives and accordingly shall direct their operations only against military objectives,” and that in case of doubt, a person is considered to be a civilian (Article 50(1) of AP I).

(b) The rule of proportionality: Article 51(5)(b) of AP I dictates that the loss of civilian life, injury to civilians or damage to civilian objects (or a combination thereof) that may be expected from an attack must not be excessive in relation to the concrete and direct military advantage anticipated.

(c) The rule of precaution: Article 57 of AP I sets forth the rule of precaution, which reflects customary international law, requiring an attacker to exercise “constant care . . . to spare the civilian population, civilians and civilian objects.” The article goes on to state that those who plan or decide upon an attack must:

  • Do everything feasible to verify that the objectives to be attacked are lawful military objectives and that it is not prohibited to attack them.
  • Take all feasible precautions, when they choose the means and methods of attack, to avoid and in any event to minimize incidental loss of civilian life, injury to civilians, and damage to civilian objects.
  • Refrain from deciding to launch an attack that may be expected to cause incidental loss of civilian life, injury to civilians, or damage to civilian objects (or a combination of these harms), which is excessive in relation to the concrete and direct military advantage anticipated.

The crux of the argument put forward by Human Rights Watch, which strongly advocates a ban on autonomous weapons, is that:

1. An AI will not be able to distinguish between combatants and civilians, because states nowadays fight asymmetric urban wars: “States likely to field autonomous weapons first—the United States, Israel, and European countries—have been fighting predominately counterinsurgency and unconventional wars in recent years. In these conflicts, combatants often do not wear uniforms or insignia. Instead they seek to blend in with the civilian population and are frequently identified by their conduct, or their ‘direct participation in hostilities.’”

Further, the report argues that an AI cannot adequately identify intention, illustrating that “a frightened mother may run after her two children and yell at them to stop playing with toy guns near a soldier.” A human soldier could identify with the mother’s fear and the children’s game and thus recognize their intentions as harmless, while a fully autonomous weapon might see only a person running toward it and two armed individuals.

2. An assessment of proportionality is highly subjective and varies from case to case. The report argues that an autonomous robot’s incomplete understanding of its external environment, resulting from software limitations, would inevitably lead to “faulty behavior.” Further, it notes that the case law developed by various international courts and tribunals uses legal standards that invoke human judgment, for example the “reasonable military commander” standard or the “reasonably well informed person” standard.

On the other hand, Michael Schmitt argues that the legality of autonomous weapons and the scope of their application will depend on a complex set of factors:

1. The rule of distinction requires humans to decide whether a person is a civilian or a military target, a judgment that is difficult to translate into an algorithm. He warns, however, that simply because this is not possible today does not mean it is theoretically unachievable, and one should not adopt an a priori position. The test, therefore, will be whether it is possible to programme an algorithm capable of distinguishing combatants from non-combatants.

2. With respect to proportionality, he argues that “The key is human interaction with the system. In theory, human operators could program these and other factors into an autonomous weapon system. Should they set unreasonably high thresholds of doubt (that is, the point where the systems will not attack), the system would violate the prohibition on indiscriminate attacks.” He further argues that “both the system’s capabilities and the environment in which it will operate have to be considered.” For instance, an insufficiently precise weapon used in a city may violate the rules of war, but the same weapon used at sea, where those present are almost certainly combatants, would not. A toy sketch of this ‘threshold’ idea follows below.
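Schmitt’s point, that legality turns on parameters chosen in advance by humans rather than on autonomy as such, can be made concrete with a minimal sketch. Everything below is hypothetical: the confidence figure stands in for whatever a purely imagined classifier might output, and the tolerated-doubt parameter is the human-set “point where the system will not attack.”

# A purely hypothetical sketch of Schmitt's "threshold of doubt" point.
# The classifier confidence and thresholds are invented for illustration;
# nothing here describes any real weapon system or algorithm.

PRESUME_CIVILIAN = "do not engage"  # Article 50(1): in case of doubt,
ENGAGE = "engage"                   # a person is presumed to be a civilian

def engagement_decision(p_combatant: float, tolerated_doubt: float) -> str:
    """Attack only if the system's doubt (1 - confidence) stays below the
    level of doubt a human operator decided, in advance, to tolerate."""
    doubt = 1.0 - p_combatant
    if doubt >= tolerated_doubt:
        return PRESUME_CIVILIAN
    return ENGAGE

# The same sensor reading yields different legal outcomes depending on a
# parameter programmed beforehand. Set the tolerated doubt unreasonably
# high and the system attacks despite serious doubt, which is exactly the
# indiscriminate behaviour Schmitt says would be unlawful:
print(engagement_decision(0.55, tolerated_doubt=0.10))  # do not engage
print(engagement_decision(0.55, tolerated_doubt=0.90))  # engage

On this view, the human responsibility lies in the choice of the parameter, made long before any particular engagement.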

Thus, it can be seen that there is very little consensus on a broad range of questions, including even definitional issues. These difficulties have also been reflected in international forums when countries have tried to negotiate ground rules for operating autonomous weapons.

Trends in global negotiation and India’s position

Over the past two years, several high-profile international meetings have convened to discuss the challenges that autonomous weapons pose to the international law regime and whether there is any wisdom in banning them. In May 2014, representatives from 87 states participated in the first informal meeting of experts under the Convention on Certain Conventional Weapons (CCW) to consider questions related to emerging technologies in the area of lethal autonomous weapons systems (LAWS).

Most recently, this informal group of experts met in April 2016 in Geneva and reached an understanding that:

  • a state will bear the legal and political responsibility and establish accountability for action by any weapon system used by the state’s forces in accordance with applicable International Law, in particular International Humanitarian Law;
  • views on appropriate human involvement with regard to lethal force and the issue of delegation of its use are of critical importance to the further consideration of LAWS amongst the High Contracting Parties and should be the subject of further consideration;
  • civil society organizations, industry, researchers and scientific organizations should continue to play an important role in exploring the prospective issue in accordance with the established procedural rules of the CCW;
  • the discussion on emerging technologies in the area of LAWS is one of the priorities for the CCW and should be continued, while not prejudging discussions in other relevant fora.

It was further recommended that the Fifth Review Conference of states parties to the CCW (scheduled to take place in December 2016) “may decide to establish an open-ended Group of Governmental Experts (GGE)” on LAWS.

As of today, only two countries, the US and the UK, have developed coherent national policies on the development and use of autonomous weapons. A detailed report by the Harvard Law School summarizes the positions countries around the world have adopted, with several states finding autonomous weapons currently unlawful under international humanitarian law and calling for a permanent ban.[11] Most positions, however, are uncertain and intermediate, calling for reviews, temporary bans, or increased regulation.

The position India has taken is that there is a need for “increased systemic controls on international armed conflict in a manner that does not widen the technology gap amongst states or encourage the increased resort to military force in the expectation of lesser casualties or that use of force can be shielded from the dictates of public conscience.”[12] India was also of the opinion that many definitional issues, such as “meaningful human control”, required further deliberation. At the same time, India has launched its own initiatives to develop autonomous or ‘robotic’ weapons with a “very high level of intelligence to enable them to differentiate between a threat and a friend,” to be deployed in areas such as the Line of Control.

Arun Mohan Sukumar finds that this position is consistent with India’s geostrategic limitations, stating that “with the South China Sea being considered as a site for the deployment of lethal robots, the region’s stability is likely to be placed under increased stress.” He warns, however, that major powers, including the US and China, are likely to continue with their programmes irrespective of the pace of international law: “In particular, the role of LAWS in perpetuating low intensity conflict in South Asia should raise concerns for India and its current conventional superiority in the region.”

Conclusion

The question of autonomy, more specifically lethal autonomy, does not raise legal concerns alone. There are larger questions of ethics and morality at play: should the decision to take a life rest with a programmed computer algorithm? Will the proliferation of autonomous weapons increase the incentive for war because of the low risk to human life? To what extent will human supervision exist over such autonomy?

None of these questions has a simple answer, and unfortunately political and legal solutions will always take significantly longer to materialize than the technology itself. It is necessary, therefore, for countries to focus on predicting the pace of these technologies and their future impact. Eric Jensen argues that “The historical fact that the law of armed conflict (LOAC) has always lagged behind current methods of warfare does not mean that it always must.… [T]he underlying assumption that law must be reactive is not an intrinsic reality inherent in effective armed conflict governance. Rather, just as military practitioners work steadily to predict new threats and defend against them, LOAC practitioners need to focus on the future of armed conflict and attempt to be proactive in evolving the law to meet future needs.”[13]

Apart from this, there is the question of geopolitical security for India, with China likely to embrace such technology. Most major powers around the world, including the US, the UK and Israel, are investing heavily in developing autonomous weapon systems.

It is clear, therefore, that the debate over AI and autonomous technologies has many elements to it: ethical and moral implications, legal challenges, geopolitical strategy, international political will, and much more.


Akhil Deo is a final-year student at HNLU.


Endnotes

[1] Danielle Muoio, Why Go is So Much Harder for AI to Beat Than Chess, Tech Insider, March 10, 2016, http://www.techinsider.io/why-google-ai-game-go-is-harder-thanchess-2016-3.

[2] United Nations, General Assembly, 69th session, First Committee, statement by the ICRC, New York, 14 October 2014, https://www.icrc.org/en/document/weapons-icrc-statement-united-nations-2014.

[3] US Department of Defense, Unmanned Systems Integrated Roadmap FY2013–2038, 2013, http://www.defense.gov/pubs/DOD-USRM-2013.pdf.

[4] Inbal Orpaz, How does Iron Dome Operate?, Haaretz, Nov. 19, 2012, http://www.haaretz.com/news/features/how-does-the-irondome-work.premium-1.47898.

[5] “The US Navy Fact File: MK 15 - Phalanx Close-In Weapons System (CIWS),” accessed May 11, 2016, http://www.navy.mil/navydata/fact_display.asp?cid=2100&tid=487&ct=2.

[6] John Reed, Meet Skat, Russia’s Stealthy Drone, Foreign Policy, June 3, 2013, http://foreignpolicy.com/2013/06/03/meet-skat-russias-stealthy-drone.

[7] Noah Shachtman, Robot Cannon Kills 9, Wounds 14, Wired, Oct. 18, 2007, https://www.wired.com/2007/10/robot-cannon-ki

[8] Dep’t of Def., Directive 3000.09, Autonomy in Weapon Systems 13–14 (Nov. 2, 2012).

[9] UN General Assembly, Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns, A/HRC/23/47.

[10] The major arguments in this section draw heavily on two sources: Human Rights Watch & IHRC, Losing Humanity: The Case Against Killer Robots (2012), and Michael N. Schmitt, Autonomous Weapon Systems and International Humanitarian Law: A Reply to the Critics, Harvard National Security Journal Features (2013).

[11] Harvard Law School, War-Algorithm Accountability (2016), at p. 60.

[12] “Statement by PR to CD at the CCW Informal Meeting of Experts on Lethal Autonomous Weapon Systems, April 11, 2016,” Permanent Mission of India to the Conference on Disarmament, http://meaindia.nic.in/cdgeneva/?4829?000.

[13] Eric Talbot Jensen, The Future of the Law of Armed Conflict: Ostriches, Butterflies, and Nanobots, 35 Mich. J. Int’l Law 253, 254 (2014).
