Governing Lethal Autonomous Weapon Systems

| December 13, 2017

UN Geneva. CREDIT: shinji_w via Flickr.

A lethal robotic future is underway. And, if the recently-released short film “Slaughterbots” is any indication, that future is bleak; the film depicts swarms of miniaturized artificial intelligence-powered robots finding and killing innocent civilians at a harrowing speed. The film’s launch coincided with the United Nations Group of Governmental Experts (GGE) meeting on Lethal Autonomous Weapon Systems (LAWS), which took place November 13–17, 2017, under the auspices of the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons (CCW). Building on three prior informal discussions, this first formal meeting presented an opportunity to take concrete steps toward the creation of new global governance and international law to curb the fast development and use of LAWS. However, despite broad participation, there was little movement toward preventing the dark future of “Slaughterbots” from ultimately becoming reality.

The International Committee for Robot Arms Control, a group of scientists that has advocated for the peaceful use of robotics since 2009, defines a lethal autonomous weapon system as one that, once launched, “can select targets and apply force without meaningful human control.” Such systems, together with the broader challenges posed by artificial intelligence, present one of the most complex and pressing global governance problems today, perhaps surpassed only by climate change. Advances in artificial intelligence and robotics will only make LAWS more feasible.

The CCW is an international humanitarian law framework treaty that has prohibited blinding laser weapons and has regulated other indiscriminately injurious weapons, such as landmines. At first glance it seems to be an appropriate forum for these discussions because it has significant buy-in from 125 High Contracting Parties, includes all the major weapons producers, and welcomes civil society to observe.

However, the CCW is not problem-free, and Human Rights Watch representative Steve Goose went so far as to say at the UN meeting that “the CCW is failing.” This is because all decisions must be taken by consensus and, for now, no such consensus on LAWS is in sight given the lack of political will among states. Goose’s position was supported by Nobel Peace Prize laureate Jody Williams, who was also present at the meeting. Her group of Nobel laureates called on His Holiness Pope Francis to lend his moral authority to promote global action on the issue.

As a result, the end of this first historic meeting brought agreement on only the lowest common denominator. There was no agreement on a political declaration and no path toward a new regulatory international treaty. All that states could agree upon was to continue the conversation next year, and this is only if the dwindling finances of the CCW permit.

Many Positions on LAWS

Three groups of states with divergent positions emerged during the UN meeting. The first consisted of China, Russia, and the United States, which wanted discussions to continue on the proper definition of LAWS. These countries reject both a politically binding code of conduct and a legally binding treaty, arguing that “weapons reviews” of new systems will suffice to determine the legality of new, more autonomous systems. In reality, however, such reviews are insufficient: they are not conducted by every state and are not transparent.

The second group was led by some EU countries, France and Germany in particular. Their positions converged on the path toward a politically binding agreement, with the concepts of human control and machine autonomy serving as the foundation for future discussions.

The last group was the Non-Aligned Movement, which advanced strong support for a prohibition treaty and for moratoria on the use and production of autonomous systems (about 100 of the 125 High Contracting Parties to the CCW subscribed to this view). This group voiced concern that the use of LAWS would fundamentally change the nature of relations in war and peace. Brazil played a dynamic role during the meeting, repeatedly criticizing the slow progress. According to Ambassador Guilherme Patriota, this is the fourth year of unstructured discussions, signaling an unambitious process that seems not yet poised to effectively address the growing concerns over increased autonomy in weapons systems. In clear statements, Brazil articulated the need for new international regulation, asserting that member states must not sign a blank check for the development of autonomous systems that would disregard international humanitarian law and human rights. Along with others, Brazil raised the possibility of moving the discussions out of the CCW and into another forum, even though many agree that the CCW is the appropriate venue.

The chair of the meeting, Ambassador Amandeep Singh Gill of India, had the formidable task of reconciling these divergent positions. He brought experience from prior negotiations and had the benefit of the institutional memory of the UN Office for Disarmament Affairs in Geneva, which supported him throughout.

After four years of discussions, more than 100 states would now like to see a legally binding instrument that creates clear governance of the use of lethal autonomous weapons. Among these, 22 countries now support a preventive prohibition on the use of LAWS; Brazil, Iraq, and Uganda joined the list of supporters at the November meeting. Their position is buttressed by authoritative information generated by three influential and independent groups. The first is the United Nations Institute for Disarmament Research (UNIDIR), an independent think tank that generates knowledge for UN member states. The second is the Stockholm International Peace Research Institute (SIPRI), which published the most recent authoritative and comprehensive report on the types, purposes, users, development status, and autonomous capabilities of these systems. At the very least, China, France, Germany, India, Israel, Italy, Japan, Norway, Russia, South Korea, Sweden, the United Kingdom, and the United States are reportedly developing greater autonomy in existing lethal weapon systems.

The third group is the International Committee of the Red Cross (ICRC), which calls for broad agreement that human control must be retained when using force and calls on states to work to establish limits on autonomy in weapons systems. The ICRC’s position gives credence to the work of civil society, but takes a different stance: it does not call for a new treaty. Rather, it calls on states merely to work on retaining human control on the critical functions of all weapons systems.

Who Is Leading?

Beyond states, three groups lead the formation of new global norms to prevent a fast-developing race toward a lethal robotic future: the technology giants, scientists, and civil society. Their actions may serve as the basis for new global governance to prevent the destabilization of existing peace and security frameworks in international law.


The Technology Giants

Companies like Alphabet, Amazon, Apple, Baidu, DeepMind, Facebook, and Microsoft are leading extraordinary developments in the technology of autonomy. Some of these tech giants are already devising codes of ethics for the creation and use of artificial intelligence. Tesla and Toyota are likewise leading advancements in self-driving cars.

It is interesting to note the role of Clearpath, a Canadian defense manufacturer: in August 2014, Clearpath became the first robotics company to announce that it would not make “killer robots.” That is, the company will not manufacture products that remove humans from the decision-making loop. The private-sector technology giants have made clear that a preventive prohibition on LAWS would not inhibit the development and progress of beneficial uses of artificial intelligence, as the military applications are distinct from the civilian ones.


Scientists

Elon Musk and 115 other experts also wrote an open letter to the United Nations GGE meeting at the CCW. Their efforts began in 2015, when many leading scientists argued that the militarization and use of lethal artificial intelligence would be highly destabilizing, as such weapons represent the third major transformation in the nature of war, after gunpowder and nuclear weapons. They cautioned that deploying such weapons would alter the future of peace and security for the worse, and advised that a lethal autonomous arms race could still be prevented by action at the United Nations. It is a significant signal when scientists and industry leaders call for a halt in the development of their own technology based on what they fear could go wrong. The earlier letter, written in 2015, has to date been signed by 3,721 artificial intelligence and robotics researchers and 20,467 others.


Civil Society

Civil society is also working to catalyze global action based on states’ duties to preserve humanity’s role in war decisions. The Campaign to Stop Killer Robots, launched in 2013, has been a prominent and steady voice: a global coalition of sixty-four nongovernmental organizations in twenty-eight countries, including several Nobel Peace Prize laureates, that calls for a preemptive ban on LAWS. The main ideas it advocates are “meaningful human control” over individual attacks in which lethal force is applied and a new international treaty to govern robotic killing. In its view, humans must always retain ultimate control and bear responsibility; therefore, attacks cannot be delegated to machines.

The involvement of these civil society groups frames the matter as a humanitarian issue rather than merely an arms control question. The negative geopolitical, strategic, and humanitarian implications of these destabilizing systems far outweigh any military benefits. These groups warn that systems with evolving and alarming degrees of autonomy are already deployed. Three examples are:


  • The Brimstone, a UK-built anti-tank weapon, which is currently one of the most advanced “fire and forget” missiles. Development started in the 1990s and the system has been in service since 2005.
  • The Norwegian Joint Strike Missile, which is advertised as having “autonomous target recognition with identification of targets to ship class level,” and also employs a library of potential targets.
  • Israel’s short-range air defense system, the Iron Dome, which is a mobile land-based counter-rocket, artillery, and mortar system that fires interceptor missiles. Its radar detects an incoming missile, and algorithms autonomously determine when to launch a defensive missile, with no human intervention needed.


These examples suggest that we may soon pass the tipping point beyond which a preventive prohibition is no longer possible. An artificial intelligence arms race will soon be underway. These are dual-use technologies, with civil and military applications that are largely accessible to everyone. Autonomy has already been added to capabilities in positioning, navigation, and targeting, and the list will continue to expand.

An Israeli Iron Dome battery. The Iron Dome system was first deployed in 2011. CREDIT: natanflayer via Flickr.

The Future Is Here

The work of the Campaign to Stop Killer Robots, UNIDIR, SIPRI, and the ICRC over the last five years represents the authoritative knowledge base necessary for states to act now. For these groups, what is at stake is the weaponization of increasingly autonomous technologies. Finally, a new group is adding its voice to the discussion: the International Panel on the Regulation of Autonomous Weapons, which has already published two scientific reports and will publish four more.

An artificial intelligence arms race will have a disintegrating effect on commonly agreed global norms of international law, especially those restraining the use of military force and protecting civilians in both war and peace. LAWS will likely lower the threshold for the use of force, making war more likely. The reality is that the institutions of peace and security are already starting to unravel. Armed unmanned aerial vehicles have enabled significant violations of the essential global norm against the use of force. Cyber attacks also blur the lines between war and peace and will only grow in number and sophistication. The introduction of LAWS into countries’ military planning will only complicate this picture further.

For the negotiations to progress toward a more concrete and ambitious path, a few developments would be welcome. First, the ICRC could endorse the call for a preventive prohibition of any weapon systems that eliminate meaningful human control over critical functions and decisions to kill. In the past, the ICRC has taken transformative stances against landmines and cluster bombs, and more recently in favor of a ban on nuclear weapons. Its position carries enormous moral clout and would make a difference at this critical juncture. Second, the forum for negotiations should not be the CCW, but one where decisions are taken by majority rather than unanimity. Finally, one of the permanent five (P5) members of the UN Security Council, such as France, could embrace the role of champion state for such a prohibition. Working with middle powers in Europe and Latin America, a P5 member could form a group of like-minded states to work toward a legally binding instrument with commonly agreed global norms to protect the future of peace.

Category: Online Exclusive, The Ethics of War and Peace
