Thursday, April 2, 2009

Response to Brynen

This began as a response to Rex Brynen's post, but it got too long so now here it is at the top of the Symposium. (Bwah ha ha!!) Brynen argues that it is no puzzle that humanitarian law organizations are untroubled by the question of autonomous weapons, because in fact the weapons enhance humanitarianism rather than threaten it. Ken makes a similar case. Let me outline why I disagree.

I'm not a lawyer like Ken, but rather a political scientist, so my reading of this will be partly through my assessment of the law and partly with an eye to political puzzles and inconsistencies in the application of the law. 

Article 36 of Additional Protocol I to the Geneva Conventions requires states to build and deploy only weapons that are compliant with humanitarian law, the most basic principles of which are proportionality and discrimination. Discrimination, in the Protocol, has two components relevant to the choice of weapons, outlined in Article 51(4)(b) and (c). First, whether a weapon can be targeted in such a way as to distinguish combatants from civilians and correspondingly minimize casualties among the latter. Second, whether it can be controlled. Brynen concedes this when he points out that:

“If you deploy a weapons system that is, by design or predictable defect, prone to kill indiscriminately, you are in grave breach of international conventions.”

Well. Not “grave breach” perhaps, but that’s a technicality. Question is, how do autonomous weapons stack up against these criteria? Brynen thinks just fine:

“To date we’ve seen no evidence that [AWS decrease discrimination].”

He then points out that precision weapons have generally reduced collateral damage. He's right on that score, but he (and, I think, Ken) is relying on a conflation of weapons aimed with a human in the loop and fully autonomous systems. Let me make it clear that I am only referring to the latter.

Fully autonomous weapons – where no human being is involved in the initial decision to fire – are very different from the other types of weapons Brynen cites, where humans make the decision to fire and the weapon simply helps home in on the precise target. Such decisions are made by humans (at least in many countries) generally after carefully weighing tactical necessity against ethics and the law of war – something we are simply not able to program machines to do. In other words, AWS are most like the very weapons Brynen points out are banned – AP landmines – which activate autonomously according to a sensor but cannot easily tell a child from a weapons bearer. The political puzzle for me is that landmines are so obviously opprobrious in the minds of human rights gatekeepers, but other types of AWS are not.
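To make the distinction concrete, here is a minimal, purely illustrative sketch (Python, with invented names like Track and hostile_score that do not describe any real weapons system) of the difference between an engagement loop in which a person authorizes the shot and one in which the software fires on its own sensor classification:

```python
from dataclasses import dataclass

@dataclass
class Track:
    """A sensor contact; 'hostile_score' is an invented stand-in for
    whatever classification a targeting system might produce."""
    label: str
    hostile_score: float  # 0.0 (clearly civilian) .. 1.0 (clearly combatant)

def human_in_the_loop(track: Track) -> bool:
    """Precision weapon: software cues the target, but a person decides.
    The weighing of necessity, ethics, and law happens here, outside the code."""
    print(f"Operator, confirm engagement of {track.label} "
          f"(score {track.hostile_score:.2f})? [y/N]")
    return input().strip().lower() == "y"

def fully_autonomous(track: Track, threshold: float = 0.9) -> bool:
    """Fully autonomous weapon: the fire decision reduces to a fixed rule
    applied to the sensor classification. All the judgment a human would
    exercise has to be captured by this one comparison."""
    return track.hostile_score >= threshold

if __name__ == "__main__":
    ambiguous = Track(label="person carrying a long object", hostile_score=0.62)
    print("Fully autonomous system fires:", fully_autonomous(ambiguous))
    # human_in_the_loop(ambiguous) would instead pause here for an operator.
```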

In my view, AWS cannot be compliant with the Geneva Conventions precisely because their ability to make independent targeting decisions cannot be limited by humans once deployed: their design militates against it. They therefore do not meet the second criterion in Article 51(4) of Additional Protocol I, at least on my interpretation.

However, suppose parties to the Geneva Conventions modified that rule. Suppose the litmus test were not whether a weapon could be controlled, or whether it could discriminate perfectly, but simply whether such a system committed fewer war crimes than a human being on average? Researchers at Georgia Tech believe an algorithm could be designed to this effect, but I for one am skeptical – mostly because of the muddiness of the civilian/combatant distinction even for humans. To try to render this complexity into a program that will get it right enough of the time requires too great a leap of faith for me - especially when evidence to the contrary already exists, including the robot cannon in South Africa that killed 9 people when it malfunctioned. Peter's description of similar incidents in the book only reinforces these concerns.
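As a back-of-the-envelope illustration of why that litmus test worries me, here is a hedged toy calculation (the error rates and the share of ambiguous encounters are invented for illustration only, not drawn from any study) showing how a "fewer war crimes on average" comparison can be dominated by the easy cases while masking performance on exactly the muddy civilian/combatant calls:

```python
def average_error(p_ambiguous: float,
                  err_ambiguous: float,
                  err_clear: float) -> float:
    """Expected error rate when a fraction p_ambiguous of encounters are hard.
    All figures are invented for illustration only."""
    return p_ambiguous * err_ambiguous + (1 - p_ambiguous) * err_clear

if __name__ == "__main__":
    p = 0.05  # assume only 5% of encounters are genuinely ambiguous
    machine = average_error(p, err_ambiguous=0.40, err_clear=0.001)
    human   = average_error(p, err_ambiguous=0.20, err_clear=0.020)
    print(f"machine avg error: {machine:.3%}")  # ~2.1%
    print(f"human   avg error: {human:.3%}")    # ~2.9%
    # The machine "wins" on average while being twice as error-prone on
    # exactly the hard civilian/combatant calls -- which is why an
    # average-based litmus test looks like a leap of faith to me.
```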

I may be wrong. At any rate, this interpretation of these trends seems to me, and to various norm entrepreneurs, at least as plausible as a happy-go-lucky future in which such weapons humanize war (to use a terribly ironic figure of speech). And given that, it's politically interesting that human security organizations by and large have not taken this position.


Reader Comments (2)

Charli is right to focus our IHL debate on AWS. In Peter's book, remotely-controlled weapons and those with genuine targeting autonomy are both discussed as "robotic", but I would concur that the latter pose greater challenges.

It should be noted, however, that even if humans are not "in the loop" with regard to the decision to fire, they are in the loop – even with AWS – with regard to the command decision of when, where, and how to deploy and use the weapons system involved. Almost all anti-tank and naval mines are AWS, and with the exception of some of the latter are incapable of much target discrimination. The international community has not chosen to place any additional convention restrictions on use of the former, and while there are restrictions on the latter under Hague Convention VIII (1907), they are fairly easily circumvented in practice (through warnings to civilian shipping).

There are two reasons for this, I think. The first is a theoretical assumption that the act of deployment involves a degree of discrimination. If you place AT mines in an off-road area, for example, there is little chance that they will be triggered by civilian traffic. Deploy the same mines on a highway subject to heavy civilian traffic, however, and you have a potential war crime. Given this, international law regulates the deployment, not the weapon.

The second reason is a practical question of lethality. AP landmines were prohibited not just for discrimination reasons (although that was the normative logic for doing so), but also because of lethality – the number of civilians being killed was very large indeed. Had their social cost not been so high, the discrimination issue alone would not have been enough to force international action on the issue in the face of their clear military utility.

A second issue concerns what we consider the category of AWS to include. Does it include the increasing number of fire-and-forget weapons systems (such as naval SSMs, and a growing number of air and artillery submunitions) that have an autonomous ability to search for targets once they reach a preprogrammed area? Does it include weapons which, by their very nature, are capable of changing targets mid-course as a function of their guidance technology (some IR-seeking AAMs would fall into this category, for example)? While such weapons have a human in the loop with regard to the initial firing decision, that same human has no ability to assure that what the weapon eventually "locks onto" is indeed the intended target, or even a permissible one.

Again, IHL presumes that the key issue with regard to such weapons is the context in which they are used. Firing a naval SSM against the suspected location of an enemy battlegroup at sea, away from shipping lanes, would not generally be considered a violation of the laws of war, even if you couldn't absolutely guarantee targeting or fully discriminate between targets at the range at which you detected and acquired them. Firing an SSM in search-and-target mode into a busy shipping lane would be a very different thing.

As for the case of the "robot cannon in South Africa that killed 9 people when it malfunctioned," I believe that Peter has an incorrect account of what happened (p. 196). In fact, the SADF investigation found that a simple spring pin sheared, causing the interface between the hand/motor actuator selector lever and the traverse gearbox to break during engagement and thereby making it impossible to control the weapon. Ironically, therefore, it wasn't that the robotic/autonomous part of the Oerlikon GDF-005 malfunctioned, but rather the reverse: neither the automated controls nor humans on the ground were able to control what was essentially simple, broken machinery.

Apr 3, 2009 at 1:34 | Unregistered Commenter Rex Brynen

Rex, thanks for this thoughtful response - and for the clarification about the SADF case.

To answer your question about AWS – I think you make an important point about context, and I could see the rules developing along the lines of a clarification regarding use rather than a complete ban. However, your analogy to naval SSMs firing at battlegroups v. shipping lanes only proves the point that these weapons cannot be trusted to discriminate on their own between civilian and noncivilian targets, and therefore humans need to make the initial targeting decisions. The contexts in which the issues of autonomy and discrimination would arise are less at sea, then, and more in stability and support operations, or where AWS are being used as sentries in border areas.

Apr 3, 2009 at 16:39 | Unregistered Commenter Charli Carpenter
