Wednesday, April 1, 2009

Response to Brynen

Reading Rex Brynen's response to Charli Carpenter got me thinking about the implications of AI-driven robots on the battlefield and their potential to commit atrocities and "go berserk." I'm not sure I entirely agree with Rex, though. He writes: "Even with regard to AI-driven weapons, it doesn't pose much of an intellectual challenge to see how these fit within current international legal principles: if you deploy a weapons system that is, by design or predictable defect, prone to kill indiscriminately, you are in grave breach of international conventions."

This, however, might be overly simplistic, because it assumes "design or predictable defect."  Such failures aren't always predictable.  Sometimes there are glitches and bugs.  Sometimes human error leads to messes, such as when an American Air Force pilot dropped a bomb on Canadian soldiers conducting night-time manoeuvres because he thought he was under attack.  But more to the point here, computers always have bugs in them.  There are coding errors, or things simply don't work as intended, and this can lead to gross errors on the battlefield, such as the atomising of "Tall Man" Daraz Khan and his companions in Afghanistan in February 2002 by a Predator drone [Singer, p. 397].

So I think there is a distinction here between deploying something that is knowingly defective and deploying something with a latent glitch that only surfaces in the field: for example, a robot searching a house in Iraq for insurgents mistakes a non-combatant for a combatant, even though it is programmed to tell the difference.  A toy sketch of how such a glitch might look is below.
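To make that distinction concrete, here is a deliberately simplified sketch in Python. Everything in it is hypothetical: the class names, the confidence threshold, and the "safe zone" flag are invented for illustration and reflect no real weapons system. The point is only that a one-character coding error, not a knowing design defect, can silently invert a classification.

```python
# Hypothetical sketch only -- not any real system. It illustrates how an
# ordinary coding slip (an inverted comparison) can flip a classification.

from dataclasses import dataclass

@dataclass
class Contact:
    weapon_confidence: float   # 0.0-1.0, from an imagined sensor model
    inside_safe_zone: bool     # flagged no-engagement area

ENGAGE_THRESHOLD = 0.9

def classify(contact: Contact) -> str:
    """Label a sensed contact 'combatant' or 'non-combatant'."""
    if contact.inside_safe_zone:
        return "non-combatant"
    # BUG: '<' should be '>='. The inverted comparison labels
    # low-confidence contacts as combatants, and vice versa.
    if contact.weapon_confidence < ENGAGE_THRESHOLD:
        return "combatant"
    return "non-combatant"

# A civilian with near-zero weapon confidence is misclassified:
print(classify(Contact(weapon_confidence=0.05, inside_safe_zone=False)))
# -> 'combatant'
```

A bug like this would pass casual testing if the test cases happened to fall on the "right" side of the threshold, which is exactly why it belongs in the "unforeseen glitch" category rather than the "predictable defect" one.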

This raises the question: who is responsible?  Is it the soldier who deployed the robot?  Is it his captain?  Is it the commander of the unit?  Is it the general who ordered the robots?  Is it the company that produced the robot?  Is it the programmer who wrote the code?  Singer quotes a roboticist at DARPA: "It depends on the situation.  But if it happens too frequently, then it is just a product recall issue."  [p. 400].

Maybe what concerns me is how chillingly cold that sounds; perhaps it is an emotional response to the deaths of innocents being chalked up to a product recall.  But it also strikes me that the question of responsibility shifts here: robots, however technologically advanced, are still the product of human activity and ingenuity, and they are programmed by humans.  And this seems different from the case of, say, anti-personnel land mines.

Anyway, food for thought.

Reader Comments (1)

Thanks for the thoughts, Matthew!

I can imagine the public relations issues that will arise when the first AI robot "goes berserk" and kills friendly forces or innocent civilians. However, I'm not sure that it will really be an order of magnitude different, in terms of either war crimes liability or command (or product) responsibility, from weapons malfunctions through the ages: guidance systems that drop a bomb on the wrong house, early warning systems that misidentify, IFF systems that fail to function properly, and so forth.

Apr 1, 2009 at 16:10 | Registered Commenter Rex Brynen
