    Sunday, October 5

    Autonomous Weapons and Asymmetric Conflict

    It has been projected that roughly one-third of military weapons could be robotic by 2015. Over 4,000 armed robots have been deployed in Iraq and Afghanistan; South Korea and Israel have both deployed armed robot border guards; and other nations, including China, Russia, India, Singapore, and the UK, are increasingly developing these technologies. While systems like the Predator drone and the armed SWORDS recon units deployed in Iraq are not fully autonomous, since they are remotely operated, the clear direction of military research is to gradually remove human operators from the loop entirely. For example, a recent article by Ronald Arkin quotes a division commander from the Israeli Defense Forces, which has deployed stationary robotic gun-sensor platforms along the Gaza border, as saying:

    'At least in the initial phases of deployment, we’re going to have to keep a man in the loop,' implying the potential for more autonomous operations in the future.

    These developments have generated a number of concerns. Critics wonder whether machines can be designed to make ethical targeting decisions; how responsibility for mistakes is to be allocated and punished; and whether the ability to wage war without risking soldiers’ lives will remove incentives for peaceful conflict resolution. Proponents argue, of course, that “robot soldiers” can be designed to behave more justly in war than human soldiers do.

    Back in May, law professor Kenneth Anderson put forth a different moral position, pointing out that in many respects the rationale for autonomous weapons is precisely to respond, within a law-of-war framework, to situations of asymmetric war. Such situations, he argues, are characterized by “lawfare”:

    systematic behavioral violations of the rules of war, violations of law undertaken and planned through advance study of the laws of war in order to predict how law-abiding military forces will behave and exploit their compliance; and where such violations are intended as a behavioral counter to superior military forces, including superior, yet law-compliant technology and weapons systems.
    Anderson had even earlier compared the development of autonomous weapons systems, with all their moral uncertainty, to the counterfactual: that a superpower would respond in kind to the types of humanitarian law violations commonly employed by a weaker enemy.
    The US, for good moral reasons, has given up the possibility of reprisals against civilians or other people hors de combat, such as captured enemy fighters… the US therefore finds that it has few or no behavioral levers with respect to the behavior of an enemy fighting using illegal methods. In such a case, one response is the attempt to compensate through technology – by limiting the exposure of one’s soldiers in particular to death, injury or capture and replacing them with machines.
    Anderson goes on in his post to consider the likely reaction of the human rights and humanitarian law community to such developments, and I concur with his impression that the moral questions already being asked about such weapons are likely to intensify if they are ever deployed and result in so much as a single civilian casualty – a process I’m tracing and will likely report on at Duck of Minerva, where I blog about advocacy networks and global norm development. Here, though, I’d like to prompt a discussion about the ethics of such weapons development despite these concerns, given that existing opponents of AWS technology essentially call for a precautionary principle against such weapons. Anderson has argued:
    It would be disastrous if this [outside criticism] led to the curtailment of these weapons, their development and deployment – disastrous from the standpoint of the long term integrity of the laws of war in a period in which asymmetric warfare is tending to undermine their very foundations, because reciprocity has been largely lost.

    Now, having spent much of today conversing with Anderson at length, I know him to be essentially agnostic in the debate over whether these weapons would be less ethical or more ethical than human soldiers. His position is also evolving as he continues to blog about robots at Opinio Juris. Yet I'd like to return for a moment, on this forum, to his original argument about the likely alternatives in an era of asymmetric war: those alternatives bear close consideration. But they also strike me as somewhat similar to the logic behind nuclear deterrence.

    Nuclear deterrence, and the very existence of thermonuclear weapons, depends on the threat to commit a grossly unethical act: the weapons are by their nature indiscriminate and disproportionate. At the same time, it is possible to argue that their existence has reduced the incidence of bloody conventional war by promoting a culture of caution among the great powers. (How else to explain the quick resolution of the crisis in the Caucasus this summer?) Does that in any way invalidate the nuclear disarmament movement, or make the weapons themselves any less unjust? If, prior to their development, we had been discussing whether or not they should be produced and stockpiled, what would the correct answer have been?

    Of course, autonomous weapons aren’t quite like nuclear weapons: whereas it’s quite clear that nuclear weapons by their nature would violate the laws of war if used (though their existence may ironically help prevent war), it’s not so obvious whether autonomous weapons would be inherently unethical. But let us imagine that the critics of “battlefield robots” have a point. The question for this forum is: assuming valid ethical concerns over whether a particular category of weapons meets legal standards of discrimination and proportionality, to what extent should concerns over the likely political outcome of not developing them drive ethical discussions over whether they should be developed? Thoughts?


