
When Robots Are Not Just About Autonomy - Remote Platform Targeted Killing

For several years, my personal and scholarly interest in battlefield robots was about autonomy - autonomous firing systems on the battlefield. As one friend said, "Ken, it's like you ran out of Heinlein and Asimov books and had to invent something." I think those autonomy questions are fascinating from a legal and moral perspective, and I'll say something about them down the road. But another of my scholarly interests is the law and ethics of targeted killing - and I was studying it when Singer's utterly absorbing, utterly fascinating book appeared. It shifted my interest from a debate about technology some ways down the road - very important, to be sure, but down the road - to the application of a certain robotics technology today, ongoing in conflict today. I've read Wired for War in part with an eye to understanding the big picture, the overall outline, of the issues of robots in war, and, for what it's worth - and I'm not sure whether Singer would agree - it is this:

  • Stand-off, remote targeting in real time - i.e., Predator strikes - robotics in the sense of drones controlled off-battlefield by humans. But Predators are just the beginning, because missiles are too much firepower - what is needed instead are genuinely personal weapons: a little flying thingie that is remote-piloted up to the head of the Taleban commander and then blown up. Not a Predator missile, but a single-person kill. The issue of robotics is the issue of targeted killing. What makes it special is that it is simultaneously discrete and remote: in the past, discrimination usually meant a person getting as close as possible for a kill, but the promise of robotics in targeted killing is targeting discrimination by a non-human, stand-off platform.
  • Surveillance - when you eventually have thousands of small, insect-like flying gadgets gathering information everywhere, identifying targets, gathering intelligence on the battlefield and off, and feeding information for precise, discrete targeted killing by drones outfitted with weapons once again aimed at single individuals.
  • The calculus of making war - when you put few humans on the battlefield.  Does it make war easier or harder to undertake? Does it matter?
  • Autonomous weapons firing systems - in the future.
  • Counters to autonomous weapons firing systems - counters that might have to be created and deployed before your own autonomous firing system - for reasons of humanitarianism and your assessment of law - is ready for the field.

Autonomous weapons systems are just one of these issues, and there are probably more. Today, however, the main issue is targeted killing using drone aircraft. I assume, if I read Wired for War correctly, that we will eventually manage to get drones down to tiny sizes, with the ability to target and kill a single individual. But even leaving that aside as a future technology, there is a powerful strategic rationale driving the marriage of robotics and targeted killing. Suppose we ask what drives each individually:

  • Robotics. The attraction is that they are a stand-off, remote platform. They avoid 'boots on the ground' in situations where that might be politically infeasible or undesirable, or simply where a larger commitment strains military resources - any number of reasons.
  • Targeted killing.  The attraction is that targeted killing is more discriminate than killing in larger numbers, and that in the case of counterterrorism, the enemy is not in large numbers, massed or otherwise, but individuals or small groups, and dispersed, often among civilian populations.  

The humanitarian advance represented by robotics is sometimes oddly glossed over in what I, at least, tend to see as an insistence that there is something inherently wrong with one side having the advantages of technology, geography, etc. I've had these odd conversations with several human rights lawyers over the last two years in which, to my claim that these technologies improve target discrimination - and that, after all, ought to be the holy grail of humanitarian weapons development - the response is that this somehow "forces" the un-teched to respond by using civilians as cover. I myself think that response comes close to saying the technology is 'unsporting', but it seems to be a sort of meme, at least in my conversations.

Still, to Charli Carpenter's question about why robots have not become a campaigning issue for the human rights groups - part of the answer is not just the remoteness of the 'autonomy' issue, but also that, so far as I can tell on the basis of purely personal anecdotes, human rights folks are not really sure what to think. Some recognize that more discriminating targeting is supposed to be a goal of weapons technology; others think that getting it down to the level of individual targeting makes the decision to deploy too easy. And of course you might think both at the same time; they are not mutually exclusive. I will perhaps come back to that point in a later post; in some ways it hooks up to a response made to the ICRC clear back at its founding in the 19th century (Caroline Moorehead mentions this in her history of the Red Cross): that efforts to humanize war make war easier to undertake and reduce the disincentives to war. (There was also an original Star Trek episode in the 1960s dedicated to this very proposition, "A Taste of Armageddon".)

But now I want to make a different kind of observation about robots and targeted killing. The combination of stand-off platforms and targeted killing is partly driven, for the US at least, by that not-very-useful term, 'lawfare'. How? The incentives to capture in US counterterrorism have been shrinking over the last couple of years. Whether this is good or bad, moral or immoral - leaving law and policy aside normatively for the moment - the costs of capturing, detaining, and interrogating someone have risen sharply within the US system of counterterrorism. Considered without reference to norms, the disincentives to capture and the incentives to kill have both gone up a lot. And remote platform killing by missile removes many potentially messy questions of surrender, including the human rights monitors' investigations into whether you were offered surrender, accepted surrender, didn't accept - war crime, after all, or simple murder, depending - so that operationally, targeted killing by remote control has certain advantages, clarity and definiteness not least among them.

My point, then, is that if one looks down the list of topics in battlefield robotics, autonomy has garnered much attention. But one of the many salutary effects of Singer's book is to draw attention to what is going on now and is not sci-fi. The Obama campaign ran on Predators - it seemed to see them as the conflict-lite approach, and in many respects I don't disagree. But there is a powerful logic of strategic as well as legal incentives toward the combination of stand-off platform and targeted killing in counterterrorism and counterinsurgency, and the Obama administration is discovering that these, too, are very persuasive reasons. So let me end this post with two comments in US newspapers in the last couple of days taking this strategic logic largely for granted. First, Graham Allison and John Deutch - stalwart American liberals, each - taking for granted the importance of the Predator campaign in Pakistan:

"The counterterrorism strategy in Pakistan that has emerged since last summer offers our best hope for regional stability and success in dealing a decisive blow against al Qaeda and what Vice President Joe Biden calls "incorrigible" Taliban adherents. But implementing these operations requires light U.S. footprints backed by drones and other technology that allows missile attacks on identified targets. The problem is that the U.S. government no longer seems to be capable of conducting covert operations without having them reported in the press."

The other is from a Wall Street Journal news story on the Obama administration review of the Predator campaign in Pakistan:

"U.S. and Pakistani intelligence officials are drawing up a fresh list of terrorist targets for Predator drone strikes along the Pakistan-Afghanistan border, part of a U.S. review of the drone program, according to officials involved.
Pakistani officials are seeking to broaden the scope of the program to target extremists who have carried out attacks against Pakistanis, a move they say could win domestic support. The Obama administration is weighing the effectiveness of the program against the risk that its unpopularity weakens an important ally."

This is the first time I have seen in print (though I have been informally told) that Pakistani officials want the US to broaden the campaign beyond AQ and the Taleban to target people gunning for the government of Pakistan. The Obama administration thinks robotic stand-off killing has been an effective policy:

President Barack Obama concluded that the drones have been an effective weapon against al Qaeda since President George W. Bush accelerated the missile strikes last year. U.S. officials have seen evidence of disruption as militants devote more time to operational security, choose to sleep in orchards instead of buildings, and take more care about the people with whom they interact, said a person familiar with the evidence.
Already, the campaign has apparently stepped up attacks on the network of Pakistani Taliban leader Baitullah Mehsud, who is believed to be behind the 2007 assassination of former Prime Minister Benazir Bhutto, who was Mr. Zardari’s wife. In the fourth of a series of recent attacks targeting Mr. Mehsud’s network, a drone attack Wednesday killed at least eight militants along the Pakistan-Afghan border, according to two Pakistani officials.
The intensified campaign could help win domestic support for the strikes because it shows that the drone attacks are targeting direct threats to Pakistan, said a Pakistani official.
There is a discussion about whether to expand the strikes to outside Pakistan’s tribal areas, such as the province of Baluchistan. U.S. intelligence officials say they believe many of the Taliban’s senior leaders, such as Mullah Omar, operate openly in the provincial capital of Quetta. The idea of going that far has prompted concern in Islamabad that such strikes will greatly increase the numbers of civilian casualties and further fuel unrest.

If I understand Wired for War correctly, the US is concluding as a general proposition that targeted killing using stand-off, remote controlled platforms is one of the best new tools in its arsenal (especially if, over time, the technology can move to ever more controlled, discrete killing, to get from the level of a missile to something that can kill a single individual and yet not require any human agent - commando or CIA agent - on the ground).  I count myself among those who believe these developments are among the best in decades for improving discrimination in targeting and thus improving humanitarian performance in war and counterterrorism.  There are lots of people in the world who disagree, and Singer might well be among them in some ways, although not in others - his argument is complicated, the facts he presents many and varied, and he acknowledges lots of difficult tradeoffs.  

But either way, as a descriptive matter, robotics is already firmly at the center of targeted killing by the US, and targeted killing is coming to be an important aspect of evolving US counterterrorism strategy in theatres of conflict.  

Reader Comments (4)


I came across this paper this morning, and thought it might be of interest to this discussion, "Due Process and Targeted Killing of Terrorists."

"Targeted killing" is extra-judicial, premeditated killing by a state of a specifically identified person not in its custody. States have used this tool, secretly or not, throughout history. In recent years, targeted killing has generated new controversy as two states in particular - Israel and the United States - have struggled against opponents embedded in civilian populations. As a matter of express policy, Israel engages in targeted killing of persons it deems members of terrorist organizations who are involved in attacks on Israel. The United States, less expressly, has adopted a similar policy against al Qaeda - particularly in the border areas of Afghanistan and Pakistan, where the CIA has used unmanned Predator drones to fire Hellfire missiles to attempt to kill al Qaeda leaders. This campaign of Predator strikes has continued into the Obama Administration.

This Article explores the implications for targeted killing of the due process model that the Supreme Court has developed in Hamdi v. Rumsfeld and Boumediene v. Bush for detention of enemy combatants. Contrary to a charge leveled by Justice Thomas in his Hamdi dissent, this model does not break down in the extreme context of targeted killing. Instead, it suggests useful means to control this practice and heighten accountability. Our primary conclusion in this regard is that, under Boumediene, the executive has a due-process obligation to develop fair, rational procedures for its use of targeted killing no matter whom it might be targeting anywhere in the world. To implement this duty, the executive should, following the lead of the Supreme Court of Israel (among others), require an independent, intra-executive investigation of any targeted killing by the CIA. Such investigations should be as public as is reasonably consonant with national security. Even in a war-on-terror, due process demands at least this level of accountability for the power to kill suspected terrorists.

Mar 31, 2009 at 12:01 | Registered Commenter Drew Conway

Drew, thanks very much for this - I thought I managed to keep track of articles on this topic on SSRN but somehow missed this one. It is relevant to this discussion particularly as I have emphasized the way in which technology and certain legal and strategic incentives reinforce each other. I may not push too much harder on this here, as I also want to talk about autonomy issues and not pull us away from technology toward targeted killing as such. But, yes, I do think it's highly relevant.

Mar 31, 2009 at 12:15 | Unregistered Commenter Kenneth Anderson

Wait, this is a targeted assassination program, right? Why call it targeted killing, when you have a perfectly good word already...

Apr 1, 2009 at 9:57 | Unregistered Commenter joey

You write: "remote platform killing by missile removes many potentially messy questions of surrender, including the human rights monitors' investigations into whether you were offered surrender, accepted surrender, didn't accept - war crime, after all, or simple murder, depending - so that operationally, targeted killing by remote control has certain advantages, clarity and definiteness not least among them."

If to be compliant with the laws of war one must offer the enemy a chance to surrender, doesn’t this merely “clarify” that the use of such means is inherently unlawful?

There is an interesting question here about whether the assumptions underlying the law of war - that wars are fought between states, not individuals; that soldiers are simply means, and as such should not be unduly punished for performing their national duty - are anachronistic in an era of asymmetric war. My two cents: I'm not sure that's the case. Why is it any harder to see the humanity in a soldier recruited (through whatever means) to fight with a global movement, and thereby offer him the same status? On the other hand, if the law of war is primarily an activity constitutive of a civilized "us," then this becomes particularly hard to sustain.

Apr 2, 2009 at 5:55 | Unregistered Commenter Charli Carpenter
