Thursday, 2 April

The ethical dilemmas of the robotic revolution

After the more instrumental concerns of command and control, I would like here to consider the ethical issues raised by the development of robotic systems and the consequent removal of military personnel from harm's way (one of the recurrent claims made in Wired for War is that these machines are saving the lives of young Americans). Precision-guided munitions were seen to make possible an ethically superior form of warfare, in which the lofty humanitarian ideals invoked for a conflict such as Kosovo would be neither tainted by messy collateral damage nor undermined by military casualties that domestic audiences would not countenance. Similarly, robotics appears to offer the tantalising prospect of freeing warfare from some of its thorniest ethical dilemmas. A contrario, I wish to argue that it is neither desirable nor feasible to expunge such dilemmas from war (indeed, they are what makes war such a profoundly human activity, even in the midst of its sometimes searing inhumanity) and that robotics will in fact raise a whole gamut of new ethical problems.

One of the arguments made in favour of the ‘robotics revolution’ is that it will become possible to intervene in trouble spots such as Sudan without the fear of military casualties that presently holds back such operations (Somalia was in this sense a watershed moment). Assuming for now that such a proposition is technically feasible, I believe it poses some major ethical questions. In a fascinating article on the Kosovo war, Paul Kahn argues that the riskless form of war waged there was morally perilous. For Kahn, war cannot simply be reduced to a cost-benefit analysis; it also communicates certain messages about the society that wages it. In the case of Kosovo, the clear message sent by the practice of bombing from 40,000 feet and the refusal to commit ground troops was that the lives of the Kosovars, on whose behalf the West claimed to be intervening, were worth less than those of NATO soldiers.

“Riskless warfare in pursuit of human rights is, therefore, actually a moral contradiction. If the decision to intervene is morally compelling, it cannot be conditioned on political considerations that assume an asymmetrical valuing of human life […] Our uneasiness about a policy of riskless intervention in Kosovo arises out of an incompatibility between the morality of the ends, which are universal, and the morality of the means, which seem to privilege a particular community.”


Kahn goes on to argue that “outside of our own community, the right to intervene, even in a good cause, is never clear” and that intervention on the grounds of universal rights therefore entails a claim that those on whose behalf we intervene form part of a new, enlarged community that exceeds existing boundaries. Such a claim must, however, be cashed out in the way we intervene: it cannot express a manifest asymmetry in the valuing of life without moral contradiction. Robotics threatens to reinforce this asymmetry further, while communicating to the world that the US will defend values it views as universal only insofar as doing so does not entail a willingness to accept real sacrifice in upholding them.

The second ethical aspect I want to look at pertains to autonomous killer robot systems and whether they may be able to make ethical decisions on the battlefield. The capacity for ethical self-reflection is one of the key characteristics that distinguishes us from other living beings, and as an activity in which death is dealt out in the name of higher communal principles, war is shot through with ethical quandaries. The ability of an autonomous robotic system to act ethically thus seems like a necessary prerequisite if we are ever to grant such systems free rein in the decision to kill.

What then does it mean to be an ethical being? Can it simply be the automatic and unambiguous application of a certain number of rules of behaviour in accordance with some pre-established values? Can we formalise such behaviour into a set of programmable instructions? I confess to being highly sceptical of any attempt to do so.
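
To make that scepticism concrete, it is worth sketching what such a formalisation might look like. The following is a purely hypothetical sketch in Python – every name, predicate and threshold in it is invented for illustration, not drawn from any actual system. The rules execute unambiguously, but each input they consume already presupposes precisely the contextual ethical judgement the rules were meant to replace.

    # Hypothetical sketch: battlefield ethics as fixed programmable rules.
    # All names and thresholds here are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Target:
        is_combatant: bool       # who decides this, and on what evidence?
        collateral_risk: float   # 0.0 to 1.0 -- how could this be measured?
        military_value: float    # value to whom, on what scale?

    def may_engage(target: Target, risk_threshold: float = 0.1) -> bool:
        """Apply fixed 'ethical' rules of engagement and return a verdict."""
        if not target.is_combatant:
            return False                    # rule: never target non-combatants
        if target.collateral_risk > risk_threshold:
            return False                    # rule: cap acceptable collateral risk
        return target.military_value > 0.5  # rule: engage only worthwhile targets

    # Example: the rules yield a clean verdict...
    print(may_engage(Target(is_combatant=True, collateral_risk=0.05, military_value=0.9)))
    # ...but only because every ethically hard question was answered upstream,
    # when the three input fields were filled in.

The code runs and always returns a clean answer; the difficulty is that classifying someone as a combatant, or putting a number on collateral risk, is itself the ethical judgement – merely relocated, not resolved.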
 
While we do of course elaborate certain ethical codes, these offer only rough general principles, since every ethical situation is unique and demands an individual choice made within its particular circumstances and setting. Ethics therefore cannot be reduced to a fixed set of rules to be applied to any given situation, nor can a decision be made merely on the basis of a series of logical propositions. Jean-Paul Sartre famously narrated the moral dilemma faced by one of his students in occupied France during the Second World War. The student felt it was his duty to join the resistance and fight for the liberation of his country, but was torn by the knowledge that if he did so, the sick mother he alone was caring for would likely suffer and perhaps die. How is one to determine the ethically superior decision in this situation? Sartre declined to advise his student on any particular course of action, arguing that the eventual choice constituted a quintessential existential commitment that would define the individual making it. I would argue that such dilemmas abound in war, in which lives are risked every day, uncomfortable choices confront all those involved, and predetermined rules provide only general guidance.

All this seems to pose insuperable problems for the way in which logic-based computer systems operate. Some claim that one of the advantages of robots is that they do not suffer from the emotions which so debilitate human soldiers – but are emotion and empathy not among the necessary attributes of an ethical being? I am aware, of course, that here I am touching upon the thorny issue of the fundamental nature of human consciousness; some scientists and philosophers would argue that it can essentially be broken down into discrete logical processes and will thus one day be emulated by artificial systems. I have profound doubts about such claims, but I recognise that the question remains open, and any definitive statement about what machines may or may not one day be capable of is premature.

So let us suppose for a moment that it becomes possible to design artificial intelligences capable of making the same complex ethical choices that humans do (and thus of exerting a recognisable measure of free choice – of individual existential commitment, as Sartre would have it). In what sense would the internal life of such machines actually differ from ours? Would such entities not be entitled to all the rights we accord to humans? It might no longer be ethically permissible for us to use robots as cheap cannon fodder, substitutes for their flesh-and-blood brethren. We might have to allow for robots that refuse to go to war or to carry out certain orders, or try in court those that commit war crimes.

Therefore, either it will never be possible to endow AI with a consciousness akin to ours, in which case robots will always fall short of the ethical imperatives of war; or it will become possible, at which point the role of robots in war and their status within human societies will become highly problematic. Whatever the outcome, the introduction of unmanned killer robots will not be the panacea that frees us from the difficult ethical choices that war entails.
