
Implications for command and control

With Wired for War, Peter Singer has produced a major work on a fascinating and vital aspect of the ongoing transformation of war in our era, one that heralds a wider engagement with this topic (I can already think of a couple of forthcoming pieces on aspects of this subject). While he has been keen to make the work accessible to the widest audience possible, Singer has been careful to balance every gee-whiz moment and heady proclamation of an imminent ‘robotic revolution’ with more sober assessments of the limitations and attendant problems of the growing use of automated and remote-controlled machines on the battlefield. As such, the book offers a comprehensive, well-rounded tour of the subject, one that provides many springboards for further research and discussion, in this forum as in others. I have no doubt Wired for War will be a major reference work in security debates in the coming years.

In this first post, I will address the issues of command and control raised by Wired for War, since these are the ones I have engaged with most in my recent book. In chapter 18, Singer offers a brief but fascinating consideration of the impact of both UAVs and automated decision aids on command practices, effects which I feel are likely to be counter-productive, if not disastrous.

We are first told that one consequence of the use of UAVs is the growing temptation for generals and other officers high up the command chain to micro-manage operations using the data feeds these devices provide. Convinced that they have at their disposal a God’s-eye view of the battlefield, the military brass thus dispossesses its subordinates of any autonomy in the conduct of operations (or even of any claim to superior situational awareness, since those on the ground by definition have only a bug’s-eye view). The evidence and analysis here support my own concerns that the “flattening of the chain of command” and the cutting out of the middle layers of command called for by advocates of network-centric warfare would likely mean not decentralisation but greater interference by the upper ranks in the activities of the lower ones. Singer also provides many good reasons why this is undesirable.

The tension between hierarchical micro-management and distributed command around an overarching mission objective is one that courses through the history of war and that van Creveld has most insightfully analysed in Command in War, weighing in powerfully for the latter. One of the characteristics of the information technology being so intensively deployed in the military (and of which robots are an aspect) is that it can be harnessed for either of these approaches, with the outcome determined not primarily by technology but by doctrine and operational practice. I would here like to connect this question to the issue of swarming discussed earlier in the book (pp.229-236). Indeed the swarm, the decentralised interaction of autonomous agents uncovered by the sciences of chaos and complexity, presents the alternative mode of military organisation to that of centralisation. Insurgents and terrorist networks have already made very effective use of swarming, and without any elaborate information and communication technologies. To what extent the US military embraces the swarm is in my view one of the central questions for the future of armed forces as a whole, not simply for robots.
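To make the contrast concrete, the logic of a swarm can be sketched in a few lines of code. What follows is a toy illustration only, under invented assumptions of my own (the parameters, the movement rules, and the function names are not drawn from any actual system or doctrine): each agent acts purely on local information, yet the group converges on a common objective without any central commander issuing orders.

```python
import random

def step(agents, objective, separation=1.0, speed=0.2):
    """Advance every agent one tick using only locally available information."""
    new_positions = []
    for i, (x, y) in enumerate(agents):
        # Attraction: head toward the shared objective.
        dx, dy = objective[0] - x, objective[1] - y
        dist = max((dx**2 + dy**2) ** 0.5, 1e-9)
        vx, vy = dx / dist, dy / dist
        # Repulsion: avoid crowding neighbours sensed nearby (no global view).
        for j, (ox, oy) in enumerate(agents):
            if i == j:
                continue
            ddx, ddy = x - ox, y - oy
            d = max((ddx**2 + ddy**2) ** 0.5, 1e-9)
            if d < separation:
                vx += ddx / d
                vy += ddy / d
        norm = max((vx**2 + vy**2) ** 0.5, 1e-9)
        new_positions.append((x + speed * vx / norm, y + speed * vy / norm))
    return new_positions

random.seed(1)
agents = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(8)]
for _ in range(200):
    agents = step(agents, objective=(0.0, 0.0))
# No agent commands any other, yet all end up clustered around the objective.
```

The point of the sketch is that the coordination emerges from the interaction of the local rules, which is precisely what makes a swarm resilient to the loss of any single node.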

The second issue broached by Singer is that of automated systems using artificial intelligence to assist or even replace decision-makers at the command level. The head of Strategic Command tells us that “the decision cycle of the future is going to be microseconds” (p.357), a statement indicative of the widespread belief that it is more important to act as rapidly as possible than to buy oneself more time to think about how one is going to act. And what ‘thinks’ faster than a computer?

Any mention of a decision cycle in the US military today is invariably accompanied by a reference to the OODA loop (observe, orient, decide, act), first formulated by the military strategist John Boyd. Yet it is almost always deployed in an impoverished form, reduced to a sequential decision cycle to be run through as quickly as possible. This entirely misunderstands what Boyd saw as the most important phase of the loop: orientation. In his mind, orientation was where incoming information was analysed using existing frameworks of interpretation – this much is commonly understood. However, Boyd crucially argued that this was also where those frameworks would be assessed, taken apart and reinvented if necessary on the basis of the new information. Furthermore, orientation would in fact guide all the other phases of the loop (see Boyd’s full diagram of the OODA loop). Boyd believed that existing frameworks or models of reality suffered from an inevitable entropy, all the more so since war involves a sentient opposition that will seek to mislead and to negate any successful tactic or strategy.
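The distinction can be caricatured in code. The sketch below is a deliberate simplification under invented assumptions (the model, the ‘surprise’ counter and the signature names are all mine, not Boyd’s; his diagram is conceptual, not an algorithm): in the sequential reading the interpretive framework is fixed, while in Boyd’s reading orientation can dismantle and rebuild that framework when reality stops fitting it.

```python
def naive_ooda(observations, model):
    """Sequential reading: cycle through O-O-D-A as fast as possible, model fixed."""
    return [model(obs) for obs in observations]

def boydian_ooda(observations, model, rebuild, surprise_threshold=3):
    """Boyd's reading: orientation also audits and remakes the framework itself."""
    actions, surprises, seen = [], 0, []
    for obs in observations:
        seen.append(obs)
        interpretation = model(obs)          # orient with the current framework
        if interpretation is None:           # an observation the model cannot place
            surprises += 1
            if surprises >= surprise_threshold:
                model = rebuild(seen)        # take the framework apart, reinvent it
                surprises = 0
                interpretation = model(obs)  # re-orient with the new framework
        actions.append(interpretation)
    return actions

def make_model(known):
    """A 'framework of interpretation': recognises only the signatures it was given."""
    return lambda obs: f"engage:{obs}" if obs in known else None

observations = ["tank", "tank", "drone", "drone", "drone", "drone"]
naive = naive_ooda(observations, make_model({"tank"}))
boyd = boydian_ooda(observations, make_model({"tank"}),
                    rebuild=lambda seen: make_model(set(seen)))
# naive never recovers from the unfamiliar signature; boyd eventually does.
```

Speed is irrelevant to the first loop’s failure: it can cycle in microseconds and still never place the unfamiliar observation, because nothing in it ever questions the framework doing the placing.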

While AI may have made progress over the years, I suspect it remains (as it most likely will for the foreseeable future) incapable of significantly recasting its modes of operation or its schemas of interpretation in the way Boyd saw as necessary. In all probability, such decision systems rest on the recognition of certain pre-determined patterns and signatures in the behaviour and identity of opposing forces, to which they apply fixed models and routines in order to produce command decisions. Here we have again what amounts to the imposition of hierarchical command on all subordinate troops, this time on the basis of pre-determined closed models and algorithms programmed into such systems. While there may be some limited uses for decision aids (notably in the area of logistics), any attempt to generalise them at the tactical or strategic level will be founded on a deep misunderstanding of the nature of war, one that reduces it to a mere process of production to be optimised and neglects the complex interaction between opposing wills. Rapidity of decision and action will be of limited benefit if the behaviour produced is utterly predictable. A resourceful adversary will eventually find ways to exploit this predictability and operate within the US military’s OODA loop.
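The predictability problem can likewise be illustrated with a toy sketch (again, every signature and response below is an invented assumption of mine, not a description of any real system): a decision aid built on a fixed signature-to-response table can be fully mapped by an adversary who simply probes it and watches what comes back.

```python
# Hypothetical fixed rule table of the kind a closed decision aid might encode.
FIXED_RULES = {
    "convoy": "air_strike",
    "ambush": "reinforce",
    "raid": "pursue",
}

def decision_aid(signature):
    """Fast but fixed: the same input always yields the same command."""
    return FIXED_RULES.get(signature, "hold")

def probing_adversary(signatures, probes_per_signature=2):
    """Learn the defender's rules simply by observing repeated responses."""
    learned = {}
    for sig in signatures:
        responses = {decision_aid(sig) for _ in range(probes_per_signature)}
        if len(responses) == 1:  # perfectly predictable response
            learned[sig] = responses.pop()
    return learned

learned = probing_adversary(["convoy", "ambush", "raid", "feint"])
# The adversary now anticipates every response, including the default, and
# can stage whichever signature provokes the reaction it wants to exploit.
```

However quickly such a system cycles, its very consistency is what hands the initiative to a patient opponent.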

Reader Comments (1)

Few in the military really know or understand the OODA Loop. They are still stuck in FM 3-0, Operations, and FM 5-0, Plans and Orders. Even with all the latest in technology, the entire planning and analysis process is an if-then approach that relies HEAVILY on trigger points, decision points and enemy course-of-action templates. There may be situational awareness with the technology, but there is absolutely nothing that can be called situational understanding. They do things that are "complicated" very well, things that require timing and movement of units and logistics; they DO NOT do well things that are complex: social dynamics and planning operations that affect and effect multiple lines of effort and operations in COIN.

Apr 1, 2009 at 12:56 | Unregistered CommenterDr. Terry Tucker
