Autonomous weapon systems - Q&A

A challenge to human control over the use of force.

Technological advances in weaponry mean that decisions about the use of force on the battlefield could increasingly be taken by machines operating without human intervention. Here, we examine the potential implications of such a profound change in the way war is waged, and caution against the use of such weapons unless respect for international humanitarian law can be guaranteed.

How could autonomous weapon systems, operating independently, distinguish between a combatant and a civilian? Would they be capable of cancelling an attack that risks disproportionate effects on civilians? And who would be held responsible and accountable for a violation of international humanitarian law?

Owing to the many unresolved questions, the ICRC has called on States to properly assess the potential human cost and the international humanitarian law implications of these new technologies of warfare. In March 2014, the ICRC convened an international expert meeting to facilitate discussion of these issues. (Read the Expert meeting report.)

What are autonomous weapons?

Autonomous weapon systems (also known as lethal autonomous weapons or “killer robots”) independently search for, identify and attack targets without human intervention. There are already some weapon systems in use today that have autonomy in their ‘critical functions’ of identifying and attacking targets. For example, some defensive weapon systems have autonomous modes to intercept incoming missiles, rockets, artillery shells or aircraft at close range. So far, these weapons tend to be fixed in place and to operate autonomously for short periods, in narrow circumstances (e.g. where there are relatively few civilians or civilian objects) and against limited types of targets (primarily munitions or vehicles). In the future, however, autonomous weapon systems could operate outside tightly constrained spatial and temporal limits, encountering a variety of rapidly changing circumstances and possibly targeting humans directly.

Is a drone a type of autonomous weapon?

Autonomous weapon systems fire without human intervention, in contrast to the unmanned air systems (also known as drones or remotely piloted aircraft) in use today. Drones may have other autonomous features (such as auto-pilot and navigation) but they require human operators to select targets and to activate, direct and fire their weapons.

There have been calls for a moratorium or a ban on the development, production and use of autonomous weapon systems. Does the ICRC support these calls?

For now, the ICRC has not joined these calls. However, it is urging States to consider the fundamental legal and ethical issues raised by autonomous weapon systems before such weapons are further developed or deployed in armed conflict, as international humanitarian law requires. The ICRC is concerned about the potential human cost of autonomous weapon systems and about whether they are capable of being used in accordance with international humanitarian law.

What does international humanitarian law say about autonomous weapons?

There is no rule specific to autonomous weapon systems. However, under Additional Protocol I to the Geneva Conventions, each State must determine whether the employment of any new weapon, means or method of warfare that it develops or acquires would be prohibited by international law in some or all circumstances.

In other words, the longstanding rules of international humanitarian law governing the conduct of hostilities, in particular the rules of distinction, proportionality and precautions in attack, apply to all new weapons and technological developments in warfare, including autonomous weapon systems. Carrying out such legal reviews is of crucial importance in light of the development of new weapons technologies.

The central challenge for any State developing or acquiring an autonomous weapon system is to ensure it is capable of operating in compliance with all these rules. For example, it is not clear how such weapons could discriminate between a civilian and a combatant, as required by the rule of distinction. Indeed, such a weapon might also have to distinguish between active combatants and those hors de combat or surrendering, and between civilians taking a direct part in hostilities and armed civilians, such as law enforcement personnel or hunters, who remain protected against direct attack.

An autonomous weapon system will also have to operate in compliance with the rule of proportionality, which requires that the incidental civilian casualties expected from an attack on a military target not be excessive when weighed against the anticipated concrete and direct military advantage. Finally, an autonomous weapon system will have to operate in a way that enables application of the required precautions in attack designed to minimize civilian casualties.

Assessments of current and foreseeable technology indicate that it is unlikely these decision-making capabilities could be programmed into a machine. There are therefore serious doubts today about the ability of autonomous weapon systems to comply with international humanitarian law in all but the narrowest of scenarios and the simplest of environments.

What might be the implications of using autonomous weapon systems in armed conflict?  

Some proponents of autonomous weapon systems argue that they could be programmed to operate more ‘cautiously’ and accurately than human beings, and therefore be used to limit unintended civilian casualties. On the other hand, critics counter that autonomous weapon systems will always lack the human judgement necessary for lawful use of force, and that their use is more likely to result in much greater human cost.

These weapon systems also raise serious ethical questions, and their widespread deployment would represent a paradigm shift in the conduct of hostilities. The fundamental question for all of us is whether the principles of humanity and the dictates of public conscience can allow machines to make life-and-death decisions.

Who is responsible if the use of an autonomous weapon system results in a violation of international humanitarian law?

As a machine, an autonomous weapon system could not itself be held responsible for a violation of international humanitarian law. Beyond the responsibility of those deploying these systems, this raises the question of who would be legally responsible if the operation of an autonomous weapon system resulted in a war crime: the engineer, the programmer, the manufacturer or the commander who activated the weapon? And if responsibility cannot be determined as international humanitarian law requires, is it legal or ethical to deploy such systems?

What should be the focus of future discussions among States?

With increasing autonomy, there is a risk that machine decision-making will replace human decision-making, thereby eroding human control over the use of force. While there is recognition that humans must retain ultimate control, more detailed deliberation is needed about what constitutes adequate, meaningful or appropriate human control over the use of force.

The ICRC has recommended that States examine autonomy in the ‘critical functions’ of existing and emerging weapon systems, and share this information, to gain a better understanding. Future discussions must address a key question: at what point, and in which circumstances, do we risk losing meaningful human control over the use of force?

As many questions remain unanswered, the ICRC is calling on States to ensure that autonomous weapon systems are not employed if compliance with international humanitarian law cannot be guaranteed.