Toward a Normative Model of Meaningful Human Control over Weapons Systems

Cited: 12
Authors
Amoroso, Daniele [1 ]
Tamburrini, Guglielmo [2 ]
Affiliations
[1] Univ Cagliari, Dept Law, Int Law, Cagliari, Italy
[2] Univ Napoli Federico II, Philosophy Sci & Technol, Naples, Italy
Keywords
autonomous weapons systems; meaningful human control; human dignity; just war theory; accountability;
DOI
10.1017/S0892679421000241
Chinese Library Classification (CLC): B82 [Ethics (Moral Philosophy)]
Abstract
The notion of meaningful human control (MHC) has gathered overwhelming consensus and interest in the autonomous weapons systems (AWS) debate. By shifting the focus of this debate to MHC, one sidesteps recalcitrant definitional issues about the autonomy of weapons systems and profitably moves the normative discussion forward. Some delegations participating in discussions at the Group of Governmental Experts on Lethal Autonomous Weapons Systems meetings endorsed the notion of MHC with the proviso that one size of human control does not fit all weapons systems and uses thereof. Building on this broad suggestion, we propose a "differentiated"-but also "principled" and "prudential"-framework for MHC over weapons systems. The need for a differentiated approach-namely, an approach acknowledging that the extent of normatively required human control depends on the kind of weapons systems used and contexts of their use-is supported by highlighting major drawbacks of proposed uniform solutions. Within the wide space of differentiated MHC profiles, distinctive ethical and legal reasons are offered for principled solutions that invariably assign to humans the following control roles: (1) "fail-safe actor," contributing to preventing the weapon's action from resulting in indiscriminate attacks in breach of international humanitarian law; (2) "accountability attractor," securing legal conditions for international criminal law (ICL) responsibility ascriptions; and (3) "moral agency enactor," ensuring that decisions affecting the life, physical integrity, and property of people involved in armed conflicts be exclusively taken by moral agents, thereby alleviating the human dignity concerns associated with the autonomous performance of targeting decisions. Finally, the prudential character of our framework is expressed by a rule that imposes, by default, the most stringent levels of human control on weapons targeting.
The default rule is motivated by epistemic uncertainties about the behaviors of AWS. Designated exceptions to this rule are admitted only in the framework of an international agreement among states, which expresses the shared conviction that lower levels of human control suffice to preserve the fail-safe actor, accountability attractor, and moral agency enactor requirements on those explicitly listed exceptions. Finally, we maintain that this framework affords an appropriate normative basis for both national arms review policies and binding international regulations on human control of weapons systems.
Pages: 245-272
Page count: 28