the AIV & CAVV that we adhere to in this research: ‘A weapon that, without human intervention, selects and engages targets matching certain predefined criteria, following a human decision to deploy the weapon on the understanding that an attack, once launched, cannot be stopped by human intervention.’ (AIV & CAVV, 2015, p. 11; Broeks et al., 2021, p. 11).

The definition of the AIV & CAVV explicitly states that targets must match predefined criteria and that the weapon is deployed following a human decision. These two aspects are missing from the value-neutral definition presented by Taddeo and Blanchard (2022), yet from a military perspective they are imperative for the responsible use of Autonomous Weapon Systems. Although the definition of Taddeo and Blanchard (2022) is a valuable addition to the academic and political debate on Autonomous Weapon Systems, we adhere in our research to the definition of the AIV & CAVV for the reasons mentioned above.

Operationalising Meaningful Human Control

In recent years, several scholars have worked on operationalising the concept of Meaningful Human Control. Amoroso and Tamburrini (2021) created a normative framework for Meaningful Human Control. They suggest a differentiated approach and propose abandoning the search for a one-size-fits-all solution. They describe three roles that human control over weapon systems must play in order to be meaningful: 1) ‘…human operators must play the role of fail-safe actor, preventing malfunctioning weapons from resulting in direct attacks against civilian populations or excessive collateral damages’; 2) ‘…human control must function as an accountability attractor, securing legal conditions for criminal responsibility ascription in case a weapon follows a course of action that is in breach of international law.’; and 3) ‘…human control operates as a moral agency enactor, ensuring that decisions affecting the life, physical integrity, and property of people involved in armed conflicts, including combatants, are not taken by artificial agents.’ (Amoroso & Tamburrini, 2021, p. 258).

They state that rules are needed to bridge the gap between specific weapon systems and their uses on the one hand and ethical and legal principles on the other. These rules can be represented as ‘if-then’ statements: the ‘if’ part captures properties concerning the what and where of the mission and how the system will perform its task, and the ‘then’ part establishes the human-machine shared control that is legally required for the use of a given weapon system (Amoroso & Tamburrini, 2021); a minimal illustration of this rule structure follows below. Based on the taxonomy of Sharkey (2016), the authors (Amoroso & Tamburrini, 2021, p. 261) propose five basic levels (L) of human-machine interactions to use as the ‘then’ part of
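To make the ‘if-then’ rule structure concrete, the following is a minimal sketch in Python. It is not part of the Amoroso and Tamburrini (2021) framework: the mission properties, the control-level labels, and the mapping thresholds are all hypothetical placeholders (the five Sharkey-based levels themselves are not reproduced here), chosen only to show how mission properties on the ‘if’ side could map to a legally required form of shared control on the ‘then’ side.

```python
from dataclasses import dataclass
from enum import Enum


class ControlLevel(Enum):
    """Hypothetical control-level labels; placeholders, not the five
    levels Amoroso and Tamburrini (2021) derive from Sharkey (2016)."""
    FULL_HUMAN_DELIBERATION = 1
    HUMAN_APPROVAL_REQUIRED = 2
    HUMAN_VETO_WINDOW = 3


@dataclass
class MissionProfile:
    """The 'if' side: the what and where of the mission and how the
    system will perform its task (illustrative attributes only)."""
    target_type: str   # e.g. "materiel" or "personnel"
    environment: str   # e.g. "uncluttered" or "populated"
    task: str          # e.g. "defensive interception"


def required_control(mission: MissionProfile) -> ControlLevel:
    """The 'then' side: map mission properties to a required level of
    human-machine shared control. Thresholds here are invented for
    illustration; in the framework they would follow from legal and
    ethical review, not hard-coded heuristics."""
    if mission.environment == "populated" or mission.target_type == "personnel":
        return ControlLevel.FULL_HUMAN_DELIBERATION
    if mission.task == "defensive interception":
        return ControlLevel.HUMAN_VETO_WINDOW
    return ControlLevel.HUMAN_APPROVAL_REQUIRED


if __name__ == "__main__":
    mission = MissionProfile(target_type="materiel",
                             environment="uncluttered",
                             task="defensive interception")
    print(required_control(mission))  # ControlLevel.HUMAN_VETO_WINDOW
```

The point of the sketch is the differentiated mapping itself: rather than one fixed degree of autonomy for all systems, each combination of mission properties triggers its own required form of human control.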