
the bridge rules:

L1: A human engages with and selects targets, and initiates any attack.
L2: A program suggests alternative targets and a human chooses which to attack.
L3: A program selects targets and a human must approve them before the attack.
L4: A program selects and engages targets, but is supervised by a human who retains the power to override its choices and abort the attack.
L5: A program selects targets and initiates an attack on the basis of the mission goals as defined at the planning/activation stage, without further human involvement.

As an example, the following if-then rule is given: "IF the weapons system is programmed to perform an exclusively antimateriel defensive function (what property) AND is deployed in a sufficiently structured scenario (where property), THEN (L4) human operators must be put in charge of supervising the weapon's selection of targets and be given the power to override its choices." (Amoroso & Tamburrini, 2021, p. 261). A code sketch of this rule is given at the end of this section.

Another approach to operationalising Meaningful Human Control is presented by Umbrello (2021), who couples two different Levels of Abstraction (LoA) to achieve Meaningful Human Control over an Autonomous Weapon System. In this he combines systems thinking and systems engineering as conceptual tools to frame the commonalities between these two LoAs. The author views the weapon's deployment as preceded by a broader decision-making mechanism in which different agents have different levels of control over a specific part of the process; the concept of Meaningful Human Control should reflect this and be positioned within the larger distributed network of decision-making. The systems thinking LoA helps to conceptualize procedural processes such as operational planning and target identification, while the systems engineering LoA aids in understanding both the tracing of a system's design and the tracking of its responsiveness to the relevant moral reasons of the relevant agents. This makes it possible to design for complex emergent behaviours and for the boundaries of systems. To achieve Meaningful Human Control, both LoAs are required and need to be coupled (Umbrello, 2021).

A third approach to operationalising Meaningful Human Control is that of Cavalcante Siebert et al. (2023), who propose four actionable properties for AI-based systems under Meaningful Human Control in order to bridge the gap between philosophical theory and engineering practice. Building on the two necessary conditions for meaningful human control - tracking and tracing - distinguished by Santoni de Sio & Van den Hoven (2018), the properties Cavalcante Siebert et al. (2023, p. 251) propose are:

Property 1: The human-AI system has an explicit moral operational design domain (moral ODD) and the AI agent adheres to the boundaries of this domain.
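The if-then structure of Amoroso and Tamburrini's bridge rules can be made concrete in code. The following is a minimal illustrative sketch, not taken from the source: the names (WeaponProfile, anti_materiel_only, structured_scenario) are hypothetical stand-ins for the "what" and "where" properties, only the single example rule quoted above is encoded, and all other cases conservatively default to L1.

```python
from dataclasses import dataclass
from enum import IntEnum


class ControlLevel(IntEnum):
    """The five levels of human control (Amoroso & Tamburrini, 2021)."""
    L1_HUMAN_SELECTS_AND_ATTACKS = 1  # human engages, selects, initiates
    L2_PROGRAM_SUGGESTS = 2           # program suggests, human chooses
    L3_HUMAN_APPROVES = 3             # program selects, human approves
    L4_HUMAN_SUPERVISES = 4           # program selects/engages, human may override
    L5_FULL_AUTONOMY = 5              # program acts on mission goals alone


@dataclass
class WeaponProfile:
    """Hypothetical encoding of the 'what' and 'where' properties."""
    anti_materiel_only: bool   # what property: exclusively anti-materiel, defensive
    structured_scenario: bool  # where property: sufficiently structured deployment


def bridge_rule(profile: WeaponProfile) -> ControlLevel:
    """Encodes only the example rule quoted above; every other case
    defaults conservatively to full human control (L1)."""
    if profile.anti_materiel_only and profile.structured_scenario:
        return ControlLevel.L4_HUMAN_SUPERVISES
    return ControlLevel.L1_HUMAN_SELECTS_AND_ATTACKS


if __name__ == "__main__":
    ciws = WeaponProfile(anti_materiel_only=True, structured_scenario=True)
    print(bridge_rule(ciws))  # ControlLevel.L4_HUMAN_SUPERVISES
```

A fuller rule base would add one clause per bridge rule, each mapping a combination of what/where properties to a required minimum level of human control.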
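Property 1 lends itself to a similarly hedged sketch. The boundary variables below (max_collateral_risk, permitted_target_classes) are hypothetical examples of what a moral ODD might contain; the sketch illustrates only the structural idea that the AI agent checks the ODD's boundaries before acting and defers to a human when outside them.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class MoralODD:
    """An explicit moral operational design domain (bounds are hypothetical)."""
    max_collateral_risk: float             # upper bound deemed acceptable at design time
    permitted_target_classes: frozenset[str]


@dataclass
class Situation:
    """The conditions the AI agent currently faces."""
    estimated_collateral_risk: float
    target_class: str


def within_moral_odd(odd: MoralODD, s: Situation) -> bool:
    """Property 1: the AI agent may act autonomously only inside the moral ODD."""
    return (s.estimated_collateral_risk <= odd.max_collateral_risk
            and s.target_class in odd.permitted_target_classes)


def decide(odd: MoralODD, s: Situation) -> str:
    if within_moral_odd(odd, s):
        return "proceed_under_supervision"
    return "defer_to_human"  # outside the ODD: control reverts to the operator
```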
