Proefschrift

CHAPTER 1

and Cavalcante Siebert et al. (2023), who build on the two necessary conditions for Meaningful Human Control – tracking and tracing – distinguished by Santoni de Sio & Van den Hoven (2018), to create actionable properties for the design of AI systems in which human and artificial agents interact. Reflecting on their work, the authors highlight that 'Meaningful human control is necessary but not sufficient for ethical AI' (Cavalcante Siebert et al., 2023, p. 252). They amplify this by stating that for a human-AI system to align with societal values and norms, Meaningful Human Control must entail a larger set of design objectives, which can be achieved through transdisciplinary practices.

In our opinion, Meaningful Human Control alone will not suffice as a requirement to minimize unintended consequences of Autonomous Weapon Systems, for several reasons. Firstly, the concept of Meaningful Human Control is potentially controversial and confusing, as human control is defined and understood differently across literature domains (see section 2.11 for an overview of the concept of control in different domains). Secondly, standard concepts of control in engineering and the military domain entail a capacity to directly cause or prevent an outcome, which is not achievable with an Autonomous Weapon System: once an autonomous weapon is launched, humans can no longer intervene. Finally, the specific literature on Meaningful Human Control over Autonomous Weapon Systems does not offer a consistent, usable concept. We believe that a different approach is needed to minimize unintended consequences of Autonomous Weapon Systems. We therefore propose an additional perspective that focuses on human oversight instead of Meaningful Human Control. Several scholars have described the concept of human oversight in Autonomous Weapon Systems and in AI more generally.
HRW and IHRC (2012) state that human oversight over robotic weapons is required to guarantee adequate protection of civilians in armed conflicts, and they fear that when humans retain only a limited oversight role, or none at all, they fade out of the decision-making loop. Taddeo and Floridi (2018) describe human oversight procedures as necessary to minimize unintended consequences and to compensate for unfair impacts of AI. The European Commission mentions Human Agency and Oversight as one of its Ethics Guidelines for Trustworthy AI (European Commission, 2019). However, current human oversight mechanisms lack effectiveness (HRW & IHRC, 2012) and might gradually erode to become meaningless or even impossible (Williams, 2015). Marchant et al. (2011) note that several governance mechanisms can be applied to achieve human oversight of Lethal Autonomous Robots. Oversight incorporates the governance mechanisms of institutions and is therefore broader than Meaningful Human Control alone. We propose a human oversight mechanism from a governance perspective to ensure accountability and responsibility in the deployment of Autonomous Weapon Systems, in order to minimize unintended consequences. In the remainder of this chapter,
