1 INTRODUCTION

The concerns described above highlight that responsibility, accountability and human control are values often mentioned in the societal and academic debate on autonomous systems. Responsibility can be forward-looking, to actions to come, and/or backward-looking, to actions that have occurred. Accountability is a form of backward-looking responsibility that refers to the ability and willingness of actors to provide information and explanations about their actions, and it defines mechanisms for corporate and public governance to hold agents and organisations accountable in a forum. Responsibility contributes to minimising unintended consequences by anticipating actions and their possible unintended consequences and taking measures to prevent or mitigate them. Accountability can decrease unintended consequences because actors provide information and explanations about their previous actions, allowing other actors to learn from them and prevent mistakes and unintended consequences of their own.

We found little empirical research that supports the concerns mentioned above or that provides insight into how responsibility and accountability regarding the deployment of Autonomous Weapon Systems are perceived by the general public and the military. The Open Roboethics initiative surveyed public opinion in a poll in 2015 (Open Roboethics initiative, 2015) and issued a report. However, the results were not published in an academic journal and the survey was not extensive enough to draw substantive conclusions.

The notion of Meaningful Human Control is often put forward as a requirement in the debate on Autonomous Weapon Systems to ensure accountability and responsibility over this type of weapon system. The U.K.-based NGO Article 36 is credited with putting the concept of “Meaningful Human Control” at the centre of the discussion on Autonomous Weapon Systems by mentioning it in several reports and policy papers since 2013 (Amoroso & Tamburrini, 2021). Since then, the concept of Meaningful Human Control has often been mentioned as a requirement (Adams, 2001; Roff & Moyes, 2016; Vignard, 2014) to ensure accountability and responsibility for the deployment of Autonomous Weapon Systems, but the concept is not well defined in the literature and quantifying the level of control needed is hard (Schwarz, 2018). Adams (2001) noticed as early as 2001 that the role of the human had changed from that of an active controller to that of a supervisor, and that direct human participation in decisions of AI systems would become rare. Some scholars are working on defining the concept of Meaningful Human Control in Autonomous (Weapon) Systems (Ekelhof, 2015; Horowitz & Scharre, 2015; Mecacci & Santoni De Sio, 2019; Santoni de Sio & Van den Hoven, 2018). In recent years, other scholars have built on this work by operationalising the concept of Meaningful Human Control (see section 7.1 for emerging insights on operationalising Meaningful Human Control). Amoroso & Tamburrini (2021) bridge the gap between weapon usage and ethical principles based on ‘if-then’ rules, Umbrello (2021) proposes two Levels of Abstraction in which different agents have different levels of control over the decision-making process to deploy an Autonomous Weapon System,
