CHAPTER 2

In an attempt to overcome the conceptual impasse on the notion of Meaningful Human Control, Santoni de Sio and Van den Hoven (2018) sought to offer a deeper philosophical analysis of the concept by connecting it more directly to concepts from the philosophical debate on free will and moral responsibility, in particular the concept of “guidance control” developed by Fischer and Ravizza (1998). By reinterpreting and adapting the two criteria for guidance control, they identified two conditions that need to be satisfied for an autonomous system to be under Meaningful Human Control. The first is the tracking condition, which entails that ‘the system should be able to respond to both the relevant moral reasons of the humans designing and deploying the system and the relevant facts in the environment in which the system operates…’. The second is the tracing condition, according to which the actions of an Autonomous (Weapon) System should be traceable to a proper technical and moral understanding on the part of one or more relevant human persons who design or interact with the system (Santoni de Sio & Van den Hoven, 2018, p. 1). Mecacci and Santoni De Sio (2019) operationalised this concept of Meaningful Human Control further in order to specify design requirements. They focused on the tracking condition and offered a framework that conceives of Meaningful Human Control as “reason-responsiveness”, identifying agents and their different types of reasons in relation to the behaviour of an automated system. In this way, Mecacci and Santoni De Sio (2019) go beyond engineering and human factors conceptions of control.
In a way that directly connects Meaningful Human Control with the idea of social control over technology, the authors argue that, in the presence of appropriate technical and institutional design, a system can and should be under Meaningful Human Control by more than one agent, and even by supra-individual agents such as a company, society or state. These complex relationships of “reason-responsiveness” are modelled in a framework that considers the distance of different forms of human reasoning from the behaviour of a system. This scale of distance allows different types of agents, and their contexts, values and norms, to be classified. Mecacci and Santoni De Sio’s (2019) framework shows that the narrow focus of engineering and human factors control needs to be widened to allow the development of autonomous technologies that are sufficiently responsive to ethical and societal needs. In recent years, other scholars have been working on operationalising the concept of Meaningful Human Control (see section 7.1 for emerging insights on operationalising Meaningful Human Control). Amoroso and Tamburrini (2021) created a normative framework for Meaningful Human Control. They suggest a differentiated approach and propose abandoning the search for a one-size-fits-all solution. They state that rules are needed to bridge the gap between specific weapon systems and their uses on the one hand and ethical and legal principles on the other. Another approach is that of Umbrello (2021), who couples two different Levels of Abstraction (LoA) to achieve Meaningful Human Control over an Autonomous Weapon System. In this, he