
3. Autonomy level 3 – Task Autonomy: ‘A human operator specifies a general task and the platform processes a course of action and carries it out under its own supervision. The operator typically has the means to oversee the system, but this is not necessary for the operation.’

4. Autonomy level 4 – Full Autonomy: ‘A system with full autonomy would create and complete its own tasks without the need for any human input, with the exception of the decision to build such a system. The human is so far removed from the loop that the level of direct influence is negligible. These systems might display capacities that imitate or replicate the moral capacities of sentient human beings (though no stand on this matter shall be taken here)’ (Galliott, 2015, p. 7).

In our opinion, this classification is a good attempt at classifying the degree of autonomy of Autonomous Weapon Systems, but we have some reservations from an engineering point of view. Galliott (2015) himself states that it would be possible to merge the second and third levels of autonomy, because both represent a semi-autonomous operational level. We agree with this statement, but it is not the main issue we have with these definitions. We find it odd to start a list of autonomy levels with a category of non-autonomous systems. More importantly, in the fourth level of autonomy the author states that ‘these systems might display capacities that imitate or replicate the moral capacities of sentient human beings’. He seems to refer to the definition of strong or general AI, in which a computer has cognitive states and programs can explain human cognition (Searle, 1980). To state that an autonomous system possesses moral capacities shows, in our opinion, a lack of technical knowledge of current AI systems, as these are no more than computers that display Interactivity, Autonomy and Adaptability features (Floridi & Sanders, 2004). As it remains to be seen whether AI capable of the ‘moral capacities of sentient human beings’ (Galliott, 2015, p. 7) will ever be developed, we believe that the classification Galliott (2015) provides is not realistic given the current state of technology.

The classification of Royakkers and Orbons (2015) is based on a combination of the system’s operational domain (e.g. ground, underwater, air) and, to a lesser degree, the level of supervision (e.g. teleoperated or autonomous), and it displays good insight into current and (near-)future military technology. The classification of Galliott (2015) describes the degree of human supervision of the weapon system and thereby takes a human-centric approach. A human-centric approach provides a good starting point for studying the broader concept of Human Oversight. To this end, we will explore human values and value theories to get a grasp of what people find important in life.
