SUMMARY

Autonomous Weapon Systems are weapon systems equipped with Artificial Intelligence (AI), and they are increasingly deployed on the battlefield. Autonomous systems can offer many benefits in the military domain, yet the nature of Autonomous Weapon Systems may also lead to security risks and unpredictable behavior. In addition, the loss of human dignity linked to life-or-death decision-making is mentioned as a concern with the use of Autonomous Weapon Systems. At the same time, many scholars express the concern that Autonomous Weapon Systems will lead to an "accountability gap": circumstances in which no human can be held responsible and accountable for the decisions, actions and effects of Autonomous Weapon Systems.

These concerns show that responsibility, accountability and human control are values frequently raised in the societal and academic debate on Autonomous Weapon Systems. To the best of our knowledge, however, empirical studies on how laypeople and experts perceive responsibility and accountability for the deployment of Autonomous Weapon Systems are missing.

The notion of "Meaningful Human Control" is often put forward in the debate on Autonomous Weapon Systems as a condition for ensuring accountability and responsibility over this type of weapon system. In our opinion, Meaningful Human Control alone will not suffice as a requirement to minimize the unintended consequences of Autonomous Weapon Systems, for several reasons. Firstly, the concept of Meaningful Human Control is potentially controversial and confusing, as human control is defined and understood differently across bodies of literature. Secondly, standard concepts of control in engineering and the military domain entail a capacity to directly cause or prevent an outcome, a capacity that cannot be achieved with an Autonomous Weapon System because, once an autonomous weapon has been launched, humans can no longer intervene. Finally, the specific literature on Meaningful Human Control over Autonomous Weapon Systems does not offer a consistent, usable concept.

We believe that a different approach is needed to minimize the unintended consequences of Autonomous Weapon Systems. We therefore propose to focus on human oversight rather than Meaningful Human Control. This leads to the following research objective: to improve the allocation of accountability and responsibility by designing a framework and implementation concept such that criteria for Human Oversight are identified, represented and validated, in order to minimize unintended consequences in the deployment of Autonomous Weapon Systems.

To achieve this research objective, we applied the Value-Sensitive Design (VSD) method.