
hierarchy has been applied to various cases, for example AI for Social Good (AI4SG) (Umbrello & Van de Poel, 2021) and smart home systems (Umbrello, 2020). This translation might prove quite difficult, as insight is needed into the intended use and context of the value, which is not always clear at the start of a design project. Also, as artefacts are often used in an unintended way or context, new values are realized or a lack of values is discovered (van Wynsberghe & Robbins, 2014). An example of this is drones, which were initially designed for military purposes but are now also used by civilians for filming events and even as background lights during the 2017 Super Bowl halftime show. The value of safety is interpreted differently for military users that deploy drones in desolate regions than for 300 drones flying in formation over a football stadium in a populated area. The different context and usage of a drone will lead to a different interpretation of the value of safety and could lead to stricter distance norms for flight safety, which in turn could be further specified in alternate design requirements for rotors and software for proximity alerts, to name two examples.

The application of a value hierarchy to Autonomous Weapon Systems can be illustrated by Figure 4, in which the value of accountability is translated into the norms 'transparency of decision-making' and 'insight into the algorithm' (Verdiesen, 2017). This translation allows users to gain an understanding of the decision choices the Autonomous Weapon System makes, in order to trace and justify its actions. The norms for transparency of decision-making lead to specific design requirements.
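Purely as an illustration, such a value hierarchy can be sketched as a small tree of values, norms and design requirements. The class and field names below are our own assumptions for this sketch, not part of the value hierarchy framework itself; the example content follows the accountability case discussed here.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a value hierarchy: a value is specified into
# norms, and each norm into concrete design requirements. Names are
# assumptions for this example, not a prescribed framework API.

@dataclass
class Norm:
    name: str
    design_requirements: list = field(default_factory=list)

@dataclass
class Value:
    name: str
    norms: list = field(default_factory=list)

accountability = Value(
    name="accountability",
    norms=[
        Norm(
            name="transparency of decision-making",
            design_requirements=[
                "visualise the decision tree",
                "present decision variables (e.g. collateral damage trade-offs)",
                "present sensor information (e.g. imagery of the site)",
            ],
        ),
        Norm(
            name="insight into the algorithm",
            design_requirements=[
                "user interface showing the algorithm in human-readable form",
                "export learned changes for review by an independent party",
            ],
        ),
    ],
)

def flatten(value):
    """List (value, norm, requirement) triples, so each design
    requirement can be traced back to the value it specifies."""
    return [
        (value.name, norm.name, req)
        for norm in value.norms
        for req in norm.design_requirements
    ]

for triple in flatten(accountability):
    print(triple)
```

The `flatten` helper mirrors the traceability the hierarchy is meant to provide: every low-level design requirement can be justified by pointing upward to a norm and, ultimately, to the value it operationalizes.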
In this case, these include a feature to visualise the decision tree, but also to present the decision variables the Autonomous Weapon System used, such as trade-offs in collateral damage percentages of different attack scenarios, to provide insight into the proportionality of an attack. The Autonomous Weapon System should also be able to present its sensor information, for example imagery of the site, in order to show that it discriminated between combatants and non-combatants. To provide insight into the algorithm, an Autonomous Weapon System should be designed with features that it normally would not contain. In this case these features would include a screen as user interface that shows the algorithm in a human-readable form, and the functionality to download the changes made by the algorithm as part of its machine learning abilities, so that they can be studied by an independent party, such as a war tribunal of the United Nations if the legality of the actions of an Autonomous Weapon System is questioned.

Kroes and van de Poel (2015) state that an objective measurement of values is not possible, because the operationalization is done by means of second-order value judgments, which seriously undermines the construct validity of the value measurement. Judgments are often considered subjective, as their truth, or falsity, depends on the feelings or attitudes of the person who judges (Searle, 1995). To counter this lack of validity, the
