
From the results it seems that some of the participants unconsciously changed the acceptability ordering of the alternatives.

Before the value deliberation, the participants were asked, in part 2 of the survey, for each of the alternatives: which values are relevant for this option? This is step 4 (make values explicit) in the value deliberation process (Figure 14). They could check values from a predefined list: fairness, suffering, accountability, responsibility, safety, harm, human dignity, meaningful human control, predictability, privacy, trust, reliability, proportionality, blame, robustness and explainability. These values were selected based on (Verdiesen, Santoni de Sio, & Dignum, 2019) and on the pilot study, in which the participants indicated which values they missed in the predefined list. The values highlighted as relevant for the alternatives were: safety, meaningful human control, proportionality, accountability, responsibility, predictability, reliability and explainability.

As part of the evaluation (step 8 in Figure 14), participants were asked which values they missed in the predefined value list. Distinction, necessity, precaution, human autonomy, accuracy, human competences, relational and sociability between human and robot, mental and emotional health of the troops, usability and security were mentioned.

Figure 15: Overview results scenario ranking
