2.1 DECISION-MAKING PROCESSES IN AI

Decision-making processes in Artificial Intelligence (AI) have been studied for over two decades and are quite well delineated in the AI and engineering literature (see Table 1 in appendix C for an overview). Decision-making is defined as a process in which ‘an entity is in a situation, receives information about that situation, and selects and then implements a course of action’ (Miller, Wolf, & Grodzinsky, 2017, p. 390). Adams (2001) noticed as early as 2001 that the role of the human had changed from that of an active controller to that of a supervisor, and that direct human participation in the decisions of AI systems would become rare. The concept of adjustable autonomy, i.e., switching between autonomy levels, is often mentioned in the literature as a way to deal with changes in context, the needs of the operator, and the control humans can exert over the machine (Cordeschi, 2013; Côté, Bouzid, & Mouaddib, 2011; van der Vecht, 2009). As noted by Cordeschi (2013), optimal choices in decision-making do not exist for humans or AI; only satisficing choices can be made. Whether humans or AI can make the most reliable decision depends on the situation.

For an AI system to be able to make ethical decisions, its decision-making does not need to be similar to that of a human, but the system will need a mechanism, such as a heuristic algorithm, to analyse its past decisions and prepare for future ones (Miller et al., 2017). However, moving from a technical debate to an ethical point of view, according to Kramer, Borg, Conitzer, and Sinnott-Armstrong (2017), the question is not only whether we can build moral decision-making into AI, but also whether ‘moral AI’ systems should be permitted to make decisions at all. While this is certainly an important question, it is interesting to note that people’s moral intuitions about this issue appear to be highly dependent on their acquaintance with computers.
It seems that the more familiar people are with computers, the more they prefer decisions made by computers over decisions made by humans (Araujo, Helberger, Kruikemeier, & De Vreese, 2020; Kramer, Borg, Conitzer, & Sinnott-Armstrong, 2017). Araujo et al. (2020) found that for high-impact decisions, automated decision-making by AI was often evaluated as on par with, or even better than, decision-making by human experts in terms of potential fairness, usefulness, and risk. Based on their research, Kramer et al. (2017) expect that as people gain more experience with computer decision-making and it becomes more visible, it will become more accepted by the general public.
