APPENDIX C. OVERVIEW OF KEY INSIGHTS FROM LITERATURE REVIEW

In this appendix the key insights from the literature review in chapter 2 are summarized in tables to provide an overview.

Table 1: Overview of AI and engineering literature on decision-making processes in AI (section 2.1)

Author(s): Adams (2001)
Key concepts: Humans will pretend to be in complete control while gradually moving towards systems in which human control becomes more abstract, with less participation in decision-making. The armed forces would like to have a ‘person in the loop’, but if this person has a meaningful role in operating the system, then he or she becomes the most critical component of the system. Not only is the person difficult to replace, he or she is also the component most vulnerable to attack, which could be an incentive to not include a person in the system at all. The trend in the development of systems is that the operator is taken ‘out of the loop’, changing the operator’s role from active controller to supervisor who serves merely as a fail-safe in case of a system malfunction. In the future, humans will make the strategic decisions regarding the overall objectives of a conflict and retain high-level control, but they will be informed by automated systems and direct human participation will be rare. It may even come to the very extreme point where humans only make the policy decision to enter hostilities, but more likely human participation will take the form of giving strategic directions to systems.

Author(s): Araujo et al. (2020)
Key concepts: For high-impact decisions, decision-making by AI was often evaluated as on par with, or even better than, decision-making by human experts in terms of potential fairness, usefulness and risk. Domain-specific knowledge, equality and self-efficacy were associated with more positive general attitudes about usefulness and fairness. Privacy concerns were negatively associated with attitudes regarding the risk of decisions made by AI.

Author(s): Cordeschi (2013)
Key concepts: The autonomy of robots, meaning the ‘full automation of their decision processes’, creates a paradox between reliability and autonomy in their decision-making. An increase in the level of autonomy implies that designers or operators have less control over the machine; it might be impossible for humans to avert unintended consequences of the actions taken by a machine. Adaptive automation could be used for more efficient automation, in which the level and type of automation can be varied depending on the operator’s needs or the context. Optimal choices in decision-making do not exist, for humans or for AI; only “satisficing” choices can be made. AI cannot be expected to be fully reliable in wartime decision-making, or in decision-making in general. However, the same is true for human beings, and humans and machines can each be more reliable than the other in certain decision-making situations.

Author(s): Côté et al. (2011)
Key concepts: Humans can provide recommendations to an autonomous agent when full autonomy is not feasible or desired. This adjustable autonomy allows human recommendations to be incorporated into an autonomous agent’s policy. In this adjustable autonomy concept, the human can interact with an agent to achieve a mission and the agent can share control with an external entity.

Author(s): Kramer et al. (2017)
Key concepts: As AI becomes more integrated in our society, the question is not only whether we can build moral decision-making into AI, but also whether ‘moral AI’ systems should be permitted to make decisions at all. People were asked whether they favour decisions with important consequences being made by computers or by humans. It turned out that the more acquainted people were with computers, the more likely they were to prefer decisions made by computers over decisions made by humans. This preference was not based on characteristics such as age or people’s values, but mainly on previous experience with computer agents. It is expected that as people gain more experience with computer decision-making and it becomes more visible, it will become more accepted by the general public.