2.2 ARCHITECTURES FOR ETHICAL DECISION-MAKING IN AI

When the software of autonomous systems is deployed in the unpredictable real world, the behaviour of these systems becomes non-deterministic and a range of possible outcomes can occur (Dennis, Fisher, Slavkovik, & Webster, 2016). To govern these unpredictable outcomes in real-world scenarios, a mechanism is needed that influences the agent's (ethical) decision-making. The engineering literature contains two types of architectures for ethical decision-making in AI (see Table 2 in Appendix C for an overview).

The first type is based on an 'ethical layer' that governs the behaviour of the agent from outside the system. Arkin, Ulam, and Wagner (2012) designed and implemented an 'ethical governor' that consists of two processes: (1) ethical reasoning, which transforms incoming perceptual, motor and situational-awareness data into evidence, and (2) constraint application, which uses that evidence to apply constraints derived from the Laws of War and Rules of Engagement, suppressing unethical behaviour in the application of lethal force. Dennis et al. (2016) propose a hybrid architecture in which reasoning is performed by a rational BDI (Beliefs, Desires and Intentions) agent. Within this framework, the agent selects from a given ethical policy the plan that is most ethical given its beliefs. Earlier work by Li et al. (2002) presents a hierarchical control scheme that enables multiple Unmanned Combat Air Vehicles (UCAVs) to autonomously carry out demanding missions in hostile environments. The scheme consists of four layers: (1) a high-level path planner, (2) a low-level path planner, (3) a trajectory generator and (4) a formation control algorithm. More recently, Vanderelst and Winfield (2018) designed an additional or substitute framework for implementing robot ethics, as an alternative to the logic-based AI that currently dominates the field. They implement ethical behaviour in robots through the simulation theory of cognition, in which internal simulations of actions and predictions of their consequences are used to make ethical decisions. The method is a form of robot imagery and does not rely on the verification of logical statements that is often used to check whether actions accord with ethical principles.

The second type of architecture for ethical decision-making in AI is logic-based. This type derives logical rules from natural language and applies those rules to the system to govern its ethical behaviour. Anderson, Anderson, and Berenz (2016) describe a case-supported principle-based behaviour (CPB) paradigm to govern an elderly-care robot's behaviour. To choose its next action, the system uses principles abstracted from cases on which ethicists have reached consensus. It ranks the candidate actions by weighing them according to ethical preferences, which are based on values, and selects the highest-ranked action. Another formal approach is HERA (Hybrid Ethical Reasoning Agents), a software library for modelling autonomous moral decision-making (Lindner, Bentzen, & Nebel, 2017).

Hedged code sketches of four of these mechanisms (the governor's constraint application, plan selection over an ethical policy, the simulation-based action loop, and principle-based action ranking) are given below.
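The first sketch illustrates the two-stage ethical-governor pattern: evidential reasoning distils situational data into evidence, and constraint application suppresses a proposed lethal action if any constraint is violated. This is a minimal Python sketch; the class names, evidence fields and constraints are illustrative assumptions, not Arkin et al.'s actual implementation.

```python
# Minimal sketch of a two-stage ethical governor (illustrative, not Arkin et al.'s code).
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Evidence:
    """Situational facts distilled from perceptual/motor data (stage 1 output)."""
    target_is_combatant: bool
    civilians_in_blast_radius: bool
    engagement_authorized: bool

@dataclass
class Constraint:
    """A prohibition derived from the Laws of War / Rules of Engagement."""
    name: str
    violated_by: Callable[[Evidence], bool]

# Hypothetical constraints for illustration only.
CONSTRAINTS: List[Constraint] = [
    Constraint("discrimination", lambda e: not e.target_is_combatant),
    Constraint("proportionality", lambda e: e.civilians_in_blast_radius),
    Constraint("authorization", lambda e: not e.engagement_authorized),
]

def governor_permits(evidence: Evidence) -> bool:
    """Stage 2: permit the proposed action only if no constraint is violated."""
    return not any(c.violated_by(evidence) for c in CONSTRAINTS)

# Example: an unauthorized engagement is suppressed.
print(governor_permits(Evidence(True, False, False)))  # -> False
```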
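The second sketch shows plan selection against an ordered ethical policy, loosely in the spirit of Dennis et al. (2016): every candidate plan is annotated with the principles it would violate, and the agent picks the plan whose worst violation is least severe. The policy ordering and plan annotations are illustrative assumptions, not the authors' formalism.

```python
# Hedged sketch of "most ethical plan available" selection (illustrative policy).
from typing import Dict, List

# Ethical policy: higher rank = more severe violation.
POLICY_RANK: Dict[str, int] = {"none": 0, "damage_property": 1, "endanger_human": 2}

def most_ethical_plan(plans: List[dict]) -> dict:
    """Return the plan whose worst violated principle is least severe."""
    def severity(plan: dict) -> int:
        return max(POLICY_RANK[v] for v in plan["violations"])
    return min(plans, key=severity)

plans = [
    {"name": "proceed", "violations": ["endanger_human"]},
    {"name": "reroute", "violations": ["damage_property"]},
    {"name": "wait",    "violations": ["none"]},
]
print(most_ethical_plan(plans)["name"])  # -> "wait"
```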
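The third sketch illustrates the internal-simulation ('robot imagery') loop: each candidate action is run through a forward model, the predicted consequence is scored ethically, and the best-scoring action is chosen. The toy world (a human walking toward a hazard along a one-dimensional corridor) and the scoring rule are illustrative assumptions, not Vanderelst and Winfield's testbed.

```python
# Hedged sketch of simulation-based ethical action selection (toy 1-D world).
HAZARD = 5  # position of a hazard on a one-dimensional corridor

def simulate(robot: int, human: int, action: str) -> tuple[int, int]:
    """Forward model: predict positions after one time step."""
    if action == "intercept":
        robot = human + 1  # robot moves between the human and the hazard
    human = human + 1 if robot != human + 1 else human  # a blocked human stops
    return robot, human

def ethical_score(robot: int, human: int) -> float:
    """Evaluate the predicted consequence: penalize the human reaching the hazard."""
    return -100.0 if human >= HAZARD else -abs(HAZARD - human)

def choose_action(robot: int, human: int, actions: list[str]) -> str:
    """Pick the action whose simulated outcome scores best ethically."""
    return max(actions, key=lambda a: ethical_score(*simulate(robot, human, a)))

print(choose_action(robot=0, human=4, actions=["idle", "intercept"]))  # -> "intercept"
```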
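The final sketch illustrates principle-based action ranking in the spirit of the CPB paradigm: each action is scored by how far it satisfies or violates ethically relevant duties, weighted by ethical preferences, and the highest-ranked action is selected. The duty names, weights and candidate actions are illustrative assumptions, not the principle Anderson et al. abstract from cases.

```python
# Hedged sketch of weighted, principle-based action ranking (illustrative duties).

# Ethical preference weights per duty (assumed values, not elicited from ethicists).
WEIGHTS = {"honor_autonomy": 2.0, "prevent_harm": 5.0, "respect_privacy": 1.0}

def rank(actions: dict[str, dict[str, int]]) -> str:
    """Each action maps duties to satisfaction levels in {-1, 0, +1};
    return the action with the highest weighted sum."""
    score = lambda duties: sum(WEIGHTS[d] * v for d, v in duties.items())
    return max(actions, key=lambda a: score(actions[a]))

candidates = {
    "remind_patient":  {"honor_autonomy": -1, "prevent_harm": +1, "respect_privacy": 0},
    "notify_overseer": {"honor_autonomy": -1, "prevent_harm": +1, "respect_privacy": -1},
    "do_nothing":      {"honor_autonomy": +1, "prevent_harm": -1, "respect_privacy": 0},
}
print(rank(candidates))  # -> "remind_patient"
```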