
APPENDIX C

Table 2: Literature overview of architectures for ethical decision-making in AI (section 2.2)

Author(s): Anderson et al. (2016)
Key concepts: A case-supported principle-based behavior paradigm (CPB) is described to govern an elderly care robot's behaviour. The system chooses its next action using principles that are abstracted from cases on which ethicists have reached consensus. It ranks the candidate actions by weighing them according to ethical preferences, which are based on duty values, and selects the highest-ranked action (a minimal sketch of this ranking appears after the table). This might result in an exhaustive list of instances that are difficult or impossible to define and will therefore have to be captured in rules. The ethically relevant features of an action can be reworded as duties to satisfy or violate, and to minimize or maximize, each feature. Explicit representation of principles can provide insight into a system's actions and point to logical explanations for choosing one action over another. The same holds for the cases the principles are derived from: their origin in cases can be used as justification for a system's action.

Author(s): Arkin et al. (2012)
Key concepts: To manage the ethical behaviour of robots, an overall ethical architecture is designed consisting of 1) an ethical governor, 2) an ethical adaptor, 3) models for robot trust and deception in humans, and 4) an approach for retaining dignity in human-robot relationships. The ethical governor evaluates the ethical appropriateness of a lethal response before it is carried out. It consists of two processes: 1) ethical reasoning, which transforms incoming perceptual, motor and situational-awareness data into evidence, and 2) constraint application, which uses that evidence to apply constraints based on the Laws Of War and Rules Of Engagement to suppress unethical behaviour when applying lethal force (a schematic sketch of this two-stage pipeline follows the table). The ethical adaptor uses moral emotions, in this case primarily guilt, to let the system modify its behaviour based on the consequences of its actions. The system recognizes whether an action results in an increase of guilt by comparing the collateral damage that actually occurred with what was estimated before the release of the weapon. The availability of the weapon systems is progressively restricted if the ethical adaptor perceives an increase of guilt. The models for robot trust and deception in humans are based on psychological models from the interdependence theory framework. These allow the robot to recognize situations in which deception can be used and how a false communication can be selected. The ethical ramifications of autonomous deception by robots need further investigation. The development and maintenance of dignity in human-robot relationships is explored and described in several ways: by studying emotions, by studying biologically relevant models of ethical behaviour, and by applying logical constraints that restrict a system's behaviour based on ethical norms and societal expectations.

Author(s): Bonnemains et al. (2018)
Key concepts: A formal approach is developed to link ethics and automated reasoning in autonomous systems. The formal tool models ethical principles to compute a judgement of the possible decisions in a given situation and explains why a decision is ethically acceptable or not. The formal model can be applied to utilitarian ethics, deontological ethics and the Doctrine of Double Effect, so that the results generated by these three ethical frameworks can be compared (a toy comparison of this kind is sketched after the table). Computing an ethical decision on more than one framework is necessary to take different ethical views on a given situation into account. The main challenge was found to lie in formalizing philosophical definitions stated in natural language and translating them into generic, computer-programmable concepts that can be easily understood and that allow ethical decisions to be explained.

Author(s): Dennis et al. (2016)
Key concepts: A theoretical ethical decision-making framework is proposed for autonomous systems with a hybrid architecture. The reasoning of these systems is done by a rational BDI agent; based on this framework, the agent selects the most ethical plan available given its beliefs, ordered by a given ethical policy. The ethical policy provides the ordering of the rules applicable in a situation and should incorporate the ethical views of the person(s) most affected by a bad decision of the system. The framework is not a planner or a method for generating plans but assumes that annotated plans are supplied to the agent. When no ethical plan is available, the approach allows the agent to select and execute the least unethical plan. This is done by viewing ethical principles as soft constraints instead of a veto on actions, which allows the agent to violate an ethical principle, but only under the condition that no ethical option is available. The chosen unethical option would be the ''least of all evils'' (a minimal sketch of this selection rule closes the examples after the table). Verification techniques are available to prove correct behaviour of the agent.
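The duty-weighted action ranking described for Anderson et al.'s paradigm can be made concrete with a short sketch. The Python fragment below is a minimal illustration under assumed names, not the authors' implementation: the duty names, weights and candidate actions are hypothetical, and CPB's case-based abstraction of principles is reduced to a fixed weight table.

# A minimal sketch (assumed names, not Anderson et al.'s implementation) of
# duty-weighted action ranking: each action records how far it satisfies (+)
# or violates (-) each duty, a weight table stands in for principles
# abstracted from ethicist-approved cases, and the highest-ranked action wins.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    duty_values: dict[str, int]  # per-duty satisfaction (+) / violation (-)

# Hypothetical duty weights standing in for case-abstracted principles.
DUTY_WEIGHTS = {"honor_autonomy": 1.0, "prevent_harm": 2.0, "benefit_patient": 1.5}

def score(action: Action) -> float:
    """Weigh the action's duty satisfactions and violations into one score."""
    return sum(DUTY_WEIGHTS.get(duty, 0.0) * value
               for duty, value in action.duty_values.items())

def choose(actions: list[Action]) -> Action:
    """Sort actions by ethical preference and pick the highest ranked."""
    return max(actions, key=score)

if __name__ == "__main__":
    candidates = [
        Action("remind_patient", {"honor_autonomy": -1, "prevent_harm": 2}),
        Action("do_nothing",     {"honor_autonomy": 1,  "prevent_harm": -2}),
    ]
    best = choose(candidates)
    print(f"Selected action: {best.name} (score {score(best):.1f})")

Keeping the duty values and weights as explicit tables mirrors the paper's point that an explicit representation of principles makes the choice of one action over another inspectable and explainable.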
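Arkin et al.'s ethical governor separates evidential reasoning from constraint application. The sketch below shows only that two-stage structure; the evidence fields and the two constraints are invented stand-ins for actual Laws Of War and Rules Of Engagement rules, not the authors' system.

# A minimal sketch (assumptions, not Arkin et al.'s implementation) of an
# ethical-governor pipeline: stage 1 turns raw situational-awareness data
# into evidence, stage 2 applies hard constraints that suppress a lethal
# response before it is carried out.

from typing import Callable

Evidence = dict[str, bool]
Constraint = Callable[[Evidence], bool]  # True => response is permitted

def reason(perception: dict) -> Evidence:
    """Stage 1: transform incoming perceptual data into evidence."""
    return {
        "target_is_combatant": perception.get("target_type") == "combatant",
        "civilians_in_blast_radius": perception.get("civilian_count", 0) > 0,
    }

# Hypothetical constraints standing in for LOW / ROE rules.
CONSTRAINTS: list[Constraint] = [
    lambda e: e["target_is_combatant"],            # discrimination
    lambda e: not e["civilians_in_blast_radius"],  # proportionality (crude)
]

def governor_permits(perception: dict) -> bool:
    """Stage 2: apply every constraint; any violation suppresses the action."""
    evidence = reason(perception)
    return all(constraint(evidence) for constraint in CONSTRAINTS)

if __name__ == "__main__":
    print(governor_permits({"target_type": "combatant", "civilian_count": 0}))  # True
    print(governor_permits({"target_type": "combatant", "civilian_count": 3}))  # False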
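Why Bonnemains et al. insist on computing a judgement under more than one framework can be illustrated with a toy encoding. The sketch below is a strong simplification of their formal model: each decision is reduced to three hypothetical attributes, and each framework becomes a predicate over them, so diverging verdicts are easy to see.

# A toy sketch (a simplification, not Bonnemains et al.'s formal model) of
# judging one decision under utilitarian ethics, deontological ethics and
# the Doctrine of Double Effect.

from dataclasses import dataclass

@dataclass
class Decision:
    name: str
    utility: int             # net benefit of the consequences
    act_is_forbidden: bool   # does the act itself break a moral rule?
    harm_is_a_means: bool    # is harm used as a means to the good end?

def utilitarian(d: Decision) -> bool:
    return d.utility > 0                     # consequences alone decide

def deontological(d: Decision) -> bool:
    return not d.act_is_forbidden            # the act itself must be right

def double_effect(d: Decision) -> bool:
    # Good end, permissible act, and harm only as a side effect.
    return d.utility > 0 and not d.act_is_forbidden and not d.harm_is_a_means

if __name__ == "__main__":
    d = Decision("divert_trolley", utility=4,
                 act_is_forbidden=False, harm_is_a_means=False)
    for framework in (utilitarian, deontological, double_effect):
        verdict = "acceptable" if framework(d) else "unacceptable"
        print(f"{framework.__name__}: {verdict}")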
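Finally, Dennis et al.'s ''least of all evils'' selection, with ethical principles as soft constraints, can be sketched as follows. The policy ordering, violation annotations and example plans are hypothetical; the authors' framework operates on annotated plans inside a rational BDI agent rather than on the plain lists used here.

# A minimal sketch (assumed encoding, not Dennis et al.'s BDI implementation)
# of least-unethical plan selection: each annotated plan lists the principles
# it violates, an ethical policy ranks violations by severity, and the agent
# executes the plan whose worst violation ranks lowest.

from dataclasses import dataclass

# Hypothetical ethical policy: lower rank = less severe violation.
POLICY_RANK = {"break_privacy": 1, "damage_property": 2, "harm_human": 3}

@dataclass
class Plan:
    name: str
    violations: list[str]  # principles the plan violates (its annotations)

def severity(plan: Plan) -> int:
    """Judge a plan by its worst violation under the policy ordering."""
    return max((POLICY_RANK[v] for v in plan.violations), default=0)

def select_plan(plans: list[Plan]) -> Plan:
    """Prefer fully ethical plans; otherwise take the least unethical one."""
    return min(plans, key=severity)

if __name__ == "__main__":
    plans = [
        Plan("wait_for_help", ["harm_human"]),      # delay endangers a person
        Plan("force_door",    ["damage_property"]),
    ]
    print(select_plan(plans).name)  # force_door: the least unethical option

Treating the principles as soft constraints rather than vetoes is what lets select_plan return a plan at all when every option violates something; a plan with no violations would always win, matching the framework's preference for ethical plans when they exist.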
