Proefschrift

Bentzen, & Nebel, 2017). HERA represents the robot’s possible actions together with the causal chains of consequences those actions initiate, and uses logical formulae to model ethical principles. The software library implements several ethical principles, or interpretations of ethical principles, such as the principle of Double Effect, utilitarianism and a Pareto-inspired principle. The applied format is called a causal agency model: it reduces determining moral permissibility to checking whether principle-specific logical formulae are satisfied in a causal agency model.

Recent work by Bonnemains, Saurel, and Tessier (2018) presents a formal approach that links ethics and automated reasoning in autonomous systems. Their formal tool models ethical principles in order to compute a judgement of the possible decisions in a given situation and to explain why a decision is ethically acceptable or not. The formal model can be applied to utilitarian ethics, deontological ethics and the Doctrine of Double Effect, so that the results generated by these three ethical frameworks can be compared. They found that the main challenge lies in formalizing philosophical definitions stated in natural language and translating them into generic, computer-programmable concepts that can be easily understood and that allow ethical decisions to be explained.

2.3 AUTONOMY

The notion of autonomy is not a well-defined concept and is often misunderstood. In the context of AI, autonomy is nowadays often treated as a synonym for Machine Learning (an example can be found in Melancon (2020)), but autonomy encompasses much more than that. Castelfranchi and Falcone (2003) define autonomy as a notion that involves relationships between three entities: a) the main subject x, b) the goal μ that must be obtained by the main subject x, and c) a second subject y with respect to whom the main subject x is autonomous. This is expressed in the statement: “x is autonomous about μ with respect to y”.
For example, if x is an autonomous drone, its autonomy implies that the drone x can autonomously decide on the travel route (the goal μ) given a destination (i.e. GPS coordinates) set by its operator y. Three types of autonomy relationship can be identified based on this description: (1) executive autonomy: x is autonomous in its means rather than its goals, as in the example of the autonomous drone; (2) goal autonomy: x can set its goals on its own; and (3) social autonomy: x can execute its goals by itself, without other agents (Castelfranchi & Falcone, 2003). Wooldridge and Jennings (1995, p. 116) also refer to autonomy in their list of four properties for defining an agent: ‘1) autonomy: agents operate without the direct intervention of humans or others, and have some kind of control over their actions and internal state (Castelfranchi, 1995), 2) social ability: agents interact with other agents (and possibly humans) via some kind of agent-communication language (Genesereth
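The three-place autonomy relation and its three types can be sketched in a few lines of Python. This is only an illustrative encoding of Castelfranchi and Falcone’s definition, not an implementation they provide; the class and field names are hypothetical, and the drone example above is used as the instance.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical encoding of the relation "x is autonomous about mu
# with respect to y" (Castelfranchi & Falcone, 2003).
class AutonomyType(Enum):
    EXECUTIVE = auto()  # x is autonomous in its means, not its goals
    GOAL = auto()       # x can set its goals on its own
    SOCIAL = auto()     # x can execute its goals without other agents

@dataclass(frozen=True)
class AutonomyRelation:
    subject: str          # x, the main (autonomous) subject
    goal: str             # mu, the goal x must obtain
    with_respect_to: str  # y, the subject relative to whom x is autonomous
    kind: AutonomyType

    def describe(self) -> str:
        return (f"{self.subject} is autonomous about '{self.goal}' "
                f"with respect to {self.with_respect_to} "
                f"({self.kind.name.lower()} autonomy)")

# The drone example: the operator fixes the destination (the goal),
# so the drone is autonomous only in its means -- executive autonomy.
drone = AutonomyRelation("drone", "travel route to destination",
                         "operator", AutonomyType.EXECUTIVE)
print(drone.describe())
```

Making the relation explicit in this way highlights that autonomy is always relative to a goal and to another subject, rather than a monolithic property of the agent itself.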
