2 EXTENSIVE LITERATURE REVIEW

& Ketchpel, 1994), 3) reactivity: agents perceive their environment (which may be the physical world, a user via a graphical user interface, a collection of other agents, the Internet, or perhaps all of these combined), and respond in a timely fashion to changes that occur in it; and 4) pro-activeness: agents do not simply act in response to their environment, they are able to exhibit goal-directed behaviour by taking the initiative.’

In their article on defining Autonomous Weapon Systems, Taddeo and Blanchard (2022) delineate the difference between automatic/automated and autonomous agents. They state: ‘The ability of an artificial agent to change its internal states without the direct intervention of another agent marks (binarily) the line between automatic/automated and autonomous. A rule-based artificial system and a learning one both qualify as autonomous following this criterion’ (Taddeo & Blanchard, 2022, p. 17). An automated system, on the other hand, performs a complex but predetermined task; a robot in a car-manufacturing factory is an example of an automated system. The authors also note that adaptability is increasingly regarded as a key characteristic of Autonomous Weapon Systems: it gives such systems the potential to deal with complex and fast-paced scenarios, but it also causes unpredictability, lack of control and transparency, and responsibility gaps (Taddeo & Blanchard, 2022).

Taddeo and Blanchard (2022) base their delineation on the work of Floridi and Sanders (2004), who describe three criteria for intelligent systems: ‘(a) Interactivity means that the agent and its environment (can) act upon each other. Typical examples include input or output of a value, or simultaneous engagement of an action by both agent and patient – for example gravitational force between bodies.
(b) Autonomy means that the agent is able to change state without direct response to interaction: it can perform internal transitions to change its state. So an agent must have at least two states. This property imbues an agent with a certain degree of complexity and independence from its environment. (c) Adaptability means that the agent’s interactions (can) change the transition rules by which it changes state. This property ensures that an agent might be viewed, at the given LoA [Level of Abstraction], as learning its own mode of operation in a way which depends critically on its experience. Note that if an agent’s transition rules are stored as part of its internal state, discernible at this LoA, then adaptability follows from the other two conditions.’ (Floridi & Sanders, 2004, pp. 357-358).
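Floridi and Sanders’ three criteria can be illustrated with a minimal state-machine sketch. The following Python example is purely illustrative (the class, its states, and the threshold rule are hypothetical, not drawn from the cited works): the agent has at least two states, it receives input from its environment (interactivity), it performs an internal transition without external input (autonomy), and its own transition rule changes as a function of its experience (adaptability).

```python
class AdaptiveAgent:
    """Illustrative sketch of Floridi & Sanders' three criteria.

    Hypothetical example: states, threshold, and rule are invented
    for illustration only.
    """

    def __init__(self):
        self.state = "idle"      # at least two states: "idle" / "active"
        self.threshold = 2       # parameter of the transition rule
        self.stimuli_seen = 0    # the agent's accumulated experience

    def perceive(self, stimulus: int) -> None:
        """Interactivity: the environment acts upon the agent."""
        self.stimuli_seen += 1
        # Adaptability: experience changes the transition rule itself,
        # so the same stimulus can later produce a different transition.
        if self.stimuli_seen > 3:
            self.threshold = 1
        if stimulus >= self.threshold:
            self.state = "active"

    def tick(self) -> None:
        """Autonomy: an internal state transition with no external input."""
        if self.state == "active":
            self.state = "idle"
```

In this sketch the last remark of the quotation also becomes concrete: because the transition rule (`threshold`) is stored as part of the agent’s internal state, adaptability is visible at the same level of abstraction as the other two properties.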