[[/public:t-709-aies:AIES-25:main|DCS-T-709-AIES-2025 Main]] \\
[[/public:t-709-aies:AIES-25:lecture_notes|Link to Lecture Notes]] \\
\\
====== Moral Theories II: Autonomy and Agency ======
\\
===== Concepts =====

==== Agency ====
  * The ability to do something that counts as an action (Himma 2009).
  * Essentially, doing something: actions as doings.
  * Is simply existing enough? Not quite: agency requires a certain mental state.

==== Patiency ====
  * Something upon which the will of an agent is imposed, e.g., inanimate objects:
    * A rock
    * A steering wheel
    * A trolley

==== Moral Agency ====
  * Beings whose behaviour is subject to moral requirements:
    * Moral obligations
    * Accountability for one's actions
  * Agency is a prerequisite for moral agency.
  * BUT: Not all agents are moral agents.

==== Moral Patiency ====
  * Agents who do not meet the requirements of moral agency.
  * Someone who is owed at least one duty or obligation (Himma 2009), e.g.:
    * Newborn infants
    * Animals

==== The Instrumentalist Theory of Technology ====
  * Technology is a tool for humans: an extension of man.
    * E.g., the computer, the hammer, or the washing machine.
  * Any moral violations are clearly the responsibility of the developers or users.
  * This way, the computer cannot be used as a "scapegoat".

=== Problems ===
  * Anthropocentric / exclusionary:
    * Historically, what counts as an agent or moral agent has been prone to change.
  * Does it hold that all technology is an extension of man?
    * Is it simply a tool that we use?
  * Does all technology have patiency (as assumed under the instrumentalist theory)?
    * What about simple machines?
    * What about AI?
  * Can we extend agency to technology?

==== Autonomy as a Requirement for Moral Agency ====

== Cognitive / Operational Autonomy ==
Revisiting from [[public:t-709-aies-2025:aies-2025:classification_control_autonomy|Session 3]]:

| What it is | The ability of an agent to act and think independently: to do tasks without interference or help from others or from outside itself. \\ Implies that the machine "does it alone". \\ Refers to the mental (control-) independence of agents: the more independent they are (of their designers, of outside aid, etc.), the more autonomous they are. Systems without it could hardly be considered to have general intelligence. |
| Structural Autonomy | Refers to the process through which cognitive autonomy is achieved: motivations, goals, and behaviors are dynamically and continuously (re)constructed by the machine as a result of changes in its internal structure. |
| Constitutive Autonomy | The ability of an agent to maintain its own structure (substrate, control, knowledge) in light of perturbations. |
| "Complete" Autonomy? | Life and intelligence rely on other systems to some extent. The concept is usually applied in a relative way: to a particular limited set of dimensions along which systems are compared, or to the same system at two different times or in two different states. |
| Reliability | Reliability is a desired feature of any useful autonomous system. \\ An autonomous machine with low reliability has severely compromised utility. Unreliability that can be predicted is better than unreliability that is unpredictable. |
| Predictability | Predictability is another desired feature of any useful autonomous system. \\ An autonomous machine that is not predictable has severely compromised utility. |
| Explainability | Explainability is a third desired feature of any useful autonomous system. \\ An autonomous machine whose actions cannot be explained cannot be reliably predicted, and without reliable prediction a machine cannot be trusted. |

  * “In Kant's metaphysics, autonomy refers to the fundamental condition of free will – the capacity of the will to follow moral laws which it gives to itself” (Winner 1977).
  * Kant contrasts this view with the concept of heteronomy: the rule of the will by external laws, or the deterministic laws of nature.
  * How does "Autonomous Technology" fit into this?
    * When we say technology is autonomous, is it then nonheteronomous, i.e., not governed by external law?

==== AI as Autonomous Moral Agents? ====
AI pushes the boundaries of the definition of autonomous moral agents; yet assigning any form of responsibility or obligation to the entity itself remains a perplexing, arguably inconceivable, endeavor.
  * Philosophers and legal scholars still struggle with this question.
  * The practical application of AI ethics seems to take a different stance:
    * Rather than assigning moral responsibility to the AI system, responsibility resides with the developers and users of the technology.
    * However, ethical challenges are recognized and addressed during development and implementation through ethical frameworks.