[[/public:t-709-aies:AIES-24:main|DCS-T-709-AIES-2024 Main]] \\ [[/public:t-709-aies:AIES-24:lecture_notes|Link to Lecture Notes]] \\ \\ ====== AUTONOMY & MEANING ====== \\ \\ ==== Concepts ==== | Data | Measurement. | | Information | Data that can be / is used or formatted for a purpose. | | Knowledge | A set of interlinked information that can be used to plan, produce action, and interpret new information. | | Thought | The goal-driven processes of a situated knowledge-based system. | | Controller | A system that produces a set of control signals in light of a set of measurements. | | Situated Controller | An embodied control mechanism that is implemented and situated in a particular environment with a set of particular goals. | | Embodiment | The interface between a controller and its task-environment. \\ The physical substrate in which a controller is implemented, and which thus constrains it. | \\ \\ ==== Three Pillars of Intelligence: Control, Classification, Learning ==== | Control | The ability to act with a purpose - to affect the world in a way that has a target to be achieved. Anything from a thermostat controlling the temperature in a room to a human trying to become rich and famous is a control system. In the former case, measuring when the goal is achieved is easier than in the latter. | | Classification | To act, one needs to know what to act on. \\ To know what to act on, one needs to classify. \\ To classify, one needs to sense. \\ To sense, one needs measurement devices. | | Learning | The accumulation of information for the purpose of control and classification. In a world that cannot be perceived all-at-once yet contains regularity, learning is necessary to conserve energy. | \\ \\ ==== Control ==== | What it is | Systematic production of an effect in a world, through prolonged commitment to actuation based on measurement, in light of a goal (end state). 
| | Measurement | Recording and storage of values of particular variables over time. \\ Intelligent control involves loops of setting goals, measuring, acting, measuring again, and modifying the goals. | | Effect | Realization of a causal relation in the (physical) world. | | Transducer | A device that changes one type of energy to another, typically amplifying and/or dampening the energy in the process. | | Sensor | A transducer used for measurement: it converts energy from the environment into a signal that a controller can read. | | Action | An effect that a controller can inflict on its external environment. | | Actuator | A physical (or virtual/simulated) transduction mechanism that implements an action that a controller has committed to (e.g. gripper/hand). | | Control Connection | Causal connection between a control signal and an actuator. | | Adaptation | Modification of a control connection in light of a goal **g** that has not been achieved. Requires the control connection structure to change. | | Digital Controller | Separates the stages of measurement, analysis, and control. Makes adaptive control in machines more feasible than a mechanical connection. | | \\ Feedback | For a variable v, information about its value is transmitted back to the controller through a feedback mechanism as v', where \\ v'(t) = v(t - Δt), Δt > 0 \\ that is, there is a //latency// Δt in the transmission, which is a function of the speed of transmission (encoding (measurement) time + transmission time + decoding (read-back) time). | | Latency | A measure of the size of the delay Δt between v and v'. | | Jitter | The change in latency over time. Second-order latency. | \\ \\ ==== Classification ==== | What it is | Systematic separation of some signals or patterns from other signals or patterns. | | Example | Artificial Neural Networks (ANNs). \\ Contemporary ANNs (e.g. Deep Neural Networks, Double-Deep Q-Learners, etc.) can only do classification. 
They learn to classify only through a long continuous training session, after which the learning is turned off. | | \\ \\ \\ Example of misguided \\ use of classification | {{/public:t-720-atai:tesla-classification-fail1.jpg}} \\ {{public:t-720-atai:tesla-classification-fail1.mov|download video}} | | Unified Control & Classification | To be an intelligent agent, classification must be unified with control and learning to produce an //agent that can control its classification as necessary in order to learn//. | \\ \\ ==== Learning ==== | What it is | A //**process**// whose purpose is //acquiring actionable information//, a.k.a. **knowledge**. | | \\ \\ \\ Key Features | Inherits key features of any process: \\ - **Purpose** (goals): To adapt, to respond in rational ways to problems / to achieve foreseen goals; this factor determines how the rest of the features in this list are measured. \\ - **Speed**: The speed of learning. \\ - **Data**: The data that the learning (at a particular measured speed) requires. \\ - **Quality**: How well something is learned. \\ - **Retention**: The robustness of what has been learned - how well it stays intact over time. \\ - **Transfer**: How general the learning is, how broadly what is learned can be employed for the purposes of adaptation or achievement of goals. \\ - **Meta-Learning**: A learner may improve its own learning abilities - i.e. be capable of meta-learning. \\ - **Progress Signal(s)**: A learner needs to know how its learning is going, and if there is improvement, how much. | | Measurements | To know any of the above, some parameters have to be //measured//; all of the above factors can be measured in //many ways//. 
| | Major Caveat | Since learning interacts with (is affected by) the //task-environment and world// that the learning takes place in, as well as by the nature of these in the learner's //subsequent deployment//, //none// of the above features can be assessed by //looking only at the learner//. | \\ \\ ==== Learning Controllers: Unification of Classification, Control & Learning ==== | What it is | An adaptive/intelligent system/controller, embodied and situated in a task-environment, that continually receives inputs/observations (measurements) from its environment and sends outputs/actions back (signals to its manipulators). \\ Some of the learner's inputs may be treated specially, e.g. as feedback or a reward signal, possibly provided by a teacher or a specially-rigged training task-environment. Since action can only be evaluated as "intelligent" in light of what it is trying to achieve, we model intelligent agents as imperfect optimizers of some (possibly unknown) real-valued objective function. \\ Note that this working definition fits //experience-based// learning. | | Adaptation | Using acquired knowledge to better achieve goals. \\ Intelligent control involves loops of setting goals -> measuring -> acting -> measuring again -> modifying the goals -> (loop). | | Learning to Classify | In a world with variation, a learning controller must also learn to classify. \\ To learn to classify, one must learn **what** to control to classify appropriately. \\ Learning to control, therefore, requires learning at least two kinds of classification. | | Integrated Cognitive Control | The ability of a controller / cognitive system to steer its own structural development - architectural growth (cognitive growth). The (sub-) system responsible for meta-learning. | | Cognitive Growth | The structural change resulting from learning in a structurally autonomous cognitive system - the target of which is self-improvement. 
| \\ \\ ==== Meaning ==== | In Everyday Life | Something of great importance to people. \\ People seem to extract meaning from observing other people's actions, utterances, attitudes, situations, etc., and even their own thoughts. \\ Proper handling of meaning is generally considered to require intelligence. | | Why It Is Important | Meaning "glues together" knowledge, goals and understanding. \\ Meaning is at the foundation of intelligence. | | Definition | The meaning of a datum **d** to an agent **A** is defined by the effect that **d** has on the behavior of **A**. | | More Specifically | Given an agent **A** with a set of differentiable goals **G**, the meaning of a datum **d** to **A** consists of how **d** affects **A**'s knowledge and goals (changing them, preventing them, enhancing them, etc.). | | Producing Meaning | Meaning is produced through a //process of understanding// using reasoning over causal relations, to produce implications in the //now//. \\ As time passes, meaning changes and must be re-computed. | | Causal Relations | The relationship between two or more differentiable events such that one of them can (reasonably reliably) produce the other. \\ One event **E**, the //cause//, must come before another event **E'**, the //effect//, where **E** can (reasonably reliably) be used to produce **E'**. | | \\ Foundational Meaning | Foundational meaning is the meaning of anything to an agent - often contrasted with "semantic meaning" or "symbolic meaning", which is the meaning of symbols or language. \\ The latter rests on the former. \\ Meaning is generated through a process when causal-relational models are used to compute the //implications// of some action, state, event, etc. \\ Any meaning-producing agent extracts meaning when the implications //interact with its goals// in some way (preventing them, enhancing them, shifting them, ...). | \\ \\ ==== Understanding ==== | In Everyday Life | A concept that people use all the time about each other's cognition. 
With respect to achieving a task, given that the target of the understanding is all or some aspects of the task, more of it is generally considered better than less of it. \\ To consistently solve problems regarding a phenomenon **X** requires //understanding// **X**. \\ Understanding **X** means the ability to extract and analyze the //meaning// of any phenomenon **P** related to **X**. | | What Does It Mean? | No well-known scientific theory exists. \\ Normally we do not hand control of anything over to anyone who doesn't understand it; all other things being equal, doing so is a recipe for disaster. | | Evaluating Understanding | Understanding any **X** can be evaluated along four dimensions: \\ 1. Being able to predict **X**, \\ 2. being able to achieve goals with respect to **X**, \\ 3. being able to explain **X**, and \\ 4. being able to "re-create" **X** ("re-create" here means e.g. creating a simulation that produces **X** and many or all its side-effects). | | \\ \\ In AI | Understanding as a concept has been neglected in AI. \\ Contemporary AI systems do not //understand//. \\ The concept seems crucial when talking about human intelligence; the concept holds explanatory power - we do not assign responsibilities for a task to someone or something with a demonstrated lack of understanding of the task. Moreover, the level of understanding can be evaluated. \\ Understanding of a particular phenomenon **P** is the potential to perform actions and answer questions with respect to **P**. Example: Which is heavier, 1kg of iron or 1kg of feathers? | | Bottom Line | Can't talk about intelligence without talking about understanding. \\ Can't talk about understanding without talking about meaning. | \\ \\ ==== Cognitive / Operational Autonomy ==== | What it is | The ability of an agent to act and think independently. \\ The ability to do tasks without interference or help from others or from outside itself. \\ Implies that the machine "does it alone". 
\\ Refers to the mental (control-) independence of agents - the more independent they are (of their designers, of outside aid, etc.) the more autonomous they are. Systems without it could hardly be considered to have general intelligence. | | Structural Autonomy | Refers to the process through which cognitive autonomy is achieved: Motivations, goals and behaviors are dynamically and continuously (re)constructed by the machine as a result of changes in its internal structure. | | Constitutive Autonomy | The ability of an agent to maintain its own structure (substrate, control, knowledge) in light of perturbations. | | "Complete" Autonomy? | Life and intelligence rely on other systems to some extent. The concept is usually applied in a relative way, for a particular limited set of dimensions along which systems are compared, or to the same system at two different times or in two different states. | | Reliability | Reliability is a desired feature of any useful autonomous system. \\ An autonomous machine with low reliability has severely compromised utility. Unreliability that can be predicted is better than unreliability that is unpredictable. | | Predictability | Predictability is another desired feature of any useful autonomous system. \\ An autonomous machine that is not predictable has severely compromised utility. | | Explainability | Explainability is a third desired feature of any useful autonomous system. \\ An autonomous machine whose actions cannot be explained cannot be reliably predicted. Without reliable prediction a machine cannot be trusted. | \\ \\ \\ \\ \\ \\ \\ 2024(c)K.R.Thórisson
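\\ \\ The control-loop concepts above (goal, measurement, action, feedback latency) can be illustrated with a minimal sketch. This is not from the lecture notes: the function name, the gain, and the latency of two time steps are all illustrative choices. It shows a thermostat-style loop in which the controller only ever sees a delayed reading v'(t) = v(t - Δt), so it briefly overshoots its goal before settling.

```python
from collections import deque

def run_thermostat(target, initial_temp, steps, latency=2, gain=0.5):
    """Minimal measure-compare-act loop with delayed feedback."""
    temp = initial_temp
    # Past measurements; the controller reads the oldest, i.e. v'(t) = v(t - latency).
    buffer = deque([initial_temp] * latency, maxlen=latency)
    history = []
    for _ in range(steps):
        delayed_reading = buffer[0]       # feedback with latency
        error = target - delayed_reading  # compare measurement to goal
        action = gain * error             # control signal to the actuator
        temp += action                    # effect produced in the world
        buffer.append(temp)               # new measurement enters the loop
        history.append(temp)
    return history

hist = run_thermostat(target=21.0, initial_temp=15.0, steps=50)
```

Because of the two-step latency the trajectory overshoots 21.0 before converging; with zero latency the same gain would approach the goal monotonically, which is one way to see why latency and jitter matter to a controller.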
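\\ \\ The definition of meaning above - the meaning of a datum **d** to an agent **A** is how **d**'s implications, computed via causal relations, interact with **A**'s goals - can be sketched in a few lines. All names and the toy causal rules here are invented for illustration; real causal-relational models would of course be far richer.

```python
def meaning_of(datum, goals, causal_rules):
    """Sketch: meaning of a datum = which of the agent's goals its
    implications interact with (here, only 'threatened' interactions).

    causal_rules: maps an observed event to its implied effect
                  (a crude stand-in for causal-relational models).
    goals:        maps a goal name to the event that would block it.
    """
    implication = causal_rules.get(datum)  # compute implications in the "now"
    if implication is None:
        return {}                          # no implications -> no meaning
    return {g: "threatened" for g, blocker in goals.items()
            if blocker == implication}

# Toy agent: two goals, two causal rules (all hypothetical).
goals = {"stay_dry": "rain", "arrive_on_time": "traffic_jam"}
rules = {"dark_clouds": "rain", "accident_ahead": "traffic_jam"}
effects = meaning_of("dark_clouds", goals, rules)
```

Here "dark_clouds" is meaningful to the agent because its implication (rain) threatens the goal "stay_dry", while a datum with no goal-relevant implications (e.g. "birdsong") carries no meaning for this agent - matching the claim that meaning arises only when implications interact with goals.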