public:t-709-aies-2024:aies-2024:autonomy-meaning [2024/10/23 21:45] (current) – [Meaning] thorisson
| Knowledge | A set of interlinked information that can be used to plan, produce action, and interpret new information. |
| Thought | The goal-driven processes of a situated knowledge-based system. |
| Controller | A system that produces a set of control signals in light of a set of measurements. |
| Situated Controller | An embodied control mechanism that is implemented and situated in a particular environment with a set of particular goals. |
| Embodiment | The interface between a controller and its task-environment. \\ The physical substrate in which a controller is implemented, and which thus constrains it. |
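The controller and embodiment definitions above can be sketched in code. The thermostat below is a minimal, illustrative example (all names are assumptions, not from the text): a situated controller whose embodiment — a temperature sensor and a heater switch — interfaces between it and its task-environment.

```python
class Thermostat:
    """A minimal situated controller: produces a control signal
    (heater on/off) in light of a measurement, given a goal state."""

    def __init__(self, goal_temp: float):
        self.goal_temp = goal_temp  # the goal (end state) the controller serves

    def control(self, measured_temp: float) -> bool:
        # The control signal is computed from a measurement, in light of the goal.
        return measured_temp < self.goal_temp


t = Thermostat(goal_temp=21.0)
print(t.control(18.5))  # measurement below goal: heater on (True)
print(t.control(22.0))  # measurement above goal: heater off (False)
```

Note that the controller itself only maps measurements to control signals; its embodiment (the sensor and heater it is wired to) both enables and constrains what it can measure and effect.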
\\
| What it is | Systematic production of an effect in a world, through prolonged commitment to actuation based on measurement, in light of a goal (end state). |
| Measurement | Recording and storage of values of particular variables over time. \\ Intelligent control involves loops of setting goals, measuring, acting, measuring again, and modifying the goals. |
| Effect | Realization of a causal relation in the (physical) world. |
| \\ \\ \\ Key Features | Inherits key features of any process: \\ - **Purpose** (goals): To adapt, to respond in rational ways to problems / to achieve foreseen goals; this factor determines how the rest of the features in this list are measured. \\ - **Speed**: The speed of learning. \\ - **Data**: The data that the learning (and particular measured speed of learning) requires. \\ - **Quality**: How well something is learned. \\ - **Retention**: The robustness of what has been learned - how well it stays intact over time. \\ - **Transfer**: How general the learning is, how broadly what is learned can be employed for the purposes of adaptation or achievement of goals. \\ - **Meta-Learning**: A learner may improve its learning abilities - i.e. it may be capable of meta-learning. \\ - **Progress Signal(s)**: A learner needs to know how its learning is going, and if there is improvement, how much. |
| Measurements | To know any of the above, some parameters have to be //measured//: All of the above factors can be measured in //many ways//. |
| \\ Major Caveat | Since learning interacts with (is affected by) the //task-environment and world// that the learning takes place in, as well as the nature of these in the learner's //subsequent deployment//, //none// of the above features can be assessed by //looking only at the learner//. |
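The control loop described in the Measurement row above — set a goal, measure, act, measure again, modify the goal — can be sketched as follows. The environment dynamics and the numbers are illustrative assumptions, not from the original text:

```python
def control_loop(state: float, goal: float, steps: int) -> list[float]:
    """Record measurements over time while acting toward a goal,
    illustrating the goal / measure / act / measure-again loop."""
    measurements = [state]          # measurement: values recorded over time
    for _ in range(steps):
        error = goal - state        # measure against the current goal
        state += 0.5 * error        # act: move halfway toward the goal
        measurements.append(state)  # measure again
        if abs(goal - state) < 0.1:
            goal = state            # modify the goal: settle where we are
    return measurements


trace = control_loop(state=0.0, goal=8.0, steps=7)
print(trace)  # the stored measurement history, converging toward the goal
```

The recorded trace is exactly the kind of stored variable-over-time record the Measurement definition refers to; a progress signal for a learner could be computed from how quickly such traces converge.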
\\
| Producing Meaning | Meaning is produced through a //process of understanding// using reasoning over causal relations, to produce implications in the //now//. \\ As time passes, meaning changes and must be re-computed. |
| Causal Relations | The relationship between two or more differentiable events such that one of them can (reasonably reliably) produce the other. \\ One event **E**, the //cause//, must come before another event **E'**, the //effect//, where **E** can (reasonably reliably) be used to produce **E'**. |
| \\ Foundational Meaning | Foundational meaning is the meaning of anything to an agent - often contrasted with "semantic meaning" or "symbolic meaning", which is the meaning of symbols or language. \\ The latter rests on the former. \\ Meaning is generated through a process when causal-relational models are used to compute the //implications// of some action, state, event, etc. \\ Any meaning-producing agent extracts meaning when the implications //interact with its goals// in some way (preventing them, enhancing them, shifting them, ...). |
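The theory above can be sketched as a small program: causal-relational models (cause → effect pairs) are chained forward to compute the implications of an event, and the event's meaning to the agent is the subset of implications that interact with its goal-relevant states. The rules, events, and goals below are illustrative assumptions, not from the source:

```python
# Causal-relational models: event E can (reasonably reliably) produce E'.
CAUSAL_MODELS = {
    "rain": ["wet_ground"],
    "wet_ground": ["slippery_road"],
    "slippery_road": ["delayed_arrival"],
}


def implications(event: str) -> set[str]:
    """Follow causal chains forward from an event to all its implications."""
    found, frontier = set(), [event]
    while frontier:
        e = frontier.pop()
        for effect in CAUSAL_MODELS.get(e, []):
            if effect not in found:
                found.add(effect)
                frontier.append(effect)
    return found


def meaning_of(event: str, goal_relevant: set[str]) -> set[str]:
    """The event's meaning to the agent: the implications that
    interact with states the agent's goals depend on."""
    return implications(event) & goal_relevant


# "delayed_arrival" is goal-relevant (it would prevent arriving on time),
# so "rain" carries meaning for this agent; "wet_ground" alone does not.
print(meaning_of("rain", goal_relevant={"delayed_arrival"}))
```

Re-running this computation as the world changes corresponds to the point above that meaning is produced in the //now// and must be re-computed as time passes.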
\\
| In Everyday Life | A concept that people use all the time about each other's cognition. With respect to achieving a task, given that the target of the understanding is all or some aspects of the task, more of it is generally considered better than less of it. \\ To consistently solve problems regarding a phenomenon **X** requires //understanding// **X**. \\ Understanding **X** means the ability to extract and analyze the //meaning// of any phenomenon **P** related to **X**. |
| What Does It Mean? | No well-known scientific theory exists. \\ Normally we do not hand control of anything over to anyone who doesn't understand it. All other things being equal, this is a recipe for disaster. |
| Evaluating Understanding | Understanding any **X** can be evaluated along four dimensions: \\ 1. Being able to predict **X**, \\ 2. being able to achieve goals with respect to **X**, \\ 3. being able to explain **X**, and \\ 4. being able to "re-create" **X** ("re-create" here means e.g. creating a simulation that produces **X** and many or all its side-effects.) |
| \\ \\ In AI | Understanding as a concept has been neglected in AI. \\ Contemporary AI systems do not //understand//. \\ The concept seems crucial when talking about human intelligence; the concept holds explanatory power - we do not assign responsibilities for a task to someone or something with a demonstrated lack of understanding of the task. Moreover, the level of understanding can be evaluated. \\ Understanding of a particular phenomenon **P** is the potential to perform actions and answer questions with respect to **P**. Example: Which is heavier, 1kg of iron or 1kg of feathers? |
| Bottom Line | Can't talk about intelligence without talking about understanding. \\ Can't talk about understanding without talking about meaning. |
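Because the four evaluation dimensions above are each measurable, an overall level of understanding can be scored. The sketch below is one illustrative way to aggregate such measurements (the per-dimension scores and the equal weighting are assumptions, not from the text):

```python
# The four dimensions along which understanding of some X is evaluated.
DIMENSIONS = ("predict", "achieve_goals", "explain", "recreate")


def understanding_score(scores: dict[str, float]) -> float:
    """Aggregate per-dimension scores in [0, 1] into one level of
    understanding; a dimension with no demonstrated ability counts as 0."""
    return sum(scores.get(d, 0.0) for d in DIMENSIONS) / len(DIMENSIONS)


# An agent that predicts X and achieves goals w.r.t. X, but can neither
# explain nor re-create it, demonstrates only partial understanding:
print(understanding_score({"predict": 0.9, "achieve_goals": 0.8}))
```

A score like this makes the everyday intuition operational: we would assign responsibility for a task only to an agent whose measured understanding of the task is high on all four dimensions.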
| What it is | The ability of an agent to act and think independently. \\ The ability to do tasks without interference or help from others or from outside itself. \\ Implies that the machine "does it alone". \\ Refers to the mental (control-) independence of agents - the more independent they are (of their designers, of outside aid, etc.) the more autonomous they are. Systems without it could hardly be considered to have general intelligence. |
| Structural Autonomy | Refers to the process through which cognitive autonomy is achieved: Motivations, goals and behaviors as dynamically and continuously (re)constructed by the machine as a result of changes in its internal structure. |
| Constitutive Autonomy | The ability of an agent to maintain its own structure (substrate, control, knowledge) in light of perturbations. |
| "Complete" Autonomy? | Life and intelligence rely on other systems to some extent. The concept is usually applied in a relative way, for a particular limited set of dimensions along which systems are compared, or along which the same system is compared at two different times or in two different states. |
| Reliability | Reliability is a desired feature of any useful autonomous system. \\ An autonomous machine with low reliability has severely compromised utility. Unreliability that can be predicted is better than unreliability that is unpredictable. |