| In Everyday Life | Something of great importance to people. \\ People seem to extract meaning from observing other people's actions, utterances, attitudes, situations, etc., and even their own thoughts. \\ Proper handling of meaning is generally considered to require intelligence. |
| Why It Is Important | Meaning "glues together" (coherently unifies) knowledge, goals, and understanding. \\ Meaning is at the foundation of intelligence. |
| Definition | The meaning of a datum **d** to an agent **A** is defined by the effect that **d** has on **A**'s behavior and action potential (i.e. what is //possible, impossible, inevitable,// and //unlikely//). |
| More Specifically | Given an agent **A** with a set of differentiable goals **G**, the meaning of a datum **d** to **A** consists of how **d** affects **A**'s knowledge and goals (e.g. changing them, preventing them, or enhancing them). |
| Producing Meaning | Meaning is produced through a //process of understanding//, using reasoning over causal relations to produce implications in the //**now**//. \\ As time passes, meaning changes and must be updated (re-computed). |
| Causal Relations | The relationship between two or more differentiable events such that one of them can (reasonably reliably) produce the other. \\ One event **C**, the //cause//, must come before another event **E**, the //effect//, where **C** can (reasonably reliably) be used to produce **E**. |
| \\ Foundational Meaning | Foundational meaning is the meaning of anything to an agent - often contrasted with "semantic meaning" or "symbolic meaning", which is the meaning of symbols or language. \\ The latter rests on the former. \\ Meaning is generated through a process when causal-relational models are used to compute the //implications// of some action, state, event, etc. \\ Any meaning-producing agent extracts meaning when the implications //interact with its goals// in some way (preventing them, enhancing them, shifting them, ...). |
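The definitions above (meaning as a datum's goal-relevant implications, computed by reasoning over causal relations) can be sketched as a toy program. This is a minimal illustration only, not part of the course material; the causal rules, events, and goal names are all hypothetical, and real meaning generation would of course involve far richer causal-relational models:

```python
# Toy sketch of "meaning" per the definitions above (hypothetical example).
# Causal rules: a cause-event (reasonably reliably) produces an effect-event.
CAUSAL_RULES = {
    "rain": ["wet_ground"],
    "wet_ground": ["picnic_ruined"],
    "sunshine": ["dry_ground"],
}

def implications(event: str, rules: dict) -> set:
    """Forward-chain over causal rules to collect all implications of an event."""
    found, frontier = set(), [event]
    while frontier:
        e = frontier.pop()
        for effect in rules.get(e, []):
            if effect not in found:
                found.add(effect)
                frontier.append(effect)
    return found

def meaning(datum: str, goals: set, rules: dict) -> dict:
    """The 'meaning' of a datum d to an agent A: how d's implications
    interact with A's goals (here, simply which goals are threatened)."""
    implied = implications(datum, rules)
    return {
        "implications": implied,
        "threatened_goals": {g for g in goals if f"{g}_ruined" in implied},
    }

agent_goals = {"picnic"}
print(meaning("rain", agent_goals, CAUSAL_RULES)["threatened_goals"])  # → {'picnic'}
```

Note that the datum "rain" only acquires meaning for this agent because its implications interact with the agent's goal; for an agent without the "picnic" goal, the same datum would be meaningless. Re-running `meaning()` as the world (and the rule set) changes corresponds to the re-computation mentioned above.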

\\
\\

==== Some Open Questions About Meaning, Understanding, Autonomy, & Responsibility ====

| Meaning | **True or False?**: To create meaning relevant to itself in a particular situation, a cognitive system (a special kind of computing system) must be able to predict the effects and side-effects of any potential action or event in the physical world that is relevant to this situation. It must be able to //understand// these effects and side-effects. |
| Understanding | To understand any event and/or situation in relation to itself and/or others, a cognitive system must be able to relate relevant aspects of such situations to its own past, present, or future. |
| Autonomy | To have "full autonomy" (or "near-full autonomy"), a cognitive system must be able to relate its meaning generation and understanding to its own situation and goals (whether these goals were given to it by its designers or evolved), as well as to others' goals. \\ Do any machines yet exist that can be said to have "full autonomy"? |
| Responsibility | We consider a cognitive system to be 'worthy of responsibility' for a particular process if that system can be trusted to deflect most reasonable threats to that process that may arise. \\ What kinds of cognitive systems can be trusted with responsibility for human life? |
| Responsibility | If no machines yet exist that create meaning, have understanding, or harbor "full autonomy" (or "near-full autonomy"), can we trust any machines -- as of yet -- with their own behavior, or with important responsibilities? |

\\
\\
\\
2025(c)K.R.Thórisson