public:t-709-aies-2025:aies-2025:classification_control_autonomy

|  Definition  | The meaning of a datum **d** to an agent **A** is defined by the effect that **d** has on **A**'s behavior and action potential (i.e. what is //possible, impossible, inevitable,// and //unlikely//).    |
|  More Specifically  | Given an agent **A** with a set of differentiable goals **G**, the meaning of a datum **d** to **A** consists of how **d** affects **A**'s knowledge and goals (changing them, preventing them, ...).    |
|  Producing Meaning  | Meaning is produced through a //process of understanding// using reasoning over causal relations, to produce implications in the //**now**//. \\ As time passes, meaning changes and must be updated (re-computed).    |
|  Causal Relations  | The relationship between two or more differentiable events such that one of them can (reasonably reliably) produce the other. \\ One event **C**, the //cause//, must come before another event **E**, the //effect//, where **C** can (reasonably reliably) be used to produce **E**.   |
|  \\ Foundational Meaning  | Foundational meaning is the meaning of anything to an agent - often contrasted with "semantic meaning" or "symbolic meaning", which is the meaning of symbols or language. \\ The latter rests on the former. \\ Meaning is generated through a process when causal-relational models are used to compute the //implications// of some action, state, event, etc. \\ Any meaning-producing agent extracts meaning when the implications //interact with its goals// in some way (preventing them, enhancing them, shifting them, ...).    |
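As a loose illustration only (not from the source; all names and the rule format are hypothetical), the definition above - meaning as the effect a datum has on an agent's action potential, relative to its goals - can be sketched in Python:

```python
# Toy sketch: "meaning" of a datum d to an agent A, modeled as the change
# that assimilating d induces in A's action potential (the set of actions
# currently judged possible), and whether that change touches A's goals.
from dataclasses import dataclass, field

@dataclass
class Agent:
    goals: set = field(default_factory=set)             # differentiable goals G
    possible_actions: set = field(default_factory=set)  # current action potential

    def action_potential(self):
        return frozenset(self.possible_actions)

    def assimilate(self, datum):
        # Hypothetical rule format: a datum enables or prevents actions.
        for action in datum.get("enables", ()):
            self.possible_actions.add(action)
        for action in datum.get("prevents", ()):
            self.possible_actions.discard(action)

def meaning_of(datum, agent):
    """Meaning = effect of the datum on the agent's action potential."""
    before = agent.action_potential()
    agent.assimilate(datum)
    after = agent.action_potential()
    return {
        "now_possible": after - before,     # actions the datum made possible
        "now_impossible": before - after,   # actions the datum ruled out
        # meaning is goal-relevant when the change intersects the goals
        "goal_relevant": bool((after ^ before) & agent.goals),
    }

a = Agent(goals={"cross_street"}, possible_actions={"wait"})
m = meaning_of({"enables": {"cross_street"}}, a)
```

In this sketch a datum that neither enables nor prevents any goal-relevant action carries no meaning for the agent, matching the definition's emphasis on effects rather than on symbols.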
  
  
  
\\
\\

==== Some Open Questions About Meaning, Understanding, Autonomy, & Responsibility ====
  
|  Meaning  | **True or False?**: To create meaning relevant to itself in a particular situation, a cognitive system (a special kind of computing system) must be able to predict the effects and side-effects of any potential action and event in the physical world that has relevance to this situation. It must be able to //understand// these effects and side-effects.   |
|  Understanding  | To understand any event and/or situation in relation to itself and/or others, a cognitive system must be able to unify relevant aspects of such situations with its own past, present, or future.    |
|  Autonomy  | To have "full autonomy" (or "near-full autonomy"), a cognitive system must be able to relate its meaning generation and understanding to its situation and goals (whether these were given to it by its designers or it evolved to have them), as well as to others' goals. \\ Do any machines yet exist that can be said to have "full autonomy"?   |
|  Responsibility  | We consider a cognitive system to be 'worthy of responsibility' for a particular process if that system can be trusted to deflect most reasonable threats to that process that could come up. \\ What kinds of cognitive systems can be trusted with responsibility for human life?   |
|  Responsibility  | If no machines yet exist that create meaning, have understanding, or harbor "full autonomy" (or "near-full autonomy"), can we, as of yet, trust any machines with their own behavior, or with other important responsibilities?    |
  
 \\ \\
 \\ \\
 \\ \\
2025(c)K.R.Thórisson
  
Last modified: 2025/08/24 16:48 by thorisson
