
|  Producing Meaning  | Meaning is produced through a //process of understanding//, using reasoning over causal relations to produce implications in the //now//. \\ As time passes, meaning changes and must be re-computed.    |
|  Causal Relations  | The relationship between two or more differentiable events such that one of them can (reasonably reliably) produce the other. \\ One event **E**, the //cause//, must come before another event **E'**, the //effect//, where **E** can (reasonably reliably) be used to produce **E'**.   |
|  \\ Foundational Meaning  | Foundational meaning is the meaning of anything to an agent - often contrasted with "semantic meaning" or "symbolic meaning", which is the meaning of symbols or language. \\ The latter rests on the former. \\ Meaning is generated when causal-relational models are used to compute the //implications// of some action, state, event, etc. \\ A meaning-producing agent extracts meaning when those implications //interact with its goals// in some way (preventing them, enhancing them, shifting them, ...).    |
  
 \\ \\
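The "Foundational Meaning" idea above can be illustrated with a small sketch. This is an illustrative toy, not from the source: causal relations are stored as a hypothetical cause-to-effects map, implications are computed by forward-chaining over it, and the "meaning" of an event to the agent is taken (simplistically) to be the implications that touch its goals.

```python
# Toy sketch of foundational meaning (illustrative assumptions throughout):
# causal relations as a cause -> effects map; meaning of an event = its
# forward implications that interact with the agent's goals.

from typing import Dict, List, Set

# Hypothetical causal-relational model: each cause (reasonably reliably)
# produces the listed effects.
CAUSES: Dict[str, List[str]] = {
    "rain": ["wet_ground", "cold_air"],
    "wet_ground": ["slippery_path"],
    "slippery_path": ["fall_risk"],
}

def implications(event: str) -> Set[str]:
    """Forward-chain over the causal relations to compute implications."""
    seen: Set[str] = set()
    frontier = [event]
    while frontier:
        e = frontier.pop()
        for effect in CAUSES.get(e, []):
            if effect not in seen:
                seen.add(effect)
                frontier.append(effect)
    return seen

def meaning(event: str, goals: Set[str]) -> Set[str]:
    """The 'meaning' of an event to this agent: here simplified to the
    implications that coincide with a goal; a fuller model would also
    cover preventing, enhancing, and shifting goals."""
    return implications(event) & goals

goals = {"fall_risk", "stay_dry"}
print(meaning("rain", goals))  # → {'fall_risk'}
```

Note that re-computation as time passes falls out naturally: when `CAUSES` or `goals` change, the same event yields a different meaning.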
|  What Does It Mean?  | No well-known scientific theory of understanding exists. \\ Normally we do not hand control of anything over to anyone who doesn't understand it. All other things being equal, this is a recipe for disaster.   |
|  Evaluating Understanding  | Understanding any **X** can be evaluated along four dimensions: \\ 1. Being able to predict **X**, \\ 2. being able to achieve goals with respect to **X**, \\ 3. being able to explain **X**, and \\ 4. being able to "re-create" **X** ("re-create" here means e.g. creating a simulation that produces **X** and many or all its side-effects.)    |
|  \\ \\ In AI  | Understanding as a concept has been neglected in AI. \\ Contemporary AI systems do not //understand//. \\ The concept seems crucial when talking about human intelligence; it holds explanatory power - we do not assign responsibility for a task to someone or something with a demonstrated lack of understanding of that task. Moreover, the level of understanding can be evaluated. \\ Understanding of a particular phenomenon **P** is the potential to perform actions and answer questions with respect to **P**. Example: Which is heavier, 1 kg of iron or 1 kg of feathers?     ||
|  Bottom Line  | Can't talk about intelligence without talking about understanding. \\ Can't talk about understanding without talking about meaning.    |
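The four evaluation dimensions of understanding can be made concrete with a small sketch. This is a hypothetical scoring scheme, not from the source: each dimension is recorded as demonstrated or not, and the score is simply the fraction demonstrated.

```python
# Minimal sketch (hypothetical scoring, equal weights assumed) of evaluating
# understanding of some phenomenon X along the four dimensions listed above.

from dataclasses import dataclass

@dataclass
class UnderstandingEvaluation:
    can_predict: bool   # 1. predict X
    can_achieve: bool   # 2. achieve goals with respect to X
    can_explain: bool   # 3. explain X
    can_recreate: bool  # 4. "re-create" X, e.g. via a simulation

    def score(self) -> float:
        """Fraction of the four dimensions demonstrated (0.0 to 1.0)."""
        dims = [self.can_predict, self.can_achieve,
                self.can_explain, self.can_recreate]
        return sum(dims) / len(dims)

# E.g. a system that predicts X but cannot achieve goals with it, explain
# it, or re-create it would score low on this rubric:
shallow = UnderstandingEvaluation(True, False, False, False)
print(shallow.score())  # → 0.25
```

The equal weighting is an assumption made for illustration; the source only lists the four dimensions without saying how they combine.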
  
/var/www/cadia.ru.is/wiki/data/attic/public/t-709-aies-2024/aies-2024/autonomy-meaning.1729719836.txt.gz · Last modified: 2024/10/23 21:43 by thorisson
