==== Meaning ====
| Words | A word - e.g. "chair" - is not the symbol itself; the word is a **token** that can //act// as a symbol (to someone, in some circumstance). |
| An Example | "That is not a chair!" vs. "Ahh.. that is a //great// chair!" |
| Symbolic Role | Being a "symbol" means serving a **function**: the token stands as a "pointer" to **information structures**. |
| Tokens as Symbols | The association of a token with a set of information structures is //arbitrary// - if we agree to call "chairs" something else, e.g. "blibbeldyblabb", well, then that's what we call "chairs" from now on: "Go ahead, take a seat on the blibbeldyblabb". |
| \\ Context | Using the token, these information structures can be collected and used. But their ultimate meaning depends on the **context** of the token's use. \\ When you use a token, **which information structures** are rounded up, and how they are used, depends on more than the token alone (see the sketch below this table). |
| \\ What Are These \\ Information Structures? | They have to do with all sorts of **experience** of the world. \\ In the case of chairs this would be experience collected, compressed, abstracted and generalized from indoor environments, in relation to the physical object we refer to as 'chair'. //A lot// of information could be relevant at any point in time - color, shape, size, usage, manufacturing, destruction, material properties, composition into parts, ... the list is very long! Which pieces are relevant //right now// depend on the //context//, and context is determined primarily by the current state and which //goals// are active at this moment. |
| Models & Symbols | Both are representations - but //models contain more than symbols//: if symbols are **pointers**, models are **machines**. |
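To make the pointer idea concrete, here is a minimal Python sketch. It is purely illustrative: the toy store ''INFORMATION_STRUCTURES'', the ''round_up'' function and the context tags are assumptions made for this example, not part of any particular system. It shows a token as an arbitrary handle whose retrieved content depends on the context of use.

<code python>
# Illustrative toy only: a token is an arbitrary handle pointing to information
# structures; which of them get "rounded up" depends on the context of use.

INFORMATION_STRUCTURES = {
    "chair": [
        {"aspect": "usage",    "content": "supports a seated person", "contexts": {"sitting"}},
        {"aspect": "shape",    "content": "seat, legs, often a back", "contexts": {"sitting", "recognition"}},
        {"aspect": "material", "content": "wood, metal or plastic",   "contexts": {"manufacturing"}},
    ],
}

# The token is arbitrary: rebinding it changes nothing about the structures themselves.
INFORMATION_STRUCTURES["blibbeldyblabb"] = INFORMATION_STRUCTURES["chair"]

def round_up(token, context):
    """Collect the structures the token points to that are relevant in this context."""
    return [s["content"] for s in INFORMATION_STRUCTURES.get(token, []) if context in s["contexts"]]

print(round_up("blibbeldyblabb", "sitting"))   # same result as for "chair"
print(round_up("chair", "manufacturing"))      # a different subset, selected by context
</code>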
\\

| \\ Prerequisites for using symbols | Prerequisites for communication are thus shared knowledge (the referred-to concepts, i.e. //sets// of models), shared encoding and interpretation methods (how syntax is used), and, last but not least, shared //cultural// methods for handling //context// (background knowledge), including missing information. \\ This last point has to do with the tradeoff between compactness and potential for misunderstanding (the more compact, the greater the danger of misinterpretation; the less compact, the longer it takes to communicate). |
| \\ What About 'Signs from the Gods'? | Is stormy ocean weather a "sign" that you should not go rowing in your tiny boat? \\ No, not directly, but the situation makes use of exactly the same machinery: The weather is a pattern. That pattern has implications for your goals - in particular, your goal to live, which would be thwarted if you were to drown. Stormy weather has the potential to drown you. When someone says "this weather is dangerous" the implication is the same as looking out and seeing it for yourself, except that the //arbitrary patterns// of speech are involved in the first case but not the second. |
| \\ Prediction Creates Meaning | Hearing the words "stormy weather" or seeing the raging storm, your models allow you to make predictions. These predictions are compared to your active goals to see if any of them will be prevented; if so, the storm may make you stay at home - in which case its meaning was a 'threat to your survival' (a toy sketch of this follows the table). \\ In the case where you //really really// want to go rowing, even stormy weather may not suffice to keep you at home - depending on your character or state of mind you may make that risk tradeoff in different ways. |
| \\ Models | When interpreting symbols, syntax and context, information structures are collected and put together to form //composite models// that can be used for computing the meaning. By //meaning// we really mean (no pun intended) the //implications// encapsulated in the //**now**//: What may come next, and how can goals be impacted? |
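Here is a toy Python sketch of "prediction creates meaning". The model table, goal set and ''meaning_of'' function are assumptions made just for this illustration: meaning is computed as the implications of a prediction for the agent's active goals.

<code python>
# Illustrative only: meaning = the implications of a prediction for active goals.

MODELS = {                       # toy forward models: observed pattern -> prediction
    "stormy weather": "small boat capsizes",
    "calm weather":   "pleasant rowing trip",
}
ACTIVE_GOALS = {"stay alive", "go rowing"}
PREVENTS = {                     # which predicted outcomes would prevent which goals
    "small boat capsizes": {"stay alive"},
}

def meaning_of(observation):
    """Predict what comes next and report which active goals would be impacted."""
    prediction = MODELS.get(observation)
    threatened = PREVENTS.get(prediction, set()) & ACTIVE_GOALS
    if threatened:
        return f"'{observation}' means: threat to {', '.join(sorted(threatened))}"
    return f"'{observation}' means: no active goal is impacted"

# Hearing the words "stormy weather" or seeing the storm feeds the same machinery.
print(meaning_of("stormy weather"))   # -> threat to 'stay alive' (so maybe stay home)
print(meaning_of("calm weather"))
</code>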
\\

==== Symbols, Models, Syntax ====
| What Now? | Here comes some "glue" for connecting the above concepts, ideas, and claims in a way that unifies them into a coherent story that explains intelligence. |
| \\ \\ Knowledge | Knowledge is "actionable information" - information structures that can be used to //do stuff//, including \\ (a) predict (deduce), \\ (b) derive potential causes (abduce - like Sherlock Holmes does), \\ ( c) explain, and \\ (d) re-create (like Einstein did with <m>E=mc^2</m>). |
| \\ Knowledge \\ = \\ Models | Sets of models allow a thinking agent to do the above, by \\ (a) finding the relevant models for anything (given a certain situation and active goals), \\ (b) applying them according to the goals to derive predictions, \\ ( c) selecting the right actions based on these predictions such that the goals can be achieved, and \\ (d) monitoring the outcome. \\ (Learning then results from correcting the models that predicted incorrectly - see the first sketch after this table.) |
| \\ What's Contained \\ in Models? | Models must, on their own or in sets, capture in some way: \\ - Patterns \\ - Relations \\ - Volitional acts \\ - Causal chains |
| Where Do The Symbols Come In? | Symbols are mechanisms for rounding up model sets - they are "handles" on the information structures. \\ In humans this "rounding up" happens subconsciously and automatically most of the time, using similarity mapping (content-driven association). |
| \\ Syntactic Autonomy | To enable autonomous thought, the use of symbols for managing huge sets of models must follow certain rules. In developing biological agents these rules - their syntax - must exist in some form //a priori// of the developing, learning mind, because they determine what these symbols can and cannot do. In this sense, "syntax" means the "rules of management" of information structures (just like the use of symbols in human communication). |
| What It Is | The establishment of axioms for the world and the application of logic to these. |
| Depends On | Semantic closure. |
| But The World Is Non-Axiomatic! | \\ Yes. But there is no way to apply logic unless we hypothesize some pseudo-axioms. The only difference between this and mathematics is that in science we must accept that the so-called "laws" of physics may be only conditionally correct (or possibly even completely incorrect, in light of our goal of figuring out the "ultimate" truth about how the universe works). |
| Deduction | Deriving a conclusion that follows necessarily from given statements. \\ //Example: If it's true that all swans are white, and Joe is a swan, then Joe must be white//. |
| Abduction | Reasoning from conclusions (observed effects) to their potential causes. \\ //Example: If the light is on, and it was off just a minute ago, someone must have flipped the switch// (see the second sketch after this table). |
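The loop described under "Knowledge = Models" can be sketched in a few lines of Python. This is a toy illustration only - the ''Model'' class, the ''agent_step'' function and the string-based situations are assumptions made for this example, not an implementation of any system discussed above:

<code python>
# Toy sketch of the knowledge-as-models loop (illustrative assumptions throughout):
# (a) find relevant models, (b) derive predictions, (c) select an action predicted
# to achieve the goal, (d) monitor the outcome and correct models that were wrong.

class Model:
    def __init__(self, situation, action, predicted_outcome):
        self.situation = situation
        self.action = action
        self.predicted_outcome = predicted_outcome

def agent_step(models, situation, goal, observe_outcome):
    # (a) round up the models relevant to the current situation
    relevant = [m for m in models if m.situation == situation]
    # (b) + (c) pick an action whose predicted outcome achieves the goal
    chosen = next((m for m in relevant if m.predicted_outcome == goal), None)
    if chosen is None:
        return None                      # no model predicts the goal can be reached
    # (d) act, monitor, and correct the model if its prediction was wrong (= learning)
    actual = observe_outcome(chosen.action)
    if actual != chosen.predicted_outcome:
        chosen.predicted_outcome = actual
    return chosen.action

# Minimal usage: the agent wants the light on and knows one (possibly wrong) model.
models = [Model("light is off", "flip switch", "light is on")]
action = agent_step(models, "light is off", "light is on",
                    observe_outcome=lambda a: "light is on" if a == "flip switch" else "nothing happens")
print(action)   # -> "flip switch"
</code>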
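A second toy sketch contrasts deduction with abduction using a single made-up rule base; the rule strings and function names are assumptions for illustration only, not a general reasoner:

<code python>
# Illustrative only: one rule base, used forward (deduction) and backward (abduction).
RULES = [
    ("all swans are white AND Joe is a swan", "Joe is white"),
    ("someone flipped the switch",            "the light is on"),
]

def deduce(premise):
    """Forward reasoning: conclusions that necessarily follow from the premise."""
    return [conclusion for cause, conclusion in RULES if cause == premise]

def abduce(observation):
    """Backward reasoning: potential causes of the observation (hypotheses, not certainties)."""
    return [cause for cause, conclusion in RULES if conclusion == observation]

print(deduce("all swans are white AND Joe is a swan"))   # -> ['Joe is white']
print(abduce("the light is on"))                         # -> ['someone flipped the switch']
</code>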