public:t-720-atai:atai-22:knowledge_representation [2024/11/20 12:02] (current) - thorisson
| \\ Meaning | Philosophers are still grappling with the topic of "meaning", and it is far from settled. It is highly relevant to AI, especially GMI - a GMI that cannot extract the meaning of a joke, threat, promise, or explanation - to some level or extent - is hardly worthy of its label. |
| \\ Current Constructivist Approach | Meaning rests on many principles. Two main ones that could be called "pillars" are **context** (the assumed steady-state of a particular situation and the physical forces at play) and **prediction** (implications of these for subsequent steady-states, and relations to the involved agent's goals - esp. those of the agent doing the prediction). This is captured in models (e.g. Drescher's schemas) that form a graph. The meaning, then, is captured in two things. \\ Firstly, acquired and tested //models// that form a graph of relations; the comprehensiveness of this graph determines the level of understanding that the models can support with respect to a particular phenomenon. \\ This means that //meaning cannot be generated without (some level of) understanding//. We will get back to this later. \\ Secondly, meaning relies on the //context// of the usage of symbols, where the context is provided by (a) who/what uses the symbols, (b) in what particular task-environment, using ( c) particular //syntactic constraints//. |
| \\ Production of Meaning | Meaning is produced //on demand//, based on the //tokens// used and //contextual// data. If you are interpreting language, //syntax// also matters (syntax is a system of rules that allows serialization of tokens). \\ How important is context? Seeing a rock roll down a hill and crush people is very different if you are watching a cartoon than if you're in the physical world. This is pretty obvious when you think about it. |
| \\ Example of Meaning Production | In the case of a chair, you can use information about the **functional aspects** of the chairs you have experienced if you're, say, in the forest and your friend points to a flat rock and says "There's a chair". Your knowledge of a chair's function allows you to **understand** the **meaning** of your friend's utterance. You can access the material properties of chairs to understand what it means when your friend says "The other day the class bully threw a chair at me". And you can access the morphological properties of chairs to understand your friend when she says "Those modern chairs - they are so 'cool' you can hardly sit in them." |
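The chair example above can be sketched as a toy program. This is a hedged illustration only - the facet names and the keyword-based "context" test are made up for this sketch, not part of the course's formalism - but it shows the core idea: the same symbol ("chair") yields different meanings because context selects which stored models get used.

```python
# Hypothetical sketch: meaning produced on demand by selecting the facet of
# stored "chair" knowledge that the utterance's context calls for.

chair_models = {
    "functional":    "something you can sit on",
    "material":      "a rigid, throwable object",
    "morphological": "a shape with a seat, back and legs",
}

def produce_meaning(utterance):
    """Pick the facet of 'chair' knowledge that the utterance context calls for."""
    if "sit" in utterance or "There's a chair" in utterance:
        return chair_models["functional"]    # forest/flat-rock case
    if "threw" in utterance:
        return chair_models["material"]      # bully case
    return chair_models["morphological"]     # 'cool' modern chairs case

print(produce_meaning("There's a chair"))                # something you can sit on
print(produce_meaning("the bully threw a chair at me"))  # a rigid, throwable object
```

A real system would of course not use keyword matching; the point is only that the symbol is one handle onto several models, and context picks among them.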
==== Symbols, Models, Syntax ====
| What Now? | Here comes some "glue" for connecting the above concepts, ideas, and claims in a way that unifies them into a coherent story that explains intelligence. |
| \\ \\ Knowledge | Knowledge is "actionable information" - information structures that can be used to //do stuff//, including \\ (a) predict (mostly deduce, but also abduce), \\ (b) derive potential causes (abduce - like Sherlock Holmes does), \\ ( c) explain (abduce), and \\ (d) re-create (like Einstein did with E=mc<sup>2</sup> and the Sims do in software). |
| \\ Knowledge \\ = \\ Models | Sets of models allow a thinking agent to do the above, by \\ (a) finding the relevant models for anything (given a certain situation and active goals), \\ (b) applying them according to the goals to derive predictions, \\ ( c) selecting the right actions based on these predictions such that the goals can be achieved, and \\ (d) monitoring the outcome. \\ (Learning then results from correcting the models that predicted incorrectly.) |
| \\ What's Contained \\ in Models? | To work as building blocks for knowledge, models must, on their own or in sets, capture in some way: \\ - Patterns \\ - Relations \\ - Volitional acts \\ - Causal chains |
| Where Do The Symbols Come In? | Symbols are mechanisms for rounding up model sets - they are "handles" on the information structures. \\ In humans this "rounding up" happens subconsciously and automatically, most of the time, using similarity mapping (content-driven association). | |
| \\ Syntactic Autonomy | To enable autonomous thought, the use of symbols for managing huge sets of models must follow certain rules. In biological agents, these rules - their syntax - must exist in some form //a priori// of the developing, learning mind (encoded in DNA), because they determine what these symbols can and cannot do, from early infant life to more grown-up stages. In this sense, "syntax" means the "rules of management" of information structures (just like the use of symbols in human communication). |
| \\ Historical Note | Chomsky claimed that humans are born with a "language acquisition device". \\ What may be the case is that language simply sits on top of a more general set of "devices" for the formation of knowledge //in general//. |
| \\ Evolution & Cognition | Because thought depends on underlying biological structures, and because biological structure depends on ongoing maintenance processes, the syntax and semantics for creating a biological agent, and the syntax and semantics for generating meaningful thought in such an agent, both depend on //syntactic autonomy// - i.e. rules that determine how the referential processes of **encode-transmit-decode** work. |
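The "Knowledge = Models" loop above - find relevant models, derive predictions, select actions, monitor outcomes, correct failed models - can be sketched in a few lines. This is a minimal toy under stated assumptions: the `Model` structure, the string-encoded states, and the light-switch scenario are all hypothetical, not the course's actual formalism.

```python
# Minimal sketch (hypothetical structures) of the model-driven loop:
# (a) find relevant models, (b) derive predictions, (c) select an action
# whose prediction achieves the goal, (d) monitor the outcome and mark
# models that predicted incorrectly (learning).

from dataclasses import dataclass

@dataclass
class Model:
    precondition: str    # situation the model applies to
    action: str          # volitional act the model proposes
    prediction: str      # state the model predicts the action will produce
    reliable: bool = True

def act(models, situation, goal, observe):
    # (a) round up the models relevant to the current situation
    relevant = [m for m in models if m.precondition == situation and m.reliable]
    for m in relevant:
        if m.prediction == goal:             # (b)+(c) prediction matches the goal
            outcome = observe(m.action)      # (d) execute and monitor
            if outcome != m.prediction:
                m.reliable = False           # learning: correct the failed model
            return m.action, outcome
    return None, None

models = [Model("light off", "flip switch", "light on")]
print(act(models, "light off", "light on", observe=lambda a: "light on"))
# ('flip switch', 'light on')
```

Note the learning step: a model is only corrected (here, crudely marked unreliable) when its prediction fails against the monitored outcome, which is the sense in which the knowledge is "tested".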
| What It Is | The establishment of axioms for the world and applying logic to these. |
| Depends On | Semantic closure. |
| But The World Is Non-Axiomatic ! | Yes. But there is no way to apply logic unless we hypothesize some pseudo-axioms. The only difference between this and mathematics is that in science we must accept that the so-called "laws" of physics may be only conditionally correct (or possibly even completely incorrect, in light of our goal of figuring out the "ultimate" truth about how the universe works). |
| Deduction | Conclusions drawn from two statements that follow with logical necessity. \\ //Example: If it's true that all swans are white, and Joe is a swan, then Joe must be white//. |
| Abduction | Reasoning from conclusions to causes. \\ //Example: If the light is on, and it was off just a minute ago, someone must have flipped the switch//. |
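The two inference directions above can be contrasted in a few lines of code. The rule encoding here is a made-up toy (one cause-effect pair per rule), not a claim about any particular reasoner: deduction runs a rule forward to a necessary conclusion, while abduction runs it backward to a cause that is merely plausible.

```python
# Toy contrast of deduction vs. abduction over hypothetical cause-effect rules.

rules = [
    {"cause": "someone flipped the switch", "effect": "the light is on"},
    {"cause": "Joe is a swan (and all swans are white)", "effect": "Joe is white"},
]

def deduce(cause):
    """Forward: if the cause holds, the effect necessarily follows."""
    return [r["effect"] for r in rules if r["cause"] == cause]

def abduce(effect):
    """Backward: the effect is observed; each matching cause is only plausible."""
    return [r["cause"] for r in rules if r["effect"] == effect]

print(deduce("Joe is a swan (and all swans are white)"))  # ['Joe is white']
print(abduce("the light is on"))  # ['someone flipped the switch']
```

The asymmetry is visible in the docstrings: deduction's output is guaranteed by the rule, whereas abduction could in principle return several competing causes (the light may also be on because of a power surge), which is why it only yields hypotheses.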