\\
\\

---------------
\\
==== Concepts ====
| Data | Measurement. |
| Information | Data that can be / is used or formatted for purpose. |
| Knowledge | A set of interlinked information that can be used to plan, produce action, and interpret new information. |
| Thought | The drive- and goal-driven processes of a situated knowledge-based system. |
\\
\\
====Representation====
| \\ What it is | A way to encode data/measurements. \\ A representation is what you have when you pick something to stand for something else; for instance, the lines forming the word "cup", used in particular contexts, **represent** (implicate, point to) an object with certain features and properties. \\ //All knowledge used for intelligent action must have a representation.// |
| \\ What it Involves | A particular process (computation, thought) is given a particular pattern (e.g. the text "cup" or the word "cup" uttered -- or simply the form of the light falling on a retina, at a particular time in a particular context) that acts as a "pointer" to an //internal representation//: an information structure rich enough to answer questions about the phenomenon that this "pointer" pattern points to, without the need for any action other than manipulating that information structure in particular ways. |
| \\ Why it is Important | **Mathematically**: \\ With the amount of information in the physical world vastly outnumbering the ability of any system to store it all in a lookup table, methods for information storage and retrieval with greater compression are needed. \\ **Historically**: \\ - The founding fathers of AI spoke frequently of //representations// in the first three decades of AI research. \\ - //Skinnerian psychology// and //Brooksian AI// -- both "representation-free" methodologies -- largely outlawed the concept of representation from AI from the mid-80s onward. \\ - Post 2000s: The rise of ANNs has helped continue this trend. |
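The "pointer" idea in the table above can be sketched in code. This is a minimal, hypothetical illustration -- all class and variable names are invented -- of a pattern that merely points to an internal information structure, which can then answer questions by manipulation alone:

```python
# Minimal sketch (all names invented): a pattern acts as a "pointer" to an
# internal representation rich enough to answer questions without any
# further action in the world.

class InternalRepresentation:
    """Information structure standing in for a phenomenon."""
    def __init__(self, features):
        self.features = features

    def answer(self, question):
        # Answering = manipulating the structure, not re-perceiving the world.
        return self.features.get(question, "unknown")

# Memory maps patterns (text, percepts) to internal representations.
memory = {
    "cup": InternalRepresentation({"holds_liquid": True, "graspable": True}),
}

def interpret(pattern):
    """The pattern itself carries almost no content; it merely points."""
    return memory[pattern]

print(interpret("cup").answer("holds_liquid"))  # True
```

Note how little work the pattern "cup" does here: all the richness lives in the information structure it points to.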
\\
\\
====Meaning====

| \\ What It Is | Something of great importance to people. \\ Meaning seems "extracted" from other people's actions, utterances, attitudes, etc. \\ Proper handling of meaning is generally considered to require intelligence. |
| Why It Is Important | Meaning seems to enter almost every aspect of cognition. |
| My Theory | Meaning is generated when a causal-relational model is used to compute the //implications// of some action, state, event, etc. Any agent that does so will extract meaning when the implications interact with its goals in some way. |
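The theory in the last row can be given a toy sketch (every variable here is invented for illustration): a causal-relational model computes the implications of an event, and "meaning" emerges only where those implications intersect the agent's goals:

```python
# Toy sketch (all data invented): meaning = implications of an event,
# as computed by a causal model, that interact with the agent's goals.

causal_model = {
    "rain": {"ground_wet", "picnic_ruined"},
    "switch_flipped": {"light_on"},
}

# Goals, each listing the implications that would block it.
goals = {"have_picnic": {"picnic_ruined"}}

def meaning_of(event):
    implications = causal_model.get(event, set())
    # Meaning is extracted only where implications touch the goal structure.
    return [(goal, "threatened")
            for goal, blockers in goals.items()
            if implications & blockers]

print(meaning_of("rain"))            # [('have_picnic', 'threatened')]
print(meaning_of("switch_flipped"))  # [] -- no goal interaction, little meaning
```

The same event carries different meaning for agents with different goals, which matches the theory's claim that meaning is agent-relative.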

\\
\\
====Symbols & Meaning====
| \\ What are Symbols? | Peirce's Theory of Semiotics (signs) proposes 3 parts to a sign: a //sign/symbol//, an //object//, and an //interpretant//. \\ Example of symbol: an arbitrary pattern, e.g. a written word (with acceptable error ranges whose thresholds determine when it becomes either 'uninterpretable' or 'inseparable from other symbols'). \\ Example of object: an automobile (a clustering of atoms in certain ways). \\ Example of interpretant: your mind, as it experiences something in your mind's eye when you read the word "automobile". The interpretant is the most complex part, because what you see and what I see when we read the word "automobile" are probably not exactly the same. |
| "Symbol" | Peirce used various terms for this, including "sign", "representamen", "representation", and "ground". Others have suggested "sign-vehicle". What is meant in all cases is a pattern that can be used to stand for something else, and thus requires an interpretation to be used as such. |
| Peirce's Innovation | Detaching the symbol/sign from the object signified, and introducing the interpretation process as a key entity. This makes it possible to explain why people misunderstand each other, and how symbols and meaning can grow and change in a culture. |
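Peirce's triad can be sketched as a small data structure (all names and data invented): the same sign and the same object, combined with different interpretants, yield different experiences -- which is exactly how misunderstanding arises:

```python
# Illustrative sketch of Peirce's triad (all data invented): sign, object,
# and interpretant. Different interpretants give different experiences.

from dataclasses import dataclass

@dataclass
class Sign:
    pattern: str    # e.g. a written word
    object_id: str  # the thing the pattern stands for

def experience_of(sign, interpretant):
    # The interpretant: an agent-specific mapping from object to experience.
    return interpretant.get(sign.object_id, "no experience")

automobile = Sign(pattern="automobile", object_id="car")
alice = {"car": "a red convertible"}
bob = {"car": "a grey minivan"}

print(experience_of(automobile, alice))  # same sign and object...
print(experience_of(automobile, bob))    # ...different interpretant, different experience
```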
\\
\\


====So, What Are Models?====

| \\ Greater Potential to Learn | A machine that is free to create, select, and evaluate models operating on observable and hypothesized variables has the potential to learn anything, within the confines of the algorithms it has been given for these operations. As long as the range of possible models is reasonably broad and general, the topics, tasks, domains, and worlds it could (in theory) handle become vastly larger than for systems where a particular model is given a priori. (I say 'in theory' because other factors, e.g. the ergodicity of the environment and resource constraints, must also be favorable to e.g. the system's speed of learning.) |
| Greater Potential for Cognitive Growth | A system that can build models of its own model creation, selection, and evaluation has the ability to improve its own nature. This is in some sense the ultimate AGI (depending on the original blueprint, original seed, and some other factors of course) and therefore we only need two levels of this, in theory, for a self-evolving potentially omniscient/omnipotent (as far as the universe allows) system. |
| Bottom Line | //AGI without both feed-forward and feed-back mechanisms is fairly unthinkable.// |
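The create-select-evaluate loop described above can be sketched in a few lines. This is a toy illustration (observations and hypothesis space invented): the machine enumerates candidate models of an observed variable and keeps whichever best predicts the data:

```python
# Toy sketch of model creation, selection, and evaluation (all data
# invented): enumerate candidate models, score them on observations,
# keep the best.

observations = [(0, 1), (1, 3), (2, 5), (3, 7)]  # (x, y) pairs; truly y = 2x + 1

# Model creation: a small hypothesis space of linear models y = a*x + b.
candidates = [(a, b) for a in range(4) for b in range(4)]

def prediction_error(model, data):
    a, b = model
    return sum(abs((a * x + b) - y) for x, y in data)

# Model selection/evaluation: keep the candidate with least prediction error.
best = min(candidates, key=lambda m: prediction_error(m, observations))
print(best)  # (2, 1)
```

A system given the single model y = 2x + 1 a priori can only ever handle that one regularity; the machine above, crude as it is, can recover any model in its hypothesis space from data -- which is the point the table makes.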


\\
\\

====Reasoning====

| What It Is | Establishing axioms for the world and applying logic to them. |
| But The World Is Non-Axiomatic! | Yes. But there is no way to apply logic unless we hypothesize some pseudo-axioms. The only difference between this and mathematics is that in science we must accept that the so-called "laws" of physics may be only conditionally correct (or possibly even completely incorrect, in light of our goal of figuring out the "ultimate" truth about how the universe works). |
| Deduction | Deriving a conclusion that necessarily follows from premises taken to be true. \\ //Example: If it's true that all swans are white, and Joe is a swan, then Joe must be white//. |
| Abduction | Reasoning from conclusions (effects) back to likely causes. \\ //Example: If the light is on, and it was off just a minute ago, someone must have flipped the switch//. |
| Induction | Generalization from observation. \\ //Example: All the swans I have ever seen have been white, hence I hypothesize that all swans are white//. |
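The three inference types in the table can be sketched as tiny functions, using the swan and light-switch examples. The representations here (rules as pairs, causal rules as cause-effect tuples) are invented purely for illustration:

```python
# Minimal sketches (invented representations) of deduction, abduction,
# and induction, using the examples from the table above.

def deduce(rule, instance_kind):
    """Deduction: rule + instance -> conclusion that is necessarily true."""
    kind, prop = rule  # rule ("swan", "white") reads "all swans are white"
    return prop if instance_kind == kind else None

def abduce(observation, causal_rules):
    """Abduction: from an observed effect back to plausible causes."""
    return [cause for cause, effect in causal_rules if effect == observation]

def induce(observed):
    """Induction: hypothesize a general rule from uniform observations."""
    kinds = {k for k, _ in observed}
    props = {p for _, p in observed}
    if len(kinds) == 1 and len(props) == 1:
        return (kinds.pop(), props.pop())  # hypothesized, not guaranteed
    return None

print(deduce(("swan", "white"), "swan"))                     # 'white'
print(abduce("light_on", [("switch_flipped", "light_on")]))  # ['switch_flipped']
print(induce([("swan", "white"), ("swan", "white")]))        # ('swan', 'white')
```

Note the asymmetry the table implies: only deduction is truth-preserving; abduction and induction return hypotheses that further evidence can overturn.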

\\
\\

====Understanding====

| What It Is | A concept that people use all the time about each other's cognition. With respect to achieving a task, given that the target of the understanding is all or some aspects of the task, more of it is generally considered better than less of it. |
| Why It Is Important | Seems to be connected to "real intelligence" -- when a machine does X reliably and repeatedly we say that it is "capable" of doing X, but qualify this with "... but it doesn't 'really' understand what it's doing". |
| What Does It Mean? | No well-known scientific theory exists. \\ Normally we do not hand control of anything over to anyone who doesn't understand it; all other things being equal, doing so is a recipe for disaster. |
| My Theory | Understanding involves the manipulation of causal-relational models (as discussed in the context of the AERA AGI-aspiring architecture). |
| Evaluating Understanding | Understanding of any X can be evaluated along four dimensions: 1. being able to predict X, 2. being able to achieve goals with respect to X, 3. being able to explain X, and 4. being able to "re-create" X ("re-create" here means e.g. creating a simulation that produces X and many or all of its side-effects). |
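The four-dimensional evaluation in the last row can be sketched as a toy scoring function (all names invented): an agent's understanding of X is probed along predict, achieve-goals, explain, and re-create, and scored by how many probes it passes:

```python
# Toy sketch (all names invented) of the four-dimensional evaluation of
# understanding: predict, achieve goals, explain, re-create.

DIMENSIONS = ("predict", "achieve_goals", "explain", "recreate")

def understanding_score(capabilities):
    """Fraction of the four dimensions the agent demonstrably passes."""
    passed = [d for d in DIMENSIONS if capabilities.get(d, False)]
    return len(passed) / len(DIMENSIONS), passed

# A thermostat predicts and achieves goals w.r.t. room temperature,
# but can neither explain it nor re-create it in simulation.
score, passed = understanding_score({"predict": True, "achieve_goals": True})
print(score, passed)  # 0.5 ['predict', 'achieve_goals']
```

This makes the "capable but doesn't 'really' understand" intuition from the table concrete: reliable performance covers at most the first two dimensions, leaving explanation and re-creation untested.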