====Why Curiosity?====

| Why Are We Talking About Curiosity? | \\ Curiosity is a really great term for a very complex systemic phenomenon of significant importance to AGI: Motivation. |
| \\ Motivation | A learner without "internal motivation" will not have any reason to learn anything - we call it 'internal motivation' because it is a mechanism (complex or simple) of the cognitive architecture itself (without which "nothing would happen") that gives an agent a tendency to act in a certain way in certain circumstances (and possibly: in general). |
| \\ How is Motivation Programmed? \\ \\ **Drives** | Fundamental motivation is not something that a learner can learn (unless we assume that, as it is "born", there is something in the environment to program that in; assuming that something so highly specific exists and is available in the environment is not a good strategy for ensuring survival if the creature is intended to grow cognitively in a predictable way). \\ The way that nature does this is to provide newborns with some sort of impetus to act in certain ways in certain situations, e.g. cry when hungry. This works most of the time because all living creatures have parents. \\ We call internal motivational factors //**drives**//. |
| \\ Baby Machines | General learners can learn over their lifetime vastly larger amounts of knowledge than they are born with. Such machines are sometimes called 'baby machines'. The drives of baby machines typically must change over their lifetime, especially if they are very good and general learners. \\ In psychology this is called //cognitive development//. \\ Very few - if any - AI systems exist that have demonstrated such a capability.[1] But some form of cognitive development is probably unavoidable in any powerful learning scheme, because what motivational mechanisms you need when you know very little are likely to be very different from useful motivational mechanisms that work well when you know a lot (when you have learned most of the fundamental principles of how your world works, your old learning mechanisms are unlikely to be as efficient or relevant as they were in the beginning). |
| Footnote | [1] Mind you, it should not be too hard to create a system that //appears// to demonstrate cognitive development, just as it isn't difficult to write a for-loop called "thinking". The mechanisms demonstrated in a real cog-dev system should also demonstrate the //need// for such a capacity, and that they happen //autonomously// in the learning process that the system implements. |
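The idea of a drive as a built-in mechanism "without which nothing would happen" can be sketched in a few lines of code. This is a minimal illustration, not any specific cognitive architecture; the names (''Drive'', ''Agent'', the hunger example) are invented for this sketch.

```python
# Minimal sketch of a 'drive' as an innate motivational mechanism.
# Illustrative only - not taken from any specific cognitive architecture.

class Drive:
    """An internal state that builds up and, past a threshold, forces action."""
    def __init__(self, name, threshold, action):
        self.name = name
        self.level = 0.0          # internal state, grows over time
        self.threshold = threshold
        self.action = action      # behavior triggered when the threshold is crossed

    def update(self, delta):
        self.level += delta

    def urgent(self):
        return self.level >= self.threshold


class Agent:
    """An agent that acts only because its innate drives push it to act."""
    def __init__(self, drives):
        self.drives = drives

    def step(self):
        # With no drives, nothing is urgent - "nothing would happen".
        triggered = [d for d in self.drives if d.urgent()]
        for d in triggered:
            d.action()
            d.level = 0.0         # acting satisfies the drive for a while
        return [d.name for d in triggered]


log = []
hunger = Drive("hunger", threshold=1.0, action=lambda: log.append("cry"))
baby = Agent([hunger])

for _ in range(3):
    hunger.update(0.5)            # hunger builds up each time step
    baby.step()                   # the baby cries only when hunger peaks
```

Note that the drive itself is part of the agent's architecture, not learned: remove it and the loop runs but the agent never does anything - which is exactly the point made above.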
\\
\\
====What is Creativity?====

| \\ The Word | The word 'creativity' has many meanings. \\ The simplest meaning is typically that "you're creative if you think of something that nobody else thought of". \\ A better meaning in our context is the ability of an intelligent system to produce non-obvious solutions to problems. \\ Creativity is about **producing** something. |
| Why it's Important | Ultimately we want creative machines. It is difficult to tease apart the concepts of intelligence and creativity: It is hard to imagine a great intelligence that is not creative. Likewise, it is also difficult to imagine a creative agent that is not intelligent. |
| Creativity Without Intelligence? | The relation between creativity and intelligence may not be bijective: while it is difficult to imagine a highly intelligent system that is not creative, it is not //as// difficult to imagine an (artificial) system that is creative but not intelligent. This is especially true if we assume there are other (natural) intelligences around to make sense of what this "non-intelligent creative system" produces. |
| Aaron | http://prostheticknowledge.tumblr.com/post/20734326468/aaron-the-first-artificial-intelligence-creative \\ https://www.youtube.com/watch?v=3PA-XApZkso |
| \\ Thaler's \\ Creativity Machine | http://www.imagination-engines.com \\ CM patented in 1994 \\ A few years later: the CM makes an invention that gets a patent from the United States Patent & Trademark Office (USPTO) \\ [[http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F5659666|CM patent]] \\ What it is: ANN, becomes "creative" by "relaxing some parameters" so that the ANN "begins to hallucinate". |
| Are these machines creative? | Maybe - in some sense of the concept. \\ Are they //truly// creative? Probably not. \\ How so? What does it mean to be "truly creative"? \\ That would be the **full monty**: The ability to see unique, valuable solutions to a wide range of challenging problems better than others. |
| Creativity is \\ a Relative Term | It is somewhat unavoidable to interpret the concept of 'creativity' as a relative term - i.e. "person (or system) X is more creative than person (or system) Y" - as no absolute scale for it exists, as of yet. (It is possible that AI / AGI research may one day develop such a scale.) |
| Intermediate Conclusion | To answer the question "Do creative machines exist?" we must inspect the concept of creativity in more detail. |
| Main Methodology | The foundation of CYC is formal logic, represented in predicate logic statements and compound structures. |
| Key Results | Results from the CYC project are similar to the expert systems of the 80s - these systems are brittle and unpredictable. \\ The state of the CYC system as of 2016 is described in this nicely written essay: [[https://www.technologyreview.com/s/600984/an-ai-with-30-years-worth-of-knowledge-finally-goes-to-work/|REF]]. |
| \\ Two Problems | Upon further scrutiny, no good analysis or arguments exist of why 'understanding' should be equated with 'common sense'. The two are simply not the same thing. \\ Furthermore, progress under the rubric of 'common sense' in AI has neither produced any grand results nor evidence that the methodology followed is a promising one. And it certainly doesn't seem to have inspired fresh ideas in a very long time. |
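The Creativity Machine's trick of "relaxing some parameters" so the network "begins to hallucinate" can be illustrated with a toy model: perturb a trained model's weights with noise and it produces outputs it was never trained to give. The tiny linear "network" below is an invented illustration of that general idea, not Thaler's actual system.

```python
# Toy illustration of the "relax parameters until the network hallucinates"
# idea: jitter learned weights with Gaussian noise to get novel outputs.
# This linear model stands in for an ANN; it is not Thaler's actual CM.
import random

random.seed(42)                      # fixed seed for reproducibility

weights = [0.5, -0.2, 0.8]           # pretend these weights were learned

def forward(x, w):
    """The network's normal, deterministic response."""
    return sum(xi * wi for xi, wi in zip(x, w))

def hallucinate(x, w, noise=0.3):
    """'Relax' the parameters: jitter each weight, yielding a novel output."""
    noisy = [wi + random.gauss(0.0, noise) for wi in w]
    return forward(x, noisy)

x = [1.0, 2.0, 3.0]
baseline = forward(x, weights)       # the trained, "uncreative" answer
variants = [hallucinate(x, weights) for _ in range(5)]
```

Whether the resulting variation counts as creativity is exactly the question raised above: the mechanism produces novelty, but it takes an outside intelligence to judge which of the variants - if any - is a valuable solution.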
\\


==== Self-Explaining Systems ====

| What It Is | The ability of a controller to explain, after the fact or before, why it did something or intends to do it. |
| 'Explainability' \\ ≠ \\ 'self-explanation' | If an intelligence X can explain a phenomenon Y, then Y is 'explainable' by X, through some process chosen by X. \\ \\ In contrast, if an intelligence X can explain itself - its own actions, knowledge, understanding, beliefs, and reasoning - it is capable of self-explanation. The latter is stronger and subsumes the former. |
| Why It Is Important | If a controller does something we don't want it to repeat - e.g. crash an airplane full of people (in simulation mode, hopefully!) - it needs to be able to explain why it did what it did. If it can't, it means that it - and //we// - can never be sure of why it did what it did, whether it had any other choice, whether it's an evil machine that actually meant to do it, or how likely it is to do it again. |
| \\ Human-Level AI | Even more importantly, to grow and learn and self-inspect, the AI system must be able to sort out causal chains. If it can't, it will not only be incapable of explaining to others why it is like it is, it will be incapable of explaining to itself why things are the way they are - and thus it will be incapable of sorting out whether something it did is better for its own growth than something else. Explanation is the big black hole of ANNs: In principle ANNs are black boxes, and thus they are in principle unexplainable - whether to themselves or others. \\ One way to address this is by encapsulating knowledge as hierarchical models that are built up over time, and can be de-constructed at any time (like AERA does). |
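The core idea above - record the causal chain behind every action so the system can later de-construct it - can be sketched very simply. The sketch below is an invented minimal illustration of that record-and-explain pattern, not AERA's actual model hierarchy; all names (''ExplainableController'', the altitude example) are made up for this example.

```python
# Minimal sketch of self-explanation via recorded causal chains: every
# action is stored together with the belief and evidence that caused it,
# so the controller can later answer "why did you do X?".
# Illustrative only - not AERA's actual hierarchical model mechanism.

class ExplainableController:
    def __init__(self):
        self.trace = []                      # causal chain: belief -> action

    def decide(self, observation):
        # A deliberately trivial policy; the point is that the *reason*
        # for each action is recorded alongside the action itself.
        if observation["altitude"] < 100:
            belief, action = "altitude critically low", "pull_up"
        else:
            belief, action = "altitude nominal", "hold_course"
        self.trace.append({"because": belief, "did": action,
                           "evidence": dict(observation)})
        return action

    def explain(self, action):
        """De-construct the trace: why was this action taken?"""
        return [step for step in self.trace if step["did"] == action]


ctrl = ExplainableController()
ctrl.decide({"altitude": 50})                # triggers the pull_up rule
ctrl.decide({"altitude": 5000})              # triggers hold_course
why = ctrl.explain("pull_up")                # causal record for pull_up
```

A black-box policy gives only the action; here the same decision step also yields the belief and evidence behind it, which is what makes after-the-fact explanation - to others or to the system itself - possible at all.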