[[public:t-720-atai:atai-20:main|T-720-ATAI-2020 Main]] \\
[[public:t-720-atai:atai-20:Lecture_Notes|Links to Lecture Notes]]
\\
\\
| \\ When is the quality of a program evaluated? | a) After execution, according to its actual contribution [G] \\ b) Before execution, according to its definition or historical record [I, S, P] \\ c) Both of the above [A, E, R] \\ Relevant assumption: \\ Are adaptation and prediction necessary? |
| source | [[http://alumni.media.mit.edu/~kris/ftp/JAGI-Special-Self-Progr-Editorial-ThorissonEtAl-09.pdf|Thórisson et al. 2012]] |
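\\
A minimal sketch of the three evaluation modes in the table above, in Python. The names, numbers and the blending scheme are illustrative assumptions, not from Thórisson et al. 2012:

<code python>
from dataclasses import dataclass, field

@dataclass
class Program:
    name: str
    history: list = field(default_factory=list)   # past contribution scores

def quality_before(p: Program) -> float:
    """b) Before execution: judge by the program's definition / historical record."""
    return sum(p.history) / len(p.history) if p.history else 0.5   # neutral prior

def quality_after(goal_before: float, goal_after: float) -> float:
    """a) After execution: judge by the program's actual, measured contribution."""
    return goal_after - goal_before

def quality_both(p: Program, goal_before: float, goal_after: float) -> float:
    """c) Both: blend the a-priori estimate with the observed outcome."""
    return 0.5 * quality_before(p) + 0.5 * quality_after(goal_before, goal_after)

# A program with a good record whose latest run contributed little:
p = Program("planner-v2", history=[0.8, 0.9])
print(quality_before(p))             # ~0.85 -> option (b)
print(quality_after(0.40, 0.45))     # ~0.05 -> option (a)
print(quality_both(p, 0.40, 0.45))   # ~0.45 -> option (c)
</code>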
\\
\\
==== Integrated Cognitive Control ====
| What it is | The ability of a controller / cognitive system to steer its own structural development - its architectural growth (cognitive growth). The (sub-)system responsible for meta-learning. |
| Cognitive Growth | The structural change resulting from learning in a structurally autonomous cognitive system, the target of which is self-improvement. |
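\\
A toy sketch of such a meta-level loop: when object-level learning progress flattens out, the meta-level changes the architecture itself rather than merely tuning parameters. The class names, the stagnation test and the ''grow()'' operation are all hypothetical assumptions, not taken from any particular system:

<code python>
class Architecture:
    """Object-level system; a list of components stands in for real structure."""
    def __init__(self):
        self.components = ["perceive", "act"]

    def grow(self):
        # Placeholder structural change: add a new processing component.
        self.components.append(f"module-{len(self.components)}")

class MetaController:
    """Meta-level: watches learning progress and triggers cognitive growth."""
    def __init__(self, architecture, window=5, epsilon=0.01):
        self.architecture = architecture
        self.window = window      # number of measurements used to judge stagnation
        self.epsilon = epsilon    # progress spread below this counts as stagnation
        self.progress_log = []

    def observe(self, progress):
        self.progress_log.append(progress)
        recent = self.progress_log[-self.window:]
        if len(recent) == self.window and max(recent) - min(recent) < self.epsilon:
            self.architecture.grow()   # structural change, not parameter tuning
            self.progress_log.clear()

# Usage: five nearly flat progress readings trigger one growth step.
meta = MetaController(Architecture())
for p in [0.500, 0.501, 0.502, 0.501, 0.502]:
    meta.observe(p)
print(meta.architecture.components)   # ['perceive', 'act', 'module-2']
</code>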
\\
\\
==== Reliability ====
| What It Is | The ability of a machine to always return the same - or a similar - answer to the same input. |
| Why It Is Important | Simple machine learning algorithms are very good in this respect, delivering high reliability. Human-level AI, on the other hand, may have the same limitations as humans in this respect, i.e. not being able to give any guarantees. |
| Human-Level AI | To make human-level AI reliable is important because a human-level AI without reliability cannot be trusted, and hence would defeat most of the purpose of creating it in the first place. (AERA proposes a method for this, through continuous pee-wee model generation and refinement.) |
| To Achieve Reliability | Requires **predictability**. Predictability requires sorting out //causal relations// (without these we can never be sure what led to what). |
| Predictability is Hard to Achieve | In a growing, developing system that is adapting and learning (three or four levels of dynamics at once!), predictability can only be achieved through **abstraction**: moving up to a coarser level of description (e.g. I cannot be sure //what exactly// I will eat for dinner, but I can be pretty sure that I //will// eat dinner). |
| Achieving Abstraction | Can be done through hierarchy - but it needs to be //dynamic//, i.e. tailored to its intended usage as the circumstances call for, because the world's combinatorics are too complex to store precomputed hierarchies for everything. |
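\\
A minimal sketch of abstraction-for-predictability, assuming a hand-built hierarchy of claims with made-up confidences (a real system, as the table notes, would have to construct such hierarchies dynamically). Prediction is made at the most concrete level whose confidence clears the required threshold:

<code python>
# Claims ordered from concrete to abstract, with assumed confidences.
hierarchy = [
    ("I will eat pasta for dinner", 0.30),   # fine-grained: low confidence
    ("I will eat dinner at home",   0.70),
    ("I will eat dinner",           0.98),   # abstract: high confidence
]

def predict(min_confidence):
    """Return the most concrete claim that is still predictable enough."""
    for claim, confidence in hierarchy:
        if confidence >= min_confidence:
            return claim
    return "no prediction possible"

print(predict(0.9))   # -> I will eat dinner
print(predict(0.5))   # -> I will eat dinner at home
</code>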
\\
| |
==== Explainability ====
| |
| What It Is | The ability of a controller to explain, after the fact or beforehand, why it did or intends to do something. |
| Why It Is Important | If a controller does something we don't want it to repeat - e.g. crash an airplane full of people - it needs to be able to explain why it did what it did. If it can't, we can never be sure why this autonomous system did what it did, or even whether it had any other choice. |
| \\ Human-Level AI | Even more importantly, to grow, learn and self-inspect, an AI system must be able to sort out causal chains. If it can't, it will not only be incapable of explaining to others why it is the way it is, it will be incapable of explaining to itself why things are the way they are - and thus incapable of sorting out whether something it did is better for its own growth than something else. Explanation is the big black hole of ANNs: in principle ANNs are black boxes, and thus in principle unexplainable - whether to themselves or to others. \\ AERA tries to address this by encapsulating knowledge as hierarchical models that are built up over time and can be de-constructed at any time. |
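\\
A toy sketch of that last point: if knowledge is held as small models with explicit premises - in the spirit of AERA's hierarchical models, though the model contents below are invented for illustration - then an explanation is just the causal chain recovered by walking those premises back:

<code python>
# Each conclusion maps to the premises (causes) the system recorded for it.
models = {
    "stall":                ["airspeed-low", "angle-of-attack-high"],
    "airspeed-low":         ["throttle-reduced"],
    "angle-of-attack-high": ["pitch-up-command"],
}

def explain(conclusion, depth=0):
    """Print the causal chain behind `conclusion` by de-constructing the models."""
    print("  " * depth + conclusion)
    for premise in models.get(conclusion, []):
        explain(premise, depth + 1)

explain("stall")
# stall
#   airspeed-low
#     throttle-reduced
#   angle-of-attack-high
#     pitch-up-command
</code>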
| |
\\
\\