======Lecture Notes: Architectures======
====Self-Programming====
| What it is | //...// |
| Self-Generated Program | ... |
| Historical note | The concept of self-programming is old (J. von Neumann was one of the first to talk about self-replication in machines). However, few if any proposals for how to achieve it have been fielded. |
\\
\\
====Levels of Self-Programming====
| Level 1 | Level-one self-programming capability is the ability of a system to make programs that exclusively make use of the primitive actions in its action set. |
| Level 2 | Level-two self-programming systems can do what Level-1 systems can, and can additionally generate new primitives. |
| Level 3 | Level-three self-programming adds the ability to change the principles by which Levels 1 and 2 operate; in other words, Level-three systems are capable of what we here call meta-programming. This involves changing or replacing some or all of the programs provided to the system at design time. Of course, the generation of primitives and the changing of principles are themselves controlled by programs (see the sketch after this table). |
| Infinite regress? | ... |
| Likely to be many ways? | For AGI, the set of relevant self-programming approaches is likely to be much smaller than the set typically discussed in computer science, and in all likelihood much smaller than often implied in AGI work. |
| Architecture | ... |
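To make the three levels concrete, here is a minimal sketch; it assumes nothing about any particular system, and all names (''Agent'', ''random_composition'', the primitives) are hypothetical illustrations. A Level-1 system composes programs from a fixed primitive set, a Level-2 system can grow that set, and a Level-3 system can replace the program-generating principle itself.

<code python>
# Hypothetical sketch of the three self-programming levels; illustrative only.
import random

class Agent:
    def __init__(self, primitives):
        self.primitives = dict(primitives)        # name -> primitive action
        self.generate = self.random_composition   # the program-making principle

    # Level 1: build programs exclusively from existing primitive actions.
    def random_composition(self, length=3):
        steps = random.choices(list(self.primitives), k=length)
        def program(x):
            for name in steps:
                x = self.primitives[name](x)
            return x
        return program

    # Level 2: additionally generate brand-new primitives.
    def add_primitive(self, name, fn):
        self.primitives[name] = fn

    # Level 3: change the principle by which Levels 1 and 2 operate
    # (meta-programming: replacing a program provided at design time).
    def replace_generator(self, new_generator):
        self.generate = new_generator

agent = Agent({"inc": lambda x: x + 1, "dbl": lambda x: x * 2})
program = agent.generate()                        # Level 1
agent.add_primitive("neg", lambda x: -x)          # Level 2
agent.replace_generator(lambda: agent.random_composition(length=5))  # Level 3
print(program(1), agent.generate()(1))
</code>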
\\
\\
====Existing Systems Which Target Self-Programming====
^ Label ^ What ^ Example ^
| [S] | State-space search | ... |
| [P] | Production system | ... |
| [R] | Reinforcement learning | ... |
| [G] | Genetic programming | ... (see toy sketch below) |
| [I] | Inductive logic programming | ... |
| [E] | Evidential reasoning | ... |
| [A] | Autocatalytic model-driven bi-directional search | AERA (described below) |
| Source | ... |
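As a concrete illustration of one entry, here is a toy sketch of approach [G] (genetic programming). It is a generic, hypothetical example, not code from any system in the table: candidate programs are sequences of primitive actions, and the fittest sequences are kept and mutated to produce the next generation.

<code python>
# Toy genetic-programming loop (approach [G]); purely illustrative.
# Programs are action sequences; fitness is distance to a numeric target.
import random

PRIMITIVES = {"inc": lambda x: x + 1, "dec": lambda x: x - 1, "dbl": lambda x: x * 2}
TARGET = 24

def run(program, x=1):
    for name in program:
        x = PRIMITIVES[name](x)
    return x

def mutate(program):
    child = list(program)
    child[random.randrange(len(child))] = random.choice(list(PRIMITIVES))
    return child

population = [random.choices(list(PRIMITIVES), k=6) for _ in range(30)]
for generation in range(200):
    population.sort(key=lambda p: abs(run(p) - TARGET))   # best first
    if run(population[0]) == TARGET:
        break
    survivors = population[:10]                           # selection
    population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

print(generation, population[0], "->", run(population[0]))
</code>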
\\
\\
====The AERA System====
The Auto-catalytic Endogenous Reflective Architecture (AERA) is an AGI-aspiring architectural blueprint that was produced as part of the HUMANOBS FP7 project. It encompasses several fundamentally new ideas in the history of AI, including a new programming language specifically conceived to solve some major limitations of prior efforts in this respect, including self-inspection and self-representation.
AERA's knowledge is stored in models, which essentially encode transformations on input to produce output. Models have a trigger side (left-hand side) and a result side (right-hand side). In a forward-chaining scenario, when a particular piece of data matches the left-hand side of a model (the match may only be tested if the data has high enough saliency and the program has sufficient activation), the model fires, producing the output specified by its right-hand side and injecting it into a global memory store. The semantics of the output is prediction, and the semantics of the input is either fact or prediction. Notice that a model in AERA is not a production rule; a model relating A to B does not mean "A entails B", it means //A predicts B//, and it has an associated confidence value. Such models stem invariably (read: most of the time) from the system's experience.
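A minimal sketch of this forward-chaining step follows, under assumptions made purely for illustration: the class and field names below are ours, not AERA's actual Replicode constructs, and the thresholds are made up. A model may only test its trigger against sufficiently salient data, and firing injects a prediction, weighted by the model's confidence, into global memory.

<code python>
# Illustrative sketch of AERA-style forward chaining; names are our own,
# not actual Replicode constructs.

SALIENCY_THRESHOLD = 0.5    # data below this gets no attention
ACTIVATION_THRESHOLD = 0.5  # models below this may not fire

class Datum:
    def __init__(self, content, saliency=1.0, kind="fact"):
        self.content = content        # e.g. ("ball", "released")
        self.saliency = saliency
        self.kind = kind              # "fact" or "prediction"

class Model:
    def __init__(self, lhs, rhs, confidence, activation=1.0):
        self.lhs = lhs                # trigger side (left-hand side)
        self.rhs = rhs                # result side (right-hand side)
        self.confidence = confidence  # "A predicts B", not "A entails B"
        self.activation = activation

    def try_fire(self, datum):
        # Matching is only attempted on salient data by active models.
        if datum.saliency < SALIENCY_THRESHOLD or self.activation < ACTIVATION_THRESHOLD:
            return None
        if datum.content == self.lhs:
            # Output semantics: a prediction carrying the model's confidence.
            return Datum(self.rhs, saliency=datum.saliency * self.confidence,
                         kind="prediction")
        return None

def forward_chain(memory, models):
    # One pass: test every model against every datum in global memory,
    # then inject any resulting predictions back into memory.
    new = []
    for datum in memory:
        for model in models:
            prediction = model.try_fire(datum)
            if prediction:
                new.append(prediction)
    memory.extend(new)
    return new

memory = [Datum(("ball", "released"))]
models = [Model(lhs=("ball", "released"), rhs=("ball", "falls"), confidence=0.9)]
for p in forward_chain(memory, models):
    print(p.kind, p.content, p.saliency)   # prediction ('ball', 'falls') 0.9
</code>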