====== The AERA System ======

The Auto-catalytic Endogenous Reflective Architecture – AERA – is an AGI-aspiring architectural blueprint that was produced as part of the HUMANOBS FP7 project. It encompasses several ideas that are fundamentally new in the history of AI, including a new programming language specifically conceived to address major limitations of prior efforts in this respect, among them self-inspection and self-representation, distributed representation of knowledge, and distributed reasoning. AERA systems are //any-time//, //real-time//, //incremental/continuous//, //on-line// learning systems.

AERA's knowledge is stored in models, which essentially encode transformations on inputs to produce outputs. Models have a trigger side (left-hand side) and a result side (right-hand side). In a forward-chaining scenario, when a particular piece of data matches the left-hand side of a model (the match is only tested if the data has high enough saliency and the model has sufficient activation), the model fires, producing the output specified by its right-hand side and //injecting// it into a global memory store. The semantics of the output is //prediction//, and the semantics of the input is either //fact// or //prediction//. Notice that a model in AERA is **not** a production rule; a model relating <m>A</m> to <m>B</m> does not mean "A entails B", it means //A predicts B//, and it has an associated confidence value. Such models stem almost invariably from the system's experience, and in the early stages of learning an AERA-based system's set of models may consist mostly of fairly useless and bad models, all with relatively low confidence values ("not all models are created equal -- some are in fact better than others").

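To make the forward-chaining cycle concrete, below is a minimal, hypothetical Python sketch (not Replicode, and not the actual AERA Executive); the names ''Model'', ''Fact'' and ''inject'', and the threshold values, are illustrative assumptions. It shows a model whose left-hand side is matched against sufficiently salient input, gated by the model's activation, and which injects a //prediction// carrying the model's confidence into a global memory store.

<code python>
# Hypothetical sketch of AERA-style forward chaining (illustrative only, not Replicode).
from dataclasses import dataclass

SALIENCY_THRESHOLD = 0.5    # assumed cutoff: data below this is not visible to programs
ACTIVATION_THRESHOLD = 0.5  # assumed cutoff: models below this are not allowed to run

@dataclass
class Fact:
    pattern: str             # e.g. "door-pushed"
    kind: str = "fact"       # "fact", "prediction" or "goal"
    saliency: float = 1.0
    confidence: float = 1.0

@dataclass
class Model:
    lhs: str                 # trigger pattern (left-hand side)
    rhs: str                 # result pattern (right-hand side)
    confidence: float        # how reliable this model has proven so far
    activation: float = 1.0

memory: list[Fact] = []      # stand-in for AERA's single global memory

def inject(item: Fact) -> None:
    memory.append(item)

def forward_chain(model: Model, datum: Fact) -> None:
    # The match is only tested if the datum is salient enough
    # and the model is sufficiently activated.
    if datum.saliency < SALIENCY_THRESHOLD or model.activation < ACTIVATION_THRESHOLD:
        return
    if datum.pattern == model.lhs:
        # The model fires: its output is a *prediction*, not an entailment,
        # and it carries the model's confidence value.
        inject(Fact(model.rhs, kind="prediction", confidence=model.confidence))

# Usage: a model saying "door-pushed predicts door-opens" with confidence 0.8.
m = Model(lhs="door-pushed", rhs="door-opens", confidence=0.8)
forward_chain(m, Fact("door-pushed"))
</code>
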
AERA's knowledge is stored in models, which essentially encode transformations on input, to produce output. Models have a trigger side (left-hand side) and a result side (right-hand side). In a forward-chaining scenario, when a particular piece of data matches on the left hand of a model (it is only allowed to test the match if the data has high enough saliency and the program has sufficient activation) the model fires, producing the output specified by its left-hand side and //injecting// it into a global memory store. The semantics of the output is //prediction//, and the semantics of the input is either //fact// or //prediction//. Notice that a model in AERA is **not** a production rule; a model relating <m>A</m> to <m>B</m> does not mean "A entails B", it means //A predicts B//, and it has an associated confidence value. Such models stem invariably (read: most of the time) from the system's experience, and in early stages of learning an AERA-based system's set of models may mostly consist fairly useless and bad models, all with relatively low confidence values. | In backward-chaining -- to implement the process of //abduction// -- models act the other way around, namely, when some data match the right-hand side, a model produces new data patterned after its left side, whose semantics essentially state that "if you want a <m>B</m> (on the right-hand side) perhaps it would help to get an <m>A</m> (term on the left-hand side). The semantics of either (both the input and output) is "goal". |
| |
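Continuing the same illustrative sketch (reusing the hypothetical ''Model'', ''Fact'' and ''inject'' defined above), backward chaining can be pictured as matching a //goal// against a model's right-hand side and injecting a subgoal patterned after its left-hand side:

<code python>
# Backward chaining (abduction), continuing the illustrative sketch above:
# a goal matching a model's right-hand side yields a subgoal for its left-hand side.

def backward_chain(model: Model, goal: Fact) -> None:
    if goal.kind != "goal" or model.activation < ACTIVATION_THRESHOLD:
        return
    if goal.pattern == model.rhs:
        # "If you want B, perhaps it would help to get A."
        inject(Fact(model.lhs, kind="goal", confidence=model.confidence))

# Usage: wanting "door-opens" produces the subgoal "door-pushed".
backward_chain(m, Fact("door-opens", kind="goal"))
</code>
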
A key principle of AERA operation is that of a **distributed production** process: Each program in AERA has a level of **activation** that determines if it is allowed to run or not. Every piece of data has a corresponding **saliency** level that determines how visible it is inside the system. In AERA there is a single memory, but it (typically) embeds groups that allow sets of data and programs to be addressed, e.g. changing their activation, saliency, or existence (via creation or deletion).

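As a rough illustration of how groups let sets of programs and data be addressed collectively, the following hypothetical Python sketch (again, not Replicode; ''Group'' and ''scale_group'' are assumed names) adjusts the activation of a group's programs and the saliency of its data in one operation:

<code python>
# Hypothetical sketch (not Replicode): a group inside the single memory,
# addressed as a whole to change how runnable its programs are and
# how visible its data is.
from dataclasses import dataclass, field

@dataclass
class Group:
    name: str
    programs: list = field(default_factory=list)  # objects with an .activation level
    data: list = field(default_factory=list)      # objects with a .saliency level

def scale_group(group: Group, activation_factor: float, saliency_factor: float) -> None:
    # One operation on the group adjusts all of its members.
    for program in group.programs:
        program.activation *= activation_factor
    for datum in group.data:
        datum.saliency *= saliency_factor

# Usage: a group can effectively be "muted" by driving activation and saliency
# toward zero; creating or deleting the Group changes its members' existence.
</code>
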
Behind this system lie five main design principles:
==== Replicode ====

The programming language of AERA, **Replicode**, was created to address shortcomings found in //all prior programming languages//. Enumerating the key ones we get the following list:

  - Syntax and semantics meant for humans
  - Inefficiencies in execution stemming from item 1
  - Lack of support for induction and abduction as first-class logical operations
  - Lack of ability to model own behavior, at fine levels of detail
  - Lack of ways to handle passage and representation of (external and internal) time
  - Lack of support for self-generated code

Replicode (used with the AERA Executive) addresses all of these, and therefore exhibits the following unique features:

  * Machine-readable operational semantics
  * Direct support for unified logical operations: induction, deduction, abduction
  * Extremely fast execution of logical and distributed operations
  * Bijective compilation: a 1:1 mapping between Replicode structures and their compiled byte code, so that a particular byte code is represented uniquely and bijectively as a particular piece of human-readable Replicode code (see the sketch after this list)

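To illustrate what bijective compilation means in practice, here is a toy Python sketch using a hypothetical length-prefixed encoding (not the actual Replicode byte code): compiling and then decompiling any structure returns exactly the original, so each byte sequence corresponds to exactly one human-readable form.

<code python>
# Toy illustration of bijective compilation (hypothetical encoding, not the
# actual Replicode byte code): decompile(compile(s)) == s for every structure s.

def compile_expr(tokens: list[str]) -> bytes:
    # Encode each token as a length-prefixed UTF-8 string.
    out = bytearray()
    for tok in tokens:
        raw = tok.encode("utf-8")
        out.append(len(raw))
        out.extend(raw)
    return bytes(out)

def decompile_expr(code: bytes) -> list[str]:
    # Invert the encoding exactly: the mapping loses no information.
    tokens, i = [], 0
    while i < len(code):
        n = code[i]
        tokens.append(code[i + 1:i + 1 + n].decode("utf-8"))
        i += 1 + n
    return tokens

source = ["mdl", "door-pushed", "door-opens"]
assert decompile_expr(compile_expr(source)) == source   # round trip is exact
</code>
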
Replicode has the following distinguishing features:
  * Designed to handle a vast number of parallel processing tasks

While some – or possibly //all// – of these features may be found in the singular, or possibly in pairs, in other programming languages, we are pretty sure no current programming language embodies them //all in one and the same system//. They are //all// needed to achieve what was envisioned with the AERA architecture, and without them AERA would not perform.

==== AERA ====