====== The AERA System ======

The Auto-catalytic Endogenous Reflective Architecture – AERA – is an AGI-aspiring architectural blueprint that was produced as part of the [[http://www.humanobs.org|HUMANOBS]] FP7 project. It encompasses several ideas that are fundamentally new in the history of AI, including a new programming language conceived specifically to address major limitations of prior efforts, among them self-inspection and self-representation, distributed representation of knowledge, and distributed reasoning. AERA systems are //any-time//, //real-time//, //incremental/continuous learning//, //on-line learning// systems.

AERA's knowledge is stored in models, which essentially encode transformations on input to produce output. Models have a trigger side (left-hand side) and a result side (right-hand side). In a forward-chaining scenario, when a particular piece of data matches the left-hand side of a model (the match may only be tested if the data has high enough saliency and the program has sufficient activation), the model fires, producing the output specified by its right-hand side and //injecting// it into a global memory store. The semantics of the output is //prediction//, and the semantics of the input is either //fact// or //prediction//. Notice that a model in AERA is **not** a production rule; a model relating <m>A</m> to <m>B</m> does not mean "A entails B", it means //A predicts B//, and it has an associated confidence value. Such models stem invariably (read: most of the time) from the system's experience, and in the early stages of learning an AERA-based system's set of models may consist mostly of fairly useless and bad models, all with relatively low confidence values ("not all models are created equal -- some are in fact better than others").
In backward chaining -- implementing the process of //abduction// -- models act the other way around: when some data match the right-hand side, a model produces new data patterned after its left-hand side, whose semantics essentially state that "if you want a <m>B</m> (the term on the right-hand side), perhaps it would help to get an <m>A</m> (the term on the left-hand side)". The semantics of both the input and the output is "goal".

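To make the two chaining modes concrete, here is a minimal sketch in Python (not Replicode); the class and field names are illustrative assumptions rather than AERA's actual data structures:

<code python>
# Illustrative sketch only: an AERA-style model relating a left-hand-side
# pattern to a right-hand-side pattern, with an associated confidence value.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Item:
    semantics: str   # "fact", "prediction", or "goal"
    pattern: str     # e.g. "A" or "B"

@dataclass
class Model:
    lhs: str            # trigger side (left-hand side)
    rhs: str            # result side (right-hand side)
    confidence: float   # learned from experience; low for poor models

    def forward(self, data: Item) -> Optional[Item]:
        # Forward chaining: a fact or prediction matching the LHS
        # yields a prediction of the RHS.
        if data.semantics in ("fact", "prediction") and data.pattern == self.lhs:
            return Item("prediction", self.rhs)
        return None

    def backward(self, goal: Item) -> Optional[Item]:
        # Backward chaining (abduction): a goal matching the RHS
        # yields a subgoal for the LHS.
        if goal.semantics == "goal" and goal.pattern == self.rhs:
            return Item("goal", self.lhs)
        return None

# A model stating "A predicts B" with a modest confidence value.
m = Model(lhs="A", rhs="B", confidence=0.4)
print(m.forward(Item("fact", "A")))   # Item(semantics='prediction', pattern='B')
print(m.backward(Item("goal", "B")))  # Item(semantics='goal', pattern='A')
</code>
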
A key principle of AERA's operation is that of a **distributed production** process: each program in AERA has a level of **activation** that determines whether it is allowed to run, and every piece of data has a corresponding **saliency** level that determines how visible it is inside the system. AERA has a single memory, but it (typically) embeds groups that allow sets of data and programs to be addressed together, e.g. changing their activation, saliency, or existence (via creation or deletion).

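As a rough illustration of this gating, the following sketch assumes simple numeric thresholds and a hypothetical ''Group'' container; AERA's actual control mechanism is more elaborate:

<code python>
# Illustrative sketch only: activation/saliency gating and group-wise control.
ACTIVATION_THRESHOLD = 0.5   # below this a program may not run
SALIENCY_THRESHOLD = 0.5     # below this a piece of data is not visible

def may_attempt_match(program_activation: float, data_saliency: float) -> bool:
    """A model may only test a match when both gates are open."""
    return (program_activation >= ACTIVATION_THRESHOLD
            and data_saliency >= SALIENCY_THRESHOLD)

class Group:
    """A set of programs and data whose control values can be changed together."""
    def __init__(self) -> None:
        self.activation = {}   # program id -> activation level
        self.saliency = {}     # data id -> saliency level

    def scale_activation(self, factor: float) -> None:
        # Address a whole set of programs at once, e.g. to suppress or boost them.
        for pid in self.activation:
            self.activation[pid] *= factor

    def scale_saliency(self, factor: float) -> None:
        # Likewise for the visibility of a whole set of data.
        for did in self.saliency:
            self.saliency[did] *= factor
</code>
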
Behind this system lie five main design principles: