The AERA System

The Auto-catalytic Endogenous Reflective Architecture – AERA – is an AGI-aspiring architectural blueprint produced as part of the HUMANOBS FP7 project. It encompasses several ideas that are fundamentally new in the history of AI, including a new programming language specifically conceived to address major limitations of prior efforts in this respect, among them self-inspection and self-representation, distributed representation of knowledge, and distributed reasoning. AERA systems are any-time, real-time, on-line systems that learn incrementally and continuously.

AERA's knowledge is stored in models, which essentially encode transformations on input to produce output. Models have a trigger side (left-hand side) and a result side (right-hand side). In a forward-chaining scenario, when a particular piece of data matches the left-hand side of a model (the match may only be tested if the data has high enough saliency and the program sufficient activation), the model fires, producing the output specified by its right-hand side and injecting it into a global memory store. The semantics of the output is prediction; the semantics of the input is either fact or prediction. Notice that a model in AERA is not a production rule: a model relating <m>A</m> to <m>B</m> does not mean “A entails B”, it means A predicts B, with an associated confidence value. Such models stem almost invariably from the system's experience, and in early stages of learning an AERA-based system's set of models may consist mostly of fairly useless and bad models, all with relatively low confidence values (“not all models are created equal – some are in fact better than others”).
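
As a concrete illustration of forward chaining, here is a minimal Python sketch – all names are hypothetical, and this is not actual Replicode or AERA code: a model whose left-hand side matches an input fact fires and injects a prediction, weighted by the model's confidence, into a shared memory.

  # Illustrative sketch of forward chaining: a model fires on matching
  # input and injects a *prediction* (not a logical entailment) into memory.
  from dataclasses import dataclass

  @dataclass
  class Fact:
      pattern: str                  # e.g. "pick-up(obj)"
      is_prediction: bool = False
      confidence: float = 1.0

  @dataclass
  class Model:
      lhs: str                      # trigger side
      rhs: str                      # result side
      confidence: float             # learned from experience

      def try_fire(self, fact):
          if fact.pattern == self.lhs:      # pattern match (vastly simplified)
              return Fact(self.rhs, is_prediction=True,
                          confidence=fact.confidence * self.confidence)
          return None

  memory = [Fact("pick-up(obj)")]
  m1 = Model(lhs="pick-up(obj)", rhs="hand-attached-to(obj)", confidence=0.9)
  for fact in list(memory):
      prediction = m1.try_fire(fact)
      if prediction:
          memory.append(prediction)         # inject into the global store
  print(memory[-1])  # hand-attached-to(obj), is_prediction=True, confidence 0.9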

In backward chaining – which implements the process of abduction – models act the other way around: when some data match the right-hand side, a model produces new data patterned after its left-hand side, whose semantics essentially state that “if you want a <m>B</m> (the term on the right-hand side), perhaps it would help to get an <m>A</m> (the term on the left-hand side)”. The semantics of both the input and the output is “goal”.
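
The reverse direction can be sketched the same way (again purely illustrative Python, not Replicode): a goal matching a model's right-hand side yields a new goal patterned after its left-hand side.

  # Illustrative sketch of backward chaining (abduction): a goal matching a
  # model's right-hand side yields a goal patterned after its left-hand side.
  from dataclasses import dataclass

  @dataclass
  class Model:
      lhs: str
      rhs: str

  def abduce(goal, models):
      # "if you want rhs, perhaps it would help to get lhs"
      return [m.lhs for m in models if m.rhs == goal]

  models = [Model("move-hand(dx)", "object-at-position(dx)")]
  print(abduce("object-at-position(dx)", models))  # ['move-hand(dx)']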

A key principle of AERA operation is that of a distributed production process: each program in AERA has a level of activation that determines whether it is allowed to run. Every piece of data has a corresponding saliency level that determines how visible it is inside the system. In AERA there is a single memory, but it (typically) embeds groups that allow sets of data and programs to be addressed, e.g. changing their activation, saliency, or existence (via creation or deletion).
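
A minimal sketch of this gating, under the assumption of simple numeric thresholds (the names and the thresholding scheme are illustrative, not AERA's actual control structures):

  # Illustrative sketch of distributed production control: a datum is visible
  # only above a group's saliency threshold, and a program may run only if
  # its activation clears the group's activation threshold.
  from dataclasses import dataclass

  @dataclass
  class Group:
      saliency_threshold: float
      activation_threshold: float

  @dataclass
  class Datum:
      pattern: str
      saliency: float

  @dataclass
  class Program:
      activation: float

      def may_process(self, datum, group):
          return (self.activation >= group.activation_threshold and
                  datum.saliency >= group.saliency_threshold)

  g = Group(saliency_threshold=0.5, activation_threshold=0.5)
  p = Program(activation=0.8)
  print(p.may_process(Datum("hand-free", saliency=0.9), g))  # True
  print(p.may_process(Datum("noise", saliency=0.1), g))      # False: invisible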

Behind this system lie five main design principles:

  1. Holistic Design. The desired operation of the system results indirectly from the inter-operation of a multitude of general-purpose underlying processes. In other words, there is no component called “learning” or “planner” and so on; instead, learning and planning are emergent processes that result from the same set of system-wide functions. Moreover, high-level processes (like planning and learning) influence each other, both positively and negatively: they are dynamically coupled, as they both result from the execution of the same knowledge – the very core of the system, its models.
  2. Reflectivity. A system must know what it is doing, when, and at what cost. Enforcing explicit traces of the system's operation allows the building of models of said operation, which is needed for self-control (also called meta-control). In that respect, the architecture shall be applicable to itself, i.e. a control system for the system shall be implementable in the same way the system itself is implemented in its domain. In other words, this principle must be followed if one wants to implement integrated cognitive control.
  3. Uniform Operations: Have all operations controllable in a uniform way using mechanisms as simple as possible. More elaborate control schemes shall be learned by a control system, enabling the system little by little to move towards higher efficiency, more capabilities, and more targeted and focused operation.
  4. Low Granularity of Knowledge Representation: This means that (a) the encoding of knowledge shall be short and concise and, (b) all primitive operations of the system shall focus on one task and take as little time as possible, keeping in mind that higher-level operations result from the coupling of a multitude of said primitive operations. This principle aims at preserving plasticity – the capability of implementing small, incremental changes in the system: this is one of the key requirements for the architecture as it underpins our entire research avenue.
  5. Deep Handling of Time. Account for time at all stages of computation and at all scales – from the scale of an individual operation (e.g. performing a reduction) to the scale of a collective operation (e.g. achieving a goal). This is an essential requirement for a system that (a) has to perform in the real world and (b) has to model its own operation with regard to its expenditure of resources. Additionally, time values shall be represented as intervals, to encode the variable precision and accuracy to be expected in the real world: for example, sensors do not always perform at fixed frame rates, so the ability to model their operation is critical to ensure the reliable operation of their controllers and of the models that depend on their input. Also, the precision of goals and predictions may vary considerably depending on both their time horizons and semantics (a minimal sketch of interval-based time follows this list).
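
To make the interval idea concrete, here is a minimal Python sketch, not actual Replicode time semantics: a prediction or goal carries an [earliest, latest] window rather than a point in time, and a prediction can satisfy a goal only if their windows intersect. All names are illustrative assumptions.

  # Illustrative sketch of interval-based time: predictions and goals carry
  # [earliest, latest] windows; a prediction can satisfy a goal only if the
  # two windows intersect.
  from dataclasses import dataclass

  @dataclass
  class TimeInterval:
      earliest: float   # e.g. microseconds since system start
      latest: float

      def overlaps(self, other):
          return self.earliest <= other.latest and other.earliest <= self.latest

  goal_window = TimeInterval(1000.0, 2000.0)   # when the goal must be achieved
  predicted   = TimeInterval(1500.0, 1800.0)   # a model's temporal prediction
  print(goal_window.overlaps(predicted))       # True: prediction may satisfy goal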

Replicode

The programming language of AERA, Replicode, was created to address shortcomings found in all prior programming languages. The key ones are the following:

  1. Syntax and semantics meant for humans
  2. Inefficiencies in execution stemming from item 1
  3. Lack of support for induction and abduction as first-class logical operations
  4. Lack of ability to model own behavior, at fine levels of detail
  5. Lack of ways to handle passage and representation of (external and internal) time
  6. Lack of support for self-generated code

Replicode (used with the AERA Executive) addresses all of these, and therefore exhibits the following unique features:

  • Machine-readable operational semantics
    • Simple syntax
    • Simple semantics: e.g. no if-then constructs, no loop constructs
  • Methods for distributed partial consolidation and coordination of knowledge (distributed synchronization)
  • Direct support for unified logical operations: induction, deduction, abduction
  • Extremely fast logical and distributed operations execution
  • Bijective compilation: a 1:1 mapping between Replicode structures and their compiled byte code, so that a particular byte code is represented uniquely and bijectively as a particular human-readable Replicode code.

Replicode has the following distinguishing features:

  • Optimized for model-based representation
  • Data-driven execution, via pattern matching
  • Goal-directed processing
  • Support for automatic code generation
  • Explicit handling of time
  • Unified support for auto-generated forward and inverse models
  • Integrated reasoning and reflective (introspective) capabilities
  • Designed to handle a vast number of parallel processing tasks

While some – or possibly all – of these features may be found individually, or perhaps in pairs, in other programming languages, we are fairly confident that no current programming language embodies them all in one and the same system. They are all needed to achieve what was envisioned with the AERA architecture; without them AERA would not perform.

AERA

The AERA system is model-based and model-driven, meaning that all operations are controlled via models. Models are essentially executable code created by the system itself, as an AERA system is typically provided with only a very small set of knowledge up-front by its designers. AERA operation is heavily rooted in the principles of Replicode, relying on its reasoning control mechanisms, but adds principles of auto-catalytic operation, along with general principles for managing a growing set of such auto-generated models and implementing integrated cognitive control – the control of the growth of the system by the system itself.

The main operational loop in AERA is a three-way auto-catalytic interaction between three processes and the memory store. The memory of AERA consists of models, forming an abstraction hierarchy via control and specification relationships. In addition to observed states, assumptions, goals and predictions, the memory contains executable code (models) and as such constitutes an active part of the system: it is the very core of a model-based and model-driven system, and is responsible for most of the computation occurring in the system. At any point in time models are rated for their relevance to presently active goals. The models receiving the highest score are candidates for predicting the turn of events, via simulation, given particular action or inaction on behalf of the system. Those whose predictions most closely match the goals are chosen for execution.
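
The rating-and-selection step just described might be sketched as follows; the scoring function here is a stand-in assumption, not AERA's actual relevance rating:

  # Illustrative sketch of model selection: rate models for relevance to the
  # active goals, then keep the top-scoring candidates for simulation.
  from dataclasses import dataclass

  @dataclass
  class Model:
      rhs: str            # what the model predicts
      confidence: float

  def relevance(model, goals):
      # stand-in scoring: confidence, counted only for goal-relevant models
      return model.confidence if model.rhs in goals else 0.0

  def candidates(models, goals, k=2):
      ranked = sorted(models, key=lambda m: relevance(m, goals), reverse=True)
      return [m for m in ranked[:k] if relevance(m, goals) > 0]

  goals = {"object-at-position(dx)"}
  models = [Model("object-at-position(dx)", 0.9),
            Model("hand-at-position(dx)", 0.8),
            Model("object-at-position(dx)", 0.4)]
  print(candidates(models, goals))   # highest-scoring goal-relevant models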

AERA 3-way auto-catalytic cycles

There are essentially three main auto-catalytic cycles in AERA, realized via interaction between the functions of learning, attention control, and planning. All of these functions are controlled and implemented by models. Planning consists of model forward chaining (simulation / deduction) and backward chaining (abduction / goal regression). In simulation, selected models are run from particular states/settings, e.g. the system's present state, to predict what happens “from now on” if a given action were to be taken, the action in that case being chosen based on a particular goal to be reached. A list of action candidates is produced by backward chaining from a particular goal state, each model's output becoming in effect a “sub-goal”. Starting from the end-goal, the system looks for a model that can produce the end-goal as output. The input of that model then becomes a sub-goal, to be matched in the same way by some other model. This activity continues until a model is found whose input is the present state. If such chaining is successful – that is, models can be found for the full chain from end-goal to present state, each model's output matching a sub-goal generated by a chain of models “hooked” to the end-goal – a successful plan has been produced, one whose (micro-)steps (each step being a model) describe how to achieve the end-goal from the present state.
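
The backward-chaining construction of a plan can be sketched directly from this description (illustrative Python; candidate weighing and cycle handling are omitted):

  # Illustrative sketch of plan construction by backward chaining: starting
  # from the end-goal, find a model producing it, make that model's input the
  # next sub-goal, and stop when a sub-goal matches the present state.
  # (No cycle handling, and the first producer is taken instead of weighing.)
  from dataclasses import dataclass

  @dataclass
  class Model:
      lhs: str   # input / trigger
      rhs: str   # output / result

  def backward_chain(end_goal, present, models):
      chain, goal = [], end_goal
      while goal != present:
          producers = [m for m in models if m.rhs == goal]
          if not producers:
              return None                 # no model produces this sub-goal
          chain.append(producers[0])
          goal = producers[0].lhs         # the model's input becomes a sub-goal
      return list(reversed(chain))        # micro-steps from present to end-goal

  models = [Model("hand-attached-to(obj)", "object-at-position(dx)"),
            Model("pick-up(obj)", "hand-attached-to(obj)"),
            Model("hand-free", "pick-up(obj)")]
  plan = backward_chain("object-at-position(dx)", "hand-free", models)
  print([f"{m.lhs} -> {m.rhs}" for m in plan])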

In AERA abduction and deduction happen simultaneously and continuously. In addition, an induction process attempts to generalize good models, that is, models that have proven by experience to correctly predict future states from present ones. These three logic processes essentially constitute a substantial part of any AERA-based agent's realtime cognition. The auto-catalytic processes generate hierarchies of models that represent external entities and their relationships, and can be used to achieve new goals in the environment as well as to predict the effect of various actions on the world. Models in AERA can span a wide range of levels of abstraction. At the lowest level, models may represent extremely primitive building blocks, equivalent to primitive machine code instructions in a modern CPU. A fine-grained level is necessary to enable the system to acquire novel behaviors by combining the building blocks in new ways. However, models may also capture higher-level behaviors: for instance, given a system capable of perception-guided action, useful manipulation algorithms would be represented as models.
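
The induction side could be caricatured as simple bookkeeping over each model's prediction record, with consistently successful models becoming candidates for generalization; the thresholds below are illustrative assumptions, not AERA's actual criteria:

  # Illustrative sketch of the induction side: track each model's prediction
  # record; models that predict correctly often enough become candidates for
  # generalization. Thresholds are assumptions for illustration.
  from collections import defaultdict

  record = defaultdict(lambda: {"success": 0, "failure": 0})

  def report(model_name, prediction_held):
      record[model_name]["success" if prediction_held else "failure"] += 1

  def generalization_candidates(min_trials=10, min_rate=0.9):
      out = []
      for name, r in record.items():
          trials = r["success"] + r["failure"]
          if trials >= min_trials and r["success"] / trials >= min_rate:
              out.append(name)
      return out

  for _ in range(12):
      report("M4", True)              # M4 keeps predicting correctly
  report("M_bad", False)
  print(generalization_candidates())  # ['M4']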

AERA overview

The focus – or attention – mechanism in AERA steers the system to look at data (by increasing its saliency, or “priority”) that is predicted to be useful for achieving the system's presently active goals. Model creation and subsequent verification (via simulation and execution in the real world) produce material that triggers generalization, enabling the system little by little to move towards higher efficiency, more capabilities, and more targeted and focused operation.

Because the representation is uniform, there is no difference between how knowledge about the external world and knowledge about the internal world is represented. Therefore these knowledge representation mechanisms can be used to represent knowledge of self as well. And because the AERA knowledge representation is very well suited to represent knowledge about time, knowledge about self-growth and goal achievement can be represented straightforwardly too. This has the enormous benefit of allowing cognitive growth to be controlled by precisely the same kinds of mechanisms as the external behavior of the system; therefore, to implement integrated cognitive control one need only duplicate the architecture as described and point it towards itself, that is, towards the external-world copy of itself. The result is a two-tier architecture in which the tiers are identical, except that the lower one deals with the external world and the higher one deals with the lower one – that is, with the system itself.

Knowledge Acquisition Example

Learning is produced via model generation and induction: hypotheses for causal relationships between observed phenomena are posed, which produce models of real-world variables. As an example of the use of models for acquisition, let's look at a case where a model for moving a gripper (displacement in space) and a model for grasping objects result in the observation that when an object is grasped and the gripper is moved to location <m>(x,y,z)</m> over time <m>t</m>, the object also moves to location <m>(x,y,z)</m> over time <m>t</m>. This results in the system injecting a new model for how to move objects (which ultimately may result in a generalization of how to move movable, graspable objects).

Note that this example is geared towards explaining the process, not reproducing actual system output – when the implemented system generates models automatically, the model names are not human-readable.

Let's assume the repository contains these (innate) models:

<m>M1:</m> pick-up<m>(obj)</m> <m>→</m> hand-attached-to<m>(obj)</m>

where pick-up is an external command that implements the grasping function, and hand-attached-to is an observable state of the world – the latter being an example of information provided by a set of externally-provided hand-coded routines connected to physical devices, which continuously report information about the state of the robot and/or the world for a particular period of time. (For the sake of simplicity, time is not taken into account in this example; bear in mind that each model provides temporalized predictions and their temporal scope is controlled through control models.) Model <m>M1</m> above has a prerequisite: in order to pick up an object the hand must not be holding another object (for the sake of the present use-case, we are not interested in modeling an exhaustive and realistic set of preconditions):

<m>M2:</m> hand-free <m>→</m> <m>M1</m>

Another model in the initial bootstrap code is one that models the effects of a move command:

<m>M3:</m> move-hand<m>(dx)</m> <m>→</m> hand-at-position<m>(dx)</m>

where move-hand is an external command linked to the robot’s arm, <m>dx</m> is the amount of displacement (as a vector in 3D), and hand-at-position is an observable state of the world providing the actual displacement with respect to an initial state (not shown here).

Suppose that the system is first given the goal to grasp an object (i.e. goal: hand-attached-to<m>(obj)</m> – which it could discover through e.g. “motor babbling”), and then given the goal to move the hand by a displacement <m>dx</m> (i.e. goal: hand-at-position<m>(dx)</m>). The execution of that command produces a set of data (position of arm, gripper, etc.). By analyzing the data flow it is not hard to find out that – in the case where the hand is actually holding an object – the displacement of the hand has led to the displacement of the attached object as well (indeed, the only fact not predicted by the available set of models is object-at-position<m>(dx)</m>). From this activity the system has acquired the following simple model:

<m>M4:</m> move-hand<m>(dx)</m> <m>→</m> object-at-position<m>(dx)</m>

This model has the following precondition extracted from data:

<m>M5:</m> hand-attached-to<m>(obj)</m> <m>→</m> <m>M4</m>
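
A minimal sketch of how a model like <m>M4</m> could be carved out of the data flow (hypothetical Python, far simpler than AERA's actual model-generation mechanism): any observed fact that no existing model predicted, co-occurring with a command, seeds a new model.

  # Illustrative sketch of model acquisition: after executing a command,
  # compare observed facts against what the existing models predicted; an
  # unpredicted fact co-occurring with the command seeds a new model.
  def acquire_models(command, predicted, observed):
      return [(command, fact) for fact in observed - predicted]

  predicted = {"hand-at-position(dx)"}                    # from M3
  observed  = {"hand-at-position(dx)", "object-at-position(dx)"}
  print(acquire_models("move-hand(dx)", predicted, observed))
  # [('move-hand(dx)', 'object-at-position(dx)')]  -> becomes M4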

Suppose now that the system has the goal to displace a known object by <m>dx</m> and that the hand is not currently holding any object; the goal of the system is therefore the fact object-at-position<m>(dx)</m>. By going backwards (using abduction/backward chaining) the system can find that in order to satisfy this goal it has to activate the model <m>M4</m>. However, this model has a precondition, expressed in the model <m>M5</m>, which is not satisfied (the hand is not attached to the object), so it must first satisfy the goal hand-attached-to<m>(obj)</m>. The system commits to this new (sub)goal, which requires the activation of model <m>M1</m>. This time, the prerequisite of the model <m>M1</m> is satisfied (indeed, the hand is free – see model <m>M2</m>), which implies the actual activation of the model <m>M1</m> and the execution of the command pick-up<m>(obj)</m>. Should this command be executed correctly, the environment will output the fact hand-attached-to<m>(obj)</m>, which satisfies both the current (sub)goal and the prerequisite of the model <m>M4</m>. The model is activated, which implies the execution of the command move-hand<m>(dx)</m> and, as a consequence, the actual displacement of the object to the desired position.
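
The goal regression just walked through can be sketched as follows; for brevity the requirement models <m>M2</m> and <m>M5</m> are folded into precondition fields on <m>M1</m> and <m>M4</m> (illustrative Python, not Replicode):

  # Illustrative sketch of the goal regression above: requirement models
  # M2/M5 appear as precondition fields gating the command models M1/M4.
  from dataclasses import dataclass

  @dataclass
  class Model:
      name: str
      command: str            # left-hand side: command to execute
      effect: str             # right-hand side: fact produced
      precondition: str = ""  # fact that must hold first (cf. M2, M5)

  M1 = Model("M1", "pick-up(obj)", "hand-attached-to(obj)", "hand-free")
  M4 = Model("M4", "move-hand(dx)", "object-at-position(dx)",
             "hand-attached-to(obj)")
  MODELS = [M1, M4]

  def achieve(goal, state, trace):
      if goal in state:
          return
      model = next(m for m in MODELS if m.effect == goal)  # assumes one exists
      if model.precondition and model.precondition not in state:
          achieve(model.precondition, state, trace)        # commit to sub-goal
      trace.append(model.command)                          # execute the command
      state.add(model.effect)                              # environment reports fact

  state, trace = {"hand-free"}, []
  achieve("object-at-position(dx)", state, trace)
  print(trace)   # ['pick-up(obj)', 'move-hand(dx)']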

This example is vastly simplified, to explain the basic principles of model acquisition in AERA; to make this work in a system operating in the real world a number of additional mechanisms are needed, bringing our approach well beyond what might at first sight seem like “a glorified Prolog system”. Of those, two important mechanisms must be mentioned. First, in contrast to other systems, AERA has the ability to add and delete terms without serious penalty to runtime speed. This, coupled with a new way to distribute inferencing mechanisms across nodes, makes it practical to add and remove hypotheses, as well as modified and brand-new models, at rates not possible before – approximately a 1000-fold increase over existing systems. Execution time of other reasoning operations improves comparably. Second, the approach taken in AERA is “non-axiomatic”, in the sense that the beliefs of the system are not derived from a set of pre-defined axioms, but each from a separate evidential basis, using available knowledge and resources. Consequently, the system can operate under significant logical uncertainty yet still be capable of using logical operations – as a result we avoid having to use statistical methods to model the world.



© 2012 Kristinn R. Thórisson & Eric Nivel
