public:t-720-atai:atai-19:aera [2019/10/20 18:50] (current) by thorisson
|  Pervasive Use of Codelets  | A //codelet// is a piece of code that is smaller than a typical self-contained program, typically a few lines long, and can be executed only in particular contexts. Programs are constructed on the fly: the whole system selects which codelets to run, and when, based on its knowledge, its active goals, and the state it finds itself in at any point in time.   |
|  \\ No "Modules"  | Note that the diagram above may give the false impression that AERA consists of four software "modules", "classes", or the like. Nothing could be further from the truth: all of AERA's mechanisms above are sets of functions "welded in with" the operation of the whole system, distributed across a myriad of mechanisms and actions. \\ Does this mean that AERA is spaghetti code, or a mess of a design? On the contrary, the integration and overlap of the various mechanisms that achieve the high-level functions depicted in the diagram are surprisingly clean, simple, and coherent in their implementation and operation. \\ This does not mean, however, that AERA is easy to understand -- mainly because it implements mechanisms and relies on concepts that are //very different// from those of most traditional software systems in computer science.    |
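As a rough illustration of the codelet idea, the sketch below registers small context-gated code fragments and lets a "program" emerge from whichever fragments match the current state and goals. All names here are illustrative; this is not AERA's actual mechanism or syntax.

```python
# Minimal sketch of codelet-style execution: small fragments of code,
# each runnable only in particular contexts, selected at runtime based
# on the system's state and active goals. Purely illustrative.

codelets = []

def codelet(precondition):
    """Register a code fragment gated by a context-matching predicate."""
    def wrap(fn):
        codelets.append((precondition, fn))
        return fn
    return wrap

@codelet(lambda s: s["door"] == "closed" and "open_door" in s["goals"])
def push_handle(s):
    s["handle"] = "down"

@codelet(lambda s: s.get("handle") == "down")
def pull_door(s):
    s["door"] = "open"

state = {"door": "closed", "goals": {"open_door"}}
# The "program" is constructed on the fly: repeatedly run whatever matches.
for _ in range(3):
    for pre, fn in codelets:
        if pre(state):
            fn(state)
print(state["door"])  # → open
```

Note that no codelet calls another directly; the sequencing (handle first, then door) falls out of which preconditions hold at each moment.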

\\
\\

==== General Form of AERA Models ====

|  {{public:t-720-atai:screenshot_2019-10-20_17.07.15.png?300}}  |
|  Models in AERA have a left-hand side (LHS) and a right-hand side (RHS). Read from left to right, a model states "if you see what is on the LHS, then I predict what is on the RHS". Read from right to left, it states "if you want what is on the RHS, try getting what is on the LHS first". The latter is a way to produce sub-goals via abduction; the former is a way to predict the future via deduction.    |
|  This model, called Model_M, predicts that if you see variables 6 and 7 you will see variable 4 some time later (AERA models refer to specific times; the model is somewhat simplified here for convenience). Read from right to left (backward chaining, BWD), it states that if you want variable 4 you should try to obtain variables 6 and 7.     |
|  We call such models "bi-directional causal-relational models" because they can be read in either direction and they model the relations (including causal relations) between variables. Note that models can reference other models on either side and can include patterns on either side. When the values of variables on one side matter for the other side, we use functions belonging to the model to compute those values; due to the bi-directionality, these functions must be bi-directional as well. \\ (For instance, if you want to open the door you must push down the handle first, then pull the door towards you; if you pull the door towards you with the handle pushed down, the door will open. The amount of pulling determines how far ajar the door is; this can be computed via a function relating the LHS to the RHS.)    |
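The bi-directional reading above can be sketched in a few lines of code. This is a minimal illustration using the door example, not AERA's actual Replicode model format; the class, field names, and functions are assumptions made for the sketch.

```python
# Sketch of a bi-directional causal-relational model: one structure,
# two readings. fwd maps LHS values to predicted RHS values (deduction);
# bwd maps desired RHS values back to required LHS values (abduction).

class Model:
    def __init__(self, lhs, rhs, fwd, bwd):
        self.lhs, self.rhs = lhs, rhs
        self.fwd = fwd   # LHS -> RHS: prediction
        self.bwd = bwd   # RHS -> LHS: sub-goal production

    def predict(self, lhs_values):
        """Left-to-right: 'if you see the LHS, expect the RHS'."""
        return self.fwd(lhs_values)

    def subgoal(self, rhs_values):
        """Right-to-left: 'if you want the RHS, obtain the LHS'."""
        return self.bwd(rhs_values)

# Door model: pulling with the handle down leaves the door ajar by the
# pulled amount (the bi-directional function relating LHS and RHS).
door_model = Model(
    lhs=("handle_down", "pull_amount"),
    rhs=("door_ajar",),
    fwd=lambda v: {"door_ajar": v["pull_amount"] if v["handle_down"] else 0.0},
    bwd=lambda v: {"handle_down": True, "pull_amount": v["door_ajar"]},
)

print(door_model.predict({"handle_down": True, "pull_amount": 0.3}))
# deduction: door predicted ajar by 0.3
print(door_model.subgoal({"door_ajar": 0.5}))
# abduction: sub-goal - handle down, pull by 0.5
```

The same object serves forward chaining (prediction) and backward chaining (sub-goaling), which is the point of making the relating functions invertible.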
  
\\
====Autonomous Model Acquisition====
|  What it is   | The ability to create a model of some target phenomenon //automatically//.   |
|  Challenge  | Unless we (the designers of an intelligent controller) know beforehand which signals from the controller cause desired perturbations in <m>o</m> and can hard-wire these from the get-go, the controller must find these signals itself. \\ In task-domains where the number of available signals is vastly greater than the resources the controller can devote to such a search, it may take an unacceptably long time to find good predictive variables to create models from. \\ <m>V_te >> V_mem</m>, where the former is the total number of potentially observable and manipulatable variables in the task-environment and the latter is the number of variables that the agent can hold in its memory at any point in time.   |
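A small simulation can make the <m>V_te >> V_mem</m> point concrete. The numbers and the "perfect detection once attended" assumption below are illustrative only; the sketch just shows how many working sets an agent may have to examine before stumbling on the few predictive variables.

```python
# Illustrative sketch: the task-environment exposes V_te variables, the
# agent can attend to only V_mem at a time, and a handful of unknown
# variables are actually predictive. Count the attention "working sets"
# needed before all predictive variables have been attended to at least
# once (assuming, generously, perfect detection once attended).
import random

random.seed(1)
V_te = 10_000                                    # variables in the environment
V_mem = 20                                       # variables attendable at once
predictive = set(random.sample(range(V_te), 3))  # unknown to the agent

trials = 0
found = set()
while found != predictive:
    trials += 1
    working_set = set(random.sample(range(V_te), V_mem))
    found |= working_set & predictive
print(trials)  # typically on the order of V_te / V_mem trials or more
```

Even with this generous detection assumption the search cost scales with <m>V_te / V_mem</m>; with realistic detection (noisy, delayed effects) it is far worse, which is why undirected search is unacceptable.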
  
\\
  
|  {{public:t-720-atai:three-models-1.png?400}}  |
|  Based on prior observations of the variables and their temporal evolution in some context, the controller's model generation process <m>P_M</m> may have captured their causal relationship in three alternative models, <m>M_1, M_2, M_3</m>, each slightly but measurably different from the others. Each can be considered a //hypothesis of the actual relationship between the referenced variables// in the context provided by <m>V_5, V_6</m>. \\ As an example, we could have a tennis ball's direction <m>V_1</m>, speed <m>V_2</m>, and shape <m>V_3</m> that change when it hits a wall <m>V_5</m>, according to its relative angle <m>V_6</m> to the wall.  |
|  {{public:t-720-atai:agent-with-models-1.png?300}}  |
|  The agent's model generation mechanisms allow it to produce models of events it sees. Here it creates models (a) <m>M_1</m> and (b) <m>M_2</m>. The usefulness of these models for particular situations and goals can be tested by performing an operation on the world (c) as prescribed by the models, through backward chaining (abduction). \\ Ideally, when one wants to find which model is best for a particular situation (goals+environment+state), the most efficient method is an (energy-preserving) intervention that can leave only one model as the winner.   |
|  {{public:t-720-atai:model-m2-prime-1.png?150}}  |
|  The feedback (reinforcement) from direct or indirect tests of a model may lead to its deletion, rewriting, or some other modification. Here the feedback has resulted in a modified model <m>M{prime}_2</m>.  |
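The evaluation cycle above (competing hypotheses, a discriminating intervention, feedback-driven rewriting) can be sketched as follows. The models, the stand-in "world", and the update rule are all assumptions made for illustration; they are not AERA's actual mechanisms.

```python
# Sketch: three competing models predict door_ajar = gain * pull_amount
# with different gains. One intervention can rule out all but one model,
# and the residual prediction error then rewrites the survivor (M2 -> M2').

models = {"M1": 1.0, "M2": 0.6, "M3": 0.0}   # hypothesized gains

def intervene(pull):
    """Stand-in for acting on the real world; the world's true gain is 0.5."""
    return 0.5 * pull

pull = 0.6
observed = intervene(pull)                    # one discriminating intervention

# (c) Test every model against the single observed outcome.
survivors = {name: g for name, g in models.items()
             if abs(g * pull - observed) < 0.2}
print(sorted(survivors))  # → ['M2']

# Feedback rewrites the surviving model toward the evidence (M2 -> M2').
for name, g in survivors.items():
    error = observed - g * pull
    survivors[name] = g + error / pull
print({n: round(g, 6) for n, g in survivors.items()})  # → {'M2': 0.5}
```

Models whose predictions are contradicted outright would instead be deleted (returned gain dropped), matching the deletion-or-rewriting options described above.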
  
\\