|  Self-Inspection  | Virtually no systems exist as of yet that have been demonstrated to be able to inspect (measure, quantify, compare, track, make use of) their own development for use in their continued growth - whether learning, goal-generation, selection of variables, resource usage, or other self-X.   |
|  Self-Growth  | No system as of yet has been demonstrated to be able to autonomously manage its own **self-growth**. Self-Growth is necessary for autonomous learning in task-environments with complexities far higher than that of the controller operating in them. It is even more important where certain bootstrapping thresholds must be reached before a safe transition into more powerful/different learning schemes. \\ For instance, if only a few bits of knowledge can be programmed into a controller's seed ("DNA"), because we want it to have maximal flexibility in what it can learn, then we want to put something there that is essential to protect the controller while it develops more sophisticated learning. An example is that nature programmed human babies with an innate fear of heights.    |
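To make the seed idea concrete, here is a minimal sketch (all names and the height threshold are illustrative assumptions, not taken from any existing system): a tiny seed that hard-wires one protective reflex - the analogue of the innate fear of heights - which constrains behavior while a more capable policy is bootstrapped.

<code python>
# Hypothetical sketch of a controller seed ("DNA"): one innate
# protective reflex plus a slot for learned knowledge.
import random

class Seed:
    def __init__(self):
        self.learned = {}   # situation -> action, filled in by experience

    def innate_reflex(self, state):
        # Hard-wired protection, the analogue of an innate fear of
        # heights: veto exploration whenever the drop is too large.
        if state.get("height", 0.0) > 2.0:
            return "retreat"
        return None

    def act(self, state):
        reflex = self.innate_reflex(state)
        if reflex is not None:          # protection overrides learning
            return reflex
        # Fall back on learned knowledge, exploring where none exists.
        key = str(sorted(state.items()))
        return self.learned.get(key, random.choice(["left", "right", "forward"]))

    def learn(self, state, action, outcome_ok):
        if outcome_ok:                  # crude bootstrapping of a policy
            self.learned[str(sorted(state.items()))] = action

controller = Seed()
print(controller.act({"height": 3.5}))  # "retreat": innate protection fires
print(controller.act({"height": 0.5}))  # an explored or learned action
</code>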
\\
\\
==== Autonomy & Closure ====
|  Autonomy  | The ability to do tasks without interference / help from others in a particular task-environment in a particular world.  |
|  Cognitive Autonomy  | Refers to the mental (control-) independence of agents - the more independent they are (of their designers, of outside aid, etc.) the more autonomous they are. Systems without it could hardly be considered to have general intelligence.   |
|  Structural Autonomy  | Refers to the process through which cognitive autonomy is achieved: motivations, goals and behaviors are dynamically and continuously (re)constructed by the machine as a result of changes in its internal structure.  |
|  Operational closure  | The system's own operations are all that is required to maintain (and improve) the system itself.   |
|  \\ Semantic closure  | The system's own operations and experience produce/define the meaning of its constituents. //Meaning// can thus be seen as being defined/given by the operation of the system as a whole: the actions it has taken, is taking, could be taking, and has thought about (simulated) taking, both cognitive actions and external actions in its physical domain. For instance, the **meaning** of the act of punching your best friend is the set of implications - actual and potential - that this action has or may have, including its impact on your own and others' cognition.    |
|  Self-Programming in Autonomy  | The global process that animates computational structurally autonomous systems, i.e. the implementation of both the operational and semantic closures (see the sketch below this table).   |
|  System evolution  | A controlled and planned reflective process; a global and never-terminating process of architectural synthesis.  |
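The two closures can be illustrated with a toy self-revising controller (a hypothetical sketch, not the implementation of any particular architecture): the only process that maintains and improves the model is the system's own operating cycle (operational closure), and the model's "meaning" is nothing over and above the predictions and revisions it takes part in (semantic closure).

<code python>
# Toy sketch of operational & semantic closure; all names here are
# hypothetical and this is not any particular cognitive architecture.
import random

class Model:
    """The model's 'meaning' is exhausted by the predictions it makes
    and the revisions it undergoes (semantic closure)."""
    def __init__(self, slope):
        self.slope = slope

    def predict(self, x):
        return self.slope * x

def environment(x):
    return 3.0 * x + random.gauss(0.0, 0.1)   # hidden from the system

model = Model(slope=1.0)
for _ in range(500):
    x = random.uniform(-1.0, 1.0)
    predicted = model.predict(x)
    observed = environment(x)
    # Operational closure: the same cycle that *uses* the model also
    # *rebuilds* it - no process outside the loop maintains the system.
    model.slope += 0.1 * (observed - predicted) * x

print(round(model.slope, 1))   # converges near 3.0
</code>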
\\
\\
==== Predictability ====
|  What It Is  | The ability of an outsider to predict the behavior of a controller based on some information.   |
|  Why It Is Important  | Predicting the behavior of (semi-) autonomous machines is important if we want to ensure their safe operation, or be sure that they do what we want them to do.    |
|  \\ How To Do It  | Predicting the future behavior of ANNs (of any kind) is easier if we switch off their learning after they have been trained, because there exists no method for predicting where their development will lead them if they continue to learn after they leave the lab. Predicting ANN behavior on novel input can be done statistically, but there is no way to be sure that novel input will not completely reverse their behavior. There are very few, if any, methods for giving ANNs the ability to judge the "novelty" of an input, which might help with this issue to some extent. Reinforcement learning addresses this by only scaling to a handful of variables with known max and min.  |
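The statistical route mentioned above can be sketched as follows (an assumed nearest-neighbor scheme with illustrative parameter values, not a method prescribed by these notes): flag an input as "novel" when it lies unusually far from everything the network was trained on.

<code python>
# Sketch of statistical novelty detection for a trained network's
# inputs: an input counts as "novel" when its distance to the nearest
# training sample exceeds a threshold calibrated on the training data.
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(500, 4))   # stands in for training inputs

def nearest_distance(x, data):
    return np.min(np.linalg.norm(data - x, axis=1))

# Calibrate the threshold with leave-one-out distances on the training set.
calib = np.array([nearest_distance(x, np.delete(train, i, axis=0))
                  for i, x in enumerate(train[:100])])
threshold = np.quantile(calib, 0.99)

def is_novel(x):
    return nearest_distance(x, train) > threshold

print(is_novel(rng.normal(0.0, 1.0, size=4)))   # in-distribution -> usually False
print(is_novel(np.full(4, 10.0)))               # far from training -> True
</code>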
\\
\\
==== Explainability ====
|  What It Is  | The ability of a controller to explain, after the fact or before, why it did or intends to do something.   |
|  Why It Is Important  | If a controller does something we don't want it to repeat - e.g. crash an airplane full of people - it needs to be able to explain why it did what it did. If it can't, we can never be sure why this autonomous system did what it did, or even whether it had any other choice.     |
|  \\ Human-Level AI  | Even more importantly, to grow, learn and self-inspect, an AI system must be able to sort out causal chains. If it can't, it will not only be incapable of explaining to others why it is the way it is, it will be incapable of explaining to itself why things are the way they are, and thus incapable of sorting out whether something it did is better for its own growth than something else. Explanation is the big black hole of ANNs: in principle ANNs are black boxes, and thus in principle unexplainable - whether to themselves or others. \\ AERA tries to address this by encapsulating knowledge as hierarchical models that are built up over time and can be de-constructed at any time.   |
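As a toy illustration of why de-constructible, hierarchical models support explanation (hypothetical code, not AERA's actual representation): when a decision is derived through small composable models, the derivation trace can be replayed as a causal explanation.

<code python>
# Toy illustration: a decision built from small, composable models can
# be traced back into a causal explanation. Not AERA's representation.
class Rule:
    def __init__(self, name, premise, conclusion):
        self.name, self.premise, self.conclusion = name, premise, conclusion

RULES = [
    Rule("R1", "altitude_low", "terrain_risk"),
    Rule("R2", "terrain_risk", "climb"),
]

def decide(facts):
    trace = []                      # the causal chain, kept for later
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for r in RULES:
            if r.premise in derived and r.conclusion not in derived:
                derived.add(r.conclusion)
                trace.append(r)
                changed = True
    return derived, trace

def explain(trace):
    # De-construct the decision into the chain of models that produced it.
    return " because ".join(f"{r.conclusion} (via {r.name})" for r in reversed(trace))

facts, trace = decide({"altitude_low"})
print("climb" in facts)    # True
print(explain(trace))      # climb (via R2) because terrain_risk (via R1)
</code>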
  
\\
\\
  
  
\\