| Self-Inspection | Virtually no systems exist as of yet that have been demonstrated to be able to inspect (measure, quantify, compare, track, make use of) their own development for use in their continued growth - whether learning, goal-generation, selection of variables, resource usage, or other self-X. |
| Self-Growth | No system as of yet has been demonstrated to be able to autonomously manage its own **self-growth**. Self-growth is necessary for autonomous learning in task-environments whose complexity is far higher than that of the controller operating in them. It is even more important where certain bootstrapping thresholds must be crossed before a safe transition into more powerful/different learning schemes. \\ For instance, if only a few bits of knowledge can be programmed into a controller's seed ("DNA"), because we want it to have maximal flexibility in what it can learn, then we want to put something there that is essential to protect the controller while it develops more sophisticated learning. An example is that nature programmed human babies with an innate fear of heights. A sketch of this idea follows below the table. |
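\\
A minimal Python sketch of the seed idea (all names here - ''SeededController'', ''drop_ahead'', the action strings - are hypothetical illustrations, not drawn from any particular system): a few innate protective rules override the learned policy until the learner has bootstrapped far enough to be trusted.
<code python>
import random

class SeededController:
    """A controller whose innate seed protects it while it learns."""

    def __init__(self):
        # Innate seed knowledge ("DNA"): a hard-wired safety reflex,
        # analogous to a human baby's innate fear of heights.
        self.seed_rules = [
            lambda percepts: "retreat" if percepts.get("drop_ahead") else None,
        ]

    def act(self, percepts):
        # Seed rules run first and override the learned policy, so the
        # controller survives long enough to develop better learning.
        for rule in self.seed_rules:
            reflex = rule(percepts)
            if reflex is not None:
                return reflex
        return self.learned_policy(percepts)

    def learned_policy(self, percepts):
        # Stand-in for a learner that grows more sophisticated over time;
        # early on it is essentially random, which is why the seed matters.
        return random.choice(["advance", "turn_left", "turn_right"])

c = SeededController()
print(c.act({"drop_ahead": True}))   # seed reflex fires: "retreat"
print(c.act({"drop_ahead": False}))  # exploratory learned action
</code>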
\\
\\
==== Autonomy & Closure ====
| Autonomy | The ability to do tasks without interference / help from others in a particular task-environment in a particular world. |
| Cognitive Autonomy | Refers to the mental (control-) independence of agents: the more independent they are (of their designers, of outside aid, etc.), the more autonomous they are. Systems without it could hardly be considered to have general intelligence. |
| Structural Autonomy | Refers to the process through which cognitive autonomy is achieved: motivations, goals and behaviors are dynamically and continuously (re)constructed by the machine as a result of changes in its internal structure. |
| Operational closure | The system's own operations are all that is required to maintain (and improve) the system itself. |
| \\ Semantic closure | The system's own operations and experience produce/define the meaning of its constituents. //Meaning// can thus be seen as being defined/given by the operation of the system as a whole: the actions it has taken, is taking, could be taking, and has thought about (simulated) taking - both cognitive actions and external actions in its physical domain. For instance, the **meaning** of the act of punching your best friend is the set of implications - actual and potential - that this action has or may have, including its impact on your own and others' cognition. A sketch of this idea follows below the table. |
| Self-Programming in Autonomy | The global process that animates computational structurally autonomous systems, i.e. the implementation of both the operational and semantic closures. |
| System evolution | A controlled and planned reflective process; a global and never-terminating process of architectural synthesis. |
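\\
To make semantic closure concrete, here is a minimal Python sketch (the class ''Agent'' and all its names are hypothetical illustrations): the meaning of an action is nothing more than the set of actual and simulated implications the agent itself has associated with it through its own operation.
<code python>
class Agent:
    """Meaning of an action = the implications the agent itself has
    accumulated for it, through actual or simulated experience."""

    def __init__(self):
        self.implications = {}  # action -> set of consequences

    def record(self, action, consequence):
        # Called both when the agent really acts and when it merely
        # simulates acting; both contribute to meaning.
        self.implications.setdefault(action, set()).add(consequence)

    def meaning_of(self, action):
        # No external dictionary is consulted: meaning is closed under
        # the system's own operations (semantic closure).
        return self.implications.get(action, set())

a = Agent()
a.record("punch_friend", "friend is hurt")        # actual consequence
a.record("punch_friend", "friendship may end")    # simulated consequence
print(a.meaning_of("punch_friend"))
</code>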
\\
\\
==== Explainability ====
| What It Is | The ability of a controller to explain, after the fact or beforehand, why it did something or intends to do it. |
| Why It Is Important | If a controller does something we don't want it to repeat - e.g. crash an airplane full of people - it needs to be able to explain why it did what it did. If it can't, we can never be sure why this autonomous system did what it did, or even whether it had any other choice. |
| \\ Human-Level AI | Even more importantly, to grow, learn, and self-inspect, an AI system must be able to sort out causal chains. If it can't, it will not only be incapable of explaining to others why it is the way it is, it will be incapable of explaining to itself why things are the way they are - and thus incapable of sorting out whether something it did is better for its own growth than something else. Explanation is the big black hole of ANNs: in principle ANNs are black boxes, and thus they are in principle unexplainable - whether to themselves or to others. \\ AERA tries to address this by encapsulating knowledge as hierarchical models that are built up over time and can be de-constructed at any time. A sketch of this idea follows below the table. |
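\\
A minimal Python sketch of the underlying idea - explanation via stored, inspectable causal chains. This is an illustration only, not AERA's actual model machinery; the class ''Decision'' and all names in it are hypothetical.
<code python>
class Decision:
    """A decision stored together with the causal chain that produced it."""

    def __init__(self, action, premises):
        self.action = action
        # Each premise pairs the model that fired with the evidence that
        # triggered it, so the chain can be de-constructed later.
        self.premises = premises

    def explain(self):
        steps = "; ".join(f"{model} fired on '{evidence}'"
                          for model, evidence in self.premises)
        return f"Chose '{self.action}' because: {steps}"

d = Decision(
    action="abort_landing",
    premises=[("runway_model", "runway reported obstructed"),
              ("safety_model", "go-around is the safe default")],
)
print(d.explain())
</code>
Because each step in the chain is an explicit, named structure rather than a distributed weight pattern, the decision can be taken apart and explained after the fact - the property the table above claims ANNs lack.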
\\
\\
\\