====What Do You Mean by "Generality"?====
| Flexibility: \\ Breadth of task-environments | Enumeration of variety. \\ (By 'variety' we mean the discernibly different states that can be sensed and that make a difference to a controller.) \\ If a system X can operate in more diverse task-environments than system Y, system X is more //flexible// than system Y. |
| Solution Diversity: \\ Breadth of solutions | If a system X can reliably generate a larger variety of acceptable solutions to problems than system Y, system X is more //powerful// than system Y. |
| Constraint Diversity: \\ Breadth of constraints on solutions | If a system X can reliably produce acceptable solutions under a larger number of solution constraints than system Y, system X is more //powerful// than system Y. |
| Goal Diversity: \\ Breadth of goals | If a system X can meet a wider range of goals than system Y, system X is more //powerful// than system Y. |
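As a toy illustration of "flexibility" as enumerated variety - a minimal sketch, assuming variety can be counted as a finite set of controller-relevant states; all class, system and state names here are hypothetical, invented for illustration:

<code python>
# Hypothetical sketch: 'variety' as an enumerable set of discernibly
# different states that can be sensed and that matter to a controller.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class TaskEnvironment:
    name: str
    # The environment's variety: controller-relevant, discernible states.
    variety: frozenset = field(default_factory=frozenset)


@dataclass
class System:
    name: str
    operable: list  # task-environments the system can operate in


def flexibility(system: System) -> int:
    """Breadth of task-environments: total variety across everything
    the system can operate in (a simple enumeration)."""
    states = set()
    for te in system.operable:
        states |= te.variety
    return len(states)


# X operates in two environments, Y in one -> X is more flexible than Y.
kitchen = TaskEnvironment("kitchen", frozenset({"door_open", "door_closed", "pot_boiling"}))
office = TaskEnvironment("office", frozenset({"door_open", "door_closed", "printer_jammed"}))
x = System("X", [kitchen, office])
y = System("Y", [kitchen])
assert flexibility(x) > flexibility(y)
</code>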
\\
| {{public:t-720-atai:afteritleavesthelab.png?750|After it Leaves the Lab}} |
| **A:** Simple machine learners (<m>L_0</m>) take a small set of inputs (<m>x, y, z</m>) and choose among a set of possible outputs (<m>α,β</m>), as specified in detail by the system’s designer. Increasing either the set of inputs or the number of possible outputs will either break the algorithm or slow learning to impractical levels. |
| **B:** Let <m>tsk_i</m> refer to relatively non-trivial tasks such as assembling furniture or moving office items from one room to another, <m>S_i</m> to various situations in which a family of tasks can be performed, and <m>e_i</m> to environments where those situations may be encountered. Simple learner <m>L_0</m> is limited to only a fraction of the various things that must be learned to achieve such a task. Being able to handle a single such task in a particular type of situation (<m>S_1</m>) with features that were unknown prior to the system’s deployment, <m>L_1</m> is already more capable than most if not all autonomous learning systems available today. <m>L_2</m>, <m>L_3</m> and <m>L_4</m> take successive steps up the complexity ladder beyond that, being able to learn //numerous// complex tasks (<m>L_2</m>), in //various situations// (<m>L_3</m>), and in a wider range of //environments and mission spaces// (<m>L_4</m>). |
| Only towards the higher end of this ladder can we hope to approach truly //general, autonomous// intelligence – systems capable of learning to effectively and efficiently perform multiple //a-priori unfamiliar// tasks, in //a variety of a-priori unfamiliar situations//, in a variety of //a-priori unfamiliar environments//, //**on their own**//. |
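The ladder in panel B can be caricatured in a few lines of Python. The numbers below are arbitrary placeholders - the only point is the ordering of breadth across tasks, situations and environments:

<code python>
# Hypothetical rendering of the L0..L4 ladder: each learner summarized by
# how many a-priori unfamiliar tasks, situations and environments it can
# learn to handle. Counts are illustrative, not measured values.
LADDER = [
    # (learner, tasks, situations, environments)
    ("L0", 0, 0, 0),  # fixed, designer-specified inputs/outputs; no novelty
    ("L1", 1, 1, 1),  # one non-trivial task in one novel situation type
    ("L2", 3, 1, 1),  # numerous complex tasks
    ("L3", 3, 3, 1),  # ...in various situations
    ("L4", 3, 3, 3),  # ...across environments and mission spaces
]


def generality(entry) -> int:
    """Coarse ordering: breadth across all three dimensions at once."""
    _, tasks, situations, environments = entry
    return tasks * situations * environments


assert [name for name, *_ in sorted(LADDER, key=generality)] == ["L0", "L1", "L2", "L3", "L4"]
</code>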
\\
\\
===== Requirements for General Learning =====
\\
\\
| "Intelligent System": \\ Expectation | We expect an "intelligent" system to be able to //learn//. | | | "Intelligent System": \\ Expectation | We expect an "intelligent" system to be able to //learn//. | |
| Standard Learning Expectation | That the system can learn //a task//. | | | Standard Learning Expectation | That the system can learn //a task//. | |
| Examples of "Intelligent" Systems | Deep Blue. Watson. Alpha Go. | | | Examples of \\ "Intelligent" Systems | Deep Blue. Watson. Alpha Go. | |
| \\ What these systems have in common | They can only learn (do) //one task//. \\ They are really bad at learning temporal tasks. \\ Their learning must be turned off when they leave the lab. \\ The tasks they learn are relatively simple (in that their goal structure can be easily formalized). \\ They are neither "domain-independent" nor "general" - they are not //general learners//. | | | \\ What these systems \\ have in common | They can only learn (and do) //one task//. \\ They are really bad at learning temporal tasks. \\ Their learning must be turned off when they leave the lab. \\ The tasks they learn are relatively simple (in that their goal structure can be easily formalized). \\ They are neither "domain-independent" nor "general" - they are not //general learners//. | |
| We want more general learners | A general learner would not be limited by domain, topic, task-environment, or other such limitations - the more free from such constraints, the more "intelligent" the system. | | | We want more general learners | A general learner would not be limited by domain, topic, task-environment, or other such limitations - the more free from such constraints, the more "intelligent" the system. | |
\\
\\
^Key^What it Means^Why it's Important^
| \\ Mission | **R1.** The system must fulfill its mission – the goals and constraints it has been given by its designers – with possibly several different priorities. | This is the very reason we built the system, so we should have a pretty good idea of why. A requirement shared by all AI systems. |
| \\ AILL \\ "After it Leaves the Lab" | **R2.** The system must be designed to be operational in the long term, without intervention by its designers after it leaves the lab, as dictated by the temporal scope of its mission. | All machine learning methods today are "before it leaves the lab": the task-environment must be known and clearly delineated beforehand, and the system cannot handle changes to these assumptions. To make systems more autonomous we must look at their life "beyond the lab". |
| \\ Domain-independence | **R3.** The system must be domain- and task-independent – but without a strict requirement for determinism: We limit our architecture to handle only missions for which rigorous determinism is not a requirement. | It is easy to implement domain dependence in software systems: virtually //all// software today is made this way. Domain independence is necessary if we want to build more autonomous systems. |
| \\ Modeling | \\ **R4.** The system must be able to model its environment so as to adapt to changes in it. | A good controller not only reacts to changes in its environment, it anticipates them. Anticipation, or prediction, is only possible with a decent model of the system whose behavior is being predicted. A good model allows detailed and long-term prediction. |
| \\ Anytime | **R5.** As with learning, planning must be performed continuously, incrementally and in real-time. Pursuing goals and predicting must be done concurrently. | A good system learns //all the time// and is planning and revising its plans //all the time//. Anything less makes the system less fit ("dumber"). |
| \\ Attention | \\ **R6.** The system must be able to control the focus of its attention. | Any system in a world that is vastly larger and more complex than its resources allow it to explore at any one time must select what to apply its thinking, memory, and behavior to. Such "resource management", when applied to thinking, is called "attention". |
| \\ Self-Modeling | \\ **R7.** The system must be able to model itself. | Any cognitive growth (development) requires comparing or evaluating a new state or architecture of the system against an old one. Unless the system has a model of itself, such self-modification cannot be evaluated a priori, and all changes become random exploration - the most inefficient method to apply to goal-directed behavior, and certainly not "intelligent" in any way. |
| \\ No Certainty | **R8.** The system must be able to handle incompleteness, uncertainty, and inconsistency, both in state space and in time. | In any large world there will be unintended and unforeseen consequences to all changes, as well as potential errors in measurements (perception). Certainty can never reach 1. \\ In other words, "Nothing is 100% (not even this axiom!)." A minimal sketch of this requirement follows after the table. |
| \\ Abstractions | \\ **R9.** The system must be able to generate abstractions from learned knowledge. | Abstractions are a kind of compression that allows more efficient handling of small details, causal chains, etc. Abstraction is fundamental to induction (generalization) and analogies, two cognitive skills of critical importance in human intelligence. |
| \\ Reasoning | **R10.** The system must be able to use applied logic - reasoning - to generate, manipulate, and use its knowledge. | Reasoning in humans is not the same as reasoning in formal logics; it is non-axiomatic and is always performed under uncertainty (per R8). |
| Learning | **R11.** The system must be able to learn. | Learning is the basic expectation of any "intelligent" system (see above); without it, none of the other requirements can be met in task-environments that were unknown at design time. |
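Below is a minimal sketch of R8, referenced in the table above - assuming, purely for illustration, that knowledge items carry an explicit confidence bounded below 1; the names and the revision rule are invented here, not taken from any specific architecture:

<code python>
# Hypothetical sketch of R8 ("No Certainty"): beliefs are held with
# bounded confidence and revised on evidence, never asserted as certain.
from dataclasses import dataclass

MAX_CONFIDENCE = 0.99  # "Nothing is 100% (not even this axiom!)"


@dataclass
class Belief:
    statement: str
    confidence: float  # always strictly below 1.0

    def revise(self, supporting: bool, weight: float = 0.1) -> None:
        """Nudge confidence toward or away from the ceiling; uncertainty
        is handled by revision, not by pretending certainty."""
        target = MAX_CONFIDENCE if supporting else 0.0
        self.confidence += weight * (target - self.confidence)


b = Belief("the door is open", confidence=0.6)
b.revise(supporting=True)
assert b.confidence < MAX_CONFIDENCE  # certainty can never reach 1
</code>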
\\
\\
| Learning | Acquisition of knowledge that enables more successful completion of tasks and adaptation to environments. |
| Life-long Learning | Incremental acquisition of knowledge throughout a (non-trivially long) lifetime. |
| Cumulative Learning | The ability to unify new information with knowledge already acquired, in a coherent, efficient and effective manner (seeing what relates to what, resolving conflicts; see the sketch after this table). |
| Transfer Learning | The ability to transfer what has been learned in one task, situation, environment or domain to another. |
| Autonomy | The ability to do tasks without interference / help from others. |
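A minimal sketch of cumulative learning as defined above - assuming, for illustration only, that knowledge reduces to propositions with truth values; a real system would weigh confidence, recency and source reliability when resolving conflicts:

<code python>
# Hypothetical sketch: new knowledge is unified with what is already
# known - related items are recognized and conflicts are resolved,
# rather than duplicated or silently overwritten.
class CumulativeStore:
    def __init__(self):
        self.knowledge = {}  # proposition -> truth value

    def integrate(self, proposition: str, value: bool) -> str:
        if proposition not in self.knowledge:
            self.knowledge[proposition] = value
            return "added"
        if self.knowledge[proposition] == value:
            return "already known"  # coherent: no duplication
        # Conflict: here the newer item simply wins (a crude policy).
        self.knowledge[proposition] = value
        return "conflict resolved"


store = CumulativeStore()
assert store.integrate("door(kitchen)=open", True) == "added"
assert store.integrate("door(kitchen)=open", False) == "conflict resolved"
</code>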