public:t-720-atai:atai-22:generality

Revision history: created 2022/09/11 11:00 by thorisson; current revision 2024/04/29 13:33 (external edit).
====One Task, Many Tasks, One Environment, Many Environments, One Domain, Many Domains====
  
|  {{public:t-720-atai:afteritleavesthelab.png?650|After it Leaves the Lab}}  |
|  **A:** Simple machine learners (<m>L_0</m>) take a small set of inputs (<m>x, y, z</m>) and make a choice between a set of possible outputs (<m>α,β</m>), as specified in detail by the system’s designer. Increasing either the set of inputs or the number of possible outputs will either break the algorithm or slow learning to impractical levels.  |
|  **B:** Let <m>tsk_i</m> refer to relatively non-trivial tasks such as assembling furniture and moving office items from one room to another, <m>S_i</m> to various situations in which a family of tasks can be performed, and <m>e_i</m> to environments where those situations may be encountered. Simple learner <m>L_0</m> is limited to only a fraction of the various things that must be learned to achieve such a task. Being able to handle a single such task in a particular type of situation (<m>S_1</m>) with features that were unknown prior to the system’s deployment, <m>L_1</m> is already more capable than most, if not all, autonomous learning systems available today. <m>L_2</m>, <m>L_3</m> and <m>L_4</m> take successive steps up the complexity ladder beyond that, being able to learn //numerous// complex tasks (<m>L_2</m>), in //various situations// (<m>L_3</m>), and in a wider range of //environments and mission spaces// (<m>L_4</m>).  |
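The kind of simple learner <m>L_0</m> described in panel A can be pictured as a lookup table over designer-specified inputs and outputs. The sketch below is hypothetical and not from the course materials (the class name, the reward-based update, and the ''table_size'' helper are illustrative assumptions); it only shows why adding input dimensions or outputs multiplies the space the system must learn.

```python
# Hypothetical sketch of a simple learner L_0 (illustrative only): it maps a
# fixed tuple of discrete inputs (x, y, z) to one of two designer-specified
# outputs ("alpha", "beta") by keeping a running score per (inputs, output).
from collections import defaultdict

class TabularLearner:
    def __init__(self, outputs=("alpha", "beta")):
        self.outputs = outputs
        self.value = defaultdict(float)  # (inputs, output) -> running score

    def choose(self, inputs):
        # Pick the output with the highest score seen so far for these inputs.
        return max(self.outputs, key=lambda o: self.value[(inputs, o)])

    def learn(self, inputs, output, reward):
        # Credit (or penalize) the chosen output for this input combination.
        self.value[(inputs, output)] += reward

def table_size(input_domains, n_outputs):
    # The table has |X| * |Y| * |Z| * |outputs| cells: every added input
    # dimension or output *multiplies* what must be learned, which is why
    # scaling breaks or slows this kind of learner.
    cells = n_outputs
    for domain in input_domains:
        cells *= len(domain)
    return cells
```

With three ten-valued inputs and two outputs the table already has 2000 cells to fill by trial and error, and each new input dimension multiplies that again.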
|  "Intelligent System": \\ Expectation   | We expect an "intelligent" system to be able to //learn//   |
|  Standard Learning Expectation  | That the system can learn //a task//  |
|  Examples of \\ "Intelligent" Systems \\ from industry  | Deep Blue. Watson. AlphaGo. AlphaZero.   |
|  \\ What these systems \\ have in common  | They can only learn (and do) //one task// (one form of one task, to be exact). \\ They are really bad at learning temporal tasks. \\ Their learning must be turned off when they leave the lab. \\ The tasks they learn are relatively simple (in that their goal structure can be easily formalized). \\ They are neither "domain-independent" nor "general" - they are not //general learners//  |
|  We want more general learners  | A general learner would not be limited by domain, topic, task-environment, or other such constraints - the freer it is from them, the more "intelligent" the system.   |
public:t-720-atai:atai-22:generality · Last modified: 2024/04/29 13:32 (external edit)
