public:t-720-atai:atai-20:task-environment

Revision 2020/10/28 15:12 by thorisson; last modified 2024/04/29 13:33 (external edit).
|  Useful AI  | To be useful, an AI must **do** something. Ultimately, to be of any use, it must do something in the //physical// world, be it building skyscrapers, auto-generating whole movies from scratch, doing experimental science or inventing the next AI.  |
|  What this means  | A key target environment of the present work is the physical world.   |
|  Uncertainty  | Since any agent in the physical universe will never know //everything//, some things will always remain uncertain.    |
|  Novelty  | In such a world novelty abounds: most things that a learner encounters will contain some form of novelty.   |
|  \\ Learning  | Some unobservable //exposable variables// can be made //observable// (exposed) through //manipulation//. For any //Agent// with a set of //Goals// and limited knowledge but an ability to learn, which variables may be made observable and how is the subject of the //Agent's// learning.  ||
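The idea of an exposable variable becoming observable through manipulation can be illustrated with a minimal Python sketch. All names here (`Environment`, `manipulate`, `weight_of_box`) are hypothetical, chosen only for illustration; they are not part of the source material.

```python
from dataclasses import dataclass, field

@dataclass
class Environment:
    """Toy task-environment: variables start out hidden (exposable)
    and become observable only through an Agent's manipulation."""
    hidden: dict = field(default_factory=dict)      # exposable variables
    observable: dict = field(default_factory=dict)  # exposed variables

    def manipulate(self, name: str):
        """An Agent action that exposes a hidden variable,
        moving it into the observable set and returning its value."""
        if name in self.hidden:
            self.observable[name] = self.hidden.pop(name)
        return self.observable.get(name)

# The weight of a box is exposable but not observable until the agent acts.
env = Environment(hidden={"weight_of_box": 4.2})
value = env.manipulate("weight_of_box")  # e.g. the agent lifts the box
```

Which manipulations expose which variables is exactly what the Agent must learn; the sketch only captures the state change from exposable to observable.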
|  Constraint  | A set of factors that limit the flexibility of that which it constrains. \\ Best conceived of as a //negative Goal// (a //State// that is to be avoided).   ||
|  Family  | A set whose elements share one or more common traits, within some "sensible" (pre-defined or definable) allowed variability in one or more of: the types of variables, the number of variables, or the ranges of these variables.  ||
|  \\ \\ Domains  | A Family of Environments, <m>D ⊂ W</m>. \\ The concept of ‘domains’ as subsets of the world, where a particular bias of distributions of variables, values and ranges exists, may be useful in the context of tasks that can be systematically impacted by such a bias (e.g. gravity vs. zero-gravity). Each variable <m>v ~∈~ D</m> may take on any value from the associated domain <m>D</m>; for physical domains we can take the domain of variables to be a subset of the real numbers, <m>bbR</m>.  ||
|  Solution  | The set of (atomic) //Actions// that can achieve one or more //Goals// in one or more //Task-Environments//.  ||
|  Action  | The changes an //Agent// can make to variables relevant to a //Task-Environment//.  ||
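The definitions above — variables ranging over subsets of the reals, Actions as changes to variables, and a Solution as a set of Actions achieving a Goal — can be tied together in a small Python sketch. All identifiers (`State`, `Action`, `achieves`, `push`) are illustrative assumptions, not terms from the source.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# A State assigns real values to variables (physical domains range over reals).
State = Dict[str, float]

@dataclass
class Action:
    """A change an Agent can make to Task-Environment variables."""
    name: str
    effect: Callable[[State], State]

def achieves(goal: Callable[[State], bool],
             actions: List[Action], start: State) -> bool:
    """A set of Actions counts as a Solution if applying them
    in sequence reaches a State satisfying the Goal."""
    state = dict(start)
    for a in actions:
        state = a.effect(state)
    return goal(state)

push = Action("push", lambda s: {**s, "x": s["x"] + 1.0})
goal = lambda s: s["x"] >= 2.0   # a Goal is a target State predicate
# A Constraint uses the same machinery as a negative Goal: a State to avoid.
two_pushes_solve = achieves(goal, [push, push], {"x": 0.0})
one_push_solves = achieves(goal, [push], {"x": 0.0})
```

Representing a Constraint as a predicate over States that must stay false mirrors the table's reading of a Constraint as a negative Goal.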
|  | A medium or small number of Solutions.  |
|  | Instructions of varying detail are possible.  |
|  \\ Novelty  | Novelty is unavoidable. \\ In other words, unforeseen circumstances will be encountered by any AI operating in such a world. Accordingly, it must be able to handle them, since handling the unforeseen is essentially why intelligence exists in the first place.   |
|  \\ What that means  | 1. Since no agent will ever know everything, no agent (artificial or natural) can assume its knowledge to be axiomatic: all knowledge must be treated as non-axiomatic (revisable). \\ 2. It cannot be assumed that all the knowledge an AI system needs is known by its designer up front; the system must therefore acquire its own knowledge. All advanced AI systems must be //cumulative learners//. \\ 3. Since knowledge is acquired incrementally, its acquisition will introduce //knowledge gaps and inconsistencies//. \\ 4. A cumulative learning agent will continuously live in a state of //insufficient knowledge and resources// (relative to perfect knowledge), due to the physical world's //limited time and energy//.   |
  
