| ::: | Static World | Changes //State// only through //Agent Action//. |
| ::: | Dynamic World | Changes //State// through //Agent Action// and through other means. |
| \\ State | <m>s_t~subset~V_t</m>. A set of variables <m>x</m> with a set of values, specified to some particular precision (with constraints, e.g. error bounds), relevant to a //World//. \\ For all practical purposes, in any complex World "State" refers by default to a sub-state, since it is a practical impossibility to know the full state (the values of the complete set of variables) of a world; there will always be a vastly higher number of "don't care" variables than the variables listed for e.g. a //Goal State// (a //State// associated with a //Goal//). ||
| | \\ State \\ definition | <m>s_t~subset~V_t</m> \\ where \\ <m>{{lbrace}x_l, ~x_u{rbrace}} ~{|}~{{x_l <= x <= x_u}}</m> \\ \\ define the lower and upper bounds, respectively, of the acceptable range for each <m>x</m> to count towards the State (a small illustrative sketch follows below this table). |
| Exposable Variables | Variables in <m>V</m> that are measurable //in principle//. ||
| Observable Variables | Variables in <m>V</m> that can be measured during a particular interval in time; they are //observable// during that interval. ||
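The bound-based //State// definition above can be made concrete with a minimal sketch. It assumes a //World// snapshot is a mapping from variable names to measured values and a //State// a mapping from names to ''(lower, upper)'' bounds; the names (''matches_state'', ''gripper_x'', ''room_temp'') are illustrative only, not part of the formalism.

<code python>
# Minimal sketch (illustrative names): a State as per-variable bounds [x_l, x_u]
# over a tiny subset of V; all other variables of the world are "don't care"
# and are simply ignored.

def matches_state(world: dict, state: dict) -> bool:
    """True if every variable named in `state` lies within its bounds in `world`.

    `world` maps variable names to measured values (a snapshot of a slice of V);
    `state` maps variable names to (lower, upper) bounds.
    World variables not mentioned in `state` do not matter.
    """
    return all(
        name in world and lo <= world[name] <= hi
        for name, (lo, hi) in state.items()
    )

# Example: a Goal State caring about only two of the world's many variables.
goal_state = {"gripper_x": (0.95, 1.05), "gripper_force": (0.0, 2.0)}
world_now = {"gripper_x": 1.01, "gripper_force": 1.2, "room_temp": 21.3}

print(matches_state(world_now, goal_state))  # True; room_temp is "don't care"
</code>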
====Formalization====
| The Physical World | <m>V = {lbrace}{x_1, ~x_2, ~... ~x_n}{rbrace}.</m> \\ <m>F={lbrace}{f_1, ~f_2 ~... ~f_n}{rbrace}</m>. \\ <m>x</m> are real-valued variables and <m>f</m> are transition functions, so that \\ <m>V_{t+delta} = F(V_{t})</m> \\ while \\ <m>{lbrace}{x}over{.}_1, ~{x}over{.}_2, ~... ~{x}over{.}_n {rbrace}</m> \\ represent the first derivatives of the variables during continuous change. \\ Note that <m>||V||~=~infty</m> and <m>||F||~=~infty</m>. |
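A minimal sketch of the transition <m>V_{t+delta} = F(V_{t})</m>, assuming a toy, finite slice of <m>V</m> (two variables) and a hand-written <m>F</m>; in the actual physical world <m>||V||</m> and <m>||F||</m> are infinite, so any such model is necessarily a truncation.

<code python>
# Minimal sketch (illustrative only): a toy world with a finite slice of V
# and a hand-written transition function F, stepped as V_{t+delta} = F(V_t).

DELTA = 0.1  # assumed time step (seconds)

def F(v: dict) -> dict:
    """Toy transition: a falling object with position and velocity."""
    return {
        "pos": v["pos"] + v["vel"] * DELTA,  # position changes with velocity
        "vel": v["vel"] - 9.81 * DELTA,      # velocity changes with gravity
    }

v_t = {"pos": 10.0, "vel": 0.0}
for _ in range(5):
    v_t = F(v_t)          # V_{t+delta} = F(V_t)
print(v_t)
</code>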
\\
| Useful AI | To be useful, an AI must **do** something. Ultimately, to be of any use, it must do something in the //physical// world, be it building skyscrapers, auto-generating whole movies from scratch, doing experimental science or inventing the next AI. |
| What this means | A key target environment of the present work is the physical world. |
| Uncertainty | Since no agent in the physical universe will ever know //everything//, some things will always be uncertain. |
| Novelty | In such a world novelty abounds: most things that a learner encounters will contain some form of novelty. |
| \\ Learning | Some unobservable //exposable variables// can be made //observable// (exposed) through //manipulation//. For any //Agent// with a set of //Goals// and limited knowledge but an ability to learn, which variables may be made observable, and how, is the subject of the //Agent's// learning. ||
| Problem | A Goal with (all) relevant constraints imposed by a Task-Environment. ||
| Goal | <m>G subset S</m>. A (future) (sub-) State to be attained during period //t//, plus other //optional// constraints on the Goal (see the sketch following this table). ||
| **LTE** | All tasks have limited time & energy: no Task exists that can be performed with infinite energy, or for which infinite time is available for achieving it. ||
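The //Goal// and **LTE** rows above can be combined into a small sketch: a Task couples a goal sub-state (expressed as variable bounds, as in the State sketch earlier) with an explicit, finite time and energy budget. The structure and names (''Task'', ''attempt'', ''cost_per_step'') are assumptions made for illustration, not part of the formal definitions.

<code python>
# Minimal sketch (assumed structure): a Task = a Goal sub-state to attain
# plus the LTE constraint, i.e. a finite time and energy budget.

from dataclasses import dataclass

@dataclass
class Task:
    goal: dict          # variable name -> (lower, upper) bounds, as in the State sketch
    max_time: float     # seconds available (finite by definition)
    max_energy: float   # joules available (finite by definition)

def attempt(task: Task, world: dict, step, cost_per_step: float, dt: float) -> bool:
    """Apply `step` to the world until the goal sub-state holds or the budget runs out."""
    t, energy = 0.0, 0.0
    while t <= task.max_time and energy <= task.max_energy:
        if all(lo <= world.get(k, float("nan")) <= hi for k, (lo, hi) in task.goal.items()):
            return True                 # Goal sub-state attained within budget
        world = step(world)             # one Agent Action / world update
        t += dt
        energy += cost_per_step
    return False                        # budget exhausted: Task failed

# Toy usage: drive a value into the goal band within the budget.
task = Task(goal={"temp": (60.0, 65.0)}, max_time=10.0, max_energy=50.0)
print(attempt(task, {"temp": 20.0}, lambda w: {"temp": w["temp"] + 5.0},
              cost_per_step=2.0, dt=1.0))   # True: goal reached within budget
</code>

The point of the sketch is the budget check in the loop condition: without finite ''max_time'' and ''max_energy'' the attempt, and hence the Task, would not be well defined in the above sense.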
| Constraint | A set of factors that limit the flexibility of that which it constrains. \\ Best conceived of as a //negative Goal// (a //State// that is to be avoided); see the sketch below this table. ||
| Family | A set whose elements share one or more common traits (within some "sensible" (pre-defined or definable) allowed variability on one or more of: the types of variables, the number of variables, the ranges of these variables). ||
| \\ \\ Domains | A Family of Environments, <m>D subset W</m>. \\ The concept of ‘domains’ as subsets of the world, where a particular bias of distributions of variables, values and ranges exists, may be useful in the context of tasks that can be systematically impacted by such a bias (e.g. gravity vs. zero-gravity). Each variable <m>v ~in~ D</m> may take on any value from the associated domain <m>D</m>; for physical domains we can take the domain of variables to be a subset of the real numbers, <m>bbR</m>. ||
| Solution | The set of (atomic) //Actions// that can achieve one or more //Goals// in one or more //Task-Environments//. ||
| Action | The changes an //Agent// can make to variables relevant to a //Task-Environment//. ||
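The //Constraint//-as-negative-Goal idea can be sketched with the same bounds test used for Goals, negated for the sub-states to be avoided; the helper names (''in_substate'', ''acceptable'') and the example variables are illustrative only.

<code python>
# Minimal sketch (illustrative names): a Constraint as a "negative Goal",
# i.e. a sub-state to be avoided, checked with the same bounds test as a Goal
# but with the result negated.

def in_substate(world: dict, bounds: dict) -> bool:
    return all(name in world and lo <= world[name] <= hi
               for name, (lo, hi) in bounds.items())

def acceptable(world: dict, goal: dict, constraints: list) -> bool:
    """The world satisfies the Goal and avoids every constrained (negative) sub-state."""
    return in_substate(world, goal) and not any(in_substate(world, c) for c in constraints)

# Example: reach the target region without entering the over-temperature region.
goal        = {"arm_x": (0.9, 1.1)}
constraints = [{"motor_temp": (80.0, float("inf"))}]   # State to be avoided
world       = {"arm_x": 1.0, "motor_temp": 55.0}
print(acceptable(world, goal, constraints))             # True
</code>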
===='Problem'====
| \\ Definition | In everyday use the term is a flexible one that can generally be applied to any part of a task that, more or less, prevents goals (or subgoals) from being achieved effortlessly or directly. It assumes a non-empty set of agent goals for which it is relevant (otherwise anything and everything would be a 'problem'). ||
| Closed Problem | A //Problem// that may be assigned as a //Task// to a particular //Agent// with known **time & energy** for producing a //Solution// (achieving a //Goal//); see the sketch after this table. ||
| | Alternative formulation | A known //Goal// for which there exists a known //{Agent, Task}// tuple that can achieve it, where the //Task// is fully specified in the //{Agent Knowledge}+{Instructions}// tuple (including all required parameters for assigning its completion). |
| | A medium or small number of //Solutions//. |
| | Instructions of varying detail are possible. |
| \\ Novelty | Novelty is unavoidable. \\ In other words, any AI operating in such a world will encounter unforeseen circumstances. Accordingly, it should be able to handle them, since that is essentially why intelligence exists in the first place. |
| \\ What that means | 1. Since no agent will ever know everything, no agent (artificial or natural) can treat its knowledge as axiomatic: knowledge must be assumed to be non-axiomatic (revisable). \\ 2. It cannot be assumed that all knowledge that the AI system needs to know is known by its designer up front. This means it must acquire its own knowledge. All advanced AI systems must be //cumulative learners//. \\ 3. Since it must acquire its own knowledge, incrementally, knowledge acquisition will introduce //knowledge gaps and inconsistencies//. \\ 4. A cumulative learning agent will continuously live in a state of //insufficient knowledge and resources// (with respect to perfect knowledge), due to the physical world's //limited time and energy//. |
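As a rough illustration of the //Closed Problem// definition above, the sketch below bundles a Goal, a particular Agent, a known time & energy budget, and the //{Agent Knowledge}+{Instructions}// specification into one record. This is a hypothetical structure chosen for illustration, not the course's formal definition.

<code python>
# Minimal sketch (hypothetical structure): a Closed Problem ties a Goal to a
# particular Agent, with time & energy for producing a Solution known up front,
# and the Task fully specified by what the Agent knows plus its Instructions.

from dataclasses import dataclass, field

@dataclass
class ClosedProblem:
    goal: dict                      # sub-state to attain (variable -> bounds)
    agent_id: str                   # the particular Agent the Task is assigned to
    known_time: float               # seconds known to suffice for a Solution
    known_energy: float             # joules known to suffice for a Solution
    agent_knowledge: set = field(default_factory=set)   # what the Agent already knows
    instructions: list = field(default_factory=list)    # the rest of the Task specification

    def fully_specified(self, required_items: set) -> bool:
        """Closed in the above sense: everything the Task requires is covered by
        {Agent Knowledge} + {Instructions}."""
        return required_items <= (self.agent_knowledge | set(self.instructions))

# Toy usage with made-up knowledge items.
p = ClosedProblem(goal={"door_open": (1.0, 1.0)}, agent_id="robot-1",
                  known_time=30.0, known_energy=500.0,
                  agent_knowledge={"how-to-grasp"},
                  instructions=["turn-handle", "push-door"])
print(p.fully_specified({"how-to-grasp", "turn-handle", "push-door"}))  # True
</code>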