
T-720-ATAI-2019

Lecture Notes, W3: Tasks, Environments, Goals




Task-Environments: Key Concepts

Environment A set of constraints relevant to a Task but not counted as part of a Task proper.
Task-Environment A Task is performed in an Environment, whereby the Task has certain variables that are necessary and sufficient <m>V_T</m>, and the Environment has certain variables <m>V_E</m> that can normally be ignored but that may, at any point in time or in any instance of the Task, influence <m>V_T</m>. In complex Environments and complex Tasks (e.g. in the physical world) the number and nature of such variables and their interactions cannot be enumerated; we speak of task-environments so as to not have to discuss what variables are “truly” part of a Task and which are “truly” part of the Environment.
World A set of constraints that a set of Environments have in common.
Task A Problem (or Goal) that can be assigned. Typically comes with Instructions (guide to Solution production).
Problem An unachieved Goal with (all) relevant constraints (≈ negative goals, requirements).
Goal A (future, sub-) State to be attained, plus (optional) constraints on the Goal (if well defined).
State A particular set of values (with error bounds) that a subset of variables in the Environment can take.
State Space The full set of values that the variables in an Environment can take.
Goal State A set of values (with error bounds) for a set of variables relevant to a Goal.
Task Space The range of variations on a Task that could still achieve a Goal.
Constraint A set of limits on a state space, e.g. subset of a solution space that is excluded; in other words a goal that describes states to be avoided – a “negative goal”.
Solution The set of (atomic) actions that can achieve a Goal. May be at various levels of specificity.
Solution Space The amount of variation allowed on a State while still counting as a Solution to a Problem.
Action The changes an Agent can make to variables relevant to a Task-Environment; an atomic unit of change that can be meaningfully referred to when constructing and modifying Plans.
Plan A partial (description of a) way to accomplish a Task. A set of sequential Actions that may be performed in an Environment to achieve a Goal.
Instructions Partial Plan for accomplishing a Task, typically given to an Agent along with a Task by a Teacher.
Teacher The assignor (Agent or entity) assigning a Task to another Agent (student), optionally in charge of Instructions.
Task Constraint Limits the allowed Solution Space for a Problem. Can help or hinder the achievement of a Task.
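As a reading aid, here is a minimal data-structure sketch of how some of the concepts above relate (Goal, Problem, Plan, Task, Instructions). All class and field names are illustrative assumptions, not definitions from the course material.

<code python>
# Illustrative sketch only: one possible encoding of the concepts above.
# Names and fields are assumptions for illustration, not course definitions.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Goal:
    """A (future, sub-) State to be attained, with optional constraints."""
    target_state: dict[str, float]                         # variable name -> target value
    error_bounds: dict[str, float] = field(default_factory=dict)
    constraints: list[str] = field(default_factory=list)   # "negative goals": states to avoid

@dataclass
class Problem:
    """An unachieved Goal together with all relevant constraints."""
    goal: Goal
    constraints: list[str] = field(default_factory=list)

@dataclass
class Plan:
    """A (partial) set of sequential Actions that may achieve a Goal."""
    actions: list[str] = field(default_factory=list)

@dataclass
class Task:
    """A Problem that can be assigned, typically with Instructions."""
    problem: Problem
    instructions: Plan | None = None                       # partial Plan supplied by a Teacher
</code>

A Teacher, in these terms, would be whatever Agent constructs a Task and hands it to a student Agent together with (optional) Instructions.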



Limited Time & Energy (LTE)

Task All Tasks have limited time & energy: no Task exists that can be performed with infinite energy, or for which infinite time is available for achieving it.
Limited time No Task, no matter how small, takes zero time.
≈ “Any task worth doing takes time.”
≈ All implemented intelligences will be subject to the laws of physics.
Limited energy No Task, no matter how small, takes zero energy.
≈ “Any task worth doing takes energy.”
≈ All implemented intelligences will be subject to the laws of physics.
No task takes zero time or energy If <m>te</m> is a function that returns the time and energy required, then for an act of perception <m>p</m>,
<m>te(p in P) > 0</m>,
for a commitment (“decision”) <m>d</m>,
<m>te(d in D) > 0</m>,
and for an action <m>a</m>,
<m>te(a in A) > 0</m>.
Since any Task requires at the very least one of each of these (in the minimum case, to decide whether the Goal of a Task <m>T</m> has been achieved), it follows deductively that
<m>te(T) > 0</m>
(see the sketch below).
Anything that takes zero time or zero energy is by definition not a Task.
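The deduction above can be made concrete in a few lines of code. This is a minimal sketch, assuming hypothetical per-step costs; the only property it relies on is that every atomic step (perception, decision, action) takes strictly positive time and energy.

<code python>
# Minimal sketch of the LTE argument: te(T) > 0 for any Task T.
# The numeric costs are hypothetical placeholders; the argument only
# needs each atomic step to cost strictly positive time and energy.

def te(step: str) -> tuple[float, float]:
    """Return the (time, energy) consumed by one atomic step."""
    costs = {
        "perception": (0.002, 0.001),   # te(p in P) > 0
        "decision":   (0.001, 0.001),   # te(d in D) > 0
        "action":     (0.010, 0.050),   # te(a in A) > 0
    }
    return costs[step]

def te_task(steps: list[str]) -> tuple[float, float]:
    """Sum time and energy over the atomic steps that make up a Task."""
    return (sum(te(s)[0] for s in steps), sum(te(s)[1] for s in steps))

# Minimum case: deciding whether the Goal of T is achieved still needs
# at least one perception, one commitment ("decision") and one action.
time_needed, energy_needed = te_task(["perception", "decision", "action"])
assert time_needed > 0 and energy_needed > 0   # hence te(T) > 0
</code>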



Time Scales to be Addressed by AGI Systems

From Card, Moran & Newell, The Psychology of Human-Computer Interaction (1983).


Problems in Task-Environments

Closed Problem May be assigned as a Task with known Time & Energy for achieving a solution.
Example Doing the dishes.
Plans for Closed problems Can be reliably produced, e.g. in the form of Instructions.
Open Problem A Problem whose solution is unknown and cannot be obviously assumed from analogy with similar problems whose solution is known. Cannot be guaranteed a solution with LTE.
Example Any research problem for which no known solution exists.
Plans for Open problems Cannot be reliably produced.



What Kind of Worlds should AGI systems handle?

Novelty Novelty is encountered by a controller in Worlds whose variety is vastly larger than what the controller can store in a lookup table. Interaction between Environmental elements (subsets of <m>V_E</m>) produces phenomena of a different kind than the elements that interact.
Balance The Worlds we are interested in strike a balance between completely dynamic and completely static.
They also strike a balance between completely deterministic and completely random; some regularity must exist at a level that is observable initially by a learning agent.
Completely static No (or little) need for learning or intelligence.
Completely random Learning or intelligence is of no use.



What Kind of Task-Environments do AGI Systems Target?

Worlds Complex, intricate worlds, large number of variables (relative to the system's CPU and memory).
Complexity lies somewhere between randomness and regularity.
Many levels of temporal and spatial detail.
Ultimately, any system worthy of being called “AGI” must be capable of successful operation in the physical world.
Environments Somewhere between random and static. Dynamic; large number of variables (relative to the system's CPU and memory capacity).
Many levels of temporal and spatial detail.
Tasks Dynamic; large number of variables (relative to the system's CPU and memory capacity). Underspecified.
Goals Multiple goals can easily be specified.
Solutions New solutions can be found.



How it Hangs Together: Worlds, Environments, Tasks, Goals

World A set of variables with constraints and relationships.
<m>W = {lbrace V,F rbrace}</m>
where <m>V</m> is a set of variables and <m>F</m> is a set of transition functions / rules describing how the variables can change.
Static World Changes State only through Agent Action.
Dynamic World Changes State through Agent Action and through other means.
Physical World In a physical world
<m>W = lbrace x_1, x_2, … x_n, f_1, f_2, … f_m rbrace</m>
where the <m>x</m> are real-valued variables,
<m>V_{t+delta} = F(V_t)</m>,
and
<m>lbrace dot{x}_1, dot{x}_2, … dot{x}_n rbrace</m>
represent the first derivatives of the variables during continuous change.
State A set of values (with constraints, e.g. error bounds) for a set of variables <m>x</m> relevant to a World.
For all practical purposes, in any complex World we will speak of “State” even for sub-states, as most useful States will be sub-states, since there will always be a vastly higher number of “don't care” variables than the variables listed for e.g. a Goal State.
State definition <m>S = V</m>
where
<m>lbrace x_l, x_u rbrace</m>
define the lower and upper bounds, respectively, of the acceptable range for each <m>x</m> to count towards the State, i.e. <m>x_l <= x <= x_u</m> (see the sketch following these definitions).
Environment <m>E = lbrace V_E, F_E rbrace + C</m>
where <m>V_E subset V</m>, <m>F_E subset F</m>, and <m>C</m> are additional constraints on <m>V, F</m>, with some values for <m>V</m> fixed.
Task A Problem that can be assigned in an Environment. Typically comes with Instructions (guide to Solutions, partial Solution or full Solution - at some level of maximum detail).
“Task” definition An assigned Problem.
Task-Environment An Environment in which one or more Tasks may be assigned.
Problem A Goal with (all) relevant constraints imposed by a Task-Environment.
Goal A (future) (sub-) State to be attained during period t, plus other optional constraints on the Goal.
“Goal” definition <m>G subset S</m>, attached to a Problem.
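To show how these definitions hang together operationally, here is a minimal sketch in code: a World as variables plus transition functions, discrete updates <m>V_{t+delta} = F(V_t)</m>, a State as bounds <m>lbrace x_l, x_u rbrace</m> on a subset of the variables, and a Goal as a target sub-State. The variable names, the Euler-style update and all numeric values are assumptions made purely for illustration.

<code python>
# Illustrative sketch of World / State / Goal from the definitions above.
# Variable names, the update rule and the numbers are assumptions for illustration.

# A World W = {V, F}: variables plus transition functions over them.
V = {"x1": 0.0, "x2": 1.0}                       # real-valued variables

def F(v: dict[str, float], delta: float) -> dict[str, float]:
    """One transition step V_{t+delta} = F(V_t), here a simple Euler update
    using (hypothetical) first derivatives dot{x} of the variables."""
    dot = {"x1": 0.5, "x2": -0.1}                # stand-in for continuous change
    return {name: value + delta * dot[name] for name, value in v.items()}

# A State: lower/upper bounds {x_l, x_u} on a subset of the variables;
# "don't care" variables are simply omitted.
GoalState = {"x1": (0.9, 1.1)}                   # x_l <= x1 <= x_u

def in_state(v: dict[str, float], state: dict[str, tuple[float, float]]) -> bool:
    """True if every bounded variable lies within its acceptable range."""
    return all(lo <= v[name] <= hi for name, (lo, hi) in state.items())

# Run the dynamic World forward until the Goal (a sub-State) is attained.
v, t, delta = dict(V), 0.0, 0.1
while not in_state(v, GoalState):
    v = F(v, delta)
    t += delta
print(f"Goal state reached at t = {t:.1f}, V = {v}")
</code>

Note that this World changes at every step whether or not an Agent acts, i.e. it is Dynamic in the sense above; a Static World would change State only through Agent Action.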



Family A set whose members share one or more common traits within some sensible (defined) allowed variability, defined in terms of one or more of: the types of variables, the number of variables, or the ranges of these variables.
Problem Family A set of problems that are similar in important ways; a Problem and its variations.
Domain A Family of Environments.
Action The changes an Agent can make to variables relevant to a Task-Environment.
Instructions Partial description of a Plan for accomplishing a Task, typically given to an Agent along with a Task by a Teacher.
Teacher The Agent assigning a Task to another Agent (student).





2019©K.R.Thorisson

EOF