[[/public:t-720-atai:atai-24:main|T-720-ATAI-2024 Main]] \\
[[/public:t-720-atai:atai-24:lecture_notes|Lecture Notes 2024]]

\\
\\

====== WORLDS & ENVIRONMENTS ======
\\
\\
==== The Physical World ====
|  \\ What it is  | A set of constraints that determine what is and isn't possible. We call these constraints "the laws of physics" (even though we don't know whether they are immutable 'laws').    |
|  Interaction  | We think of the 'world' and 'intelligent beings' as separate processes. It is the interaction between intelligence and the world that is the focus of study in artificial intelligence. \\ To be capable of adaptation, an agent must measure (some part of) the world. This is done via sensors.  |
|  Sensors are physical  | Animals and robots perceive the physical world through sensors. \\ These sensors are made from the same stuff as that which they perceive, and are subject to the same laws of physics.  |
|  \\ "The real world"  | ... is a hypothesized phenomenon, based on our collective experience of it and on the apparent coordination of this experience with the (experienced) experience of other, similar beings. \\ René Descartes, the French philosopher, famously claimed "I think, therefore I am": he recognized that the only certainty we have of anything is that we perceive in the here-and-now.  |
|  Artificial Worlds  | We may conceive of any "world" that follows different rules than our own. \\ Such worlds are potential worlds for AI systems, just as the physical world is.   |
|  However ...  | Any //implemented// world, whether abstract or otherwise, must bow to the nature of the physical universe, because implementation means //physical incarnation//.   |
|  Hence  | The nature of our physical universe is fundamental in AI.   |
|  A Question of (Un)certainty  | The physical world, and in fact many artificial ones as well, is uncertain, meaning that there is a lot about it that we don't know.    |
|  Reliable Regularity  | To do anything //reliably// means depending on //reliable regularity//, which is conducive to prediction.  |
|  AI Boils Down To  | Building machines that can figure out what can be reliably achieved in uncertain worlds.  |
|  \\ \\ Abstract Worlds  | We may of course define any kind of "world" of our choosing. However, if it is to be **implemented** it must run on some physical substrate, be it an abacus, transistors, light, or something else, and since that substrate //must obey physical laws//, it follows that \\ //1. an abstract AI that cannot be implemented is not intelligent (but it could be a **blueprint** for something else), and \\ 2. any AI must be able to address - using intelligence - physical properties//.   |
  
\\
====Worlds: How it Hangs Together====
  
|  **W**: \\ A World  | **W = { V, F, S<sub>0</sub>, R }** \\ //(See the code sketch below.)//   ||
|  **V**: \\ Variables  | **V = { v<sub>1</sub>, v<sub>2</sub>, . . . , v<sub>||V||</sub> }**   ||
|  **F**: \\ Transition Functions  | **F** is a set of transition functions / rules describing how the variables can change. \\ The dynamics can intuitively be thought of as the world’s “laws of nature”, continually transforming the world’s current state into the next: **S<sub>t+δ</sub> = F(S<sub>t</sub>)**.  ||
|  **C**: \\ A World Clock  | The clock drives the Transition Functions. \\ In the physical world **C** updates **F** (including energy transfer), irrespective of anything and everything else that may happen in the World, constraining how much can happen per unit of time.   ||
|  **S<sub>0</sub>**: \\ Initial State  | **S<sub>0</sub>** is the State that **W** started out in. \\ In any complex world this is unlikely to be known; for artificial worlds it may be defined.  ||
|  \\ **R**: \\ Relations  | **R** are the relations between variables in the world. These may be unknown or only partially known to an //Agent// in the world.   ||
|  :::    |  Static World  | Changes //State// only through //Agent Action//.  |
|  :::    |  Dynamic World  | Changes //State// through //Agent Action// and through other means.   |
|  \\ State  | **s<sub>t</sub> ⊆ V<sub>t</sub>**: a set of variables **x** with a set of values, specified to some particular precision (with constraints, e.g. error bounds), relevant to a //World//. \\ For all practical purposes, in any complex World "State" refers by default to a sub-state, since it is a practical impossibility to know the full state (the values of the complete set of variables) of such a world; there will always be a vastly higher number of "don't care" variables than the variables listed for e.g. a //Goal State// (a //State// associated with a //Goal//).  ||
|  \\ State \\ definition  | **s<sub>t</sub> ⊆ V<sub>t</sub>** \\ where \\ **{ x<sub>l</sub>, x<sub>u</sub> } : x<sub>l</sub> ≤ x ≤ x<sub>u</sub>** \\ defines the lower (**x<sub>l</sub>**) and upper (**x<sub>u</sub>**) bounds on the acceptable range for each **x** to count towards the State.    ||
|  Exposable Variables  | Variables in **V** that are measurable and/or manipulatable //in principle//.  ||
|  Observable Variables  | Variables in **V** that can be measured for a particular interval in time are //observable// during that interval.    ||
|  Manipulatable Variables  | Variables in **V** whose value can be affected, either directly or indirectly (by an //Agent// or something else).   ||
|  Measurement  | The only way to "capture the world", e.g. for the purposes of getting something done, is by sampling some of the physical properties of the world's variables at a particular time and place. This is what we call a 'measurement'.  ||
|  Data  | The outcome and record of a (stored) measurement that has been committed to at a particular time and place.   ||
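To make the tuple **W = { V, F, S<sub>0</sub>, R }** and the clock **C** concrete, here is a minimal, illustrative code sketch (Python). It is not part of the course formalism; the class and variable names, the toy "law of nature" and all numbers are made up for illustration only.

<code python>
# A toy rendering of W = {V, F, S0, R} with a clock C.
# Everything here is illustrative; variable names and dynamics are made up.

from typing import Callable, Dict

State = Dict[str, float]                      # a (sub-)state: variable name -> value

class World:
    def __init__(self, s0: State, f: Callable[[State], State]):
        self.state = dict(s0)                 # S0: initial state
        self.f = f                            # F: transition function ("laws of nature")
        self.t = 0                            # C: the world clock

    def tick(self):
        """C drives F: one clock step transforms S_t into S_{t+delta}."""
        self.state = self.f(self.state)
        self.t += 1

    def measure(self, observable: str) -> float:
        """A measurement: sampling one observable variable at a particular time."""
        return self.state[observable]         # once recorded, the returned value is 'data'

# F: a made-up law coupling two variables (one observable, one hidden to the agent)
def f(s: State) -> State:
    return {"temperature": s["temperature"] + 0.1 * s["heat_flow"],
            "heat_flow": 0.95 * s["heat_flow"]}

w = World(s0={"temperature": 20.0, "heat_flow": 2.0}, f=f)
for _ in range(5):
    w.tick()
print(w.t, w.measure("temperature"))          # data: a recorded measurement at time t=5
</code>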
  
  
  
  
====Laplace's Demon====
|  Concept  | If a world is deterministic, and everything in it is caused from the ground up, from the smallest parts, then everything in that world is pre-determined by its starting state. //(See the code sketch below.)//     |
|  \\ Laplace  | "In the history of science, Laplace's demon was the first published articulation of causal or scientific determinism, by Pierre-Simon Laplace in 1814. According to determinism, if someone (the demon) knows the precise location and momentum of every atom in the universe, their past and future values for any given time are entailed; they can be calculated from the laws of classical mechanics." [[https://en.wikipedia.org/wiki/Laplace's_demon|source: Wikipedia]]   |
|  Hume  | David Hume's theory of causation states that cause-and-effect relationships are not a product of natural law or universal truth; rather, we come to expect them because we habitually associate events that we have experienced together.   |
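A minimal sketch of Laplace's point in code: in a deterministic world the entire trajectory is entailed by the transition function and the initial state, so two runs from the same **S<sub>0</sub>** are identical. The transition function and numbers below are invented for illustration.

<code python>
# Illustrative only: a deterministic world's future is fully entailed by its initial state.
# The transition function and state are made up for demonstration.

def f(state):
    x, v = state
    return (x + 0.01 * v, v - 0.01 * x)       # fixed "law of nature": no randomness

def trajectory(s0, steps):
    s, history = s0, [s0]
    for _ in range(steps):
        s = f(s)
        history.append(s)
    return history

# Laplace's point: knowing S0 and F exactly entails every future state.
assert trajectory((1.0, 0.0), 1000) == trajectory((1.0, 0.0), 1000)

# Uncertainty about S0 (here, a tiny measurement error) breaks that entailment in practice.
a = trajectory((1.0, 0.0), 1000)[-1]
b = trajectory((1.0 + 1e-9, 0.0), 1000)[-1]
print(a, b)   # still close here, but only because this toy F is well-behaved
</code>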
  
\\
  
====Formalization====
| \\ \\ \\ \\ The Physical World   | <m>V = {lbrace}{x_1, ~x_2, ~... ~x_n}{rbrace}</m>, \\ <m>F={lbrace}{f_1, ~f_2, ~... ~f_n}{rbrace}</m>, \\ where <m>x</m> are real-valued variables, <m>f</m> are transition functions, and \\ <m>V_{t+delta} = F(V_{t})</m>. \\ During continuous change, \\ <m>{lbrace}{x}over{.}_1, ~{x}over{.}_2, ~... ~{x}over{.}_n {rbrace}</m> \\ represent the first derivatives of the variables. \\ Note that for the physical world <m>||V||~=~infty</m> and <m>||F||~=~infty</m>.   |

\\
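As a rough illustration of the role of the first derivatives during continuous change, the sketch below approximates <m>V_{t+delta} = F(V_{t})</m> by stepping a small <m>delta</m> forward in time (Euler's method). The chosen dynamics (cooling toward an ambient temperature) and all numbers are made up.

<code python>
# Illustrative sketch: approximating continuous change, where the first derivative
# x-dot = f(x) is given, by stepping a small delta forward in time (Euler's method).
# The chosen dynamics (Newtonian cooling) are made up.

def x_dot(x, ambient=20.0, k=0.1):
    """First derivative of the variable: dx/dt = -k (x - ambient)."""
    return -k * (x - ambient)

def simulate(x0, delta=0.01, steps=5000):
    x = x0
    for _ in range(steps):
        x = x + delta * x_dot(x)     # V_{t+delta} ~ V_t + delta * x_dot(V_t)
    return x

print(simulate(90.0))  # after 50 simulated time units the temperature is close to ambient (~20)
</code>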
====Time Scales of the Physical World Relevant to AI====
  
  
====Maxwell's Demon====
|  {{public:t-720-atai:maxwellsdemon.png?500}} \\ Source: [[https://en.wikipedia.org/wiki/Maxwell%27s_demon|Wikipedia]] \\ By User:Htkym - Own work, CC BY 2.5, [[https://commons.wikimedia.org/w/index.php?curid=1625737|REF]]  ||
|  \\ A Thought Experiment  | Imagine a container divided into two parts, A and B. Both parts are filled with the same gas at equal temperatures and placed next to each other. Observing the molecules on both sides, an imaginary demon guards a trapdoor between the two parts. When a faster-than-average molecule from A flies towards the trapdoor, the demon opens it, and the molecule will fly from A to B. Likewise, when a slower-than-average molecule from B flies towards the trapdoor, the demon will let it pass from B to A. The average speed of the molecules in B will have increased, while in A they will have slowed down. Since average molecular speed corresponds to temperature, the temperature decreases in A and increases in B, contrary to the second law of thermodynamics. A heat extractor operating between the thermal reservoirs A and B could extract energy from this temperature difference, creating a perpetual motion machine. [ Adapted from [[https://en.wikipedia.org/wiki/Maxwell%27s_demon|Wikipedia]] ]  |
|  \\ The Error  | The thought experiment is flawed because the demon must be part of the same system that the container is part of; thinking (or computation, if the demon is a robot) requires time and energy, and so whatever heat is saved in the container will be spent to run the demon's thinking processes. This was first pointed out by Leo Szilard in 1929 in {{/public:t-720-atai:szilard-1929-entropy-intelligent-beings.pdf|"On the Decrease of Entropy in a Thermodynamic System by the Intervention of Intelligent Beings"}}.  |
  
  
  
\\
====The Physical World & AI Research====
  
|  \\ Learning  | Some unobservable //exposable variables// can be made //observable// (exposed) through //manipulation//. For any //Agent// with a set of //Goals// and limited knowledge but an ability to learn, which variables may be made observable, and how, is the subject of the //Agent's// learning.  ||
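An illustrative sketch of that idea: a variable that is //exposable// but not //observable// until the agent performs the right //manipulation//. The "box world", its variables and values are hypothetical, invented only for this example.

<code python>
# Illustrative sketch: an 'exposable' variable that is unobservable until the agent
# manipulates the world (here, opening a box). All names and rules are made up.

class BoxWorld:
    def __init__(self):
        self._weight = 3.7        # exposable, but hidden until exposed
        self.box_open = False     # manipulatable and observable

    def act(self, action):
        if action == "open_box":
            self.box_open = True  # manipulation changes what can be measured

    def measure(self, var):
        if var == "box_open":
            return self.box_open
        if var == "weight" and self.box_open:
            return self._weight   # now observable: the manipulation exposed it
        return None               # not observable at this time

w = BoxWorld()
print(w.measure("weight"))        # None: unobservable before manipulation
w.act("open_box")
print(w.measure("weight"))        # 3.7: exposed through manipulation
</code>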
  
\\
\\

====Empiricism====
|  What it is  | The idea that all knowledge comes from experience -- the senses. \\ In AI it also means that this experience comes from the physical world, through physical sensors.    |
|  Why it matters  | Before the emphasis on empirical knowledge, science had no obvious way to rise above "other sources of knowledge," including old scriptures, intuition, religious beliefs, or information produced by oracles.  |
|  Empiricism & Science  | The fundamental source of information in (empirical, i.e. experimental) science is experience, which eventually became formalized as the **comparative experiment**.   |
|  \\ [[https://sciencing.com/calculate-significance-level-7610714.html|Comparative Experiment]]  | A method whereby two experimental conditions are compared, where they are identical except for one or a few strategic differences that the experimenters introduce. The outcome of the comparison is used to infer causal relations. Often called "the scientific method", this is the most dependable method for creating reliable, sharable knowledge that humanity has come up with. //(See the code sketch below.)//   |
|  [[https://www.britannica.com/topic/logical-positivism|Logical Positivism]]  | A philosophical school of thought closely related to empirical science.  |
|  [[https://www.britannica.com/topic/rationalism|Rationalism]]  | Historically a philosophical view opposed to empiricism, contending that knowledge is produced by innate ideas and reason, not through experience.  |
|  Empirical Rationalism  | Both pure empiricism and pure rationalism are exaggerated views of where knowledge comes from. The sensible middle ground is that knowledge is bootstrapped by innate processes, on which further knowledge is built through experience. This is the philosophical view that we take in this course.    |
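A minimal sketch of a comparative experiment, assuming a simulated "world" in which the outcome depends on one manipulated factor plus noise; the two conditions are identical except for that factor. All names and numbers are made up for illustration.

<code python>
# Illustrative sketch of a comparative experiment: two conditions that are identical
# except for one manipulated factor. The "world" below is simulated and made up.

import random
random.seed(1)

def run_trial(factor_on: bool) -> float:
    """Outcome = baseline + effect of the manipulated factor + identical noise process."""
    baseline, effect, noise = 10.0, 2.5, random.gauss(0, 1.0)
    return baseline + (effect if factor_on else 0.0) + noise

condition_a = [run_trial(False) for _ in range(100)]   # factor absent
condition_b = [run_trial(True) for _ in range(100)]    # factor present - the only difference

mean_a = sum(condition_a) / len(condition_a)
mean_b = sum(condition_b) / len(condition_b)
print(f"mean A = {mean_a:.2f}, mean B = {mean_b:.2f}, difference = {mean_b - mean_a:.2f}")
# Because the two conditions differ only in the manipulated factor, the observed
# difference can be attributed to that factor (within the noise of the sample).
</code>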
  
\\
  
====Task====

|  Task  | A //Problem// that can be assigned in an //Environment//. It typically comes with //Instructions// (some form of encoding of the task must exist for it to be assignable to a performing agent). \\ An //assigned Task// has a designated Agent to do it, a start-time, an end-time, and a maximum duration. //(See the code sketch below.)//  ||
|  Problem  | A //Goal// with (all) relevant constraints imposed by a particular Task-Environment (where particular ranges of variable values, and particular groupings of elements, can be assumed).  ||
|  Goal  | <m>G subset S</m>. A (future) (sub-)State to be attained during period //t//, plus other //optional// constraints on the Goal.  ||
|  **LTE**  | All tasks are subject to limited time & energy: no Task exists that can be performed with infinite energy, or for which infinite time is available.  ||
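The sketch below renders the Task / Goal / LTE definitions above as simple data structures. It is illustrative only; the field names, the bounds-based goal test, and the "doing the dishes" numbers are assumptions made for the example.

<code python>
# Illustrative sketch of the Task / Problem / Goal / LTE definitions above.
# All class names, fields and numbers are hypothetical.

from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class Goal:
    # A (future) sub-state to be attained: variable -> (lower bound, upper bound)
    target: Dict[str, Tuple[float, float]]

    def achieved_in(self, state: Dict[str, float]) -> bool:
        return all(lo <= state.get(var, float("nan")) <= hi
                   for var, (lo, hi) in self.target.items())

@dataclass
class Task:
    goal: Goal
    instructions: str        # some encoding of the task, so it can be assigned
    assigned_to: str         # designated agent
    start_time: float
    max_duration: float      # LTE: time is always limited ...
    energy_budget: float     # ... and so is energy

dishes = Task(goal=Goal(target={"dirty_dishes": (0.0, 0.0)}),
              instructions="wash everything in the sink",
              assigned_to="agent-1", start_time=0.0,
              max_duration=30.0, energy_budget=50.0)

print(dishes.goal.achieved_in({"dirty_dishes": 0.0}))   # True: goal state reached
</code>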
====Key Concepts in Empirical Science====
|  Theory  | A scientific (empirical) theory is a "story" about how certain phenomena relate to each other. The more detailed and accurate it is, and the larger the scope it covers, the better the theory.  |
|  Hypothesis  | A statement about how the world works, derived from a theory.   |
|  Experimental design  | A planned interference in the natural order of events.  |
|  Subject(s)  | The subject of interest - that which is to be studied, whether people, technology, natural phenomena, or something else.  |
|  Sample  | Typically you can't study all the **individuals** in a particular subject pool (set), so in your experiment you use a **sample** (subset) and hope that the results gathered from this subset generalize to the rest of the set (the subject pool).  |
|  Between-subjects vs. within-subjects design  | Between subjects: two separate groups of subjects/phenomena are measured. \\ Within subjects: the same subjects/phenomena are measured twice, on different occasions.  |
|  Quasi-experimental  | When conditions do not permit an **ideal** design to be used (a properly controlled experiment is not possible), there may still be some way to control some of the variables. This is called a quasi-experimental design.  |
|  Dependent variable  | The measured variable(s) of the phenomenon you are studying.  |
|  Independent variable  | The variable(s) that you manipulate in order to systematically affect (or avoid affecting) the dependent variable(s).  |
|  Internal validity  | How likely is it that the manipulation of the independent variables caused the effect seen in the dependent variables?  |
|  External validity  | How likely is it that the results generalize to other instances of the phenomenon under study?  |
  
\\
  
====Controlled Experiment====
|  What is it?   | A fairly recent research method, historically speaking, for testing hypotheses / theories.  |
|  When  | When it is possible to control and select everything of importance to the subject of study.  |
|  How  | Select subjects freely, randomize samples, remove experimenter effects through a double-blind procedure, use control groups, and select independent and dependent variables as necessary to answer the questions raised.  |
|  Why randomize?  | Given a complex phenomenon, it is impossible to know all potential causal chains that may exist between the various elements under study. Randomization lessens the probability of systematic bias in factors that are not under study but could affect the results and thus imply different conclusions. //(See the code sketch below.)//  |
|  What is randomized?  | The sample should be randomized; subjects should be randomly assigned to the control group versus the experimental group; so should any independent variable that could affect the results but is not of interest to the research at hand.  |
|  Bottom line  | The most powerful mechanism for generating reliable knowledge known to mankind.  |
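A small sketch of a controlled experiment with random assignment, assuming simulated subjects that carry an unknown confounding trait; randomization spreads the confound roughly evenly over the control and experimental groups. All values and names are invented.

<code python>
# Illustrative sketch of a controlled experiment with random assignment.
# Subjects, effect sizes and the hidden confound are all simulated / made up.

import random
random.seed(42)

# Each subject has an unknown confounding trait that also affects the outcome.
subjects = [{"id": i, "confound": random.gauss(0, 1)} for i in range(200)]

random.shuffle(subjects)                     # randomization of assignment
experimental, control = subjects[:100], subjects[100:]

def outcome(subject, treated: bool) -> float:
    treatment_effect = 1.5 if treated else 0.0          # independent variable's effect
    return 10.0 + treatment_effect + subject["confound"] + random.gauss(0, 0.5)

treated_scores = [outcome(s, True) for s in experimental]    # dependent variable
control_scores = [outcome(s, False) for s in control]        # control group

mean = lambda xs: sum(xs) / len(xs)
print(f"treated: {mean(treated_scores):.2f}  control: {mean(control_scores):.2f}")
# Random assignment spreads the unknown confound roughly evenly across groups,
# so the difference in means estimates the treatment effect (about 1.5 here).
</code>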
  
====Environment====

|  \\ Environment  | <m>e ~=~ {lbrace {V ~subset ~V_W}, ~{F ~subset ~F_W} rbrace} ~+ ~C</m>  \\ where <m>C</m> are additional constraints on <m>V</m> and <m>F</m>, and \\ some values for <m>V</m> are (typically) fixed within ranges. //(See the code sketch below.)//   ||
|  Task-Environment  | <m>T_e ~subset ~W</m> \\ An Environment in which one or more Tasks may be assigned and performed.   ||
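A minimal sketch of an //Environment// as a constrained subset of a World's variables, with some values fixed within ranges (the constraints <m>C</m>). The variable names and ranges below are hypothetical.

<code python>
# Illustrative sketch: an Environment as a constrained subset of a World's variables,
# with some values fixed within ranges. Names and ranges are made up.

world_variables = {"temperature", "pressure", "humidity", "gravity", "light_level"}

environment = {
    "variables": {"temperature", "humidity", "light_level"},   # V subset of V_W
    "constraints": {                                            # C: fixed ranges
        "temperature": (15.0, 25.0),
        "light_level": (0.0, 1.0),
    },
}

def within_environment(state: dict) -> bool:
    """A state is admissible in this Environment if constrained variables stay in range."""
    return all(lo <= state[var] <= hi
               for var, (lo, hi) in environment["constraints"].items())

print(within_environment({"temperature": 21.0, "humidity": 0.4, "light_level": 0.8}))  # True
print(within_environment({"temperature": 31.0, "humidity": 0.4, "light_level": 0.8}))  # False
</code>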
  
\\
- 
====Task-Environments: Key Concepts====

|  Task \\ Family  | A set of tasks that are similar in some (important) ways. \\ Since there exists no proper "Task Theory" as of yet, comparisons of similarity or other dimensions are still an art form.  ||
|  Problem  | A //Goal// with (all) relevant constraints (≈ requirements).  ||
| |  Problem Family  | A set of problems that are similar in some (important) ways; a //Problem// plus variations of that Problem.  |
|  Goal  | A (future) //State// to be attained, plus optional constraints on the //Goal//.  ||
| |  Goal State  | A set of values (with error bounds) for a set of variables relevant to a //Goal//.  |
| |  Goal Family  | A set of goals that are similar in some (important) ways.  |
|  Environment  | A set of constraints relevant to a //Task// but not counted as part of the //Task// proper. \\ Can also be conceived of as negative //Goals// (i.e. //States// to be avoided).  ||
|  Constraint  | A set of factors that limit the flexibility (state space) of that which it constrains.  ||
|  Solution  | The set of (atomic) actions that can achieve a //Goal//.  ||
|  Action  | The changes an //Agent// can make to variables relevant to a //Task-Environment//.  ||
|  Plan  | A partial way (incomplete //Instructions//) to accomplish a //Task//.  ||
|  Instructions  | A partial //Plan// for accomplishing a //Task//, typically given to an //Agent// along with a //Task// by a //Teacher//.  ||
|  Teacher  | The //Agent// (or interactive process) assigning a //Task// to another //Agent// (the student), optionally in charge of //Instructions//.  ||
- 
\\

====Other High-Level Concepts====

|  Constraint  | A set of factors that limit the flexibility of that which it constrains. \\ Best conceived of as a //negative Goal// (a //State// that is to be avoided).   ||
|  Family  | A set whose elements share one or more common traits, within some "sensible" (pre-defined or definable) allowed variability on one or more of: the types of variables, the number of variables, and the ranges of these variables.  ||
|  \\ \\ Domains  | A Family of Environments, <m>D subset W</m>. \\ The concept of 'domains' as subsets of the world, where a particular bias of distributions of variables, values and ranges exists, may be useful in the context of tasks that can be systematically impacted by such a bias (e.g. gravity vs. zero-gravity). Each variable <m>v ~in~ D</m> may take on any value from the associated domain <m>D</m>; for physical domains we can take the domain of variables to be a subset of the real numbers, <m>bbR</m>. //(See the code sketch below.)//  ||
|  Solution  | The set of (atomic) //Actions// that can achieve one or more //Goals// in one or more //Task-Environments//.  ||
|  Action  | The changes an //Agent// can make to variables relevant to a //Task-Environment//.  ||
|  Plan  | A description of a partial //Solution// for accomplishing a //Task//.  ||
|  Teacher  | The Agent assigning a //Task// to another Agent (the student).  ||
|  |  Instructions  | A (partial) //Plan// for accomplishing a //Task//, typically given to an //Agent// by a //Teacher// along with the //Task//. \\ A guide to //Solutions// (partial or full), at some level of maximum available detail.  |
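An illustrative sketch of a //Domain// as a family of Environments sharing a particular bias in the distributions and ranges of their variables (e.g. gravity vs. zero-gravity). The parameter names and ranges are made up for the example.

<code python>
# Illustrative sketch: a Domain as a family of Environments sharing a bias in the
# distributions and ranges of their variables (e.g. gravity vs. zero-gravity).
# All names, ranges and the sampling scheme are invented.

import random

def make_domain(gravity_range, friction_range, seed=0):
    """A Domain: a recipe that generates Environments sharing the same variable biases."""
    rng = random.Random(seed)
    def sample_environment():
        return {"gravity": rng.uniform(*gravity_range),
                "friction": rng.uniform(*friction_range)}
    return sample_environment

earth_like = make_domain(gravity_range=(9.5, 10.1), friction_range=(0.2, 0.9))
zero_g     = make_domain(gravity_range=(0.0, 0.0),  friction_range=(0.0, 0.1))

# Two environments from the same domain differ in detail but share the domain's bias.
print(earth_like(), earth_like())
print(zero_g())
</code>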
- 
\\
  
===='Problem'====

|  \\ Definition  | In everyday use the term is a flexible one that can generally be applied to any part of a task that, more or less, prevents goals (or subgoals) from being achieved effortlessly or directly. It assumes a non-empty set of goals of agents for which it is relevant (otherwise anything and everything would be a 'problem').    ||
|  Closed Problem  | A //Problem// that may be assigned as a //Task// to a particular //Agent// with known **time & energy** for producing a //Solution// (achieving a //Goal//).  ||
| |  Alternative formulation  | A known //Goal// for which there exists a known //{Agent, Task}// tuple that can achieve it, where the //Task// is fully specified in the //{Agent Knowledge}+{Instructions}// tuple (including all parameters required for assigning its completion).   |
| |  Example  | Doing the dishes.   |
| |  Plans for closed problems  | Can be reliably produced, e.g. in the form of //Instructions//.  |
|  Open Problem  | A //Problem// whose solution is unknown and cannot obviously be assumed by analogy with similar problems whose solutions are known. A solution cannot be guaranteed within limited time & energy (LTE).  ||
| |  Example  | Any //Problem// that is the subject of basic research and for which no known solution yet exists.  |
| |  Plans for open problems  | Cannot be reliably produced.  |

\\

====Task-Environment Constraints on Tasks====

|  Solution Constraint  | Reduces the flexibility available for producing a Solution.   |
|  Task Constraint  | Limits the allowed Solution Space for a Problem. Can help or hinder the achievement of a Task.   |
|  Solution Space  | The amount of variation allowed on a State while still counting as a Solution to a Problem.  |
|  Task Space  | The size of the set of variations on a Task that would have to be explored with no up-front knowledge or information about the Solution Space of a Problem.  |
|  Limited \\ time  | No Task, no matter how small, takes zero time. \\ ≈ "Any task worth doing takes time." \\ ≈ All //implemented intelligences// are subject to the laws of physics, which means that for every //Task//, time is limited.  |
|  Limited energy  | "Any task worth doing takes energy." \\ ≈ All //implemented intelligences// are subject to the laws of physics, which means that for every //Task//, energy is limited.   |
|  \\ \\ No task takes \\ zero time \\ or zero energy  | If <m>te</m> is a function that returns time and energy, then an act of perception (a reliable measurement) costs \\ <m>te(p in P) > 0</m>, \\ a commitment to a measurement <m>m</m>, <m>com(m in M)</m> - that is, the measurement is deemed reliable - costs \\ <m>{te({com(m in M)}) > 0}</m>, and an action to record it (to memory) or announce it (to the external world, e.g. by writing) costs \\ <m>te(a in A) > 0</m>. \\ It follows deductively that any Task requires //at least one of these//, possibly all. \\ For a task <m>T</m> whose //Goals// have already been achieved at the time <m>T</m> is assigned to an agent <m>A</m>, <m>A</m> still needs to measure the //Goal State// <m>s</m>, <m>s in S</m>, or at least* //commit// to the measurement of this fact, and then record it. Even in this case we have <m>te(T) > 0</m>. \\ Anything that takes zero time or zero energy is by definition not a Task. //(See the code sketch below.)//     |
|  \\ Limited Time & Energy  | <m>forall ~{lbrace T, ~T_e rbrace}~: ~te(T)~>~0</m>  \\ where <m>T_e subset W</m> is a task-environment in a world, <m>T</m> is a task, and <m>te</m> is a function that returns time and energy.   |

//* This could be the case if, for instance, task// <m>T_1</m>//, assigned to agent// <m>A</m> //at time// <m>t_1</m>//, comes with **Instructions** telling// <m>A</m> //that// <m>T_1</m> //has already been done at the time of its assignment,// <m>t_1</m>//. Sort of like getting a shopping list for your upcoming shopping trip on which at least one item has already been crossed out.//
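A toy accounting sketch of the LTE point above: a <m>te</m>-style function that sums (strictly positive) time and energy costs of perception, commitment, recording and action, so even a task whose //Goal// is already achieved has <m>te(T) > 0</m>. The cost numbers are arbitrary placeholders.

<code python>
# Illustrative sketch of the LTE idea: a te()-style accounting function showing that
# even an "already done" task costs non-zero time and energy (perceive, commit, record).
# All costs are invented placeholders; only their being > 0 matters.

TE_COSTS = {            # (time, energy) per primitive, all strictly positive
    "perceive": (0.02, 0.001),
    "commit":   (0.01, 0.0005),
    "record":   (0.03, 0.002),
    "act":      (0.50, 0.100),
}

def te(primitives):
    """Total (time, energy) for a sequence of task primitives."""
    t = sum(TE_COSTS[p][0] for p in primitives)
    e = sum(TE_COSTS[p][1] for p in primitives)
    return t, e

# A task whose goal is already achieved still requires measuring (or committing to)
# the goal state and recording that fact:
already_done = ["perceive", "commit", "record"]
print(te(already_done))          # > (0, 0): no task takes zero time or zero energy

# A more typical task adds actions on top:
do_the_dishes = ["perceive", "act", "act", "perceive", "commit", "record"]
print(te(do_the_dishes))
</code>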

\\

====What Kinds of Task-Environments are Relevant for AI?====
|  Intelligence = Learning  | Completely static world: no (or little) need for learning. \\ Completely random world: learning is of no use.   |
|  \\ Worlds  | Worlds where intelligence is relevant: \\ - sit somewhere between fully deterministic and completely random (pure noise); \\ - strike a balance between completely dynamic and completely static; \\ - contain some regularity at a level that is observable, at least initially, by a learning agent. //(See the code sketch below.)// |
|  Hierarchy  | Any world that is hierarchically decomposable into smaller detail along time and space. \\ (See "Time Scales of the Physical World", above.)  |
|  Environment  | A large number of potentially relevant variables.  |
|  Task  | Ditto (a large number of potentially relevant variables).    |
|  | A medium or small number of Solutions.  |
|  | Instructions of varying detail are possible.  |
|  \\ Novelty  | Novelty is unavoidable. \\ In other words, unforeseen circumstances will be encountered by any AI operating under such conditions. Accordingly, it should be able to handle them, since that is essentially why intelligence exists in the first place.   |
|  \\ What that means  | 1. Since no agent will ever know everything, no agent (artificial or natural) can assume axiomatic knowledge. \\ 2. It cannot be assumed that all the knowledge an AI system needs is known by its designer up front. This means it must acquire its own knowledge: all advanced AI systems must be //cumulative learners//. \\ 3. Since it must acquire its own knowledge incrementally, knowledge acquisition will introduce //knowledge gaps and inconsistencies//. \\ 4. A cumulative learning agent will continuously live in a state of //insufficient knowledge and resources// (with respect to perfect knowledge), due to the physical world's //limited time and energy//.   |
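A small sketch of the point that learning only pays off between the extremes: a family of toy task-environments parameterized by how much unlearnable noise they mix into an otherwise regular rule. The dynamics and the noise parameter are invented for illustration.

<code python>
# Illustrative sketch: a family of toy task-environments parameterized by how much
# regularity vs. randomness they contain. Only the middle of the range rewards learning.
# The dynamics and the 'noise' parameter are made up.

import random

def make_environment(noise: float, seed: int = 0):
    """noise = 0.0 -> fully regular (static rule); noise = 1.0 -> pure randomness."""
    rng = random.Random(seed)
    def step(x: float) -> float:
        regular_part = 0.9 * x + 1.0          # learnable regularity
        random_part = rng.uniform(-10, 10)    # unlearnable noise
        return (1.0 - noise) * regular_part + noise * random_part
    return step

for noise in (0.0, 0.3, 1.0):
    env = make_environment(noise)
    xs = [env(x) for x in [0.0, 1.0, 2.0, 3.0]]
    print(f"noise={noise}: {', '.join(f'{x:.2f}' for x in xs)}")
# At noise=0.0 prediction is trivial (no learning needed beyond the rule);
# at noise=1.0 learning is useless; in between, learning the regularity pays off.
</code>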

\\
\\
2024(c)K.R.Thórisson