public:t-720-atai:atai-22:task-environment · Last modified: 2024/04/29 13:33 (external edit); previous revision 2022/09/01 10:14 by thorisson
====Determinism vs. Non-Determinism====
|  Is our universe deterministic?  | This is a major question for physics, but ultimately it is of little consequence for those building GMI (general machine intelligence): any agent situated in the physical world will never know the precise position, direction and momentum of all its smallest particles, and thus must always deal with uncertainty.    |
|  Regularity  | A world with no regularity is pure noise. In such a world intelligence is useless. |
|  Pure Determinism  | A world that is completely deterministic is pre-determined at all levels for all eternity; in such a world there is no concept of choice, and hence no relevance for intelligence.   |
|  "Axiomatic AI"  | Some mathematicians believe the universe to be fundamentally mathematical, and see the role of science (and mathematics) as finding its "ultimate formula". Many AI researchers seem to subscribe to such a view. We'll come back to that in a bit.   |

\\
  
====Maxwell's Demon====
|  {{public:t-720-atai:maxwellsdemon.png?500}} \\ Source: [[https://en.wikipedia.org/wiki/Maxwell%27s_demon|Wikipedia]] \\ By User:Htkym - Own work, CC BY 2.5, [[https://commons.wikimedia.org/w/index.php?curid=1625737|REF]] ||
|  \\ A Thought Experiment  | Imagine a container divided into two parts, A and B. Both parts are filled with the same gas at equal temperatures and placed next to each other. Observing the molecules on both sides, an imaginary demon guards a trapdoor between the two parts. When a faster-than-average molecule from A flies towards the trapdoor, the demon opens it, and the molecule flies from A to B. Likewise, when a slower-than-average molecule from B flies towards the trapdoor, the demon lets it pass from B to A. The average speed of the molecules in B will have increased while in A they will have slowed down. Since average molecular speed corresponds to temperature, the temperature decreases in A and increases in B, contrary to the second law of thermodynamics. A heat engine operating between the thermal reservoirs A and B could extract energy from this temperature difference, creating a perpetual motion machine. \\ [ Adapted from [[https://en.wikipedia.org/wiki/Maxwell%27s_demon|Wikipedia]] ]  |
|  \\ The Error  | The thought experiment is flawed because the demon must be part of the same system that the container is part of; thinking (or computation, if the demon is a robot) requires time and energy, and so whatever heat is saved in the container will be spent to run the demon's thinking processes. (This was first proposed in {{/public:t-720-atai:szilard-1929-entropy-intelligent-beings.pdf|1929 by Leo Szilard}}.) |

\\
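The demon's sorting procedure can be simulated directly. The toy sketch below (all parameters invented for illustration) shows the temperature gap emerging, and also tallies the demon's measurements, echoing Szilard's point that the sorting itself costs time and energy:

```python
import random

# Toy model of Maxwell's Demon: molecules are just speeds drawn from
# the same distribution on both sides. The demon lets faster-than-average
# molecules pass A -> B and slower-than-average ones pass B -> A.
# We count the demon's measurements: every decision requires observing
# a molecule, which is where Szilard's hidden cost comes in.

random.seed(42)
A = [random.gauss(1.0, 0.3) for _ in range(1000)]  # speeds, side A
B = [random.gauss(1.0, 0.3) for _ in range(1000)]  # speeds, side B

measurements = 0  # the demon's hidden cost (one observation per decision)

for _ in range(5000):
    side, mols = random.choice([("A", A), ("B", B)])
    i = random.randrange(len(mols))            # a molecule nears the door
    avg = (sum(A) + sum(B)) / (len(A) + len(B))
    measurements += 1                          # demon observes its speed
    if side == "A" and mols[i] > avg:          # fast: let it into B
        B.append(A.pop(i))
    elif side == "B" and mols[i] < avg:        # slow: let it into A
        A.append(B.pop(i))

temp_A = sum(A) / len(A)   # average speed ~ temperature
temp_B = sum(B) / len(B)
print(f"T(A)={temp_A:.3f}  T(B)={temp_B:.3f}  measurements={measurements}")
```

After the run, side B is measurably "hotter" than side A, but only because the demon paid for 5000 observations.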
  
|  Task  | A //Problem// that can be assigned in an //Environment//. Typically comes with //Instructions// (some form of encoding of the task must exist for it to be assignable to a performing agent). \\ An //assigned Task// has a designated Agent to do it, a start-time, an end-time, and a maximum duration.  ||
|  Problem  | A Goal with (all) relevant constraints imposed by a particular Task-Environment (where particular ranges of variable values, and particular groupings of elements, can be assumed).  ||
|  Goal  | <m>G subset S</m>. A (future) (sub-) State to be attained during period //t//, plus other //optional// constraints on the Goal.  ||
|  **LTE**  | All tasks have limited time & energy: no Task exists that can be performed with infinite energy, or for which infinite time is available for achieving it.  ||
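The definitions above can be mirrored as plain data structures. This is a minimal sketch, not part of the course material; all names (`Goal`, `Task`, the costs and times) are illustrative assumptions:

```python
from dataclasses import dataclass

# Hypothetical sketch of the table's definitions: a Goal is a subset
# of world states (here: a predicate over states), and an assigned
# Task bundles a Goal with an agent, timing, and a maximum duration.

@dataclass
class Goal:
    """G subset S: states that count as the Goal being attained."""
    satisfied: callable            # State -> bool
    deadline: float                # period t during which G must hold

@dataclass
class Task:
    goal: Goal
    instructions: list[str]        # encoding that makes the Task assignable
    agent: str = "unassigned"
    start_time: float = 0.0
    max_duration: float = float("inf")

    def assign(self, agent: str, start: float, max_duration: float):
        # LTE: no Task may assume unlimited time (or energy)
        assert max_duration < float("inf"), "LTE: time must be limited"
        self.agent = agent
        self.start_time = start
        self.max_duration = max_duration

# The table's "doing the dishes" example, as an assigned Task:
dishes = Task(Goal(lambda s: s.get("dishes") == "clean", deadline=3600),
              instructions=["wash", "rinse", "dry"])
dishes.assign("agent-1", start=0.0, max_duration=1800)
print(dishes.agent, dishes.goal.satisfied({"dishes": "clean"}))
```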
  
  
|  \\ Environment  |<m>e ~=~ {lbrace {V ~subset ~V_W}, ~{F ~subset ~F_W} rbrace} ~+ ~C</m>  \\ where <m>C</m> are additional constraints on <m>V,~ F</m> and \\ some values for <m>V</m> are (typically) fixed within ranges.   ||
|  Task-Environment  | <m>T_e ~subset ~W</m> \\ An Environment in which one or more Tasks may be assigned and performed.   ||
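One way to read the Environment formula: an Environment selects a subset of the world's variables and transition functions and adds constraints that pin some variables to ranges. A hypothetical sketch, with invented variables and functions:

```python
# Hypothetical sketch of e = {V subset V_W, F subset F_W} + C:
# the world W exposes variables V_W and transition functions F_W;
# an environment selects subsets of both and adds constraints C
# that fix some variable values within ranges.

V_W = {"x": 0.0, "y": 0.0, "temperature": 20.0, "humidity": 0.5}
F_W = {"move_x": lambda v: {**v, "x": v["x"] + 1},
       "heat":   lambda v: {**v, "temperature": v["temperature"] + 5}}

env_vars = {k: V_W[k] for k in ("x", "temperature")}   # V subset V_W
env_funcs = {k: F_W[k] for k in ("move_x",)}           # F subset F_W
C = {"temperature": (15.0, 25.0)}                      # constraints on V

def within_constraints(state: dict) -> bool:
    """C holds when each constrained variable stays in its range."""
    return all(lo <= state[k] <= hi for k, (lo, hi) in C.items())

state = dict(env_vars)              # the environment's initial state
state = env_funcs["move_x"](state)  # apply one available function
print(state, within_constraints(state))
```

Note that `heat` and `humidity` exist in the world but are outside this environment's selected subsets, which is exactly what the subset notation expresses.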
  
| | Alternative formulation | A known //Goal// for which there exists a known //{Agent, Task}// tuple that can achieve it, where the //Task// is fully specified in the //{Agent Knowledge}+{Instructions}// tuple (including all required parameters for assigning its completion).   |
| | Example  | Doing the dishes.   |
| | Plans for Closed problems | Can be reliably produced, e.g. in the form of //Instructions//. |
|  Open Problem  | A //Problem// whose solution is unknown and cannot be obviously inferred by analogy with similar problems whose solution is known. A solution cannot be guaranteed under LTE.  ||
| | Example  | Any //Problem// that is the subject of basic research and for which no known solution yet exists.  |
| | Plans for Open problems | Cannot be reliably produced.   |
  
\\
|  Solution Space  | The amount of variation allowed on a State while still counting as a Solution to a Problem.  |
|  Task Space  | The size of the variations on a Task that would have to be explored with no up-front knowledge or information about the Solution Space of a Problem.  |
|  Limited \\ time  | No Task, no matter how small, takes zero time. \\ ≈ "Any task worth doing takes time." \\ ≈ All //implemented intelligences// are subject to the laws of physics, which means that for every //Task// time is limited.   |
|  Limited energy  | "Any task worth doing takes energy." \\ ≈ All //implemented intelligences// are subject to the laws of physics, which means that for every //Task// energy is limited.   |
|  \\ \\ No task takes \\ zero time \\ or zero energy  | If <m>te</m> is a function that returns time and energy, then an act of perception (reliable measurement) takes \\ <m>te(p in P) > 0</m>, \\ a commitment to a measurement <m>m</m>, <m>com(m in M)</m> (that is, the measurement is deemed reliable), takes \\ <m>{te({com(m in M)}) > 0}</m>, and an action to record it (to memory) or announce it (to the external world, e.g. by writing) takes \\ <m>te(a in A) > 0</m>. \\ It follows deductively that any Task requires //at least one of these//, possibly all. \\ For a task <m>T</m> whose //Goals// have already been achieved at the time <m>T</m> is assigned to an agent <m>A</m>, <m>A</m> still needs to measure the //Goal State// <m>s</m>, <m>s in S</m>, or at least* //commit// to the measurement of this fact, and then record it. Even in this case we have <m>te(T) > 0</m>. \\ Anything that takes zero time or zero energy is by definition not a Task.     |
|  \\ Limited Time & Energy  | <m>forall ~{lbrace T, ~T_e rbrace}~: ~te(T)~>~0</m>  \\ where <m>T_e subset W</m> is a task-environment in a world, <m>T</m> is a task, and <m>te</m> is a function that returns time and energy.   |
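The argument that te(T) > 0 even for an already-achieved Task can be walked through step by step. A hypothetical sketch; the numeric costs are placeholder assumptions, not real physics:

```python
# Sketch of the te(T) > 0 argument: even a Task whose Goal is already
# achieved requires perceiving the Goal State, committing to that
# measurement, and recording it -- and each primitive act costs a
# positive amount of time-and-energy.

def te(step: str) -> float:
    """Time-and-energy cost of one primitive act (always positive)."""
    costs = {"perceive": 0.01,    # te(p in P) > 0
             "commit":   0.002,   # te(com(m in M)) > 0
             "record":   0.005}   # te(a in A) > 0
    return costs[step]

def run_already_achieved_task() -> float:
    """Goal already holds; the agent still measures, commits, records."""
    total = 0.0
    total += te("perceive")   # measure the Goal State s in S
    total += te("commit")     # deem the measurement reliable
    total += te("record")     # record/announce the result
    return total

cost = run_already_achieved_task()
assert cost > 0               # hence te(T) > 0 for every Task
print(f"te(T) = {cost}")
```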
|  | Instructions of varying detail are possible.  |
|  \\ Novelty  | Novelty is unavoidable. \\ In other words, unforeseen circumstances will be encountered by any AI operating in the physical world. Accordingly, it should be able to handle them, since that is essentially why intelligence exists in the first place.   |
|  \\ What that means  | 1. Since no agent will ever know everything, no agent (artificial or natural) can assume axiomatic knowledge. \\ 2. It cannot be assumed that all knowledge that the AI system needs is known by its designer up front. This means it must acquire its own knowledge: all advanced AI systems must be //cumulative learners//. \\ 3. Since it must acquire its own knowledge, incrementally, knowledge acquisition will introduce //knowledge gaps and inconsistencies//. \\ 4. A cumulative learning agent will continuously live in a state of //insufficient knowledge and resources// (with respect to perfect knowledge), due to the physical world's //limited time and energy//.   |
  
\\