[[/public:t-720-atai:atai-21:main|T-720-ATAI-2021 Main]] \\
======WORLDS======
\\

===="The World"====

| What is it? | A set of constraints on what is and isn't possible. \\ We perceive the physical world through sensors. |
| Sensors are physical | Our sensors are made from the same stuff that they perceive. |
| Descartes | René Descartes, the French philosopher, famously claimed that "I think, therefore I am." \\ He recognized that the only certainty we have of anything is that we perceive in the here-and-now. |
| Artificial Worlds | We may conceive of any "world" that follows rules different from our own. \\ These worlds are potential worlds for AI systems, just as the physical world is. |
| However ... | Any //implemented// world, whether abstract or otherwise, must bow to the nature of the physical universe, because implementation means //physical incarnation//. |
| Hence | The nature of our physical universe is important in AI. |
| A Question of (Un)certainty | The physical world, and in fact many artificial ones also, are uncertain, meaning that there is a lot about them that we don't know. |
| A Requirement of Certainty | To do anything //reliably// takes //certainty//. |
| How To Address That | Figuring out what can reliably be achieved in uncertain worlds. |
| AI Boils Down To | Building machines that can figure out what can be reliably achieved in uncertain worlds. |
| Abstract Worlds | We may of course define any kind of "world" of our choosing. However, if it is to be implemented it must obey physical laws. |

\\

====Worlds: How it Hangs Together====

| W: \\ A World | W = {lbrace V, F, S_0, R rbrace} ||
| V: \\ Variables | V = {lbrace v_1, ~v_2, ~..., ~v_{||V||} rbrace} ||
| F: \\ Transition Functions | F is a set of transition functions / rules describing how the variables can change. \\ The dynamics can intuitively be thought of as the world's "laws of nature", continually transforming the world's current state into the next: S_{t+delta} = F(S_t). ||
| S_0: \\ Initial State | S_0 is the State that W started out in. \\ In any complex world this is unlikely to be known; for artificial worlds it may be defined. ||
| \\ R: \\ Relations | R are the relations between variables in the world. These may be unknown or only partially known to an //Agent// in the world. ||
| ::: | Static World | Changes //State// only through //Agent Action//. |
| ::: | Dynamic World | Changes //State// through //Agent Action// and through other means. |
| \\ State | s_t~subset~V_t. A set of variables x with a set of values, specified to some particular precision (with constraints, e.g. error bounds), relevant to a //World//. \\ For all practical purposes, in any complex World "State" refers by default to a sub-state, since it is a practical impossibility to know the full state (the values of the complete set of variables) of such a world; there will always be a vastly higher number of "don't care" variables than variables listed for e.g. a //Goal State// (a //State// associated with a //Goal//). ||
| | \\ State \\ definition | s_t~subset~V_t \\ where \\ {lbrace} x_l, ~x_u {rbrace} ~{|}~ {x_l <= x <= x_u} \\ \\ define the lower and upper bounds, respectively, of the acceptable range for each x to count towards the State. |
| Exposable Variables | Variables in V that are measurable and/or manipulatable //in principle//. ||
| Observable Variables | Variables in V that can be measured for a particular interval in time are //observable// during that interval. ||
| Manipulatable Variables | Variables in V whose value can be affected, either directly or indirectly (by an //Agent// or something else). ||
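To make the formal definition above concrete, here is a minimal sketch of a discrete-time world as a data structure: variables V with values (the state), a transition function F applied on each tick, and observation of sub-states only. All names (''World'', ''step'', ''observe'', the example variables) are illustrative inventions, not from any established library.

<code python>
# Minimal sketch of W = {V, F, S_0, R} above (illustrative names only).

class World:
    def __init__(self, s0, f):
        self.state = dict(s0)  # S_0: initial values for the variables in V
        self.f = f             # F: transition function, S_{t+delta} = F(S_t)

    def step(self):
        """Apply the world's 'laws of nature' for one tick."""
        self.state = self.f(self.state)

    def observe(self, variables):
        """Return a sub-state: an agent never sees the full state of a
        complex world, only the variables observable to it right now."""
        return {v: self.state[v] for v in variables if v in self.state}

# A tiny two-variable example world with a constant-velocity "law of nature".
def f(s):
    return {"x": s["x"] + s["v"], "v": s["v"]}

w = World({"x": 0.0, "v": 1.0}, f)
w.step()
print(w.observe(["x"]))  # {'x': 1.0} -- a sub-state, not the full state
</code>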
\\
\\

=====The Physical World=====

====Laplace's Demon====

| {{/public:t-720-atai:laplace.jpg?100}} ||
| Concept | If a world is deterministic, and everything in it is caused from the ground up, from the smallest parts, then everything in that world is pre-determined from its starting state. |
| \\ Laplace | "In the history of science, Laplace's demon was the first published articulation of causal or scientific determinism, by Pierre-Simon Laplace in 1814. According to determinism, if someone (the demon) knows the precise location and momentum of every atom in the universe, their past and future values for any given time are entailed; they can be calculated from the laws of classical mechanics." [[https://en.wikipedia.org/wiki/Laplace's_demon|source: Wikipedia]] |

\\

====Determinism vs. Non-Determinism====

| Is our universe deterministic? | This is a major question for physics, but it is ultimately of little consequence for those building GMI (general machine intelligence): any agent situated in the physical world will never know the precise position, direction and momentum of all its smallest particles, and thus must always deal with uncertainty. |
| Regularity | A world with no regularity is pure noise. In such a world no intelligence makes sense. |
| Pure Determinism | A world that is completely deterministic is pre-determined at all levels for all eternity; in such a world there is no concept of choice, and hence no relevance for intelligence. |
| Non-Axiomatism | Some mathematicians believe the universe to be fundamentally mathematical, and see the role of science (and mathematics) as finding its "ultimate formula". We'll come back to that in a bit. |

\\

====Formalization====

| The Physical World | V = {lbrace}{x_1, ~x_2, ~... ~x_n}{rbrace}. \\ F = {lbrace}{f_1, ~f_2, ~... ~f_n}{rbrace}. \\ x are real-valued variables, f are transition functions, and \\ V_{t+delta} = F(V_{t}) \\ and \\ {lbrace}{x}over{.}_1, ~{x}over{.}_2, ~... ~{x}over{.}_n {rbrace} \\ represent the first derivatives of the variables during continuous change. \\ Note that ||V||~=~infty and ||F||~=~infty. |
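Even under the deterministic formalization above, an agent that measures V with only finite precision cannot predict far ahead, which is why uncertainty is unavoidable in practice. A minimal, hedged illustration (the logistic map and all constants here are arbitrary stand-ins for some transition function f, not anything prescribed by the formalization):

<code python>
# Two trajectories of the same deterministic transition function f,
# started 1e-10 apart (a tiny "measurement error" in the initial state).

def f(x):
    return 3.9 * x * (1.0 - x)  # logistic map: deterministic but chaotic

a = 0.4           # the "true" initial state
b = 0.4 + 1e-10   # the agent's imprecise measurement of it
for t in range(50):
    a, b = f(a), f(b)

print(abs(a - b))  # no longer tiny: long-range prediction has failed
</code>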
\\

====Time Scales of the Physical World====

| {{public:t-720-atai:time-scales-newell-et-al.png?700}} |
| From Card, Moran & Newell, //The Psychology of Human-Computer Interaction// (1983) & K. R. Thórisson, //PhD Thesis//, MIT, 1996. |

\\

====Maxwell's Demon====

| {{public:t-720-atai:510px-maxwell_s_demon.svg.png?500}} \\ Source: [[https://en.wikipedia.org/wiki/Maxwell%27s_demon|Wikipedia]] ||
| \\ A Thought Experiment | Imagine a container divided into two parts, A and B. Both parts are filled with the same gas at equal temperatures and placed next to each other. Observing the molecules on both sides, an imaginary demon guards a trapdoor between the two parts. When a faster-than-average molecule from A flies towards the trapdoor, the demon opens it, and the molecule will fly from A to B. Likewise, when a slower-than-average molecule from B flies towards the trapdoor, the demon will let it pass from B to A. The average speed of the molecules in B will have increased while in A they will have slowed down. Since average molecular speed corresponds to temperature, the temperature decreases in A and increases in B, contrary to the second law of thermodynamics. A heat engine operating between the thermal reservoirs A and B could extract energy from this temperature difference, creating a perpetual motion machine. [ Adapted from [[https://en.wikipedia.org/wiki/Maxwell%27s_demon|Wikipedia]] ] |
| \\ The Error | The thought experiment is flawed because the demon must be part of the same system that the container is part of; thinking (or computation, if the demon is a robot) requires time and energy, and so whatever heat is saved in the container will be spent to run the demon's thinking processes. (This was first pointed out in 1929 by Leo Szilard.) |

\\

====The Physical World & AI Research====

| Physical World | The field of physics has been systematically studying the physical world since the ancient Greeks. Over 2000 years later, physics is the most advanced of all scientific fields. Any proper scientific theory of anything must ultimately rest on its shoulders. |
| \\ What is it? | Physics seeks to uncover the "ultimate" rules that determine how the universe behaves, including life, intelligence and everything else. \\ No (general or limited) intelligence in a complex environment such as the physical world can be granted access to a full set of axioms of the system it's controlling, let alone the ⟨agent, environment⟩ tuple, and thus the behavior of a practical generally intelligent artificial agent as a whole simply cannot be captured formally (see [[http://alumni.media.mit.edu/~kris/ftp/AGI16_growing_recursive_self-improvers.pdf|Steunebrink et al. 2016]]). |
| Useful AI | To be useful, an AI must **do** something. Ultimately, to be of any use, it must do something in the //physical// world, be it building skyscrapers, auto-generating whole movies from scratch, doing experimental science or inventing the next AI. |
| What this means | A key target environment of the present work is the physical world. |
| Uncertainty | Since any agent in the physical universe will never know //everything//, some things will always be uncertain. |
| Novelty | In such a world novelty abounds: most things that a learner encounters will contain some form of novelty. |
| \\ Learning | Some unobservable //exposable variables// can be made //observable// (exposed) through //manipulation//. For any //Agent// with a set of //Goals// and limited knowledge but an ability to learn, which variables may be made observable, and how, is the subject of the //Agent's// learning. ||

\\
\\
\\

=====Task-Environments=====

\\

====Task====

| Task | A //Problem// that can be assigned in an //Environment//. Typically comes with //Instructions//. \\ An //assigned Task// has a designated Agent to do it, a start-time, an end-time, and a maximum duration. ||
| Problem | A //Goal// with (all) relevant constraints imposed by a Task-Environment. ||
| Goal | G subset S. A (future) (sub-)State to be attained during period //t//, plus other //optional// constraints on the Goal (see the sketch below). ||
| **LTE** | All Tasks come with limited time & energy: no Task exists that can be performed with infinite energy, or for which infinite time is available. ||

\\

====Environment====

| \\ Environment | e ~=~ {lbrace {V ~subset ~V_W}, ~{F ~subset ~F_W} rbrace} ~+ ~C \\ where C are additional constraints on V,~ F and \\ some values for V are (typically) fixed. ||
| Task-Environment | T_e ~subset ~W \\ An Environment in which one or more Tasks may be assigned and performed. ||
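A hypothetical sketch of how the //Goal// definition above could be operationalized: a //Goal State// as a set of (lower, upper) error bounds on Goal-relevant variables, checked against a (sub-)state. The function and variable names are invented for illustration.

<code python>
# Sketch: a Goal State as error bounds on Goal-relevant variables.

def goal_achieved(state, goal):
    """goal maps each relevant variable to acceptable bounds (x_l, x_u);
    the Goal holds iff x_l <= x <= x_u for every one of them. Variables
    missing from the (sub-)state count as not achieved."""
    return all(lo <= state.get(v, float("nan")) <= hi
               for v, (lo, hi) in goal.items())

goal = {"temperature": (19.0, 21.0), "door": (0.0, 0.0)}  # door fully closed
print(goal_achieved({"temperature": 20.2, "door": 0.0, "humidity": 0.4}, goal))  # True
print(goal_achieved({"temperature": 23.0, "door": 0.0}, goal))                   # False
</code>

Note that any other variables in the state (here ''humidity'') are simply ignored, echoing the "don't care" variables in the //State// definition above.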
\\

====Task-Environments: Key Concepts====

| Task \\ Family | A set of Tasks that are similar in some (important) ways. \\ Since there exists no proper "Task Theory" as of yet, comparisons of similarity, or along other dimensions, are still an art form. ||
| Problem | A //Goal// with (all) relevant constraints (≈ requirements). ||
| | Problem Family | A set of Problems that are similar in some (important) ways; a //Problem// plus variations of that Problem. |
| Goal | A (future) //State// to be attained, plus optional constraints on the //Goal//. ||
| | Goal State | A set of values (with error bounds) for a set of variables relevant to a //Goal//. |
| | Goal Family | A set of Goals that are similar in some (important) ways. |
| Environment | A set of constraints relevant to a //Task// but not counted as part of the //Task// proper. \\ Can also be conceived of as negative //Goals// (i.e. //States// to be avoided). ||
| Constraint | A set of factors that limit the flexibility (state space) of that which it constrains. ||
| Solution | The set of (atomic) actions that can achieve a //Goal//. ||
| Action | The changes an //Agent// can make to variables relevant to a //Task-Environment//. ||
| Plan | A partial way (incomplete //Instructions//) to accomplish a //Task//. ||
| Instructions | A partial //Plan// for accomplishing a //Task//, typically given to an //Agent// along with a //Task// by a //Teacher//. ||
| Teacher | The //Agent// (or interactive process) assigning a //Task// to another //Agent// (the student), optionally in charge of //Instructions//. ||

\\

====Other High-Level Concepts====

| Constraint | A set of factors that limit the flexibility of that which it constrains. \\ Best conceived of as a //negative Goal// (a //State// that is to be avoided). ||
| Family | A set whose elements share one or more common traits, within some "sensible" (pre-defined or definable) allowed variability on one or more of: the types of variables, the number of variables, and the ranges of these variables. ||
| \\ \\ Domains | A Family of Environments, D ⊂ W. \\ The concept of 'domains' as subsets of the world, where a particular bias of distributions of variables, values and ranges exists, may be useful in the context of Tasks that can be systematically impacted by such a bias (e.g. gravity vs. zero-gravity). Each variable v ~∈~ D may take on any value from the associated domain D; for physical domains we can take the domain of variables to be a subset of the real numbers, bbR. ||
| Solution | The set of (atomic) //Actions// that can achieve one or more //Goals// in one or more //Task-Environments// (see the sketch following this table). ||
| Action | The changes an //Agent// can make to variables relevant to a //Task-Environment//. ||
| Plan | A description of a partial //Solution// for accomplishing a //Task//. ||
| Teacher | The Agent assigning a //Task// to another Agent (the student). ||
| | Instructions | A (partial) //Plan// for accomplishing a //Task//, typically given to an //Agent// by a //Teacher// along with the //Task//. \\ A guide to //Solutions// (partial or full), at some level of maximum available detail. |
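A minimal sketch of the //Action// / //Solution// / //Plan// distinction just defined: atomic actions change Task-Environment variables, and a candidate action sequence is a //Solution// iff simulating it reaches the //Goal//. The kettle example and all names are invented for illustration.

<code python>
# Sketch: Actions edit relevant variables; a Solution is an action set
# that achieves the Goal; a Plan may leave gaps.

def simulate(state, actions):
    s = dict(state)
    for act in actions:          # each atomic Action edits some variables
        s.update(act(s))
    return s

def is_solution(state, actions, goal_ok):
    return goal_ok(simulate(state, actions))

fill = lambda s: {"kettle": 1.0}                            # fill the kettle
boil = lambda s: {"temp": 100.0} if s.get("kettle") else {} # boiling needs water

start = {"kettle": 0.0, "temp": 20.0}
goal_ok = lambda s: s["temp"] >= 95.0
print(is_solution(start, [fill, boil], goal_ok))  # True:  a Solution
print(is_solution(start, [boil], goal_ok))        # False: a Plan with a gap
</code>

The failing second case illustrates a //Plan// in the sense above: a partial //Solution// that must still be completed before it achieves the //Goal//.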
\\

===='Problem'====

| \\ Definition | In everyday use the term is a flexible one that can generally be applied to any part of a task that, more or less, prevents goals (or subgoals) from being achieved effortlessly or directly. It assumes a non-empty set of goals of agents for which it is relevant (otherwise anything and everything would be a 'problem'). ||
| Closed Problem | A //Problem// that may be assigned as a //Task// to a particular //Agent// with known **time & energy** for producing a //Solution// (achieving a //Goal//). ||
| | Alternative formulation | A known //Goal// for which there exists a known //{Agent, Task}// tuple that can achieve it, where the //Task// is fully specified in the //{Agent Knowledge}+{Instructions}// tuple (including all required parameters for assigning its completion). |
| | Example | Doing the dishes. |
| | Plans for Closed problems | Can be reliably produced, e.g. in the form of //Instructions//. |
| Open Problem | A //Problem// whose solution is unknown and cannot be obviously assumed from analogy with similar problems whose solution is known. A solution cannot be guaranteed within limited time & energy (LTE). ||
| | Example | Any //Problem// that is the subject of basic research and for which no known solution yet exists. |
| | Plans for Open problems | Cannot be reliably produced. |

\\

====Task-Environment Constraints on Tasks====

| Solution Constraint | Reduces the flexibility for producing a Solution. |
| Task Constraint | Limits the allowed Solution Space for a Problem. Can help or hinder the achievement of a Task. |
| Solution Space | The amount of variation allowed on a State while still counting as a Solution to a Problem. |
| Task Space | The size of the variations on a Task that would have to be explored with no up-front knowledge or information about the Solution Space of a Problem. |
| Limited \\ time | No Task, no matter how small, takes zero time. \\ ≈ "Any task worth doing takes time." \\ ≈ All //implemented intelligences// will be subject to the laws of physics, which means that for every //Task// time is limited. |
| Limited energy | "Any task worth doing takes energy." \\ ≈ All //implemented intelligences// will be subject to the laws of physics, which means that for every //Task// energy is limited. |
| \\ \\ No task takes \\ zero time \\ or zero energy | If te is a function that returns the time and energy something takes, then an act of perception (reliable measurement) costs \\ te(p in P) > 0, \\ a commitment to a measurement m (that is, deeming the measurement reliable) costs \\ te({com(m in M)}) > 0, \\ and an action to record it (to memory) or announce it (to the external world, e.g. by writing) costs \\ te(a in A) > 0. \\ It follows deductively that any Task requires //at least one of these//, possibly all. \\ For a Task T whose //Goals// have already been achieved at the time T is assigned to an agent A, A still needs to measure the //Goal State// s, s in S, or at least* //commit// to the measurement of this fact, and then record it. Even in this case we have te(T) > 0 (see the sketch below). \\ Anything that takes zero time or zero energy is by definition not a Task. |
| \\ Limited Time & Energy | forall ~{lbrace T, ~T_e rbrace}~: ~te(T)~>~0 \\ where T_e subset W is a Task-Environment in a World, T is a Task, and te is a function that returns time and energy. |
//*This could be the case if, for instance, task// T_1 //assigned to agent// A //at time// t_1 //comes with **Instructions** telling// A //that// T_1 //has already been done at the time of its assignment// t_1//. Sort of like if you were given a shopping list for your upcoming shopping trip on which at least one item was already crossed out.//
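An illustrative sketch of the te() argument above, metering each step with strictly positive time and energy. The cost numbers and all names are arbitrary placeholders; the point is only that even a Task whose Goal already holds costs te(T) > 0, because the agent must still perceive, commit and record.

<code python>
# Sketch: every perception, commitment and action is metered, so any
# Task accumulates te(T) > 0 -- nothing a Task requires is free.

class Agent:
    def __init__(self):
        self.time = 0.0
        self.energy = 0.0

    def spend(self, t, e):
        assert t > 0 and e > 0           # no step takes zero time or energy
        self.time += t
        self.energy += e

    def perceive(self, world, var):      # te(p in P) > 0
        self.spend(0.01, 0.001)
        return world.get(var)

    def commit(self, measurement):       # te(com(m in M)) > 0
        self.spend(0.001, 0.0001)
        return measurement

    def record(self, fact):              # te(a in A) > 0
        self.spend(0.005, 0.0005)

a = Agent()
world = {"door": "closed"}               # the Goal State already holds
m = a.commit(a.perceive(world, "door"))  # still must measure and commit...
a.record(m)                              # ...and record the fact
print(a.time > 0 and a.energy > 0)       # True: te(T) > 0 even here
</code>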
\\

====What Kinds of Task-Environments are Relevant for AI?====

| Intelligence = Learning | Completely static: no (or little) need for learning. \\ Completely random: learning is of no use. |
| \\ Worlds | Worlds where intelligence is relevant: \\ - Sit somewhere between deterministic and completely random (pure noise). \\ - Strike a balance between completely dynamic and completely static. \\ - Some regularity must exist at a level that is initially observable by a learning agent. |
| Hierarchy | Any world that is hierarchically decomposable into smaller detail along time and space. \\ (See "Time Scales of the Physical World", above.) |
| Environment | A large number of potentially relevant variables. |
| Task | Ditto (a large number of potentially relevant variables). |
| | A medium-sized or small number of Solutions. |
| | Instructions of varying detail are possible. |
| \\ Novelty | Novelty is unavoidable. \\ In other words, any AI operating in such a world will encounter unforeseen situations. Accordingly, it should be able to handle them; that is essentially why intelligence exists in the first place. |
| \\ What that means | 1. Since no agent will ever know everything, no agent (artificial or natural) can treat its knowledge as axiomatic; its knowledge must be non-axiomatic, i.e. open to revision. \\ 2. It cannot be assumed that all the knowledge an AI system needs is known by its designer up front. This means it must acquire its own knowledge: all advanced AI systems must be //cumulative learners//. \\ 3. Since it must acquire its own knowledge incrementally, knowledge acquisition will introduce //knowledge gaps and inconsistencies//. \\ 4. A cumulative learning agent will continuously live in a state of //insufficient knowledge and resources// (with respect to perfect knowledge), due to the physical world's //limited time and energy//. |

\\
\\
\\
\\
\\
\\
2021(c)K.R.Thórisson