[[/public:t-709-aies:AIES-24:main|DCS-T-709-AIES-2024 Main]] \\ [[/public:t-709-aies:AIES-24:lecture_notes|Link to Lecture Notes]] \\
\\
====== REQUIREMENTS FOR NEXT-GEN AI ======
//Autonomy, Cause-Effect Knowledge, Cumulative Learning, Empirical Reasoning, Trustworthiness//
\\
\\
==== Autonomy ====
| What It Is | Autonomy is a key feature of intelligence - the ability of a system to "act on its own". |
| \\ Self-Inspection | Virtually no systems exist as yet that have been demonstrated to be able to inspect (measure, quantify, compare, track, make use of) their own development for use in their continued growth - whether learning, goal-generation, selection of variables, resource usage, or other self-X. |
| \\ Self-Growth | No system has yet been demonstrated to be able to autonomously manage its own **self-growth**. Self-growth is necessary for autonomous learning in task-environments whose complexity far exceeds that of the controller operating in them. It is even more important where certain bootstrapping thresholds must be reached before a safe transition into more powerful or different learning schemes. \\ For instance, if only a few bits of knowledge can be programmed into a controller's seed ("DNA"), because we want it to have maximal flexibility in what it can learn, then we want to put something there that is essential to protect the controller while it develops more sophisticated learning. An example is that nature programmed human babies with an innate fear of heights. |
| Why It Matters | This table highlights some key features of autonomy that any human-level intelligence probably must have. We say "probably" because, since we don't have any such systems yet, and because there is no proper theory of intelligence, we cannot be sure. |
| Autonomous Learning | We already have machines that learn autonomously, although most of the available methods are limited in that they (a) rely heavily on quality selection of learning material/environments, (b) require careful setup of training, and (c) require careful and detailed specifications of how progress is evaluated. |
\\
\\
==== Three Levels of Autonomy ====
^ Category ^ Description ^ Uniqueness ^ Examples ^ Learning ^
| **Level 1:** \\ Automation | The lowest level may be called "mechanical". | Fixed architecture. Baked-in goals. Does its job. Does not create. | Watt's Governor. Thermostats. DNNs. | No "learning" AILL (after it leaves the lab). |
| Level 1.5: \\ Reinforcement learning | Can change its function at runtime. \\ Cannot accept a goal description. \\ Cannot handle unspecified variables. \\ Cannot create sub-goals autonomously. | "Learns" through piecewise Boolean (good/bad) feedback (see the sketch below the table). | Q-learning. | Limited to a handful of predefined variables. |
| **Level 2:** \\ Cognitive | Handles novelty. Figures things out. Accepts goal descriptions. Generates goal descriptions. Creates. | Flexible representation of self. High degree of self-modification. | Humans. Parrots. Dogs. | Learns AILL. |
| **Level 3:** \\ Biological | \\ Adapts. | Is alive. Subject to evolution. Necessary precursor to the lower levels. | Living creatures. | Adapts AILPS (after it leaves the primordial soup). |
| Source | [[http://alumni.media.mit.edu/~kris/ftp/Seed-Programmed-General-Learning-Thorisson-PMLR-2020.pdf|Thorisson 2020]] ||||
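\\
The Level 1.5 row can be made concrete with a minimal sketch of tabular Q-learning. Everything below - the 5-state "corridor" environment, the reward scheme and the parameter values - is an illustrative assumption, not taken from any particular system. The point is that the learner improves only through a simple good/bad reward signal, over states and actions fixed by the designer in advance, and it can neither accept a new goal description nor invent sub-goals.
<code python>
# Minimal tabular Q-learning sketch (illustrative assumptions only).
import random

N_STATES = 5              # states 0..4; reaching state 4 is the hard-wired goal
ACTIONS = [-1, +1]        # step left / step right - designer-chosen, fixed variables
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Toy environment: move along the corridor; reward +1 only at the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(200):
    state, done = 0, False
    while not done:
        # Epsilon-greedy choice over the predefined action set
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # The only teaching signal is the good/bad reward
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
</code>
Note what is missing: states, actions and reward are all baked in, so the learner cannot represent a different goal ("stop at state 2"), let alone generate one; that is the gap between Level 1.5 and Level 2 in the table above.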
\\
\\
==== Empirical Learning ====
| What It Is | When information comes from measurements in the physical world it is "empirical evidence". 'Empirical learning' is thus learning based on empirical data. |
| Experience-Based Learning | Learning is the acquisition of knowledge for particular purposes. When this acquisition happens via interaction with an environment it is experience-based. |
| Experimentation | Producing the data needed to learn about an environment that cannot be fully known a priori (and that, like the physical world, may prevent complete knowledge of all its "rules") requires experimentation of some sort. |
| The Physical World | The world we live in, often referred to as the "real world", is highly complex, and rarely if ever do we have perfect models of how it behaves when we interact with it, whether to experiment with how it works or simply to achieve some goal like buying bread. |
| \\ Limited Time & Energy \\ (LTE) | An important limitation on any agent's ability to model the real world is its enormous state space, which vastly exceeds any known agent's memory capacity, even for relatively simple environments. Even if the models were sufficiently detailed, pre-computing everything beforehand is ruled out by memory limits. On top of that, even if memory sufficed for pre-computing everything necessary to go about our tasks, we would have to retrieve the pre-computed data in time when it is needed - the larger the state space, the greater the demands on retrieval time. |
| Why Empirical Learning Matters | Under LTE (limited time and energy) in a plentiful task-environment it is impossible to know everything all at once, including causal relations. Therefore, most of the time an intelligent agent capable of some reasoning will be working with uncertain assumptions, where nothing is certain and some things are merely more probable than others. |
\\
\\
==== Causation, Correlation & AI ====
| Correlation | Correlation is the apparent relationship between two or more variables, observed repeatedly: the value of one seems to follow the other (and vice versa). Correlation is not directional, that is, we cannot say about two correlated variables whether one of them causes the other. |
| Correlation Supports Prediction | Correlation is sufficient for simple prediction (if **A** and **B** correlate highly, then it does not matter whether we see an **A** //or// a **B**, we can predict that the other is likely on the scene). |
| Causation | Causation is the directed relationship between two variables A and B, such that if you change the value of variable A, then the value of variable B changes also, according to some function. |
| Knowledge of Causation Supports Action | We may know that **A** and **B** correlate, but if we don't know whether **B** is a result of **A** or vice versa, and we want **B** to disappear, we don't know whether it will suffice to modify **A**. \\ //Example: The position of the light switch and the state of the light bulb correlate. Only by knowing that the light switch controls the bulb can we go directly to the switch if we want the light to turn on.// |
| **Causal Models** \\ Are Necessary To Guide Action | While correlation gives us an indication of causation, the direction of the "causal arrow" is critically necessary for guiding action. \\ Luckily, knowing which way the arrows point in any large set of correlated variables is usually not too hard to find out, by **empirical experimentation** (see the sketch below the table). |
| Judea Pearl | The most fervent advocate of causality in AI, and the inventor of the do-calculus. \\ Cf. [[https://ftp.cs.ucla.edu/pub/stat_ser/r284-reprint.pdf|BAYESIANISM AND CAUSALITY, OR, WHY I AM ONLY A HALF-BAYESIAN]]. |
| \\ State Of The Art | Recent work by Judea Pearl clearly demonstrates the fallaciousness of the purely statistical stance and fills some important gaps in our knowledge on this subject, which will hopefully rectify the situation in the coming years. \\ [[https://www.youtube.com/watch?v=8nHVUFqI0zk|YouTube lecture by J. Pearl on causation]]. |
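\\
A minimal sketch of this difference, under invented assumptions (the "switch causes bulb" generative model below is made up for illustration): from observation alone, the correlation between switch and bulb is symmetric and says nothing about which variable to act on; only an intervention - forcing a variable to a value, Pearl's //do// - reveals the direction of the causal arrow.
<code python>
# Illustrative sketch: correlation is symmetric, intervention reveals direction.
# The switch -> bulb generative model (with a 10% chance of a broken bulb)
# is an assumption made up for this example.
import random

def world(switch=None):
    """One observation of the toy world; passing 'switch=' forces the switch (an intervention)."""
    s = random.randint(0, 1) if switch is None else switch
    b = s if random.random() > 0.1 else 0     # bulb follows the switch, unless broken
    return s, b

def corr(pairs):
    """Plain Pearson correlation of a list of (x, y) samples."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x, _ in pairs) / n
    vy = sum((y - my) ** 2 for _, y in pairs) / n
    return cov / (vx * vy) ** 0.5

# 1) Pure observation: switch and bulb correlate strongly, but the number is
#    symmetric in the two variables - it does not say which one to act on.
obs = [world() for _ in range(10000)]
print("corr(switch, bulb) =", round(corr(obs), 2))

# 2) Empirical experimentation (intervention): force the switch, watch the bulb.
on = [world(switch=1)[1] for _ in range(10000)]
off = [world(switch=0)[1] for _ in range(10000)]
print("P(bulb on | do(switch=1)) =", sum(on) / len(on))
print("P(bulb on | do(switch=0)) =", sum(off) / len(off))
# Forcing the bulb instead (e.g. unscrewing it) would leave the switch's
# distribution unchanged, so the arrow points switch -> bulb.
</code>
The observational correlation in step 1 supports prediction; only the interventional probabilities in step 2 tell us where to act if we want the light on, which is exactly the distinction drawn in the table above.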
\\
\\
==== Explanation & Explainability ====
| What It Is | The ability (of anyone or anything) to explain, after the fact, during, or before, why something happened the way it did, how it could have happened differently but didn't (in general and/or this time), and why. |
| In AI | The ability of a controller to explain, after the fact, during, or before, why it did something or intends to do it. |
| Explanation Depends On Causation | It is impossible to explain anything in any useful way without referring to general causal relations; discernible causal structure is a prerequisite for explainability. |
| Why It Matters | If a controller does something we don't want it to repeat - e.g. crash an airplane full of people (in simulation mode, hopefully!) - it needs to be able to explain why it did what it did. If it can't, it means that it - and //we// - can never be sure of why it did what it did, whether it had any other choice, or under what conditions it might do it again. |
| \\ Bottom Line for \\ Human-Level AI | To grow, learn and self-inspect, an AI must be able to sort out causal chains. If it can't, it will not only be incapable of explaining to others why it is the way it is, it will be incapable of explaining to itself why things are the way they are, and thus it will be incapable of sorting out whether something it did is better for its own growth than something else. Explanation is the big black hole of ANNs: in principle ANNs are black boxes, and thus they are in principle unexplainable - whether to themselves or to others. \\ One way to address this is by encapsulating knowledge as hierarchical models that are built up over time and can be de-constructed at any time (as AERA does); a toy sketch follows below. |
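\\
To make "hierarchical, de-constructible models" concrete, here is a toy sketch. The model format (precondition, action, predicted effect), the two example models and all names are assumptions invented for illustration - this is not AERA's actual representation. The point is that when knowledge is stored as explicit cause-effect models, the controller can trace which models fired and answer "why" in terms of causes, which a single opaque function approximator cannot do.
<code python>
# Toy sketch of explainable, de-constructible causal knowledge.
# Model format and example content are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class CausalModel:
    name: str
    precondition: str   # what must hold for the model to apply
    action: str         # what the controller does
    effect: str         # what the model predicts will follow

KNOWLEDGE = [
    CausalModel("m1", "room is dark", "flip switch up", "light is on"),
    CausalModel("m2", "light is on",  "read book",      "goal 'read' achieved"),
]

def plan(goal, facts, trace=None):
    """Back-chain from the goal through the causal models, recording why each step is taken."""
    trace = [] if trace is None else trace
    for m in KNOWLEDGE:
        if m.effect == goal:
            if m.precondition not in facts:      # satisfy the precondition first
                plan(m.precondition, facts, trace)
            trace.append(m)
            facts.add(m.effect)
    return trace

def explain(trace):
    """Reconstruct a 'why' for each action from the recorded causal chain."""
    return [f"Did '{m.action}' because '{m.precondition}' held and it causes '{m.effect}'."
            for m in trace]

steps = plan("goal 'read' achieved", {"room is dark"})
print("\n".join(explain(steps)))
</code>
Because the causal chain is explicit it can be taken apart after the fact ("what would have happened had the switch not been flipped?"), which is exactly what a trained weight matrix, on its own, does not offer.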
\\
\\
==== Self-Explaining Systems ====
| What It Is | The ability of a controller to explain, after the fact or before, why it did something or intends to do it. |
| 'Explainability' \\ ≠ \\ 'self-explanation' | If an intelligence X can explain a phenomenon Y, then Y is 'explainable' by X, through some process chosen by X. \\ \\ In contrast, if an intelligence X can explain itself - its own actions, knowledge, understanding, beliefs, and reasoning - it is capable of self-explanation. The latter is stronger and subsumes the former. |
| Why It Matters | If a controller does something we don't want it to repeat - e.g. crash an airplane full of people (in simulation mode, hopefully!) - it needs to be able to explain why it did what it did. If it can't, it means that it - and //we// - can never be sure of why it did what it did, whether it had any other choice, whether it's an evil machine that actually meant to do it, or how likely it is to do it again. |
| Why It Matters \\ More Than You Think | The 'Explanation Hypothesis' (ExH) states that explanation is in fact a fundamental element of all advanced learning, because explanation is a way to weed out alternative (and incorrect) hypotheses about how the world works. For instance, if the knowledge already exists in a controller to do the right thing - for the right //reason// - in an emergency situation, then the //explanation// of why it does what it does //already exists embedded in its knowledge//. \\ See [[https://proceedings.mlr.press/v159/thorisson22b/thorisson22b.pdf|Thórisson 2022]]. |
| \\ Human-Level AI | Even more importantly, to grow, learn and self-inspect, an AI system must be able to sort out causal chains. If it can't, it will not only be incapable of explaining to others why it is the way it is, it will be incapable of explaining to itself why things are the way they are, and thus it will be incapable of sorting out whether something it did is better for its own growth than something else. Explanation is the big black hole of ANNs: in principle ANNs are black boxes, and thus they are in principle unexplainable - whether to themselves or to others. \\ One way to address this is by encapsulating knowledge as hierarchical models that are built up over time and can be de-constructed at any time (as AERA does). |
\\
\\
==== Trustworthiness ====
| What It Is | The ability of a machine's owner to trust that the machine will do what it is supposed to do. |
| \\ Why It Matters | Any machine created by humans is created for a **purpose**. The more reliably it does its job (and nothing else) and does it well, the more trustworthy it is. Trusting simple machines like thermostats is mostly a matter of durability, since they have very few open variables (variables unbound at time of manufacture), their task is well defined and well known, and their reasonably precise operation can be ensured with simple engineering. |
| AI | In contrast to simple machines, AI is supposed to handle diversity in one or more tasks. A learning AI system goes one step further by leaving the machine's **tasks** undefined at manufacturing time. The smarter an AI system is, the more diversity it can handle. A requirement should be that "trustworthiness grows with the mindpower of the machine". |
| \\ Human-Level AI | Making human-level AI trustworthy is very different from creating simple machines, because so many variables are unbound at manufacture time. What does trustworthiness mean in this context? We can look at human trustworthiness: numerous methods exist for ensuring it (driver's licenses, air traffic controller training, certification programs, etc.). We can apply the same certification programs to all humans because their principles of operation are shared at multiple levels of detail (biology, sociology, psychology). For AI this is different, because the variability in the makeup of the machines is enormous. This makes the trustworthiness of AI robots a complex issue. |
| Achieving Trustworthiness | Requires **reliability** and **predictability** at multiple levels of operation. Trustworthiness can be ascertained through special certification programs geared directly at the **kind of robot/AI system in question** (somewhat like certifying a particular horse as safe for a particular circumstance and purpose, e.g. horseback riding for kids). |
| Trustworthiness Methods | Methods for ensuring the trustworthiness of AI are still in their infancy. |
\\
\\
//2024 (c) K. R. Thórisson//