[[http://cadia.ru.is/wiki/public:t-720-atai:atai-19:main|T-720-ATAI-2019 Main]] \\ [[http://cadia.ru.is/wiki/public:t-720-atai:atai-19:Lecture_Notes|Links to Lecture Notes]] =====T-720-ATAI-2019===== ====Lecture Notes: AI Architectures==== \\ \\ \\ --------------- \\ \\ ====System Architecture==== | What it is | In CS: the organization of the software that implements a system. \\ In AI: the total system that has direct and independent control of the behavior of an agent via its sensors and effectors. | | Why it's important | The system architecture determines what kind of information processing can be done, and what the system as a whole is capable of in a particular Task-Environment. | | Key concepts | process types; process initiation; information storage; information flow. | | Graph representation | A common way to represent a system: processes as nodes, information flow as edges. | | \\ Relation to AI | The term "system" not only includes the processing components, the functions these implement, their input and output, and relationships, but also temporal aspects of the system's behavior as a whole. This is important in AI because any controller of an agent is supposed to control it in such a way that its behavior can be classified as being "intelligent". But what are the necessary and sufficient components of that behavior set? | | \\ Rationality | The "rationality hypothesis" models an intelligent agent as a "rational" agent: an agent that would always do the most "sensible" thing at any point in time. \\ The problem with the rationality hypothesis is that, given insufficient resources, including time, the concept of rationality doesn't hold up, because it assumes you have time to weigh all alternatives (or, if you have limited time, that you can choose to evaluate the most relevant options and choose among those).
But since such decisions are always about the future, and we cannot predict the future perfectly, for most decisions where we get a choice in how to proceed there is no such thing as a rational choice. | | Satisficing | Herbert Simon proposed the concept of "satisficing" to replace the concept of "optimizing" when talking about intelligent action in a complex task-environment: actions that meet a particular minimum requirement in light of a particular goal 'satisfy' and 'suffice' for the purposes of that goal. | | Intelligence is in part a systemic phenomenon | Thought experiment: Take any system we deem intelligent, e.g. a 10-year-old human, and isolate any of his/her skills and features. A machine that implements any //single// one of these is unlikely to seem worthy of being called "intelligent" (viz. chess programs), without further qualification (e.g. "a limited expert in a sub-field"). \\ //"The intelligence **is** the architecture."// - KRTh | \\ \\ ====Inferred AGI Architectural Features ==== | \\ Large architecture | An architecture that is considerably more complex than the systems being built in most AI labs today is likely unavoidable. In a complex architecture the issue of concurrency of processes must be addressed, a problem that has not yet been sufficiently resolved in present software and hardware. This scaling problem cannot be addressed by the usual “we’ll wait for Moore’s law to catch up” because the issue does not primarily revolve around //speed of execution// but around the //nature of the architectural principles of the system and their runtime operation//. || | \\ Predictable Robustness in Novel Circumstances | The system must have a robustness in light of all kinds of task-environment and embodiment perturbations, otherwise no reliable plans can be made, and thus no reliable execution of tasks can ever be reached, no matter how powerful the learning capacity.
This robustness must be predictable a priori at some level of abstraction -- for a wide range of novel circumstances it cannot be a complete surprise that the system "holds up". (If this were the case then the system itself would not be able to predict its chances of success in the face of novel circumstances, thus eliminating an important part of the "G" from its "AGI" label.) || | \\ Graceful Degradation | Part of the robustness requirement is that the system be constructed in such a way as to minimize the potential for catastrophic (and unpredictable) failure. A programmer forgets to delimit a command in a compiled program and the whole application crashes; this kind of brittleness is not an option for cognitive systems operating in partially stochastic environments, where perturbations may come in any form at any time (and perfect prediction is impossible). || | Transversal Functions | The system must have pan-architectural characteristics that enable it to operate consistently as a whole, to be highly adaptive (yet robust) in its own operation across the board, including metacognitive abilities. Some functions likely to be needed to achieve this include attention, learning, analogy-making capabilities, and self-inspection. || | \\ Transversal Time | Ignoring (general) temporal constraints is not an option if we want AGI. Move over, Turing! Time is a semantic property, and the system must be able to understand – and be able to //learn to understand// – time as a real-world phenomenon in relation to its own skills and architectural operation. Time is everywhere, and is different from other resources in that there is a global clock which cannot, for many task-environments, be turned backwards. Energy must also be addressed, but may not be as fundamentally detrimental to ignore as time while we are in the early stages of exploring methods for developing auto-catalytic knowledge acquisition and cognitive growth mechanisms.
\\ Time must be a tightly integrated phenomenon in any AGI architecture - managing and understanding time cannot be retrofitted to a complex architecture! | | | \\ Transversal Learning | The system should be able to learn anything and everything, which means that learning is probably not best located in a particular "module" or "modules" in the architecture. \\ Learning must be a tightly integrated phenomenon in any AGI architecture, and must be part of the design from the beginning - implementing general learning into an existing architecture is out of the question: learning cannot be retrofitted to a complex architecture! | | | Transversal Resource Management | Resource management - //attention// - must be tightly integrated. \\ Attention must be part of the system design from the beginning - retrofitting resource management into an architecture that didn't include it from the beginning is next to impossible! | | | Transversal Analogies | Analogy-making must be included in the system design from the beginning - retrofitting the ability to make general analogies between anything and everything is impossible! | | | \\ Transversal Self-Inspection | Reflectivity, as it is known, is a fundamental property of knowledge representation. The fact that we humans can talk about the stuff that we think about, and can talk about the fact that we talk about the fact that we can talk about it, strongly implies that reflectivity is a key property of AGI systems. \\ Reflectivity must be part of the architecture from the beginning - retrofitting this ability into any architecture is virtually impossible! | | | Transversal Integration | A general-purpose system must tightly and finely coordinate a host of skills, including their acquisition, transitions between skills at runtime, how to combine two or more skills, and transfer of learning between them over time at many levels of temporal and topical detail.
| \\ \\ \\ \\ ====Self-Programming ==== | What it is | //Self-programming// here means, with respect to some virtual machine M, the production of one or more programs created by M itself, whose //principles// for creation were provided to M at design time, but whose details were //decided by// M //at runtime// based on its //experience//. | | Self-Generated Program | Determined by some factors in the interaction between the system and its environment. | | Historical note | The concept of self-programming is old (J. von Neumann was one of the first to talk about self-replication in machines). However, few if any proposals for how to achieve this have been fielded. [[https://en.wikipedia.org/wiki/Von_Neumann_universal_constructor|Von Neumann's universal constructor on Wikipedia]] | | No guarantee | The fact that a system has the ability to program itself is no guarantee that it is in a better position than a traditional system. In fact, it is in a worse situation, because in this case there are more ways in which its performance can go wrong. | | Why we need it | The inherent limitations of hand-coding methods make traditional manual programming approaches unlikely to reach the level of a human-grade generally intelligent system, simply because to be able to adapt to a wide range of tasks, situations, and domains, a system must be able to modify itself in more fundamental ways than a traditional software system is capable of. | | Remedy | Sufficiently powerful principles are needed to ensure against the system going rogue. | | The Self of a machine | **C1:** The processes that act on the world and the self (via senctors) evaluate the structure and execution of code in the system and, respectively, synthesize new code. \\ **C2:** The models that describe the processes in C1, entities and phenomena in the world -- including the self in the world -- and processes in the self. Goals contextualize models and they also belong to C2.
\\ **C3:** The states of the self and of the world -- past, present and anticipated -- including the inputs/outputs of the machine. | | Bootstrap code | A.k.a. the "seed". Bootstrap code may consist of ontologies, states, models, internal drives, exemplary behaviors and programming skills. | \\ \\ ==== Programming for Self-Programming ==== | Can we use LISP? | Any language with features similar to LISP's (e.g. Haskell, Prolog, etc.), i.e. the ability to inspect itself and to turn data into code and code into data, should //in theory// be capable of sustaining a self-programming machine. | | Theory vs. practice | "In theory" is most of the time //not good enough// if we want to see something soon (as in the next decade or two), and this is the case here too; what is good for a human programmer is not so good for a system having to synthesize its own code in real-time. | | Why? | Building a machine that can write (sensible, meaningful!) programs means that the machine is smart enough to understand the code it produces. If the purpose of its programming is to //become// smart, and the programming language we give it //assumes it's smart already//, we have defeated the purpose of creating the self-programming machine in the first place. | | What can we do? | We must create a programming language with //simple enough// semantics so that a simple machine (perhaps with some clever emergent properties) can use it to bootstrap itself in learning to write programs. | | Does such a language exist? | Yes. It's called [[http://alumni.media.mit.edu/~kris/ftp/nivel_thorisson_replicode_AGI13.pdf|Replicode]]. | \\ \\ \\ \\ \\ \\ ====The SOAR Architecture==== | What it is | One of the oldest cognitive architectures in history. | | Why is it important | One of the oldest AGI-aspiring systems in history.
| | How does it work | A reasoning engine does pattern-matching with hand-coded 'production' rules and 'operators' to solve problems, with an ability to "chunk" - create 'shortcuts' for long transitive reasoning chains. Upon an 'impasse' (a break in the flow of reasoning/problem-solving) a reasoning process tries to resolve it via successive application of relevant rules. | | Recent Additions | Reinforcement learning for steering reasoning. Sub-symbolic processing for low-level perception. | | Missing in Action | Attention (resource control, self-control), symbolic learning (other than chunking). | SOAR is a relatively mature cognitive architecture that has been used by many researchers worldwide during its 20-year lifespan. During this time it has also been revised and extended in a number of ways. The architecture consists of heterogeneous components that interact during each decision cycle. These are working memory and three types of long-term memory: semantic, procedural and episodic. Working memory is where information related to the present is stored, with its contents being supplied by sensors or copied from other memory structures based on relevancy to the present situation. Working memory also contains an activation mechanism, used in conjunction with episodic memory, that indicates the relevancy and usefulness of working memory elements. Production rules are matched and fired on the contents of working memory during the decision cycle, implementing both an associative memory mechanism (as rules can bring data from long-term memory into working memory) and action selection (as rules propose, evaluate and apply operators). Operators are procedural data stored in procedural memory. The application of an operator is carried out by a production rule and either causes changes in the working memory or triggers an external action.
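The match-propose-select-apply loop described above can be sketched in a few lines. This is a toy illustration only, with made-up rules and operator names, not code from any actual SOAR rule base:

```python
# Toy sketch of SOAR-style decision cycles: production rules match on the
# contents of working memory and propose operators; an unambiguous proposal
# is selected and applied, anything else raises an impasse.

working_memory = {"holding": "nothing", "clear": "A"}

# Each production: (condition over working memory, operator it proposes).
productions = [
    (lambda wm: wm["holding"] == "nothing" and wm["clear"] == "A", "pick-up-A"),
    (lambda wm: wm["holding"] == "A", "put-A-on-B"),
]

# Operators are procedural knowledge: how an operator changes working memory.
operators = {
    "pick-up-A": lambda wm: {**wm, "holding": "A", "clear": "B"},
    "put-A-on-B": lambda wm: {**wm, "holding": "nothing", "on": ("A", "B")},
}

def decision_cycle(wm):
    """One cycle: match -> propose -> select -> apply (or impasse)."""
    proposed = [op for cond, op in productions if cond(wm)]
    if len(proposed) != 1:
        # Zero or several candidates: SOAR would recurse on the same cycle
        # to resolve the impasse, then 'chunk' the result into a new rule.
        raise RuntimeError(f"impasse: {proposed}")
    return operators[proposed[0]](wm)

working_memory = decision_cycle(working_memory)   # applies pick-up-A
working_memory = decision_cycle(working_memory)   # applies put-A-on-B
print(working_memory["on"])  # ('A', 'B')
```

Note how action selection lives in the rules themselves (as SOAR's rules propose and evaluate operators), while the operators are separate procedural data - mirroring the division between production memory and procedural memory described above.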
In cases where operator selection fails due to insufficient knowledge, an impasse event occurs and a process to resolve the impasse is started. This process involves reasoning and inference upon existing knowledge using the same decision cycle in a recursive fashion; the results of this process are converted to production rules by a process termed chunking. Reinforcement learning is used for production rules relating to operator selection to maximize future rewards in similar situations. One of the most recent additions to the SOAR architecture is sub-symbolic processing used for visual capabilities, where the bridge between sub-symbolic and symbolic processing consists of feature detection. As the working memory can contain execution traces, introspective abilities are possible. The SOAR architecture provides one of the largest collections of simultaneously running cognitive processes of any cognitive architecture so far. However, there is no explicit mechanism for control of attention and the architecture is not designed for real-time operation. The latter may be especially problematic as execution is in strict lock-step form and, in particular, the duration (amount of computation) of each decision cycle can vary greatly due to impasse events that are raised occasionally. One might argue that the development of SOAR has been somewhat characterized by "adding boxes" (components) to the architecture when it might be better to follow a more unified approach putting integration at the forefront. There are a few cognitive architectures that somewhat resemble SOAR and can be placed categorically on the same track. These include ICARUS, which has a strong emphasis on embodiment and has shown promise in terms of generality in a number of toy problems such as in-city driving, and LIDA, which was developed for the US Navy to automatically organize and negotiate assignments with sailors but does not have embodiment as a design goal.
As in SOAR, both of these implement different types of memory in specialized components and have a lock-step decision cycle. 2013(c)Helgi P. Helgason \\ \\ \\ \\ ====The AERA System==== The Auto-catalytic Endogenous Reflective Architecture – AERA – is an AGI-aspiring architectural blueprint that was produced as part of the HUMANOBS FP7 project. It encompasses several fundamentally new ideas in the history of AI, including a new programming language specifically conceived to solve some major limitations of prior efforts in this respect, including self-inspection and self-representation, distributed representation of knowledge, and distributed reasoning. AERA systems are any-time, real-time, incremental/continuous learning, on-line learning systems. AERA's knowledge is stored in models, which essentially encode transformations on input, to produce output. Models have a trigger side (left-hand side) and a result side (right-hand side). In a forward-chaining scenario, when a particular piece of data matches the left-hand side of a model (it is only allowed to test the match if the data has high enough saliency and the program has sufficient activation), the model fires, producing the output specified by its right-hand side and injecting it into a global memory store. The semantics of the output is prediction, and the semantics of the input is either fact or prediction. Notice that a model in AERA is not a production rule; a model relating A to B does not mean “A entails B”, it means A predicts B, and it has an associated confidence value. Such models stem invariably (read: most of the time) from the system's experience, and in the early stages of learning an AERA-based system's set of models may consist mostly of fairly useless and bad models, all with relatively low confidence values (“not all models are created equal – some are in fact better than others”).
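The forward-chaining behavior of models can be sketched as follows. This is a minimal illustration under assumed names and thresholds; the real system, written in Replicode, is far richer:

```python
# Minimal sketch of AERA-style forward chaining: a model fires only if its
# trigger (lhs) matches a sufficiently salient datum and the model itself is
# sufficiently activated; what it injects is a prediction with a confidence,
# not a logical entailment. Names and thresholds here are illustrative only.
from dataclasses import dataclass

@dataclass
class Datum:
    content: str
    saliency: float        # how visible this datum is inside the system

@dataclass
class Model:
    lhs: str               # trigger side (left-hand side)
    rhs: str               # result side; its output has 'prediction' semantics
    confidence: float      # learned reliability of the model
    activation: float      # determines whether the model may run

SALIENCY_MIN = 0.5         # assumed thresholds, for illustration
ACTIVATION_MIN = 0.5

def forward_chain(memory, models):
    """Match salient data against the lhs of active models; each firing
    injects a (prediction, confidence) pair back into global memory."""
    predictions = []
    for d in memory:
        if d.saliency < SALIENCY_MIN:
            continue                       # datum not visible enough to match
        for m in models:
            if m.activation >= ACTIVATION_MIN and m.lhs == d.content:
                predictions.append((m.rhs, m.confidence))
    return predictions

memory = [Datum("hand-pushes-door", saliency=0.9)]
models = [Model(lhs="hand-pushes-door", rhs="door-opens",
                confidence=0.8, activation=1.0)]
print(forward_chain(memory, models))  # [('door-opens', 0.8)]
```

The two gating values capture the distributed-control principle: saliency regulates what data can be seen, activation regulates which programs may run, so control emerges from many local match decisions rather than a central scheduler.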
In backward-chaining – to implement the process of abduction – models act the other way around: when some data match the right-hand side, a model produces new data patterned after its left-hand side, whose semantics essentially state that “if you want a B (on the right-hand side) perhaps it would help to get an A (the term on the left-hand side)”. The semantics of both the input and the output is “goal”. A key principle of AERA operation is that of a distributed production process: each program in AERA has a level of activation that determines whether it is allowed to run or not. Every piece of data has a corresponding saliency level that determines how visible it is inside the system. In AERA there is a single memory, but it (typically) embeds groups that allow sets of data and programs to be addressed, e.g. changing their activation, saliency, or existence (via creation or deletion). \\ \\ ====High-Level View of AERA==== | AERA | The Auto-Catalytic Endogenous Reflective Architecture is an AGI-aspiring self-programming system that combines feedback and feed-forward control in a model-based and model-driven system that is programmed with a seed. | | {{/public:t-720-atai:aera-high-level-2018.png?700}} || | High-level view of the three main functions at work in a running AERA system and their interaction with its knowledge store. || | \\ Models | All models are stored in a central //memory//, and the three processes of //planning//, //attention// (resource management) and //learning// happen as a result of programs that operate on models by matching, activating, and scoring them. Models that predict correctly -- not just "what happens next?" but also "what will happen if I do X?" -- get a success point. Every time a model 'fires' like that it gets counted, so the ratio of successes over counts gives the "goodness" of a model.
\\ Models that have the lowest scores are deleted, and models with a good score that suddenly fail result in the generation of new versions of themselves (think of these as hypotheses for why the model failed this time); over time this process increases the quality and utility of the controller's knowledge -- in other words, it //learns//. | | \\ Attention | Attention is nothing more than resource management; in the case of cognitive controllers it typically involves management of knowledge, time, energy, and computing power. Attention in AERA is the set of functions that decides how the controller uses its compute time, how long it "mulls things over", and how far into the future it allows itself to "think". It also determines which models the system works with at any point in time, and how much it explores models outside of the obvious candidate set at any point in time. | | \\ Planning | Planning is the set of operations involved with looking at alternative ways of proceeding, based on predictions into the future and the quality of the solutions found so far, at any point in time. The plans produced by AERA are of a mixed opportunistic (short time horizon)/firm commitment (long time horizon) kind, and their stability (they are subject to drastic change over their course) depends solely on the dependability of the models involved -- i.e. how well the models represent what is actually going on in the world (including in the controller's "mind"). | | Learning | Learning happens as a result of the accumulation of models; as they increasingly describe "reality" (i.e. their target phenomena) better, they get better for planning and attention, which in turn improves the learning. | | Memory | AERA's "global knowledge base" is in some ways similar to the idea of blackboards: AERA stores all its knowledge in a "global workspace" or memory.
Unlike (Selfridge's idea of) blackboards, AERA's memory contains executive functions that manage the knowledge dynamically, in addition to "the experts", which in AERA's case are very tiny and better thought of as "models with codelet helpers". | | Pervasive Use of Codelets | A //codelet// is a piece of code that is smaller than a typical self-contained program, typically a few lines long, and can only be executed in particular contexts. Programs are constructed on the fly by the operation of the whole system, which selects which codelets to run when, based on the knowledge of the system, the active goals, and the state it finds itself in at any point in time. | | \\ No "Modules" | Note that the diagram above may give the false impression that AERA consists of these four software "modules", or "classes", or the like. Nothing could be further from the truth: all of AERA's mechanisms above are sets of functions that are "welded in with" the operation of the whole system, distributed in a myriad of mechanisms and actions. \\ Does this mean that AERA is spaghetti code, or a mess of a design? On the contrary, the integration and overlap of the various mechanisms to achieve the high-level functions depicted in the diagram are surprisingly clean, simple, and coherent in their implementation and operation. \\ This does not mean, however, that AERA is easy to understand -- mainly because it implements mechanisms and relies on concepts that are //very different// from those of most traditional software systems commonly recognized in computer science. | \\ \\ \\ \\ 2019(c)K. R. Thórisson \\ \\ //EOF//