[[/public:t-720-atai:atai-22:main|T-720-ATAI-2022 Main]] \\
[[/public:t-720-atai:atai-22:Lecture_Notes|Links to Lecture Notes]]
\\
\\
\\
===== SYMBOLS, MODELS, CAUSALITY: AI Architectures =====
\\
\\
\\
====Refresher: System Architecture====
| What it is | In CS: the organization of the software that implements a system. \\ In AI: The total system that has direct and independent control of the behavior of an Agent via its sensors and effectors. |
| Why it's important | The system architecture determines what kind of information processing can be done, and what the system as a whole is capable of in a particular Task-Environment. |
| Key concepts | process types; process initiation; information storage; information flow. |
| Graph Representation | A common way to represent a system architecture: processes as nodes, information flow as edges. |
| \\ Relation to AI | The term "system" includes not only the processing components, the functions these implement, their inputs, outputs and relationships, but also temporal aspects of the system's behavior as a whole. This is important in AI because any controller of an agent is supposed to control it in such a way that its behavior can be classified as being "intelligent". But what are the necessary and sufficient components of that behavior set? |
| \\ \\ Rationality | The "rationality hypothesis" models an intelligent agent as a "rational" agent: an agent that always does the most "sensible" thing at any point in time. \\ The problem with the rationality hypothesis is that, given insufficient resources (including time), the concept of rationality doesn't hold up: it assumes there is time to weigh all alternatives (or, if time is limited, that the most relevant options can be identified and chosen among). But since such decisions are always about the future, and we cannot predict the future perfectly, for most decisions in which we have a choice of how to proceed there is no such thing as a rational choice. |
| \\ Satisficing | Herbert Simon proposed the concept of "satisficing" to replace the concept of "optimizing" when talking about intelligent action in a complex task-environment. Actions that meet a particular minimum requirement in light of a particular goal 'satisfy' and 'suffice' for the purposes of that goal (see the sketch after this table). |
| Intelligence is in part a systemic phenomenon | Thought experiment: Take any system we deem intelligent, e.g. a 10-year old human, and isolate any of his/her skills and features. A machine that implements any //single// one of these is unlikely to seem worthy of being called "intelligent" (viz chess programs), without further qualification (e.g. "a limited expert in a sub-field"). \\ //"The intelligence **is** the architecture."// - KRTh |
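To make the contrast between optimizing and satisficing concrete, here is a minimal Python sketch (not from the course materials): an exhaustive "rational" chooser evaluates every option and picks the best, while a satisficer commits to the first option that meets an aspiration level, or to the best option seen so far when its time budget runs out. The function names, threshold and time budget are illustrative assumptions.

<code python>
import time
import random

def optimize(options, evaluate):
    # Exhaustive "rational" choice: evaluate every option, pick the best.
    # Infeasible when options are many, evaluation is costly and time is scarce.
    return max(options, key=evaluate)

def satisfice(options, evaluate, threshold, time_budget_s):
    # Simon-style satisficing: return the first option whose value meets the
    # aspiration level (threshold), or the best seen when time runs out.
    deadline = time.monotonic() + time_budget_s
    best, best_value = None, float("-inf")
    for option in options:
        value = evaluate(option)
        if value >= threshold:
            return option                 # good enough -- stop searching
        if value > best_value:
            best, best_value = option, value
        if time.monotonic() >= deadline:
            break                         # out of time -- commit to best so far
    return best

if __name__ == "__main__":
    options = range(1_000_000)
    evaluate = lambda x: random.random()  # stand-in for a costly evaluation
    print(satisfice(options, evaluate, threshold=0.99, time_budget_s=0.01))
</code>

Running `optimize` on the same million options would require evaluating every one of them; the satisficer typically terminates after a handful of evaluations.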
\\
====Refresher: Inferred GMI Architectural Features ====
| \\ Large architecture | From the above we can readily infer that if we want GMI, an architecture that is considerably more complex than systems being built in most AI labs today is likely unavoidable. In a complex architecture the issue of concurrency of processes must be addressed, a problem that has not yet been sufficiently resolved in present software and hardware. This scaling problem cannot be addressed by the usual “we’ll wait for Moore’s law to catch up” because the issue does not primarily revolve around //speed of execution// but around the //nature of the architectural principles of the system and their runtime operation//. ||
| \\ Predictable Robustness in Novel Circumstances | The system must be robust in the face of all kinds of task-environment and embodiment perturbations, otherwise no reliable plans can be made, and thus no reliable execution of tasks can ever be reached, no matter how powerful the learning capacity. This robustness must be predictable a priori at some level of abstraction -- for a wide range of novel circumstances it cannot be a complete surprise that the system "holds up". (If this were the case then the system itself would not be able to predict its chances of success in the face of novel circumstances, thus eliminating an important part of the "G" from its "GMI" label.) ||
| \\ Graceful Degradation | No general autonomous system operating in the physical world for any length of time can perform flawlessly throughout its lifetime. Part of the robustness requirement is that the system be constructed in such a way as to minimize the potential for catastrophic (and unpredictable, unrecoverable) failure. \\ A programmer forgets to delimit a command in a compiled program and the whole application crashes; this kind of brittleness is not an option for cognitive systems operating in partially stochastic environments, where perturbations may come in any form at any time (and perfect prediction is impossible). \\ One way for a cognitive system to achieve graceful degradation is through reflection, which enables it to learn, over time, about its own fallacies, shortcomings, and lack of knowledge. ||
| Transversal Functions | The system must have pan-architectural characteristics that enable it to operate consistently as a whole, to be highly adaptive (yet robust) in its own operation across the board, including metacognitive abilities. Some functions likely to be needed to achieve this include attention, learning, analogy-making capabilities, and self-inspection. ||
| | \\ Transversal Time | Ignoring (general) temporal constraints is not an option if we want GMI. (Move over Turing!) Time is a semantic property, and the system must be able to understand – and be able to //learn to understand// – time as a real-world phenomenon in relation to its own skills and architectural operation. Time is everywhere, and is different from other resources in that there is a global clock which cannot, for many task-environments, be turned backwards. Energy must also be addressed, but may not be as fundamentally detrimental to ignore as time while we are in the early stages of exploring methods for developing auto-catalytic knowledge acquisition and cognitive growth mechanisms. \\ Time must be a tightly integrated phenomenon in any GMI architecture - managing and understanding time cannot be retrofitted to a complex architecture! |
| | \\ Transversal Learning | The system should be able to learn anything and everything, which means that learning is probably not best located in a particular "module" or "modules" in the architecture. \\ Learning must be a tightly integrated phenomenon in any GMI architecture, and must be part of the design from the beginning - implementing general learning into an existing architecture is out of the question: Learning cannot be retrofitted to a complex architecture! |
| | Transversal Resource Management | Resource management - //attention// - must be tightly integrated.\\ Attention must be part of the system design from the beginning - retrofitting resource management into an architecture that didn't include it from the beginning is next to impossible! |
| | Transversal Analogies | Analogies must be included in the system design from the beginning - retrofitting the ability to make general analogies between anything and everything is impossible! |
| | \\ Transversal Self-Inspection | Reflectivity, as it is known, is a fundamental property of knowledge representation. The fact that we humans can talk about the stuff that we think about, and can talk about the fact that we talk about the fact that we can talk about it, strongly implies that reflectivity is a key property of GMI systems. \\ Reflectivity must be part of the architecture from the beginning - retrofitting this ability into any architecture is virtually impossible! |
| | Transversal Skill Integration | A general-purpose system must tightly and finely coordinate a host of skills, including their acquisition, transitions between skills at runtime, how to combine two or more skills, and transfer of learning between them over time at many levels of temporal and topical detail. |
\\
==== Autonomy & Closure ====
| Autonomy | The ability to do tasks without interference / help from others in a particular task-environment in a particular world. |
| Cognitive Autonomy | Refers to the mental (control-) independence of agents - the more independent they are (of their designers, of outside aid, etc.) the more autonomous they are. Systems without it could hardly be considered to have general intelligence. |
| Structural Autonomy | Refers to the process through which cognitive autonomy is achieved: Motivations, goals and behaviors are dynamically and continuously (re)constructed by the machine as a result of changes in its internal structure. |
| Operational closure | The system's own operations are all that is required to maintain (and improve) the system itself. |
| \\ Semantic closure | The system's own operations and experience produce/define the meaning of its constituents. //Meaning// can thus be seen as being defined/given by the operation of the system as a whole: the actions it has taken, is taking, could be taking, and has thought about (simulated) taking, both cognitive actions and external actions in its physical domain. For instance, the **meaning** of the act of punching your best friend is the set of implications of that act - actual and potential - and its impact on your own and others' cognition. |
| Self-Programming \\ in Autonomy | The global process that animates computational structurally autonomous systems, i.e. the implementation of both the operational and semantic closures. |
| System evolution | A controlled and planned reflective process at a higher level of abstraction than (domain-focused) learning; a global and never-terminating process of architectural analysis and synthesis. |
| Autonomous Model Acquisition | \\ The ability to create a model of some target phenomenon //autonomously// (i.e. without "calling home"). |
| \\ \\ Challenge | Unless we (the designers of an intelligent controller) know beforehand which signals from the controller cause desired perturbations in o and can hard-wire these from the get-go, the controller must find these signals itself. \\ In task-domains where the number of available signals is vastly greater than the resources the controller can devote to such a search, it may take an unacceptably long time for the controller to find good predictive variables to create models with. \\ This is the situation where V_te >> V_mem, where V_te is the total number of potentially observable and manipulatable variables in the task-environment and V_mem is the number of variables that the agent can hold in its memory at any point in time (see the sketch after this table). |
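The following toy Monte-Carlo sketch (an illustration only, not part of any course code) shows why the search for predictive variables becomes prohibitive when V_te >> V_mem: if the agent can only examine V_mem candidate variables at a time, the number of memory-sized "batches" it must try before it has even seen all genuinely predictive variables grows rapidly with the size of the task-environment. The sampling scheme and all numbers are assumptions chosen for illustration.

<code python>
import random

def expected_batches_to_find_predictors(v_te, v_mem, n_predictive, trials=100):
    """Monte-Carlo estimate of how many memory-sized batches of candidate
    variables must be examined before all genuinely predictive variables
    have been seen, when only v_mem variables fit in memory at a time."""
    predictive = set(range(n_predictive))        # indices of the useful variables
    counts = []
    for _ in range(trials):
        remaining, batches = set(predictive), 0
        while remaining:
            batch = set(random.sample(range(v_te), v_mem))   # what fits in memory
            remaining -= batch
            batches += 1
        counts.append(batches)
    return sum(counts) / trials

if __name__ == "__main__":
    # As v_te grows relative to v_mem, the expected search effort explodes.
    for v_te in (100, 1_000, 10_000):
        print(v_te, expected_batches_to_find_predictors(v_te, v_mem=10, n_predictive=5))
</code>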
\\
====SOAR====
| What it is | One of the oldest cognitive architectures in history. |
| Why is it important | One of the oldest AGI-aspiring systems in history. |
| How does it work | A reasoning engine does pattern-matching with hand-coded 'production' rules and 'operators' to solve problems, with an ability to "chunk" - to create 'shortcuts' for long transitive reasoning chains. Upon an 'impasse' (a break in the flow of reasoning/problem solving) a reasoning process tries to resolve it via successive application of relevant rules (see the sketch after this table). |
| Recent Additions | Reinforcement learning for steering reasoning. Sub-symbolic processing for low-level perception. |
| Missing in Action | Attention (resource control, self-control), symbolic learning (other than chunking). |
| \\ \\ General Description | SOAR has been used by many researchers worldwide during its multi-decade life span. During this time it has also been revised and extended in a number of ways. The architecture consists of heterogeneous components that interact during each decision cycle. These are working memory and three types of long-term memory: **semantic**, **procedural**, and **episodic**. Working memory is where information related to the present is stored, with its contents being supplied by sensors or copied from other memory structures based on relevancy to the present situation. Working memory also contains an activation mechanism, used in conjunction with episodic memory, that indicates the relevancy and usefulness of working memory elements. Production rules are matched and fired on the contents of working memory during the decision cycle, implementing both an associative memory mechanism (as rules can bring data from long-term memory into working memory) and action selection (as rules propose, evaluate and apply operators). Operators are procedural data stored in procedural memory. The application of an operator is carried out by a production rule and either causes changes in the working memory or triggers an external action. In cases where operator selection fails due to insufficient knowledge, an impasse event occurs and a process to resolve the impasse is started. This process involves reasoning and inference upon existing knowledge using the same decision cycle in a recursive fashion; the results of this process are converted to production rules by a process termed chunking. Reinforcement learning is used for production rules relating to operator selection to maximize future rewards in similar situations. One of the most recent additions to the SOAR architecture is sub-symbolic processing used for visual capabilities, where the bridge between sub-symbolic and symbolic processing consists of feature detection. As the working memory can contain execution traces, introspective abilities are possible. \\ \\ The SOAR architecture provides one of the largest collections of simultaneously running cognitive processes of any cognitive architecture so far. However, there is no explicit mechanism for control of attention and the architecture is not designed for real-time operation. The latter may be especially problematic as execution is in strict lock-step form and, in particular, the duration (amount of computation) of each decision cycle can vary greatly due to impasse events that are raised occasionally. One might argue that the development of SOAR has been somewhat characterized by "adding boxes" (components) to the architecture when it might be better to follow a more unified approach putting integration at the forefront. \\ \\ There are a few cognitive architectures that somewhat resemble SOAR and can be placed categorically on the same track. These include ICARUS, which has a strong emphasis on embodiment and has shown promise in terms of generality in a number of toy problems such as in-city driving, and LIDA, which was developed for the US Navy to automatically organize and negotiate assignments with sailors but does not have embodiment as a design goal. As in SOAR, both of these implement different types of memory in specialized components and have a lock-step decision cycle. \\ \\ REF: Helgason, H.P. (2013). [[http://www.ru.is/media/td/Helgi_Pall_Helgason_PhD_CS_HR.pdf|General Attention Mechanisms for Artificial Intelligence Systems]]. Ph.D. Thesis, School of Computer Science, Reykjavik U. |
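The following is a highly simplified Python sketch of the kind of production-rule decision cycle described above: match rules against working memory, let them propose operators, apply a selected operator, and raise an impasse when no operator can be selected. It is //not// SOAR code - the rule, the working-memory encoding and the impasse handler are invented stand-ins, and chunking, reinforcement learning and sub-symbolic processing are omitted.

<code python>
# A highly simplified production-system decision cycle in the spirit of the
# description above -- NOT actual SOAR code.

working_memory = {("block", "A", "on", "B"), ("goal", "A", "on", "table")}

def rule_propose_move(wm):
    # Propose a 'move' operator whenever a block is not where a goal wants it.
    ops = []
    for (tag, blk, _, loc) in [f for f in wm if f[0] == "goal"]:
        if ("block", blk, "on", loc) not in wm:
            ops.append(("move", blk, loc))
    return ops

production_rules = [rule_propose_move]

def resolve_impasse(wm):
    # Placeholder: real SOAR recursively reasons in a sub-state and
    # 'chunks' the result into a new production rule.
    return wm

def decision_cycle(wm):
    # 1. Match & fire rules: they propose operators based on working memory.
    proposed = [op for rule in production_rules for op in rule(wm)]
    # 2. Select an operator; if none can be selected, we have an impasse.
    if not proposed:
        return resolve_impasse(wm)
    _, blk, loc = proposed[0]              # trivial selection policy
    # 3. Apply the operator: change working memory (or act externally).
    wm = {f for f in wm if not (f[0] == "block" and f[1] == blk)}
    wm.add(("block", blk, "on", loc))
    return wm

print(decision_cycle(working_memory))
</code>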
\\
====Features of SOAR====
| Predictable Robustness in Novel Circumstances | Not really | Since SOAR isn't really designed to operate and learn in novel circumstances, but rather to work under variations of what it already knows, this issue hardly comes up. |
| Graceful Degradation | No | The knowledge representation of SOAR is not organized around safe, predictable, or trustworthy operation (SOAR is an //early// experimental architecture). Since SOAR cannot do reflection, and SOAR doesn't learn, there is no way for SOAR to get better at evaluating its own performance over time, with experience. |
| \\ Transversal Functions | \\ No | //Transversal Handling of Time.// No explicit handling of time. \\ //Transversal Learning.// Learning is not a central design target of SOAR; reinforcement learning available as an afterthought. No model-based learning; reasoning present (but highly limited). \\ //Transversal Analogies.// No \\ //Transversal Self-Inspection.// Hardly. \\ //Transversal Skill Integration.// We would be hard-pressed to see any such mechanisms. |
| \\ Symbolic? | \\ CHECK | One of the main features of SOAR is being symbol oriented. However, the symbols do not have very rich semantics as they are limited to simple sentences; few if any mechanisms exist to manage large sets of symbols and statements: The main operations of SOAR are at the level of a dozen sentences or fewer. |
| \\ Models? | No \\ (but yes) | Any good controller of a system is a model of that system. It is, however, unclear what kinds of models SOAR creates. While it bears some surface similarities to NARS in its approach, due to both being based on reasoning, SOAR is fundamentally different because it is axiomatic (i.e. it does not have obvious ways of grounding its knowledge) and it doesn't really support the kind of introspection or general reasoning that NARS does. It is thus hard to see how it would improve or modify its knowledge over time. |
\\
====Non-Axiomatic Reasoning System (NARS)====
| What it is | A reasoning system for handling complex unknown knowledge, based on non-axiomatic knowledge learned from experience. |
| Why is it important | One of the oldest AGI-aspiring systems in history. |
| How does it work | A reasoning engine does autonomous learning based on what is "experienced". Experience must be encoded in NARSese - NARS's native knowledge representation language (see the sketch after this table). |
| Recent Versions | ONA (OpenNARS for Applications) implements a version of NARS that is closer to what we might want for controlling robots ("regular" NARS is more of a "philosopher" than an "engineer"). |
| Missing in Action | An approach for handling continuous information. \\ A more explicit way for reasoning control (resource management). |
| \\ \\ General Description | NARS is fundamentally different from traditional reasoning systems, mainly because of its assumption of insufficient knowledge and resources. Traditional mathematical logic grew out of the study of theorem proving in mathematics, where the domain knowledge is summarized in axioms, the inference rules are truth-preserving, and the resource cost of an inference process is ignored, as long as it is finite. The logic of NARS is named "Non-Axiomatic Logic" (NAL), because none of the knowledge it processes (as premise or conclusion) can be considered an "axiom" with a fixed truth-value. Instead, the system's beliefs are summaries of the system's experience, and are always revisable. \\ In NARS, a "term" names a "concept" that represents a recurring pattern in the system's experience, and a "statement" represents the substitutability of one term for another. Each statement is "true" to a degree, indicating the evidential support the statement gets from available evidence. An inference rule specifies how new statements can be derived from certain existing statements. The memory and control mechanism of the system attempts to use the time-space resources of the system in the most efficient way by dynamically distributing the resources according to the experience of the system and the current context. The overall architecture and working cycle of NARS are explained in the reference below. \\ NARS is adaptive to its experience, and therefore is situated and embodied. Its beliefs summarize the system's experience (rather than describe the world as it is), and its concepts represent patterns in the experience (rather than denote objects in the world). Its inference rules are valid because each conclusion is supported by the evidence provided by the premises (rather than because they derive absolute truth from absolute truth). The system is rational because its conclusions are the best the system can find under the current knowledge and resource restrictions (rather than because they are always absolutely correct or optimal). \\ \\ REF: https://cis.temple.edu/~pwang/NARS-Intro.html |
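A minimal sketch (not taken from any NARS implementation) of the evidence-based truth-values described above: a statement's frequency and confidence are derived from positive and total evidence counts, and revision pools evidence gathered from independent sources. The constant k, the class names and the example statement are assumptions based on published descriptions of NAL; real NARS adds many more inference rules and its own resource-bounded control.

<code python>
from dataclasses import dataclass

K = 1.0  # evidential horizon; k = 1 is the value commonly used in NAL examples

@dataclass
class Evidence:
    """Evidence for a NARS-style statement such as 'raven --> black-thing'."""
    positive: float   # w+ : amount of supporting evidence
    total: float      # w  : total evidence (supporting + conflicting)

    @property
    def frequency(self):
        # f = w+ / w : proportion of the evidence that supports the statement
        return self.positive / self.total

    @property
    def confidence(self):
        # c = w / (w + k) : how stable the frequency is expected to remain
        return self.total / (self.total + K)

def revise(a: Evidence, b: Evidence) -> Evidence:
    # Revision (simplified): pool evidence from independent sources.
    return Evidence(a.positive + b.positive, a.total + b.total)

if __name__ == "__main__":
    seen_today = Evidence(positive=4, total=5)     # 4 of 5 observed ravens were black
    seen_before = Evidence(positive=9, total=10)
    belief = revise(seen_today, seen_before)
    print(f"f = {belief.frequency:.2f}, c = {belief.confidence:.2f}")
</code>

Note how no amount of evidence drives the confidence to 1.0: every belief remains revisable, which is the sense in which the knowledge is non-axiomatic.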
\\
====Features of NARS====
| Predictable Robustness in Novel Circumstances | \\ Yes | NARS is explicitly designed to operate and learn novel things in novel circumstances. It is the only architecture (besides AERA) that is directly based on, and specifically designed around, the **assumption of insufficient knowledge and resources** (AIKR). |
| \\ Graceful Degradation | \\ Yes | While the knowledge representation of NARS is not specifically aimed at achieving safe, predictable, or trustworthy operation, NARS can do reflection, so NARS could learn to get better at evaluating its own performance over time, which means it would become increasingly knowledgeable about its own failure modes, making it increasingly likely to fail gracefully. |
| \\ Transversal Functions | \\ Yes | //Transversal Handling of Time.// Time is handled in a very general and relative manner, like any other reasoning. \\ //Transversal Learning.// Learning is a central design target of NARS. While knowledge in NARS is not explicitly model-based, its knowledge is symbolic and NARSese statements can be thought of as micro-models; reasoning is a fundamental (some would say the only) principle of its operation. \\ //Transversal Analogies.// Yes \\ //Transversal Self-Inspection.// Yes. Via reasoning. \\ //Transversal Skill Integration.// Yes. Via reasoning. |
| Symbolic? | CHECK | One of the main features of NARS is deep symbol orientation. |
| Models? | No \\ (but yes) | Any good controller of a system is a model of that system. The smallest units in NARS that could be called 'models' are NARSese statements. |
\\
\\
====AERA====
| Description | The Auto-Catalytic Endogenous Reflective Architecture is an AGI-aspiring self-programming system that combines reactive, predictive and reflective control in a model-based and model-driven system that is programmed with a seed. |
| {{/public:t-720-atai:aera-high-level-2018.png?600}} ||
| **FIG 1.** High-level view of the three main functions at work in a running AERA system and their interaction with its knowledge store. ||
| \\ Models | All models are stored in a central //memory//, and the three processes of //planning//, //attention// (resource management) and //learning// happen as a result of programs that operate on models by matching, activating, and scoring them. Models that predict correctly -- not just "what happens next?" but also "what will happen if I do X?" -- get a success point. Every time a model 'fires' like that it gets counted, so the ratio of successes over counts gives the "goodness" of a model. \\ Models with the lowest scores are deleted; models with a good score that suddenly fail result in the generation of new versions of themselves (think of these as hypotheses for why the model failed this time), and this process over time increases the quality and utility of the knowledge of the controller -- in other words, it //learns// (see the sketch after this table). |
| \\ Attention | Attention is nothing more than resource management; in the case of cognitive controllers it typically involves management of knowledge, time, energy, and computing power. Attention in AERA is the set of functions that decides how the controller uses its compute time, how long it "mulls things over", and how far into the future it allows itself to "think". It also involves deciding which models the system works with at any point in time, and how much it explores models outside of the obvious candidate set. |
| \\ Planning | Planning is the set of operations involved with looking at alternative ways of proceeding, based on predictions into the future and the quality of the solutions found so far, at any point in time. The plans produced by AERA are of a mixed opportunistic (short time horizon) / firm-commitment (long time horizon) kind, and their stability (they may change drastically over their course) depends solely on the dependability of the models involved -- i.e. how well the models represent what is actually going on in the world (including in the controller's "mind"). |
| Learning | Learning happens as a result of the accumulation of models; as they describe their target phenomena ("reality") increasingly well, they become more useful for planning and attention, which in turn improves the learning. |
| \\ Memory | AERA's "global knowledge base" is in some ways similar to the idea of blackboards: AERA stores all its knowledge in a "global workspace" or memory. Unlike (Selfridge's idea of) blackboards, AERA's memory contains executive functions that manage the knowledge dynamically, in addition to "the experts", which in AERA's case are very tiny and better thought of as "models with codelet helpers". |
| Pervasive Use of Codelets | A //codelet// is a piece of code that is smaller than a typical self-contained program, typically a few lines long, and can only be executed in particular contexts. Programs are constructed on the fly by the operation of the whole system selecting which codelets to run when, based on the knowledge of the system, the active goals, and the state it finds itself in at any point in time. |
| \\ No "Modules" | Note that the diagram above may imply the false impression that AERA consists of these four software "modules", or "classes", or the like. Nothing could be further from the truth: All of AERA's mechanism above are a set of functions that are "welded in with" the operation of the whole system, distributed in a myriad of mechanisms and actions. \\ Does this mean that AERA is spaghetti code, or a mess of a design? On the contrary, the integration and overlap of various mechanisms to achieve the high-level functions depicted in the diagram are surprisingly clean, simple, and coherent in their implementation and operation. \\ This does not mean, however, that AERA is easy to understand -- mainly because it uses concepts and implements mechanisms and relies on concepts that are //very different// from most traditional software systems commonly recognized in computer science. \\ \\ Example Demonstration \\ [[https://www.youtube.com/watch?v=2NQtEJbQCdw|Human-human interaction]] (what S1 observes and learns from) \\ [[https://www.youtube.com/watch?v=SH6tQ4fgWA4|Human-S1 interaction]] (S1 interviewing a human) \\ [[https://www.youtube.com/watch?v=x96HXLPLORg|S1-Human Interaction]] (S1 being interviewed by a human) |
\\
====Features of AERA====
| Predictable Robustness in Novel Circumstances | \\ Yes | \\ Since AERA's learning is goal-driven, its target operational environment is (semi-)novel circumstances. |
| Graceful Degradation | Yes | Knowledge representation in AERA is based around causal relations, which are essential for mapping out "how the world works". Because AERA's knowledge processing is organized around goals, with increased knowledge AERA will get closer and closer to "perfect operation" (i.e. meeting its top-level drives/goals, for which each instance was created). Furthermore, AERA can do reflection, so it gets better at evaluating its own performance over time, meaning it makes (causal) models of its own failure modes, increasing its chances of graceful degradation. |
| \\ Transversal Functions | \\ Yes | //Transversal Handling of Time.// Time is transversal. \\ //Transversal Learning.// Yes. Learning can happen at the smallest level as well as the largest, but generally learning proceeds in small increments. Model-based learning is built in; ampliative (mixed) reasoning is present. \\ //Transversal Analogies.// Yes, but remains to be developed further. \\ //Transversal Self-Inspection.// Yes. AERA can inspect a large part of its internal operations (but not everything). \\ //Transversal Skill Integration.// Yes. This follows naturally from the fact that all models are sharable between anything and everything that AERA learns and does. |
| \\ Symbolic? | \\ CHECK | One of the main features of AERA is that its knowledge is declarable by being symbol-oriented. AERA can learn language in the same way it learns anything else (i.e. goal-directed, pragmatic). AERA has been implemented to handle 20k models, but so far the most complex demonstration used only approx 1400 models. |
| Models? | Yes | Explicit model building is the main learning mechanism. |
\\
\\
\\
\\
//2022(c)K.R.Thórisson//