T-720-ATAI-2020 Main
Links to Lecture Notes
What it is | In CS: the organization of the software that implements a system. In AI: The total system that has direct and independent control of the behavior of an Agent via its sensors and effectors. |
Why it's important | The system architecture determines what kind of information processing can be done, and what the system as a whole is capable of in a particular Task-Environment. |
Key concepts | process types; process initiation; information storage; information flow. |
Graph Representation | Common way to represent processes as nodes, information flow as edges. |
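The graph representation above can be sketched in a few lines of code. This is a minimal illustrative sketch (the class and process names are hypothetical, not part of the course material): processes are nodes and directed edges record information flow.

```python
# Minimal sketch: a system architecture as a directed graph.
# Nodes are processes; an edge (src -> dst) records information flow.
from collections import defaultdict

class ProcessGraph:
    def __init__(self):
        self.flows = defaultdict(list)  # process -> downstream processes

    def add_flow(self, src, dst):
        """Record that information flows from process `src` to `dst`."""
        self.flows[src].append(dst)

    def downstream(self, src):
        """Processes that directly receive information from `src`."""
        return list(self.flows[src])

g = ProcessGraph()
g.add_flow("sensor", "perception")
g.add_flow("perception", "decision")
g.add_flow("decision", "effector")
print(g.downstream("perception"))  # ['decision']
```

Standard graph queries (reachability, cycles, fan-in/fan-out) then become questions about the architecture's information flow.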
Relation to AI | The term “system” not only includes the processing components, the functions these implement, their input and output, and relationships, but also temporal aspects of the system's behavior as a whole. This is important in AI because any controller of an agent is supposed to control it in such a way that its behavior can be classified as being “intelligent”. But what are the necessary and sufficient components of that behavior set? |
Rationality | The “rationality hypothesis” models an intelligent agent as a “rational” agent: An agent that would always do the most “sensible” thing at any point in time. The problem with the rationality hypothesis is that given insufficient resources, including time, the concept of rationality doesn't hold up, because it assumes you have time to weigh all alternatives (or, if you have limited time, that you can choose to evaluate the most relevant options and choose among those). But since such decisions are always about the future, and the future cannot be predicted perfectly, for most decisions in which we have a choice of how to proceed there is no such thing as a single rational choice. |
Satisficing | Herbert Simon proposed the concept of “satisficing” to replace the concept of “optimizing” when talking about intelligent action in a complex task-environment. Actions that meet a particular minimum requirement in light of a particular goal 'satisfy' and 'suffice' for the purposes of that goal. |
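The contrast between optimizing and Simon's satisficing can be made concrete with a short sketch (the function names and option values are illustrative, not from the course material): an optimizer must examine every option, while a satisficer stops at the first option that meets an aspiration level.

```python
# Sketch: optimizing examines all options; satisficing stops at the
# first option whose value meets a minimum requirement (aspiration level).
def optimize(options, value):
    """Exhaustive: scan every option, return the best one."""
    return max(options, key=value)

def satisfice(options, value, aspiration):
    """Bounded: return the first option that is 'good enough'."""
    for option in options:
        if value(option) >= aspiration:
            return option
    return None  # nothing met the aspiration level

options = [3, 7, 5, 9]
best = optimize(options, lambda x: x)             # 9: required scanning everything
good_enough = satisfice(options, lambda x: x, 6)  # 7: stopped at the first sufficient option
```

The satisficer's cost is bounded by how soon a sufficient option appears, not by the size of the option set, which is precisely why it remains viable under limited time.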
Intelligence is in part a systemic phenomenon | Thought experiment: Take any system we deem intelligent, e.g. a 10-year old human, and isolate any of his/her skills and features. A machine that implements any single one of these is unlikely to seem worthy of being called “intelligent” (viz. chess programs), without further qualification (e.g. “a limited expert in a sub-field”). “The intelligence is the architecture.” - KRTh |
Large architecture | From the above we can readily infer that if we want GMI, an architecture that is considerably more complex than systems being built in most AI labs today is likely unavoidable. In a complex architecture the issue of concurrency of processes must be addressed, a problem that has not yet been sufficiently resolved in present software and hardware. This scaling problem cannot be addressed by the usual “we’ll wait for Moore’s law to catch up” because the issue does not primarily revolve around speed of execution but around the nature of the architectural principles of the system and their runtime operation. |
Predictable Robustness in Novel Circumstances | The system must have a robustness in light of all kinds of task-environment and embodiment perturbations, otherwise no reliable plans can be made, and thus no reliable execution of tasks can ever be reached, no matter how powerful the learning capacity. This robustness must be predictable a priori at some level of abstraction – for a wide range of novel circumstances it cannot be a complete surprise that the system “holds up”. (If this were the case then the system itself would not be able to predict its chances of success in face of novel circumstances, thus eliminating an important part of the “G” from its “AGI” label.) |
Graceful Degradation | Part of the robustness requirement is that the system be constructed in such a way as to minimize potential for catastrophic (and unpredictable) failure. A programmer forgets to delimit a command in a compiled program and the whole application crashes; this kind of brittleness is not an option for cognitive systems operating in partially stochastic environments, where perturbations may come in any form at any time (and perfect prediction is impossible). |
Transversal Functions | The system must have pan-architectural characteristics that enable it to operate consistently as a whole, to be highly adaptive (yet robust) in its own operation across the board, including metacognitive abilities. Some functions likely to be needed to achieve this include attention, learning, analogy-making capabilities, and self-inspection. |
Transversal Time | Ignoring (general) temporal constraints is not an option if we want AGI. (Move over Turing!) Time is a semantic property, and the system must be able to understand – and be able to learn to understand – time as a real-world phenomenon in relation to its own skills and architectural operation. Time is everywhere, and is different from other resources in that there is a global clock which cannot, for many task-environments, be turned backwards. Energy must also be addressed, but may not be as fundamentally detrimental to ignore as time while we are in the early stages of exploring methods for developing auto-catalytic knowledge acquisition and cognitive growth mechanisms. Time must be a tightly integrated phenomenon in any AGI architecture - managing and understanding time cannot be retrofitted to a complex architecture! |
Transversal Learning | The system should be able to learn anything and everything, which means that learning is probably not best located in a particular “module” or “modules” in the architecture. Learning must be a tightly integrated phenomenon in any AGI architecture, and must be part of the design from the beginning - implementing general learning into an existing architecture is out of the question: Learning cannot be retrofitted to a complex architecture! |
Transversal Resource Management | Resource management - attention - must be tightly integrated. Attention must be part of the system design from the beginning - retrofitting resource management into an architecture that didn't include it from the beginning is next to impossible! |
Transversal Analogies | Analogies must be included in the system design from the beginning - retrofitting the ability to make general analogies between anything and everything is impossible! |
Transversal Self-Inspection | Reflectivity, as it is known, is a fundamental property of knowledge representation. The fact that we humans can talk about the stuff that we think about, and can talk about the fact that we talk about the fact that we can talk about it, strongly suggests that reflectivity is a key property of AGI systems. Reflectivity must be part of the architecture from the beginning - retrofitting this ability into any architecture is virtually impossible! |
Transversal Skill Integration | A general-purpose system must tightly and finely coordinate a host of skills, including their acquisition, transitions between skills at runtime, how to combine two or more skills, and transfer of learning between them over time at many levels of temporal and topical detail. |
Autonomy | The ability to do tasks without interference / help from others in a particular task-environment in a particular world. |
Cognitive Autonomy | Refers to the mental (control-) independence of agents - the more independent they are (of their designers, of outside aid, etc.) the more autonomous they are. Systems without it could hardly be considered to have general intelligence. |
Structural Autonomy | Refers to the process through which cognitive autonomy is achieved: Motivations, goals and behaviors as dynamically and continuously (re)constructed by the machine as a result of changes in its internal structure. |
Operational closure | The system's own operations are all that is required to maintain (and improve) the system itself. |
Semantic closure | The system's own operations and experience produce/define the meaning of its constituents. Meaning can thus be seen as being defined/given by the operation of the system as a whole: the actions it has taken, is taking, could be taking, and has thought about (simulated) taking, both cognitive actions and external actions in its physical domain. For instance, the meaning of the act of punching your best friend is the set of implications - actual and potential - that this act has or may have, and its impact on your own and others' cognition. |
Self-Programming in Autonomy | The global process that animates computational structurally autonomous systems, i.e. the implementation of both the operational and semantic closures. |
System evolution | A controlled and planned reflective process; a global and never-terminating process of architectural synthesis. |
Autonomous Model Acquisition | The ability to create a model of some target phenomenon automatically. |
Challenge | Unless we (the designers of an intelligent controller) know beforehand which signals from the controller cause desired perturbations in <m>o</m> and can hard-wire these from the get-go, the controller must find these signals. In task-domains where the number of available signals is vastly greater than the controller's resources available for such a search, it may take an unacceptably long time for the controller to find good predictive variables to create models with. This can be stated as <m>V_{te} \gg V_{mem}</m>, where the former is the total number of potentially observable and manipulable variables in the task-environment and the latter is the number of variables that the agent can hold in its memory at any point in time. |
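The scale of this search problem can be illustrated with a small simulation (all numbers and names here are hypothetical, chosen only to make the point): an agent that can hold only a handful of candidate variables in memory at a time must sample many subsets of a large task-environment before it stumbles on a predictive variable.

```python
# Illustrative sketch of the V_te >> V_mem problem: blind search for
# predictive variables when memory holds only a tiny fraction at a time.
import random

random.seed(0)

V_te = 10_000                  # variables observable/manipulable in the task-environment
V_mem = 8                      # variables the agent can hold in memory at once
predictive = {42, 512, 9001}   # variables that actually predict the target (unknown to the agent)

def search_rounds_until_found():
    """Sample V_mem-sized subsets at random until one contains a predictive variable."""
    rounds = 0
    while True:
        rounds += 1
        sample = random.sample(range(V_te), V_mem)
        if predictive.intersection(sample):
            return rounds

# The expected number of rounds grows on the order of
# V_te / (V_mem * |predictive|), so blind search quickly becomes
# impractical as the task-environment grows.
```

Any practical controller therefore needs heuristics (e.g. attention, goal-directed sampling) rather than blind search over the variable space.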
What it is | SOAR: one of the oldest cognitive architectures in history.
Why is it important | One of the oldest AGI-aspiring systems in history.
How does it work | Reasoning engine does pattern-matching with hand-coded 'production' rules and 'operators' to solve problems, with an ability to “chunk” - create 'shortcuts' for long transitive reasoning chains. Upon 'impasse' (break in the flow of reasoning/problem solving) a reasoning process tries to resolve it via successive application of relevant rules. |
Recent Additions | Reinforcement learning for steering reasoning. Sub-symbolic processing for low-level perception. |
Missing in Action | Attention (resource control, self-control), symbolic learning (other than chunking). |
General Description | SOAR has been used by many researchers worldwide during its 20-year life span. During this time it has also been revised and extended in a number of ways. The architecture consists of heterogeneous components that interact during each decision cycle. These are working memory and three types of long-term memory: semantic, procedural, and episodic. Working memory is where information related to the present is stored, with its contents being supplied by sensors or copied from other memory structures based on relevance to the present situation. Working memory also contains an activation mechanism, used in conjunction with episodic memory, that indicates the relevance and usefulness of working memory elements. Production rules are matched and fired on the contents of working memory during the decision cycle, implementing both an associative memory mechanism (as rules can bring data from long-term memory into working memory) and action selection (as rules propose, evaluate and apply operators). Operators are procedural data stored in procedural memory. The application of an operator is carried out by a production rule and either causes changes in working memory or triggers an external action. In cases where operator selection fails due to insufficient knowledge, an impasse event occurs and a process to resolve the impasse is started. This process involves reasoning and inference upon existing knowledge using the same decision cycle in a recursive fashion; the results of this process are converted to production rules by a process termed chunking. Reinforcement learning is used for production rules relating to operator selection, to maximize future rewards in similar situations. One of the most recent additions to the SOAR architecture is sub-symbolic processing used for visual capabilities, where the bridge between sub-symbolic and symbolic processing consists of feature detection.
As the working memory can contain execution traces, introspective abilities are possible. The SOAR architecture provides one of the largest collections of simultaneously running cognitive processes of any cognitive architecture so far. However, there is no explicit mechanism for control of attention, and the architecture is not designed for real-time operation. The latter may be especially problematic as execution proceeds in strict lock-step form and, in particular, the duration (amount of computation) of each decision cycle can vary greatly due to impasse events that are raised occasionally. One might argue that the development of SOAR has been somewhat characterized by “adding boxes” (components) to the architecture when it might be better to follow a more unified approach putting integration at the forefront. There are a few cognitive architectures that somewhat resemble SOAR and can be placed categorically on the same track. These include ICARUS, which has a strong emphasis on embodiment and has shown promise in terms of generality in a number of toy problems such as in-city driving, and LIDA, which was developed for the US Navy to automatically organize and negotiate assignments with sailors but does not have embodiment as a design goal. As in SOAR, both of these implement different types of memory in specialized components and have a lock-step decision cycle. REF: Helgason, H.P. (2013). General Attention Mechanisms for Artificial Intelligence Systems. Ph.D. Thesis, School of Computer Science, Reykjavik U. |
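The rule-matching / impasse / chunking loop described above can be sketched in a few lines. This is a highly simplified illustration, not SOAR's actual machinery or API: rules and chunks are (condition, action) pairs matched against a working-memory set, and an impasse triggers a stand-in resolver whose result is cached as a new chunk.

```python
# Highly simplified sketch of a production-rule decision cycle with chunking.
# Rules/chunks are (frozenset condition, action) pairs; working memory is a set.
def decision_cycle(working_memory, rules, resolve_impasse, chunks):
    """One cycle: fire the first matching rule (learned chunks first);
    on impasse, invoke deeper reasoning and cache the result as a chunk."""
    for condition, action in list(chunks.items()) + rules:
        if condition <= working_memory:       # all condition elements present
            return working_memory | {action}
    # Impasse: no rule matched. In SOAR this recursively re-enters the
    # decision cycle; here a caller-supplied resolver stands in for that.
    condition = frozenset(working_memory)
    action = resolve_impasse(working_memory)
    chunks[condition] = action                # "chunking": learned shortcut rule
    return working_memory | {action}

rules = [(frozenset({"hungry"}), "eat")]
chunks = {}
wm = decision_cycle({"tired"}, rules, lambda w: "rest", chunks)   # impasse -> new chunk
wm2 = decision_cycle({"tired"}, rules, lambda w: None, chunks)    # chunk fires, no impasse
```

The sketch also shows why cycle duration can vary so much: a cycle that hits an impasse does arbitrarily more work than one resolved by an existing rule or chunk.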
Large Architecture | Yes | Comparatively large. SOAR is as “large” as they come (or was - equally large cognitive architectures are getting more common). |
Predictable Robustness in Novel Circumstances | Not really | Since SOAR isn't really designed to operate and learn in novel circumstances, but rather work under variations of what it already knows, this issue hardly comes up. |
Graceful Degradation | No | |
Transversal Functions | No | Transversal Handling of Time: No explicit handling of time. Transversal Learning: Learning is not a central design target of SOAR; reinforcement learning is available as an afterthought. No model-based learning; reasoning is present (but highly limited). Transversal Analogies: No. Transversal Self-Inspection: Hardly. Transversal Skill Integration: We would be hard-pressed to find any such mechanisms. |
Symbolic? | CHECK | One of the main features of SOAR is being symbol oriented. However, the symbols do not have very rich semantics as they are limited to simple sentences; few if any mechanisms exist to manage large sets of symbols and statements: The main operations of SOAR are at the level of a dozen sentences or less. |
Models? | No (but yes) | Any good controller of a system is a model of that system. It is, however, unclear what kinds of models SOAR creates. While similar to NARS in its approach, SOAR is axiomatic (i.e. it does not have obvious ways of grounding its knowledge) and thus it is hard to see how it would improve or modify its knowledge over time. |
Large Architecture | Yes | Comparatively large. AERA is as “large” as they come (or was - equally large cognitive architectures are getting more common). |
Predictable Robustness in Novel Circumstances | Yes | Since AERA's learning is goal-driven, its target operational environments are (semi-)novel circumstances. |
Graceful Degradation | | |
Transversal Functions | Yes | Transversal Handling of Time: Time is transversal. Transversal Learning: Yes. Learning can happen at the smallest level as well as the largest, but generally learning proceeds in small increments. Model-based learning is built in; ampliative (mixed) reasoning is present. Transversal Analogies: Yes, but this remains to be developed further. Transversal Self-Inspection: Yes. AERA can inspect a large part of its internal operations (but not everything). Transversal Skill Integration: Yes. This follows naturally from the fact that all models are sharable between anything and everything that AERA learns and does. |
Symbolic? | CHECK | One of the main features of AERA is that its knowledge is declarable by being symbol-oriented. The symbols do not have very rich semantics: AERA can learn language in the same way it learns anything else (e.g. goal-directed, pragmatic). AERA has been implemented to handle 20k models, but so far the most complex demonstration uses only approx 1400 models. |
Models? | Yes | Explicit model building is the main learning mechanism. |
2020©K.R.Thórisson