| \\ Large architecture | From the above we can readily infer that if we want GMI, an architecture that is considerably more complex than systems being built in most AI labs today is likely unavoidable. In a complex architecture the issue of concurrency of processes must be addressed, a problem that has not yet been sufficiently resolved in present software and hardware. This scaling problem cannot be addressed by the usual “we’ll wait for Moore’s law to catch up” because the issue does not primarily revolve around //speed of execution// but around the //nature of the architectural principles of the system and their runtime operation//. ||
| \\ Predictable Robustness in Novel Circumstances | The system must have a robustness in light of all kinds of task-environment and embodiment perturbations, otherwise no reliable plans can be made, and thus no reliable execution of tasks can ever be reached, no matter how powerful the learning capacity. This robustness must be predictable a priori at some level of abstraction -- for a wide range of novel circumstances it cannot be a complete surprise that the system "holds up". (If this were the case then the system itself would not be able to predict its chances of success in the face of novel circumstances, thus eliminating an important part of the "G" from its "AGI" label.) ||
| \\ Graceful Degradation | No general autonomous system operating in the physical world for any length of time can perform flawlessly throughout its lifetime. Part of the robustness requirement is that the system be constructed in such a way as to minimize the potential for catastrophic (and unpredictable, unrecoverable) failure. \\ A programmer forgets to delimit a command in a compiled program and the whole application crashes; this kind of brittleness is not an option for cognitive systems operating in partially stochastic environments, where perturbations may come in any form at any time (and perfect prediction is impossible). \\ One way for a cognitive system to achieve graceful degradation is through reflection, which enables it to learn, over time, about its own fallacies, shortcomings, and lack of knowledge (a minimal sketch of this idea follows below this table). ||
| Transversal Functions | The system must have pan-architectural characteristics that enable it to operate consistently as a whole, to be highly adaptive (yet robust) in its own operation across the board, including metacognitive abilities. Some functions likely to be needed to achieve this include attention, learning, analogy-making capabilities, and self-inspection. ||
| | \\ Transversal Time | Ignoring (general) temporal constraints is not an option if we want AGI. (Move over Turing!) Time is a semantic property, and the system must be able to understand – and be able to //learn to understand// – time as a real-world phenomenon in relation to its own skills and architectural operation. Time is everywhere, and is different from other resources in that there is a global clock which cannot, for many task-environments, be turned backwards. Energy must also be addressed, but may not be as fundamentally detrimental to ignore as time while we are in the early stages of exploring methods for developing auto-catalytic knowledge acquisition and cognitive growth mechanisms. \\ Time must be a tightly integrated phenomenon in any AGI architecture -- managing and understanding time cannot be retrofitted to a complex architecture! |
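To make the reflection and timing requirements above more concrete, the following is a minimal, hypothetical sketch of the idea mentioned in the Graceful Degradation row: an agent that timestamps its actions, catches failures instead of crashing, and keeps a reflective record of its own failure modes per skill and context, from which it can estimate its reliability a priori. All names here (''ReflectiveAgent'', the outcome bookkeeping, the 0.5 novelty prior) are illustrative assumptions, not part of any of the architectures discussed below.

<code python>
import time
from collections import defaultdict

class ReflectiveAgent:
    """Hypothetical sketch (not from any specific architecture): an agent
    that records its own successes and failures per (skill, context) pair,
    and uses this reflective record to estimate, a priori, how reliably
    it will 'hold up' in a given situation."""

    def __init__(self):
        # Reflective knowledge about the agent's own shortcomings.
        self.outcomes = defaultdict(lambda: {"success": 0, "failure": 0})

    def predicted_reliability(self, skill, context):
        """A-priori robustness estimate from reflective experience.
        Unseen (skill, context) pairs get a deliberately modest prior
        (0.5, an illustrative assumption), so novelty is never met
        with unwarranted confidence."""
        o = self.outcomes[(skill, context)]
        n = o["success"] + o["failure"]
        return o["success"] / n if n else 0.5

    def act(self, skill, context, action):
        """Run an action; failures are caught, timestamped and recorded
        (graceful degradation) instead of crashing the whole system."""
        started = time.monotonic()
        try:
            result = action()
            self.outcomes[(skill, context)]["success"] += 1
            return result
        except Exception as err:
            self.outcomes[(skill, context)]["failure"] += 1
            print(f"[t+{time.monotonic() - started:.3f}s] "
                  f"{skill} failed in {context}: {err}")
            return None

agent = ReflectiveAgent()
agent.act("grasp", "wet-floor", lambda: 1 / 0)            # a failing action
print(agent.predicted_reliability("grasp", "wet-floor"))  # 0.0
print(agent.predicted_reliability("grasp", "lab"))        # 0.5 (novel)
</code>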
====Features of SOAR====
| Predictable Robustness in Novel Circumstances | Not really | Since SOAR isn't really designed to operate and learn in novel circumstances, but rather to work under variations of what it already knows, this issue hardly comes up. |
| Graceful Degradation | No | The knowledge representation of SOAR is not organized around safe, predictable, or trustworthy operation (SOAR is an //early// experimental architecture). Since SOAR cannot do reflection, and learning is peripheral to its design, there is no way for SOAR to get better at evaluating its own performance over time, with experience. |
| \\ Transversal Functions | \\ No | //Transversal Handling of Time.// No explicit handling of time. \\ //Transversal Learning.// Learning is not a central design target of SOAR; reinforcement learning is available as an afterthought. No model-based learning; reasoning is present (but highly limited). \\ //Transversal Analogies.// No. \\ //Transversal Self-Inspection.// Hardly. \\ //Transversal Skill Integration.// We would be hard-pressed to find any such mechanisms. |
| \\ Symbolic? | \\ CHECK | One of the main features of SOAR is being symbol-oriented. However, the symbols do not have very rich semantics, as they are limited to simple sentences; few if any mechanisms exist to manage large sets of symbols and statements: the main operations of SOAR are at the level of a dozen sentences or less (a toy illustration of this kind of symbolic production cycle follows below). |
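As a rough illustration of the small-scale symbolic operation described above, here is a toy production-rule cycle: rules match simple symbolic working-memory elements and propose operators. This is an invented sketch for illustration only; it does not use SOAR's actual rule language or its far more elaborate propose-select-apply decision cycle.

<code python>
# Toy production-rule cycle in the spirit of SOAR's symbolic operation.
# Invented for illustration only -- not actual SOAR syntax or semantics.

# Working memory: a handful of simple symbolic sentences.
working_memory = {("block", "A", "on", "B"), ("block", "B", "on", "table")}

def clear_block_rule(wm):
    """If some block X sits on another block, propose moving X to the table."""
    for fact in sorted(wm):
        if fact[0] == "block" and fact[3] != "table":
            return ("move", fact[1], "table")   # proposed operator
    return None

# One decision cycle: match rules, select an operator, apply it.
op = clear_block_rule(working_memory)
if op is not None:
    _, x, dest = op
    working_memory = {f for f in working_memory if f[1] != x}
    working_memory.add(("block", x, "on", dest))

print(working_memory)   # block A has been moved onto the table
</code>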
\\
====Features of NARS====
| Predictable Robustness in Novel Circumstances | \\ Yes | NARS is explicitly designed to operate and learn novel things in novel circumstances. It is the only architecture (besides AERA) that is directly based on, and specifically designed for, the **assumption of insufficient knowledge and resources** (AIKR). A sketch of how AIKR surfaces in NARS-style truth values follows below this table. |
| Graceful Degradation | \\ Yes | While the knowledge representation of NARS is not specifically aimed at achieving safe, predictable, or trustworthy operation, NARS can do reflection, so it could learn to get better at evaluating its own performance over time. This means it would become increasingly knowledgeable about its own failure modes, making it increasingly likely to fail gracefully. |
| \\ Transversal Functions | \\ Yes | //Transversal Handling of Time.// Time is handled in a very general and relative manner, like any other subject of reasoning. \\ //Transversal Learning.// Learning is a central design target of NARS. While knowledge in NARS is not explicitly model-based, its knowledge is symbolic and NARSese statements can be thought of as micro-models; reasoning is a fundamental (some would say the only) principle of its operation. \\ //Transversal Analogies.// Yes. \\ //Transversal Self-Inspection.// Yes, via reasoning. \\ //Transversal Skill Integration.// Yes, via reasoning. |
| \\ Symbolic? | \\ CHECK | One of the main features of NARS is deep symbol orientation. |
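The following sketch shows one way AIKR surfaces in NARS: each statement carries a (frequency, confidence) truth value derived from accumulated evidence, with frequency f = w+/w and confidence c = w/(w + k), so confidence approaches but never reaches 1. The arithmetic follows the published NAL definitions, but this ''Statement'' class and its evidence-pooling revision are a simplified illustration (real NARS revision also guards against overlapping evidence bases).

<code python>
# Sketch of NARS-style truth values under AIKR. Frequency f = w+/w and
# confidence c = w/(w + k) follow the standard NAL definitions; the rest
# is simplified for illustration.

K = 1.0   # the NAL "personality" constant

class Statement:
    def __init__(self, content, positive=0.0, total=0.0):
        self.content = content
        self.positive = positive   # w+: evidence supporting the statement
        self.total = total         # w:  all evidence gathered so far

    @property
    def frequency(self):
        # With no evidence, 0.5 is an illustrative neutral default.
        return self.positive / self.total if self.total else 0.5

    @property
    def confidence(self):
        # Always < 1: under AIKR, no knowledge is ever treated as final.
        return self.total / (self.total + K)

    def revise(self, positive, total):
        """Pool new evidence with old; evidence amounts simply add."""
        self.positive += positive
        self.total += total

s = Statement("<robin --> bird>", positive=4, total=5)
print(s.frequency, s.confidence)   # 0.8, ~0.83
s.revise(positive=1, total=1)      # one more confirming observation
print(s.frequency, s.confidence)   # ~0.83, ~0.86
</code>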
====Features of AERA====
| Predictable Robustness in Novel Circumstances | \\ Yes | \\ Since AERA's learning is goal-driven, its target operational environments are (semi-)novel circumstances. |
| Graceful Degradation | Yes | Knowledge representation in AERA is based around causal relations, which are essential for mapping out "how the world works". Because AERA's knowledge processing is organized around goals, with increased knowledge AERA will get closer and closer to "perfect operation" (i.e. meeting the top-level drives/goals for which each instance was created). Furthermore, AERA can do reflection, so it gets better at evaluating its own performance over time, meaning it makes (causal) models of its own failure modes, increasing its chances of graceful degradation (a simplified sketch of such a causal model follows below this table). |
| \\ Transversal Functions | \\ Yes | //Transversal Handling of Time.// Time is transversal. \\ //Transversal Learning.// Yes. Learning can happen at the smallest level as well as the largest, but generally learning proceeds in small increments. Model-based learning is built in; ampliative (mixed) reasoning is present. \\ //Transversal Analogies.// Yes, but remains to be developed further. \\ //Transversal Self-Inspection.// Yes. AERA can inspect a large part of its internal operations (but not everything). \\ //Transversal Skill Integration.// Yes. This follows naturally from the fact that all models are sharable between anything and everything that AERA learns and does. |
| \\ Symbolic? | \\ CHECK | One of the main features of AERA is that its knowledge is declarative, by virtue of being symbol-oriented. AERA can learn language in the same way it learns anything else (i.e. goal-directed, pragmatic). AERA has been implemented to handle 20k models, but so far the most complex demonstration has used only approximately 1,400 models. |
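To illustrate what a causal, bidirectional model of the kind AERA's knowledge is organized around might look like, here is a highly simplified sketch: a model pairs a precondition with a predicted effect, can be run forward (prediction) or backward (goal regression), and tracks its own reliability from experience. The class and its fields are illustrative assumptions; AERA's actual models are written in Replicode and are considerably richer.

<code python>
# Highly simplified sketch of a causal, bidirectional model in the spirit
# of AERA's knowledge representation. Illustrative assumptions throughout;
# not AERA's actual Replicode model format.

class CausalModel:
    def __init__(self, cause, effect):
        self.cause = cause        # precondition / action pattern
        self.effect = effect      # predicted consequence
        self.successes = 0        # times the prediction held...
        self.attempts = 0         # ...out of times it was tested

    @property
    def reliability(self):
        return self.successes / self.attempts if self.attempts else 0.0

    def predict(self, fact):
        """Forward use: a matching cause predicts the effect."""
        return self.effect if fact == self.cause else None

    def regress(self, goal):
        """Backward use: a desired effect yields the cause as a subgoal."""
        return self.cause if goal == self.effect else None

m = CausalModel(cause=("press", "switch"), effect=("light", "on"))
print(m.predict(("press", "switch")))   # -> ('light', 'on')
print(m.regress(("light", "on")))       # -> ('press', 'switch')
</code>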