public:t-720-atai:atai-22:ai_architectures (created 2022/09/16 by thorisson; last edited 2024/04/29)
====Refresher: Inferred GMI Architectural Features ====
| \\ Large architecture | From the above we can readily infer that if we want GMI, an architecture that is considerably more complex than systems being built in most AI labs today is likely unavoidable. In a complex architecture the issue of concurrency of processes must be addressed, a problem that has not yet been sufficiently resolved in present software and hardware. This scaling problem cannot be addressed by the usual “we’ll wait for Moore’s law to catch up” because the issue does not primarily revolve around //speed of execution// but around the //nature of the architectural principles of the system and their runtime operation//. ||
| \\ Predictable Robustness in Novel Circumstances | The system must be robust in light of all kinds of task-environment and embodiment perturbations, otherwise no reliable plans can be made, and thus no reliable execution of tasks can ever be reached, no matter how powerful the learning capacity. This robustness must be predictable a priori at some level of abstraction -- for a wide range of novel circumstances it cannot be a complete surprise that the system "holds up". (If this were the case then the system itself would not be able to predict its chances of success in the face of novel circumstances, thus eliminating an important part of the "G" from its "GMI" label.) ||
| \\ Graceful Degradation | No general autonomous system operating in the physical world for any length of time can perform flawlessly throughout its lifetime. Part of the robustness requirement is that the system be constructed in such a way as to minimize the potential for catastrophic (and unpredictable, unrecoverable) failure. \\ A programmer forgets to delimit a command in a compiled program and the whole application crashes; this kind of brittleness is not an option for cognitive systems operating in partially stochastic environments, where perturbations may come in any form at any time (and perfect prediction is impossible). \\ One way for a cognitive system to achieve graceful degradation is through reflection, which enables it to learn, over time, about its own fallacies, shortcomings, and lack of knowledge. ||
| Transversal Functions | The system must have pan-architectural characteristics that enable it to operate consistently as a whole, to be highly adaptive (yet robust) in its own operation across the board, including metacognitive abilities. Some functions likely to be needed to achieve this include attention, learning, analogy-making capabilities, and self-inspection. ||
| | \\ Transversal Time | Ignoring (general) temporal constraints is not an option if we want GMI. (Move over Turing!) Time is a semantic property, and the system must be able to understand – and be able to //learn to understand// – time as a real-world phenomenon in relation to its own skills and architectural operation. Time is everywhere, and is different from other resources in that there is a global clock which cannot, for many task-environments, be turned backwards. Energy must also be addressed, but may not be as fundamentally detrimental to ignore as time while we are in the early stages of exploring methods for developing auto-catalytic knowledge acquisition and cognitive growth mechanisms. \\ Time must be a tightly integrated phenomenon in any GMI architecture - managing and understanding time cannot be retrofitted to a complex architecture! |
| | \\ Transversal Learning | The system should be able to learn anything and everything, which means that learning is probably not best located in a particular "module" or "modules" in the architecture. \\ Learning must be a tightly integrated phenomenon in any GMI architecture, and must be part of the design from the beginning - implementing general learning into an existing architecture is out of the question: Learning cannot be retrofitted to a complex architecture! |
| | Transversal Resource Management | Resource management - //attention// - must be tightly integrated.\\ Attention must be part of the system design from the beginning - retrofitting resource management into an architecture that didn't include it from the beginning is next to impossible! |
| | Transversal Analogies | Analogies must be included in the system design from the beginning - retrofitting the ability to make general analogies between anything and everything is impossible! |
| | \\ Transversal Self-Inspection | Reflectivity, as it is known, is a fundamental property of knowledge representation. The fact that we humans can talk about the stuff that we think about, and can talk about the fact that we talk about the fact that we can talk about it, strongly implies that reflectivity is a key property of GMI systems. \\ Reflectivity must be part of the architecture from the beginning - retrofitting this ability into any architecture is virtually impossible! |
| | Transversal Skill Integration | A general-purpose system must tightly and finely coordinate a host of skills, including their acquisition, transitions between skills at runtime, how to combine two or more skills, and transfer of learning between them over time at many levels of temporal and topical detail. |

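The reflection-based route to graceful degradation described above can be sketched in a few lines of Python. This is a toy illustration under my own assumptions (all names are hypothetical, not taken from any published architecture): the agent records its own successes and failures per skill, so its reliability in a given situation is a prediction it can make itself rather than a surprise.

```python
from collections import defaultdict

class ReflectiveAgent:
    """Toy sketch of reflection-based graceful degradation: the agent keeps a
    record of its own successes and failures per skill, so it can estimate --
    rather than be surprised by -- its probability of holding up."""

    def __init__(self):
        self.history = defaultdict(lambda: [0, 0])  # skill -> [successes, attempts]

    def attempt(self, skill, succeeded):
        record = self.history[skill]
        record[1] += 1
        if succeeded:
            record[0] += 1

    def predicted_reliability(self, skill):
        successes, attempts = self.history[skill]
        # Laplace smoothing: a never-tried skill gets 0.5 (maximal uncertainty),
        # not a confident guess -- the agent knows what it does not know.
        return (successes + 1) / (attempts + 2)

agent = ReflectiveAgent()
for _ in range(8):
    agent.attempt("grasp", succeeded=True)
    agent.attempt("throw", succeeded=False)

assert agent.predicted_reliability("grasp") == 0.9   # 9/10: trust this skill
assert agent.predicted_reliability("throw") == 0.1   # 1/10: avoid, or ask for help
assert agent.predicted_reliability("fly") == 0.5     # never tried: defer judgment
```

A real system would of course condition these estimates on context and learn them at many levels of abstraction; the point here is only that self-monitoring turns failure from a crash into a graded, predictable quantity.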
| \\ Semantic closure | The system's own operations and experience produce/define the meaning of its constituents. //Meaning// can thus be seen as being defined/given by the operation of the system as a whole: the actions it has taken, is taking, could be taking, and has thought about (simulated) taking, both cognitive actions and external actions in its physical domain. For instance, the **meaning** of the act of punching your best friend is the set of implications - actual and potential - that this action has or may have, including its impact on your own and others' cognition. |
| Self-Programming \\ in Autonomy | The global process that animates computational structurally autonomous systems, i.e. the implementation of both the operational and semantic closures. |
| System evolution | A controlled and planned reflective process at a higher level of abstraction than (domain-focused) learning; a global and never-terminating process of architectural analysis and synthesis. |
| Autonomous Model Acquisition | \\ The ability to create a model of some target phenomenon //autonomously// (i.e. without "calling home"). |
| \\ \\ Challenge | Unless we (the designers of an intelligent controller) know beforehand which signals from the controller cause desired perturbations in <m>o</m> and can hard-wire these from the get-go, the controller must find these signals. \\ In task-domains where the number of available signals is vastly greater than the controller's resources available to do such search, it may take an unacceptable time for the controller to find good predictive variables to create models with. \\ <m>V_te >> V_mem</m>, where the former is the total number of potentially observable and manipulatable variables in the task-environment and the latter is the number of variables that the agent can hold in its memory at any point in time. |

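The search problem behind <m>V_te >> V_mem</m> can be made concrete with a toy simulation (all numbers are hypothetical, chosen only for illustration): a controller that can attend to only 20 of 10,000 task-environment variables at a time, blindly sampling to discover which 5 of them are predictive. The per-step chance of even noticing a predictive variable is about 1 - (1 - 5/10000)^20 ≈ 0.01, so on average roughly a hundred sampling steps pass before model-building can even begin, and the gap widens rapidly as the ratio worsens.

```python
import random

random.seed(0)  # reproducible illustration

V_TE = 10_000               # observable/manipulatable variables in the task-environment
V_MEM = 20                  # variables the controller can hold in memory at once
PREDICTIVE = set(range(5))  # the few that actually matter (unknown to the controller)

def steps_to_notice_a_predictive_variable():
    """Blindly sample V_MEM variables per step until one of them is predictive."""
    steps = 0
    while True:
        steps += 1
        tracked = random.sample(range(V_TE), V_MEM)
        if PREDICTIVE.intersection(tracked):
            return steps

avg = sum(steps_to_notice_a_predictive_variable() for _ in range(200)) / 200
print(f"average steps before a predictive variable is even tracked: {avg:.1f}")
```

And this only counts noticing a useful variable, not the far harder job of building and validating a model with it.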
====Features of NARS====
| Predictable Robustness in Novel Circumstances | \\ Yes | NARS is explicitly designed to operate and learn novel things in novel circumstances. It is the only architecture (besides AERA) that is directly based on, and specifically designed around, an **assumption of insufficient knowledge and resources** (AIKR). |
| \\ Graceful Degradation | \\ Yes | While the knowledge representation of NARS is not specifically aimed at achieving safe, predictable, or trustworthy operation, NARS can do reflection, so it could learn to get better at evaluating its own performance over time, which means it would become increasingly knowledgeable about its failure modes, making it increasingly likely to fail gracefully. |
| \\ Transversal Functions | \\ Yes | //Transversal Handling of Time.// Time is handled in a very general and relative manner, like any other reasoning. \\ //Transversal Learning.// Learning is a central design target of NARS. While knowledge in NARS is not explicitly model-based, its knowledge is symbolic and NARSese statements can be thought of as micro-models; reasoning is a fundamental (some would say the only) principle of its operation. \\ //Transversal Analogies.// Yes. \\ //Transversal Self-Inspection.// Yes, via reasoning. \\ //Transversal Skill Integration.// Yes, via reasoning. |
| Symbolic? | CHECK | One of the main features of NARS is deep symbol orientation. |
| Models? | No \\ (but yes) | Any good controller of a system is a model of that system. The smallest units in NARS that could be called 'models' are NARSese statements. |

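Since NARSese statements act as micro-models carrying evidence-based truth values (a frequency and a confidence), the flavor of NARS-style reasoning can be sketched with two of the published NAL truth-functions, deduction and revision. This is a simplified illustration; the tuple representation and function names are mine, not the NARS API.

```python
def deduction(t1, t2):
    """NAL deduction: from (A --> B) with truth t1 and (B --> C) with truth t2,
    derive (A --> C). Truth values are (frequency, confidence) pairs in [0, 1]."""
    f1, c1 = t1
    f2, c2 = t2
    f = f1 * f2
    return f, f * c1 * c2  # a conclusion is never more confident than its premises

def revision(t1, t2):
    """NAL revision: pool two independent pieces of evidence about the SAME
    statement. Assumes confidences strictly between 0 and 1."""
    f1, c1 = t1
    f2, c2 = t2
    w1, w2 = c1 * (1 - c2), c2 * (1 - c1)
    f = (f1 * w1 + f2 * w2) / (w1 + w2)
    c = (w1 + w2) / (w1 + w2 + (1 - c1) * (1 - c2))
    return f, c

# "ravens are birds" and "birds fly", each held with some uncertainty:
f, c = deduction((1.0, 0.9), (0.9, 0.9))
assert c < 0.9   # "ravens fly" is weaker than either premise

# two independent observations of the same fact strengthen it:
_, c = revision((1.0, 0.5), (1.0, 0.5))
assert c > 0.5
```

Note how both functions respect AIKR: conclusions always carry less confidence than their evidence warrants being exceeded, and new evidence revises rather than overwrites old beliefs.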
\\