\\
\\
====== Empirical Reasoning IV: AI Architectures ======
\\
\\
  
  
=====System Architecture=====
|  What it is  | In CS: the organization of the software that implements a system.  \\ In AI: The total system that has direct and independent control of the behavior of an Agent via its sensors and effectors.   |
|  Why it's important  | The system architecture determines what kind of information processing can be done, and what the system as a whole is capable of in a particular Task-Environment.   |
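The sensor/effector coupling described above can be sketched as a minimal perception-decision-action loop. This is an illustrative Python sketch, not code from any architecture discussed here; all names (the //sense//, //decide// and //act// methods, the temperature variable) are invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Minimal agent whose behavior is fully determined by its architecture."""
    state: dict = field(default_factory=dict)  # internal information store

    def sense(self, environment: dict) -> dict:
        # Sensors: read (a subset of) the environment's observable variables.
        return {k: environment[k] for k in ("temperature",) if k in environment}

    def decide(self, percept: dict) -> str:
        # The architecture's information processing determines which
        # actions are possible and which one gets chosen.
        self.state.update(percept)
        return "cool" if self.state.get("temperature", 0) > 25 else "idle"

    def act(self, action: str, environment: dict) -> None:
        # Effectors: the only channel through which the agent changes the world.
        if action == "cool":
            environment["temperature"] -= 1

env = {"temperature": 30}
agent = Agent()
for _ in range(10):  # the perception-decision-action cycle
    agent.act(agent.decide(agent.sense(env)), env)
```

The point of the sketch: everything the Agent can do in its Task-Environment is bounded by what this loop (its architecture) can sense, represent, and effect.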
\\
  
===== Desired Empirical Reasoning Architectural Features =====
  
|  \\ Predictable Robustness in Novel Circumstances  | The system must be robust in the face of all kinds of task-environment and embodiment perturbations, otherwise no reliable plans can be made, and thus no reliable execution of tasks can ever be reached, no matter how powerful the learning capacity. This robustness must be predictable a priori at some level of abstraction -- for a wide range of novel circumstances it cannot be a complete surprise that the system "holds up". (If this were the case then the system itself would not be able to predict its chances of success in the face of novel circumstances, thus eliminating an important part of the "G" from its "GMI" label.)   ||
\\
  
=====SOAR=====
  
|  What it is  | One of the oldest cognitive architectures in history.   |
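SOAR's basic decision cycle -- productions match against working memory to propose operators, one operator is selected, and its effects are applied back to working memory -- can be illustrated with a toy sketch. The rules and working-memory elements below are invented for illustration and are far simpler than real SOAR productions:

```python
# Toy sketch of a SOAR-style decision cycle. Working memory holds tuples;
# productions propose operators; one is selected and applied.
working_memory = {("block", "A"), ("block", "B"), ("on", "A", "table")}

def propose(wm):
    """Production rules: if condition matches working memory, propose an operator."""
    ops = []
    if ("on", "A", "table") in wm:
        ops.append(("stack", "A", "B"))
    if ("on", "A", "B") in wm:
        ops.append(("unstack", "A"))
    return ops

def select(ops):
    # Real SOAR resolves competing proposals via preferences (and hits an
    # impasse if it cannot); here we simply take the first proposal.
    return ops[0] if ops else None

def apply_op(op, wm):
    """Apply the selected operator's effects to working memory."""
    if op[0] == "stack":
        wm.discard(("on", op[1], "table"))
        wm.add(("on", op[1], op[2]))
    elif op[0] == "unstack":
        wm.discard(("on", op[1], "B"))
        wm.add(("on", op[1], "table"))

apply_op(select(propose(working_memory)), working_memory)
```

After one cycle the working memory reflects block A stacked on B, and the next cycle would propose unstacking it again.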
\\
  
=====Features of SOAR=====
|  Predictable Robustness in Novel Circumstances  | Not really    | Since SOAR isn't really designed to operate and learn in novel circumstances, but rather to work under variations of what it already knows, this issue hardly comes up.    |
|  Graceful Degradation  | No | The knowledge representation of SOAR is not organized around safe, predictable, or trustworthy operation (SOAR is an //early// experimental architecture). Since SOAR cannot do reflection, and SOAR doesn't learn, there is no way for SOAR to get better at evaluating its own performance over time, with experience.    |
\\
  
=====Non-Axiomatic Reasoning System (NARS)=====
  
|  What it is  | A reasoning system for handling complex unknown knowledge, based on non-axiomatic knowledge learned from experience.   |
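Every statement in NARS carries a non-axiomatic truth value: a (frequency, confidence) pair derived from evidence counts rather than from axioms. A sketch of two standard NAL truth functions -- revision (pooling evidence from two sources) and deduction -- assuming the conventional evidential-horizon constant k=1 and omitting all of the surrounding inference machinery:

```python
K = 1.0  # evidential horizon constant (conventionally 1 in NAL)

def truth(w_plus, w):
    """(frequency, confidence) from positive and total evidence counts."""
    return w_plus / w, w / (w + K)

def revision(f1, c1, f2, c2):
    """Pool independent evidence about the same statement."""
    # Convert each (f, c) back to evidence counts, then add them.
    w1, w2 = c1 / (1 - c1) * K, c2 / (1 - c2) * K
    return truth(f1 * w1 + f2 * w2, w1 + w2)

def deduction(f1, c1, f2, c2):
    """Truth function for the syllogism S->M, M->P |- S->P."""
    f = f1 * f2
    return f, f * c1 * c2

# Equal evidence for and against: frequency drops to 0.5,
# but confidence rises because total evidence grew.
f, c = revision(1.0, 0.5, 0.0, 0.5)
```

Because truth values are evidence-based, no conclusion is ever final -- new experience can always revise it, which is the core of the "non-axiomatic" stance.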
  
\\
=====Features of NARS=====
|  Predictable Robustness in Novel Circumstances  | \\ Yes    | NARS is explicitly designed to operate and learn novel things in novel circumstances. It is the only architecture (besides AERA) that is directly based on, and specifically designed around, the **assumption of insufficient knowledge and resources** (AIKR).    |
|  \\ Graceful Degradation  | \\ Yes | While the knowledge representation of NARS is not specifically aimed at achieving safe, predictable, or trustworthy operation, NARS can do reflection, so NARS could learn to get better at evaluating its own performance over time, which means it would be increasingly knowledgeable about its failure modes, making it increasingly likely to fail gracefully.   |
\\
\\
=====AERA=====
|  Description  | The Auto-Catalytic Endogenous Reflective Architecture is an AGI-aspiring self-programming system that combines reactive, predictive and reflective control in a model-based and model-driven system that is programmed with a seed.    |
|  {{/public:t-720-atai:aera-high-level-2018.png?600}}  ||
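The model-based, model-driven idea can be sketched as follows: each model pairs a condition with a predicted consequence; chained forward, models yield predictions; chained backward from a goal, they yield subgoals. The models below are invented toy examples, not AERA's actual executable model format:

```python
# Each model pairs a condition with a predicted consequence. Used forward,
# models predict; used backward, a desired consequence yields a subgoal.
models = [
    ("switch_on", "light_on"),      # illustrative causal models
    ("light_on", "room_bright"),
]

def predict(fact, models):
    """Forward chaining: what chain of consequences does this fact lead to?"""
    chain = [fact]
    while True:
        nxt = next((rhs for lhs, rhs in models if lhs == chain[-1]), None)
        if nxt is None:
            return chain
        chain.append(nxt)

def subgoals(goal, models):
    """Backward chaining: which conditions must hold to achieve the goal?"""
    chain = [goal]
    while True:
        prev = next((lhs for lhs, rhs in models if rhs == chain[-1]), None)
        if prev is None:
            return chain[::-1]  # ordered from first action to final goal
        chain.append(prev)

print(predict("switch_on", models))     # forward: prediction chain
print(subgoals("room_bright", models))  # backward: subgoal chain
```

The same model set serves both directions, which is why a single causal knowledge base can support reactive, predictive and goal-directed (reflective) control at once.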
  
\\
=====Features of AERA=====
|  Predictable Robustness in Novel Circumstances  | \\ Yes    | \\ Since AERA's learning is goal driven, its target operational environments are (semi-)novel circumstances.    |
|  Graceful Degradation  | Yes    | Knowledge representation in AERA is based around causal relations, which are essential for mapping out "how the world works". Because AERA's knowledge processing is organized around goals, with increased knowledge AERA will get closer and closer to "perfect operation" (i.e. meeting its top-level drives/goals, for which each instance was created). Furthermore, AERA can do reflection, so it gets better at evaluating its own performance over time, meaning it makes (causal) models of its own failure modes, increasing its chances of graceful degradation.   |
Last modified: 2024/10/29 15:15 by thorisson
