==== Constructivist AI ====
|  Foundation  | Constructivist AI is concerned with the operational characteristics that the system we aim to build – the AGI architecture – must have.  |
|  \\ \\ Behavioral Characteristics  | Refer back to the requirements for AGI systems; it must be able to: \\ - handle novel task-environments. \\ - handle a wide range of task-environments (in the same system, and be able to switch / mix-and-match). \\ - transfer knowledge between task-environments. \\ - perform reasoning: induction, deduction and abduction.  \\ - handle real-time, dynamic worlds. \\ - introspect. \\ - ... and more.     |
|  Constructivist AI: No particular architecture  | Constructivist AI does not rest on, and does not need to rest on, assumptions about the particular //kind of architecture// that exists in the human and animal mind. We assume that many kinds of architectures can achieve the above AGI requirements.  |
  
\\
\\
\\
\\
\\
====Examples of Task-Environments Targeted by Constructivist AI====
|  Diversity  | Earth offers great diversity. This is in large part why intelligence is even needed at all.   |
|   | Desert   |
|   | Ocean floor  |
|   | Air   |
|   | Interplanetary travel   |
|  The Same System at the Same Time  | These task-environments should be handled by a single system within a single period of time, without its designers having to intervene.  |
\\
  
====Architectural Principles of AGI Systems / CAIM====
|  Self-Construction  | It is assumed that a system must amass the vast majority of its knowledge autonomously. This is partly due to the fact that it is (practically) impossible for any human or team(s) of humans to construct by hand the knowledge needed for an AGI system. Even if it were possible, it would still leave unanswered the question of how the system will acquire knowledge of truly novel things, which we consider a fundamental requirement for a system to be called an AGI system.   |
|  Baby Machines  | To some extent, an AGI capable of growing throughout its lifetime will be what may be called a "baby machine", because relative to later stages in life such a machine will initially seem "baby-like". \\ While the mechanisms constituting an autonomous learning baby machine may not be complex compared to a "fully grown" cognitive system, they are nevertheless likely to result in what will seem large in comparison to the AI systems built today, though this perceived size may stem from the complexity of the mechanisms and their interactions, rather than the sheer number of lines of code.    |
|  Semiotic Opaqueness  | No communication between two agents / components in a system can take place unless they share a common language, or encoding-decoding principles. Without this they are semantically opaque to each other. Without communication, no coordination can take place. //(See the first sketch below this table.)//    |
|  Systems Engineering   | Due to the complexity of building a large system (picture, e.g., an airplane), clear and concise bookkeeping of each part, and of which parts it interacts with, must be kept so as to ensure the holistic operation of the resulting system. In a (cognitively) growing system in a dynamic world, where the system is auto-generating models of the phenomena that it sees, each of which must be tightly integrated yet easily manipulatable and clearly separable, the system must itself ensure the semiotic transparency of its constituent parts. This can only be achieved by automatic mechanisms residing in the system itself; it cannot be ensured manually by a human engineer, or even a large team of them. //(See the second sketch below this table.)//     |
|  Pan-Architectural Pattern Matching  | To enable autonomous //holistic integration// the architecture must be capable of comparing (copies of) itself to parts of itself, in part or in whole, whether the comparison contrasts structure, the effects of time, or some other aspect or characteristic of the architecture. To decide, for instance, if a new attention mechanism is better than the old one, various forms of comparison must be possible. //(See the third sketch below this table.)//    |
 |  The "Golden Screw"  | An architecture meeting all of the above principles is not likely to be "based on a key principle" or even two -- it is very likely to involve a whole set of //new// and fundamentally foreign principles that make their realization possible!  | |  The "Golden Screw"  | An architecture meeting all of the above principles is not likely to be "based on a key principle" or even two -- it is very likely to involve a whole set of //new// and fundamentally foreign principles that make their realization possible!  |
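
The //Systems Engineering// row can likewise be pictured as a small, hypothetical bookkeeping mechanism (the ''Model'' and ''Architecture'' classes and the variable names are invented for illustration): each auto-generated model declares what it consumes and produces, and the system itself checks, after every change, whether its parts still fit together, rather than relying on a human engineer to do so.

<code python>
# Toy sketch of automatic bookkeeping -- a hypothetical design, not an actual AGI codebase.
from dataclasses import dataclass, field


@dataclass
class Model:
    name: str
    consumes: set = field(default_factory=set)   # variables the model needs as input
    produces: set = field(default_factory=set)   # variables the model predicts / outputs


class Architecture:
    def __init__(self):
        self.models = {}

    def add(self, model: Model):
        self.models[model.name] = model

    def remove(self, name: str):
        self.models.pop(name, None)

    def unresolved_inputs(self, sensed: set) -> dict:
        """Inputs that neither the sensors nor any other model provide."""
        produced = sensed | {v for m in self.models.values() for v in m.produces}
        return {m.name: m.consumes - produced
                for m in self.models.values() if m.consumes - produced}


if __name__ == "__main__":
    arch = Architecture()
    arch.add(Model("ball-trajectory", consumes={"ball-pos"}, produces={"ball-pos-next"}))
    arch.add(Model("catch-plan", consumes={"ball-pos-next", "hand-pos"}, produces={"hand-cmd"}))
    print(arch.unresolved_inputs(sensed={"ball-pos", "hand-pos"}))   # {} -> parts fit together

    arch.remove("ball-trajectory")                                   # a part disappears
    print(arch.unresolved_inputs(sensed={"ball-pos", "hand-pos"}))   # catch-plan is now dangling
</code>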
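
Finally, the //Pan-Architectural Pattern Matching// row can be pictured with a toy comparison loop (the attention functions, episode format and scoring rule are all invented for illustration): the system replays the same recorded episodes through its current attention mechanism and a candidate replacement, scores both on the same criterion, and only swaps if the candidate is clearly better.

<code python>
# Toy sketch of pan-architectural comparison -- hypothetical names and scoring.
import random


def old_attention(items):
    """Current mechanism: attend to the first item seen."""
    return items[0]


def new_attention(items):
    """Candidate mechanism: attend to the most urgent item."""
    return max(items, key=lambda it: it["urgency"])


def score(mechanism, episodes):
    """Fraction of episodes where the attended item was the truly relevant one."""
    hits = sum(1 for ep in episodes if mechanism(ep["items"])["relevant"])
    return hits / len(episodes)


def compare_and_maybe_swap(current, candidate, episodes, margin=0.05):
    """Keep the candidate only if it clearly outperforms the current mechanism."""
    s_cur, s_new = score(current, episodes), score(candidate, episodes)
    return candidate if s_new > s_cur + margin else current


if __name__ == "__main__":
    random.seed(0)
    episodes = []
    for _ in range(200):
        items = [{"urgency": random.random(), "relevant": False} for _ in range(5)]
        max(items, key=lambda it: it["urgency"])["relevant"] = True   # ground truth
        episodes.append({"items": items})

    chosen = compare_and_maybe_swap(old_attention, new_attention, episodes)
    print("kept:", chosen.__name__)   # the better-scoring mechanism is retained
</code>

In a real architecture the comparison would of course range over structure and the effects of time as well, not just task performance, but the principle -- compare, then commit -- is the same.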
  
\\