public:t-720-atai:atai-22:methodologies [2022/10/20 09:24] – [ConstructiVist AI Methodology] thorisson
public:t-720-atai:atai-22:methodologies [2024/10/29 00:19] (current) – [First Things First: What is a Methodology?] thorisson
===== Methodology =====
\\
==== First Things First: What is a Methodology? ====
|   What it is   | The methods - tools and techniques - we use to study a phenomenon.  |
|  \\ Examples  | - Comparative experiments (for the answers we want Nature to ultimately give). \\ - Telescopes (for things far away). \\ - Microscopes (for all things smaller than the human eye can see unaided). \\ - Simulations (for complex interconnected systems that are hard to untangle).   |
|  \\ Self-Construction  | It is assumed that a system must amass the vast majority of its knowledge autonomously. This is partly because it is (practically) impossible for any human, or team of humans, to construct by hand the knowledge needed for an AGI system; and even if this were possible, it would still leave unanswered the question of how the system will acquire knowledge of truly novel things, which we consider a fundamental requirement for a system to be called an AGI system.   |
|  \\ Baby Machines  | To some extent, an AGI capable of growing throughout its lifetime will be what may be called a "baby machine": relative to later stages in life, such a machine will initially seem "baby-like". \\ While the mechanisms constituting an autonomous learning baby machine may not be complex compared to a "fully grown" cognitive system, they are nevertheless likely to result in what will seem large in comparison to the AI systems built today, though this perceived size may stem from the complexity of the mechanisms and their interactions rather than the sheer number of lines of code.    |
|  Semantic Transparency  | No communication between two agents / components in a system can take place unless they share a common language, or encoding-decoding principles. Without these they are semantically opaque to each other. Without communication, no coordination can take place.    |
|  \\ Whole-Systems \\ Systems Engineering   | Retrofitting a fundamental principle onto an already-designed architecture is impossible, due to the complexity of building a large system (picture, e.g., an airplane). Examples of such principles include time, learning, pattern matching, and attention (resource management). In a (cognitively) growing system in a dynamic world, where the system is auto-generating models of the phenomena that it sees, each of which must be tightly integrated yet easily manipulatable and clearly separable, the system must itself ensure the semantic transparency of its constituent parts. This can only be achieved by automatic mechanisms residing in the system itself; it cannot be ensured manually by a human engineer, or even a large team of them.     |
|  \\ Self-Modeling   | Cognitive growth, in which the cognitive functions themselves improve with training, can only be supported by a self-modifying mechanism based on self-modeling. If there is no model of self, there can be no targeted improvement of existing mechanisms.    |
|  Self-Programming  | The system must be able to invent, inspect, compare, integrate, and evaluate architectural structures, in part or in whole.   |
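The self-modeling and self-programming requirements above can be illustrated with a minimal sketch. This is a hypothetical toy (all names and structure are invented for illustration, not taken from any existing AGI system): a system keeps an explicit model of its own components, so it can inspect a component, compare it against a candidate replacement, and integrate whichever performs better — targeted improvement made possible only because a self-model exists.

```python
# Hypothetical sketch of self-modeling enabling targeted self-improvement.
# All names here (SelfModelingSystem, install, improve) are invented for
# illustration; this is not the architecture discussed above, only a toy
# instance of the principle.

from typing import Callable, Dict, List, Tuple

class SelfModelingSystem:
    """Holds an explicit model of its own components so it can
    inspect, compare, and replace them at runtime."""

    def __init__(self) -> None:
        # The self-model: component name -> currently installed implementation.
        self.components: Dict[str, Callable[[int], int]] = {}

    def install(self, name: str, impl: Callable[[int], int]) -> None:
        self.components[name] = impl

    def evaluate(self, impl: Callable[[int], int],
                 cases: List[Tuple[int, int]]) -> float:
        # Score an implementation against known input/output pairs.
        hits = sum(1 for x, y in cases if impl(x) == y)
        return hits / len(cases)

    def improve(self, name: str, candidate: Callable[[int], int],
                cases: List[Tuple[int, int]]) -> bool:
        # Targeted improvement requires the self-model: look up the
        # current implementation, compare it to the candidate, and
        # integrate the candidate only if it scores better.
        current = self.components[name]
        if self.evaluate(candidate, cases) > self.evaluate(current, cases):
            self.components[name] = candidate
            return True
        return False

# Usage: the system replaces a flawed "double" component with a better one.
system = SelfModelingSystem()
system.install("double", lambda x: x + x if x < 3 else x)  # flawed for x >= 3
cases = [(1, 2), (2, 4), (5, 10)]
improved = system.improve("double", lambda x: 2 * x, cases)
print(improved, system.components["double"](5))  # True 10
```

Without the `self.components` mapping — the self-model — the system could not name, locate, or compare its own parts, and any improvement would have to be untargeted trial and error; this is the point made in the Self-Modeling row above.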