| Why it's important | Virtually all methodologies we have for creating software are of this kind. |
| Fundamental CS methodology | On the theory side, for the most part mathematical methodologies (not natural science). On the practical side, hand-coding programs and manual invention and implementation of algorithms. Systems creation in CS is "co-owned" by the field of engineering. |
| The main methodology/ies in CS | Constructionist. |
\\
\\
| What it is | Refers to AI system development methodologies that require an intelligent designer -- the software programmer as "construction worker". |
| Why it's important | All traditional software development methodologies, and by extension all traditional AI methodologies, are constructionist methodologies. |
| \\ What it's good for | Works well for constructing //controllers// of Closed Problems where (a) the Solution Space can be defined fully or largely before the controller is constructed, (b) there exist clearly definable Goal hierarchies and measurements that, when used, fully implement the main purpose of the AI system, and ( c) the Task assigned to the controller will not change throughout its lifetime (i.e. the controller does not have to generate novel sub-Goals). A minimal example is sketched after this table. |
| Key Implementation Method | Hand-coding using programming languages and methods created to be used by human-level intelligences. |
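To make this concrete, here is a minimal sketch of a constructionist controller for a Closed Problem, written in Python with invented names (a hypothetical thermostat-like Task): the designer fixes the Goal hierarchy and the control rules by hand before the system ever runs, and the Task never changes afterwards.

<code python>
# Minimal sketch of a hand-coded ("constructionist") controller for a Closed Problem:
# the designer fixes the goals and the control rules before the system ever runs.
# All names (HeaterController, the target band, etc.) are illustrative, not from any real system.

class HeaterController:
    """Keeps room temperature inside a band chosen by the designer."""

    def __init__(self, low: float = 20.0, high: float = 22.0):
        # The Goal hierarchy is fully specified up front:
        #   top goal: keep temperature in [low, high]
        #   sub-goals: heat when too cold, cool when too warm, otherwise idle
        self.low = low
        self.high = high

    def decide(self, temperature: float) -> str:
        # The Solution Space is closed: every situation the controller
        # will ever face maps onto one of three hand-picked actions.
        if temperature < self.low:
            return "heat"
        if temperature > self.high:
            return "cool"
        return "idle"


if __name__ == "__main__":
    controller = HeaterController()
    for reading in [18.5, 21.0, 23.7]:
        print(reading, "->", controller.decide(reading))
</code>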
\\
| The AFSMs are arranged in "layers" | Layers separate functional parts of the architecture from each other (sketched in code below). |
| | |
|| {{/public:t-720-atai:subsumption-arch-2.jpg?700}} ||
|| Example subsumption architecture with layers. ||
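The layering idea can be approximated in a few lines of code. The sketch below is an illustrative Python toy, not Brooks' actual AFSM machinery: each layer is a simple sensor-to-action rule, and a higher-priority layer suppresses (subsumes) the output of the layers below it. The priority loop is a crude stand-in for the suppression wires in the figure above; behaviour names and the sensor format are invented.

<code python>
# Illustrative sketch of a subsumption-style architecture: each "layer" is a simple
# finite-state-machine-like behaviour; higher layers may suppress (subsume) lower ones.
# Behaviour names and the sensor format are invented for this example.

def avoid_obstacles(sensors):
    # Lower layer: keeps the robot from hitting things.
    if sensors["obstacle_ahead"]:
        return "turn_away"
    return None  # no opinion; defer to other layers

def wander(sensors):
    # Lowest layer: move about when nothing more important is happening.
    return "move_forward"

def seek_charger(sensors):
    # Highest layer: when the battery is low, subsume the layers below.
    if sensors["battery_low"]:
        return "head_to_charger"
    return None

# Layers ordered from highest to lowest priority; the first layer that
# produces an action suppresses everything beneath it (a crude stand-in
# for the suppression/inhibition wires of a real subsumption architecture).
LAYERS = [seek_charger, avoid_obstacles, wander]

def act(sensors):
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action
    return "idle"

if __name__ == "__main__":
    print(act({"obstacle_ahead": False, "battery_low": False}))  # move_forward
    print(act({"obstacle_ahead": True,  "battery_low": False}))  # turn_away
    print(act({"obstacle_ahead": True,  "battery_low": True}))   # head_to_charger
</code>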
\\
\\
\\
====Key Limitations of Constructionist Methodologies====
| Static | System components are fairly static; manual construction limits the complexity that can be built into each component. |
| Size | The sheer number of components that can form a single architecture is limited by what a designer or team can handle. |
\\
| | von Glasersfeld | "...‘empirical teleology’ ... is based on the empirical fact that human subjects abstract ‘efficient’ causal connections from their experience and formulate them as rules which can be projected into the future." [[http://www.univie.ac.at/constructivism/EvG/papers/225.pdf|REF]] \\ CAIM was developed in tandem with this architecture/architectural blueprint. (A toy illustration of such rule abstraction is sketched after this table.) |
| Architectures built using CAIM | AERA | Autocatalytic, Endogenous, Reflective Architecture [[http://cadia.ru.is/wiki/_media/public:publications:aera-rutr-scs13002.pdf|REF]] \\ Built before CAIM emerged, but based on many of the assumptions consolidated in CAIM. |
| | NARS | Non-Axiomatic Reasoning System [[https://sites.google.com/site/narswang/|REF]] \\ //“If the existing domain-specific AI techniques are seen as tools, each of which is designed to solve a special problem, then to get a general-purpose intelligent system, it is not enough to put these tools into a toolbox. What we need here is a hand. To build an integrated system that is self-consistent, it is crucial to build the system around a general and flexible core, as the hand that uses the tools [assuming] different forms and shapes.”// -- Wang, 2004 |
| Limitations | As a young methodology, very little hard data is available on its effectiveness. What does exist, however, is more promising than constructionist methodologies for achieving AGI. ||
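Von Glasersfeld's "empirical teleology", quoted above, can be given a toy illustration: a learner observes (state, action, next state) triples, abstracts the recurring transitions into rules, and projects those rules into the future to predict the outcome of an action sequence it has never tried. The Python sketch below uses invented names and is purely didactic; it is not how AERA or NARS acquire or represent their models.

<code python>
# Toy illustration of abstracting causal rules from experience and projecting them
# forward (cf. von Glasersfeld's "empirical teleology"). Invented example; not the
# actual model-acquisition mechanism of AERA, NARS or any other system.
from collections import Counter, defaultdict

class RuleLearner:
    def __init__(self):
        # (state, action) -> Counter of observed next states
        self.transitions = defaultdict(Counter)

    def observe(self, state, action, next_state):
        # Record one piece of experience.
        self.transitions[(state, action)][next_state] += 1

    def rules(self):
        # Abstract each (state, action) pair into its most frequently observed outcome.
        return {key: counts.most_common(1)[0][0]
                for key, counts in self.transitions.items()}

    def project(self, state, plan):
        # Project the learned rules into the future along a sequence of actions.
        rules = self.rules()
        for action in plan:
            state = rules.get((state, action), state)  # unknown transition -> assume no change
        return state

if __name__ == "__main__":
    learner = RuleLearner()
    # Experience: flipping the switch toggles the light.
    learner.observe("light_off", "flip_switch", "light_on")
    learner.observe("light_on", "flip_switch", "light_off")
    learner.observe("light_off", "flip_switch", "light_on")
    # The abstracted rules predict the outcome of a novel three-step action sequence.
    print(learner.project("light_off", ["flip_switch", "flip_switch", "flip_switch"]))  # -> light_on
</code>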
\\
\\
==== Constructivist AI ====
| Foundation | Constructivist AI is concerned with the operational characteristics that the system we aim to build – the architecture – must have. |
====Examples of Task-Environments Targeted by Constructivist AI====
| Diversity | Earth offers great diversity. This is in large part why intelligence is even needed at all. |
| | Desert |
| | Ocean floor |
====Architectural Principles of AGI Systems / CAIM====
| Self-Construction | It is assumed that a system must amass the vast majority of its knowledge autonomously. This is partly because it is (practically) impossible for any human, or team of humans, to construct by hand the knowledge needed for an AGI system, and partly because even if this were possible it would still leave unanswered the question of how the system will acquire knowledge of truly novel things, which we consider a fundamental requirement for a system to be called an AGI system. |
| Semiotic Opaqueness | No communication between two agents / components in a system can take place unless they share a common language, or encoding-decoding principles. Without this, they are semantically opaque to each other. Without communication, no coordination can take place. |
| Systems Engineering | Due to the complexity of building a large system (picture, e.g., an airplane), clear and concise bookkeeping of each part, and of which parts it interacts with, must be kept so as to ensure the holistic operation of the resulting system. In a (cognitively) growing system in a dynamic world, where the system is auto-generating models of the phenomena that it sees, each of which must be tightly integrated yet easily manipulatable and clearly separable, the system must itself ensure the semiotic transparency of its constituent parts. This can only be achieved by automatic mechanisms residing in the system itself; it cannot be ensured manually by a human engineer, or even a large team of them (a toy sketch of such bookkeeping follows below). |
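The principles above lend themselves to a small illustration. The sketch below is a hypothetical Python toy, not a description of AERA, NARS or any other existing architecture: models created at runtime register themselves in a bookkeeping structure kept by the system, and every component exchanges information through one shared message format, so no two parts are semantically opaque to each other.

<code python>
# Hypothetical sketch of the principles above: models created at runtime register
# themselves (self-construction + bookkeeping), and all communication goes through a
# single shared message format so components are not semantically opaque to each other.
# All names are invented; no real architecture is being reproduced here.
from dataclasses import dataclass, field

@dataclass
class Message:
    # The shared "language": every component encodes/decodes exactly this structure.
    topic: str
    payload: dict

@dataclass
class Model:
    name: str
    inputs: set      # which topics this model listens to
    outputs: set     # which topics it may produce

@dataclass
class Registry:
    # Bookkeeping kept by the system itself: which parts exist and how they interact.
    models: dict = field(default_factory=dict)

    def add(self, model: Model):
        self.models[model.name] = model

    def consumers_of(self, topic: str):
        return [m.name for m in self.models.values() if topic in m.inputs]

if __name__ == "__main__":
    registry = Registry()
    # The system, not a human engineer, would generate and register new models like these:
    registry.add(Model("predict_temperature", inputs={"sensor.temp"}, outputs={"pred.temp"}))
    registry.add(Model("plan_heating", inputs={"pred.temp"}, outputs={"act.heater"}))
    msg = Message(topic="pred.temp", payload={"value": 21.3})
    print("routed to:", registry.consumers_of(msg.topic))  # -> ['plan_heating']
</code>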