====== Methodologies ======
==== What Is a Methodology? ====
|   What it is   | The methods - tools and techniques - we use to study a phenomenon.  |
|  \\ Examples  | - Comparative experiments (for the answers we want Nature to ultimately give). \\ - Telescopes (for things far away). \\ - Microscopes (for all things smaller than the human eye can see unaided). \\ - Simulations (for complex interconnected systems that are hard to untangle).   |
|  \\ Why it Matters  | Methodology directly determines our progress when studying a phenomenon -- what we do with respect to that phenomenon to figure it out. \\ Methodology affects how we think about a phenomenon, including our solutions, expectations, and imagination. \\ Methodology determines the possible scope of outcomes. \\ Methodology directly influences the shape of our solutions - our answers to scientific questions. \\ Methodology directly determines the speed with which we can make progress when studying a phenomenon. \\ //Methodology is therefore a **primary determinant of scientific progress.**//  |
|  \\ The main AI methodology  | AI never really had a proper methodology discussion as part of its mainstream scientific discourse. Only 2 or 3 approaches to AI can properly be called 'methodologies': //BDI// (belief, desire, intention), //subsumption//, and //decision theory//. As a result, AI inherited the run-of-the-mill CS methodologies by default.  |
|  Constructi//on//ist AI  | Methods used to build AI systems by hand.   |
|  Constructi//v//ist AI  | Methods aimed at creating AI systems that autonomously generate, manage, and use their knowledge.   |

|  Applying a Methodology  | results in a family of architectures: The methodology "allows" ("sets the stage") for what //should// and //may// be included when we design our architecture. The methodology is the "tool for thinking" about a design space. (Contrast with requirements, which describe the goals and constraints (negative goals)).   |
|  Following a Methodology  | results in a particular //architecture//   |
|  \\ CAIM Relies on Models  | CAIM takes Conant & Ashby's proof (that every good controller of a system is a model of that system - the Good Regulator Theorem) seriously, putting //models// at its center. \\ This stance was prevalent in the early days of AI (first two decades) but fell into disfavor due to behaviorism (in psychology and AI).    |
|  \\ Example  | The Auto-Catalytic Endogenous Reflective Architecture - AERA - is the only architecture to result directly from the application of CAIM. It is //model-based// and //model-driven// (in an event-driven way: the models' left-hand terms are matched to situations to determine their relevance at any point in time; when they match, their right-hand terms are injected into memory - more on this below).       |
|  In Other Words  | AERA models are a way to represent knowledge. \\ But what are models, really, and what might they look like in this context?    |
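One way to build intuition is a minimal toy sketch in Python - with all names hypothetical, and emphatically //not// AERA's actual Replicode implementation - in which a model is a left-hand-side pattern paired with a right-hand-side consequence, and matching drives injection into memory:

<code python>
# Toy sketch of model-driven, event-driven matching (hypothetical names,
# not AERA/Replicode): when a model's left-hand side (lhs) matches a fact
# in the current situation, its right-hand side (rhs) is injected into
# memory as a new (predicted) fact.
from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    subject: str
    predicate: str
    value: str

@dataclass
class Model:
    lhs: Fact  # pattern matched against the situation
    rhs: Fact  # consequence injected into memory on a match

def step(memory: set, models: list) -> set:
    """One matching cycle: inject the rhs of every model whose lhs matches."""
    injected = {m.rhs for m in models if m.lhs in memory}
    return memory | injected

# Usage: a toy model predicting that a dropped cup ends up on the floor.
memory = {Fact("cup", "state", "dropped")}
models = [Model(lhs=Fact("cup", "state", "dropped"),
                rhs=Fact("cup", "location", "floor"))]
print(step(memory, models))  # now also contains Fact("cup", "location", "floor")
</code>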
  
|  \\ HeLD  | Cannot be studied by the standard application of reductionism/Occam's Razor, because some emergent properties are likely to get lost. Instead, corollaries of the system -- while ensuring some commonality to the original system //in toto// -- must be studied to gain insights into the target system. For this we use models and simulations.   |
|   {{public:t-720-atai:simple-system1.png}}   ||
|  How to tease apart HeLDs: \\ //Finding the boundary between a novel //system// and its //environment// may be done by isolating the smallest number of interaction edges between the sub-systems of the two.// \\ (A minimal code sketch of this idea follows below.)   ||
|   {{public:t-720-atai:system-env-world-1.png}}   ||
|  //Illustration of the relationship between a system, its task-environment, and its world. \\ Task-environments will always inherit the "laws" of the world; the world puts constraints on the state-space of the task-environment.//  ||
|  Agent & Environment  | We try to characterize the agent and its task-environment as two interacting complex systems. If we keep the task-environment constant, the remaining system to study is the agent and its controller. Together they form a sort of "super-HeLD" because for any learning system the environment is tightly coupled with the agent's seed and learning mechanisms.    |
\\
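The "smallest number of interaction edges" idea from the table above can be illustrated with a brute-force sketch (our illustration in Python, with invented component names; a real HeLD would call for something far less naive than exhaustive search):

<code python>
# Toy sketch: find the system/environment split that crosses the fewest
# interaction edges, by brute-force enumeration of bipartitions.
from itertools import combinations

def fewest_edge_boundary(components, edges):
    """Return (crossing-edge count, proposed 'system' side) for the best cut."""
    best = None
    for k in range(1, len(components) // 2 + 1):
        for system in combinations(components, k):
            system_set = set(system)
            # an edge crosses the boundary iff exactly one endpoint is inside
            crossing = sum(1 for e in edges if len(e & system_set) == 1)
            if best is None or crossing < best[0]:
                best = (crossing, system_set)
    return best

# Usage: two tightly coupled pairs joined by a single interaction edge.
comps = ["a1", "a2", "e1", "e2"]
edges = {frozenset(p) for p in [("a1", "a2"), ("e1", "e2"), ("a2", "e1")]}
print(fewest_edge_boundary(comps, edges))  # (1, {'a1'}); {'a1', 'a2'} also cuts 1 edge
</code>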
|  \\ Self-Construction  | It is assumed that a system must amass the vast majority of its knowledge autonomously. This is partly due to the fact that it is (practically) impossible for any human or team(s) of humans to construct by hand the knowledge needed for an AGI system, and even if this were possible, it would still leave unanswered the question of how the system will acquire knowledge of truly novel things, which we consider a fundamental requirement for a system to be called an AGI system.   |
|  Semiotic Opaqueness  | No communication between two agents / components in a system can take place unless they share a common language, or encoding-decoding principles. Without this they are semantically opaque to each other. Without communication, no coordination can take place.    |
|  \\ Systems Engineering  | Due to the complexity of building a large system (say, an airplane), a clear and concise bookkeeping of each part, and which parts it interacts with, must be kept so as to ensure the holistic operation of the resulting system. In a (cognitively) growing system in a dynamic world, where the system is auto-generating models of the phenomena that it sees, each of which must be tightly integrated yet easily manipulatable and clearly separable, the system must itself ensure the semiotic transparency of its constituent parts. This can only be achieved by automatic mechanisms residing in the system itself; it cannot be ensured manually by a human engineer, or even a large team of them.     |
|  \\ Self-Modeling  | Cognitive growth, in which the cognitive functions themselves improve with training, can only be supported by a self-modifying mechanism based on self-modeling. If there is no model of self, there can be no targeted improvement of existing mechanisms.    |
|  Self-Programming  | The system must be able to invent, inspect, compare, integrate, and evaluate architectural structures, in part or in whole.   |
|  Pan-Architectural Pattern Matching  | To enable autonomous //holistic integration//, the architecture must be capable of comparing (copies of) itself to parts of itself, in part or in whole, whether the comparison is contrasting structure, the effects of time, or some other aspect or characteristic of the architecture. To decide, for instance, if a new attention mechanism is better than the old one, various forms of comparison must be possible.    |
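As a drastically simplified illustration of such comparison - a hypothetical sketch in Python, not a mechanism from the course materials - an architecture snapshot can be contrasted with a modified copy of itself both structurally and by measured effect:

<code python>
# Toy sketch: compare two snapshots of an architecture, contrasting their
# structure (components and links) and their effect (a performance score).
from dataclasses import dataclass, field

@dataclass
class Snapshot:
    components: set = field(default_factory=set)
    links: set = field(default_factory=set)  # directed (src, dst) pairs
    score: float = 0.0                        # assumed task-performance measure

def compare(old, new):
    """Contrast two (copies of) architectures in structure and effect."""
    return {
        "added_components": new.components - old.components,
        "removed_components": old.components - new.components,
        "added_links": new.links - old.links,
        "removed_links": old.links - new.links,
        "score_delta": new.score - old.score,
    }

# Usage: deciding whether a new attention mechanism beats the old one.
old = Snapshot({"perception", "attention_v1", "memory"},
               {("perception", "attention_v1"), ("attention_v1", "memory")}, 0.61)
new = Snapshot({"perception", "attention_v2", "memory"},
               {("perception", "attention_v2"), ("attention_v2", "memory")}, 0.74)
print(compare(old, new)["score_delta"] > 0)  # True: keep the new mechanism
</code>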
|  Constructionist Methods  | A constructionist methodology requires an //intelligent designer// that manually (or via scripts) arranges selected //components// that together make up a //system of parts// (i.e. an architecture) that can act in particular ways. \\ //Examples: automobiles, telephone networks, computers, operating systems, the Internet, mobile phones, apps, etc.//    ||
|    | \\ Traditional CS Software Development Methods  | On the theoretical side, the majority of mathematical methodologies are of the constructionist kind (with some applied math for the natural sciences counting as exceptions). On the practical side, programs and the manual invention and implementation of algorithms are all largely hand-coded. \\ Systems creation in CS is "co-owned" by the field of engineering. \\ All programming languages are designed under the assumption that they will be used by a human-level programmer.   |
|    | \\ BDI: Belief, Desire, Intention  | BDI can hardly be called a "methodology" and is more of a framework for inspiration. Picking three terms out of psychology, BDI methods emphasize goals (desire), plans (intention) and revisable knowledge (beliefs), all of which are good and fine. Methodologically speaking, however, none of the basic features of a true scientific methodology (algorithms, systems engineering principles, or strategies) are to be found in papers on this approach.    |
|    | \\ Subsumption Architecture   | This is perhaps the best-known AI-specific methodology worthy of being categorized as a 'methodology'. Presented as an "architecture" originally, it is more of an approach that results in architectures where subsumption, operating under particular principles, is a major organizational feature.      |
|  Why it's important  | Virtually all methodologies we have for creating software are methodologies of the constructionist kind. \\ Unfortunately, few methodologies step outside of that frame.    |
  
==== Architectural Principles of a CAIM-Developed System (What CAIM Targets) ====
|  \\ Self-Construction  | It is assumed that a system must amass the vast majority of its knowledge autonomously. This is partly due to the fact that it is (practically) impossible for any human or team(s) of humans to construct by hand the knowledge needed for an AGI system, and even if this were possible, it would still leave unanswered the question of how the system will acquire knowledge of truly novel things, which we consider a fundamental requirement for a system to be called an AGI system.   |
|  \\ Baby Machines  | To some extent an AGI capable of growing throughout its lifetime will be what may be called a "baby machine", because relative to later stages in life, such a machine will initially seem "baby-like". \\ While the mechanisms constituting an autonomous learning baby machine may not be complex compared to a "fully grown" cognitive system, they are nevertheless likely to result in what will seem large in comparison to the AI systems built today, though this perceived size may stem from the complexity of the mechanisms and their interactions, rather than the sheer number of lines of code.    |
|  Semiotic Opaqueness  | No communication between two agents / components in a system can take place unless they share a common language, or encoding-decoding principles. Without this they are semantically opaque to each other. Without communication, no coordination can take place.    |
==== AERA Demo ====
  
|  TV Interview  | In the style of a TV interview, the agent S1 watched two humans engaged in a "TV-style" interview about the recycling of six everyday objects made out of various materials. The results are recorded in a set of three videos: \\ [[https://www.youtube.com/watch?v=2NQtEJbQCdw|Human-human interaction]] (what S1 observes and learns from) \\ [[https://www.youtube.com/watch?v=SH6tQ4fgWA4|Human-S1 interaction]] (S1 interviewing a human) \\ [[https://www.youtube.com/watch?v=x96HXLPLORg|S1-Human Interaction]] (S1 being interviewed by a human)   |
|  Data  | S1 received realtime timestamped data from the 3D movement of the humans (digitized via appropriate tracking methods at 20 Hz), words generated by a speech recognizer, and prosody (fundamental pitch of voice at 60 Hz, along with timestamped starts and stops).   |
|  Seed  | The seed consisted of a handful of top-level goals for each agent in the interview (interviewer and interviewee), and a small knowledge base about entities in the scene.     |
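What such input might look like can be sketched as follows (a hypothetical record format invented for illustration; the actual encoding used for S1 is not described here): per-channel streams, each sorted by timestamp, merged into one time-ordered stream.

<code python>
# Toy sketch: merge timestamped multimodal streams (20 Hz movement,
# recognized words, 60 Hz prosody) into one time-ordered stream.
import heapq
from dataclasses import dataclass

@dataclass(order=True)
class Sample:
    t: float         # timestamp in seconds
    channel: str     # "movement", "word", or "prosody"
    payload: object  # e.g. an (x, y, z) position, a word, or a pitch in Hz

movement = [Sample(i / 20.0, "movement", (0.1 * i, 0.0, 1.5)) for i in range(3)]
prosody  = [Sample(i / 60.0, "prosody", 120.0 + i) for i in range(5)]
words    = [Sample(0.08, "word", "hello")]

# heapq.merge assumes each input stream is already sorted by time.
for s in heapq.merge(movement, words, prosody):
    print(f"{s.t:.3f}s  {s.channel}: {s.payload}")
</code>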