DCS-T-713-MERS-2024 Main
Lecture Notes
What it is | The methods - tools and techniques - we use to study a phenomenon. The scientific methodology of any field is derived from the prevailing scientific theory/ies in that field. |
The essence of methodology | It is always philosophical in part, because (a) scientific theories are always rooted in a philosophical context, and (b) methodologies derive from theory in their field, whose bleeding edge concerns unanswered questions, which means it is by definition hypothetical, which means it is rooted in a metaphysical context. |
Why Scientific Methodology Matters | Scientific methodology: … directly determines what we do with respect to a phenomenon that we are trying to figure out. … directly affects how we think about a phenomenon, including our solutions, expectations, and imagination. … defines the possible scope of outcomes. … directly influences our answers to scientific questions. … directly determines the speed with which we can make progress when studying a phenomenon. … is therefore a primary determinant of scientific progress. |
The Main AI Methodology | A proper discussion of methodology has never been a regular part of mainstream AI scientific discourse. Only a handful of approaches to AI R&D can be classified as 'methodologies': BDI (belief, desire, intention), subsumption, decision theory. As a result, AI inherited the run-of-the-mill CS methodology/ies by default. |
ConstructiONist AI | Methods used to build AI systems by hand. Rely on a third-person view of the phenomenon under study. Methodologies in this category are allonomic. Allonomic methodologies are well-suited for classical engineering, where the model is known. |
Examples | Virtually all methodologies we have for creating software are methodologies of the allonomic kind (including BDI, Subsumption, software engineering, decision theory, etc.). |
ConstructiVist AI | Methods aimed at creating AI systems that autonomously generate, manage, and use their knowledge. Methodologies in this category are autonomic (or constructivist). Autonomic methodologies are well-suited for science-oriented engineering, where the model is not known. |
Examples | NARS and AERA are the only AI systems known to be built using an autonomic methodology. |
What We Have Studied In This Course | A particular philosophical approach - or family of methodologies - emphasizing certain principles over others. It is a constructivist-inspired, requirements-driven, non-axiomatic approach. |
What it is | A term for labeling a methodology for AGI based on two main assumptions: (1) The way knowledge is acquired by systems with general intelligence requires the automatic integration, management, and revision of data in a way that infuses meaning into information structures, and (2) constructionist approaches do not sufficiently address this, and other issues of key importance for systems with high levels of general intelligence and existential autonomy. |
Basic tenet | That self-programming systems must be able to handle new problems in new task-environments; to do so they must be able to create new knowledge with new goals (and sub-goals); to do so their architecture must support automatic generation of meaning; and that constructionist methodologies do not support the creation of such system architectures. |
Why It's Needed | Assumes that the system acquires the vast majority of its knowledge on its own (except for a small seed) and manages its own GROWTH. It may also change its own architecture over time, due to experience and learning. |
What it's good for | Replacing present methods in AI, by and large, as these will not suffice for addressing the full scope of the phenomenon of intelligence, as seen in nature. |
What It Must Do | We are looking for more than a linear increase in the power of our systems to operate reliably in a variety of (unforeseen, novel) circumstances. The methodology should help meet that requirement. |
Roots |
Piaget | proposed the constructivist view of human knowledge acquisition, which states (roughly speaking) that cognitive agents (e.g. humans) generate their own knowledge through experience. |
von Glasersfeld | “…‘empirical teleology’ … is based on the empirical fact that human subjects abstract ‘efficient’ causal connections from their experience and formulate them as rules which can be projected into the future.” REF |
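Von Glasersfeld's 'empirical teleology' can be illustrated with a minimal sketch: an agent abstracts 'efficient' causal connections from its experienced (state, action, outcome) transitions and formulates them as rules it can project into the future. The states, actions, and majority-vote abstraction here are invented for illustration, not taken from the source.

```python
from collections import Counter, defaultdict

# Experience: (state, action, next_state) triples the agent has lived through.
experience = [
    ("dark", "flip_switch", "lit"),
    ("dark", "flip_switch", "lit"),
    ("lit",  "flip_switch", "dark"),
    ("dark", "wait",        "dark"),
]

# Abstract causal connections: count outcomes observed for each (state, action) pair.
outcomes = defaultdict(Counter)
for state, action, next_state in experience:
    outcomes[(state, action)][next_state] += 1

# Formulate rules: (state, action) -> most frequently observed outcome.
rules = {key: counts.most_common(1)[0][0] for key, counts in outcomes.items()}

# Project a rule into the future: predict the result of acting now.
print(rules[("dark", "flip_switch")])   # predicts "lit"
```

The rules are fallible generalizations from finite experience, not axioms, which is exactly the constructivist point: they remain open to revision as new experience arrives.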
NARS | Non-Axiomatic Reasoning System REF “If the existing domain-specific AI techniques are seen as tools, each of which is designed to solve a special problem, then to get a general-purpose intelligent system, it is not enough to put these tools into a toolbox. What we need here is a hand. To build an integrated system that is self-consistent, it is crucial to build the system around a general and flexible core, as the hand that uses the tools [assuming] different forms and shapes.” – P. Wang, 2004 |
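The non-axiomatic character of NARS can be sketched with NAL-style truth values, where a statement carries a (frequency, confidence) pair derived from evidence, and the revision rule pools evidence from independent sources. The evidential-horizon constant k and the function names below are illustrative; consult Wang's NAL publications for the authoritative definitions.

```python
def to_evidence(f, c, k=1.0):
    """Convert a (frequency, confidence) pair to (positive, total) evidence weights."""
    w = k * c / (1.0 - c)       # total evidence behind the judgment
    return f * w, w             # positive evidence, total evidence

def revise(t1, t2, k=1.0):
    """NAL-style revision: pool the evidence behind two independent judgments."""
    wp1, w1 = to_evidence(*t1, k)
    wp2, w2 = to_evidence(*t2, k)
    wp, w = wp1 + wp2, w1 + w2
    return wp / w, w / (w + k)  # back to (frequency, confidence)

# Two independent observations supporting "ravens are black":
f, c = revise((1.0, 0.5), (0.8, 0.5))
print(f, c)   # pooled frequency; confidence exceeds either input's
```

Because truth values are summaries of finite evidence rather than axioms, new evidence never contradicts the system fatally; it is simply revised in, which is what makes the approach suitable for open-ended task-environments.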
Limitations | As a young methodology, very little hard data is available regarding its effectiveness. What does exist, however, is more promising than that of constructionist methodologies for achieving GMI. |
Self-Construction | It is assumed that a system must amass the vast majority of its knowledge autonomously. This is partly because it is (practically) impossible for any human, or team(s) of humans, to construct by hand the knowledge needed for a GMI system, and even if this were possible it would still leave unanswered the question of how the system will acquire knowledge of truly novel things, which we consider a fundamental requirement for a system to be called a GMI system. |
Baby Machines | To some extent a GMI capable of growing throughout its lifetime will be what may be called a “baby machine”, because relative to later stages in life, such a machine will initially seem “baby-like”. While the mechanisms constituting an autonomously learning baby machine may not be complex compared to a “fully grown” cognitive system, they are nevertheless likely to result in what will seem large in comparison to the AI systems built today, though this perceived size may stem from the complexity of the mechanisms and their interactions, rather than the sheer number of lines of code. |
Semantic Transparency | No communication between two agents / components in a system can take place unless they share a common language, or encoding-decoding principles. Without this they are semantically opaque to each other. Without communication, no coordination can take place. |
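The shared encoding-decoding principle can be made concrete with a minimal sketch: two components exchange messages only via an agreed field set and encoding, and anything outside that agreement is semantically opaque to the receiver. The schema, field names, and JSON encoding are invented for illustration.

```python
import json

# The shared "language": an agreed field set plus an agreed encoding (JSON here).
SHARED_SCHEMA = {"sender", "goal", "deadline"}

def encode(msg: dict) -> bytes:
    """Sender side: only well-formed messages may enter the channel."""
    assert set(msg) == SHARED_SCHEMA, "message must follow the shared schema"
    return json.dumps(msg, sort_keys=True).encode()

def decode(data: bytes) -> dict:
    """Receiver side: reject messages whose structure it cannot interpret."""
    msg = json.loads(data.decode())
    if set(msg) != SHARED_SCHEMA:
        raise ValueError("semantically opaque message")
    return msg

wire = encode({"sender": "planner", "goal": "recharge", "deadline": 40})
print(decode(wire)["goal"])   # the receiver can interpret, and thus act on, this
```

The point of the sketch is the failure mode: a message with unknown fields decodes syntactically but is rejected semantically, so no coordination can be built on it.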
Whole-Systems Systems Engineering | Retrofitting a fundamental principle onto an already-designed architecture is impossible, due to the complexity of building a large system (picture, e.g., an airplane). Examples of such principles include time, learning, pattern matching, and attention (resource management). In a (cognitively) growing system in a dynamic world, where the system auto-generates models of the phenomena it encounters, each of which must be tightly integrated yet easily manipulable and clearly separable, the system must itself ensure the semantic transparency of its constituent parts. This can only be achieved by automatic mechanisms residing in the system itself; it cannot be ensured manually by a human engineer, or even a large team of them. |
Self-Modeling | Cognitive growth, in which the cognitive functions themselves improve with training, can only be supported by a self-modifying mechanism based on self-modeling: if there is no model of self, there can be no targeted improvement of existing mechanisms. |
Self-Programming | The system must be able to invent, inspect, compare, integrate, and evaluate architectural structures, in part or in whole. |
Pan-Architectural Pattern Matching | To enable autonomous holistic integration the architecture must be capable of comparing (copies of) itself to parts of itself, in part or in whole, whether the comparison contrasts structure, the effects of time, or some other aspect or characteristic of the architecture. To decide, for instance, if a new attention mechanism is better than the old one, various forms of comparison must be possible. |
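The attention-mechanism example above can be sketched minimally: the system holds two candidate attention (resource-allocation) policies as inspectable parts of itself, runs both against the same recorded experience, and keeps whichever scores higher. The policies, the urgency-based scoring, and all names are invented for illustration; a real architecture would compare far richer aspects (structure, timing, and so on).

```python
import random

def attention_old(tasks, budget):
    """Baseline policy: spend the cycle budget on tasks in arrival order."""
    return tasks[:budget]

def attention_new(tasks, budget):
    """Candidate policy: spend the budget on the most urgent tasks."""
    return sorted(tasks, key=lambda t: t["urgency"], reverse=True)[:budget]

def score(policy, episodes, budget=3):
    """Value a policy achieves over recorded episodes: the shared yardstick."""
    return sum(sum(t["urgency"] for t in policy(ep, budget)) for ep in episodes)

# Recorded experience: 50 episodes of 10 tasks with random urgencies.
random.seed(0)
episodes = [[{"urgency": random.random()} for _ in range(10)] for _ in range(50)]

# The architecture compares (copies of) its own parts on identical experience
# and adopts whichever mechanism performs better.
best = max([attention_old, attention_new], key=lambda p: score(p, episodes))
print(best.__name__)
```

The essential requirement the sketch illustrates is that the mechanisms being compared must be first-class, manipulable objects within the system itself, so the comparison can be performed, and acted upon, without human intervention.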
The “Golden Screw” | An architecture meeting all of the above principles is not likely to be “based on a key principle” or even two – it is very likely to involve a whole set of new and fundamentally foreign principles that make their realization possible! |
1 | Describe how common forms of reasoning relate to next-generation AI systems |
2 | List key reasons for using automated reasoning processes in AI |
3 | Explain how reasoning relates to cumulative learning, autonomous hypothesis generation and autonomous reflection |
4 | Describe state-of-the-art reasoning projects in industry and academia |
5 | Explain how to build systems that reason through empirical experimentation |
6 | Use a cutting-edge reasoning framework for implementing a system that reasons and understands |
7 | Understand the difference between autonomic and allonomic AI methodologies |
8 | Understand the relation between reasoning and system autonomy |
9 | Explain how reasoning, cumulative learning, and autonomy can help machines handle novelty |
2024©K.R.Thórisson