With a requirement of a constructivist approach we must come up with principles for how a system can manage its own growth. What we are essentially talking about are meta-construction principles that can be imparted to the system, “top-level” goals and methods for how the system should change over time. If we are to do this while at the same time building a system that addresses an enormous amount of functions, many of which are not very well understood, we better enlist some of the more subtle and surprising tricks of nature.
Any complex system, e.g. a living being, will over millennia have evolved to trim unnecessary features – features being unnecessary if they do not contribute to the being's ultimate purpose, its top-level goal of survival. Of course there will be some minor, possibly strange, remnants of functions that used to be useful but are no longer (yet may become so again). Mainly, though, a living being of any species will have a good reason to sport its various features. Intelligence is clearly one such survival advantage of human beings: if it were not useful it would have been eliminated. Of course, if humanity destroys itself it will have proven that intelligence – at least as it has been implemented in humans – is not that useful, or tends to be useful only for a limited period of time. In any case, we can identify a need for holistic architectures, as inefficiencies in a complex system are likely to be detrimental to a number of related functions, reducing the efficiency of the whole system in meeting its top-level goal. For intelligent architectures this essentially means that (a) many cognitive functions are likely to support each other in the day-to-day operation of the mind, and (b) the functions are integrated in efficient ways – they are, for example, likely to share a number of operations, or to implement operations in ways similar to other cognitive functions, because this means lower complexity for the underlying implementation code (genetic program). An AGI architecture, then, is likely to exhibit both of these properties.
Living systems are open systems: energy flows through them to maintain their order, and their structures are constantly being built and re-built by the energized processes. Many of the processes of living systems are auto-catalytic. In an auto-catalytic process, some of the “fuel” needed for the reactions is actually produced by the process itself; something starts – bootstraps – the auto-catalytic process, and once started it “runs itself”. Such a process either takes no energy from the outside, using up energy stored in the process itself, or it draws energy from outside itself – from its environment. The process runs until the energy runs out; for processes using outside energy this could be a very long time. Species are an example: crocodiles are a very long-running species. One could argue that there is no obvious reason for the individuals of a species to die, since they implement processes that take energy from the outside; but the individuals exist because of the species, whose blueprint is imprinted in the species' genome.
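A minimal numerical sketch may make the bootstrapping point concrete. The following toy model (all names and rate values are illustrative assumptions, not from the text) simulates the classic auto-catalytic reaction A + X → 2X: the product X catalyzes its own production, so a tiny seed of X bootstraps the loop, which then “runs itself” until the fuel A is exhausted.

```python
# Toy auto-catalytic process: A + X -> 2X.
# A small seed of X starts the loop; conversion speed grows with X itself,
# so the process accelerates until the fuel A runs out.

def autocatalysis(fuel: float, seed: float, rate: float = 0.1, steps: int = 200):
    a, x = fuel, seed
    for _ in range(steps):
        converted = rate * a * x       # the product X drives its own production
        converted = min(converted, a)  # cannot consume more fuel than exists
        a -= converted
        x += converted
    return a, x

a, x = autocatalysis(fuel=100.0, seed=0.01)
print(f"fuel left: {a:.3f}, product: {x:.3f}")  # nearly all fuel converted
```

Note that mass (fuel plus product) is conserved throughout; the process simply converts one into the other, faster and faster, until the “outside” energy source is gone.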
Since living systems are heavily based on constructivist principles, and brains, and thus natural minds, are essentially sub-processes of life, it is not unlikely that (natural) cognitive architectures employ auto-catalytic principles, either at their core or on the periphery.
In a methodology that proposes holism in the construction of a complex architecture, while requiring a constructivist approach, complexity must be addressed somehow. We must get a handle on the principles of self-organization, for without some minimum understanding of self-organizing principles we are not likely to achieve our goal of designing and implementing a comprehensive cognitive architecture. One principle we can be sure of is that the architecture's structure must, to some degree, be input to itself. This is evident, for example, in the fact that when the system needs to evaluate its own growth over time – changes in its own construction – it must be able to inspect its own structures. This requires reflectivity.
While reflectivity is not necessary for self-organization to occur, it may be argued that it is critical for any informed self-organization to occur. If we want to use reason and inference to come up with an organization that better achieves some particular goal, in a complex world where the number of possible mind+world state combinations is virtually infinite (or, in practical terms, vastly larger than we have time to explore), then the system that is rearranging itself to attain an improved version of its future self must be able to inspect its own state, compare its own structure at various points in time, compare these to the state of the world, and evaluate the effects of the changes that have been made.
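The requirement just described – inspecting one's own structure, snapshotting it at different points in time, and evaluating the effect of one's own changes – can be sketched in a few lines. This is a hypothetical illustration only; the class and component names are invented for the example and do not come from any actual architecture.

```python
# Minimal reflectivity sketch: a system that records snapshots of its own
# structure and can compare them to evaluate its own changes over time.

import copy

class ReflectiveSystem:
    def __init__(self):
        self.components = {"perception": 1.0, "memory": 1.0}  # illustrative structure
        self.history = []

    def snapshot(self):
        # Inspect own structure and record its current state.
        self.history.append(copy.deepcopy(self.components))

    def modify(self, name, weight):
        # A self-change whose effect can later be evaluated.
        self.components[name] = weight

    def diff(self, i, j):
        # Compare own structure at two points in time.
        old, new = self.history[i], self.history[j]
        keys = set(old) | set(new)
        return {k: (old.get(k), new.get(k))
                for k in keys if old.get(k) != new.get(k)}

system = ReflectiveSystem()
system.snapshot()
system.modify("memory", 2.0)      # strengthen an existing component
system.modify("planning", 0.5)    # grow a new component
system.snapshot()
changes = system.diff(0, 1)
print(changes)
```

The `diff` step is the crux: without the ability to produce and compare such self-descriptions, any rearrangement of the system's own structure would be blind rather than informed.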
“Emergence” is what we call the result of auto-catalytic processes. A good example is the Belousov-Zhabotinsky reaction. Emergent phenomena are often surprising, because they are a classical case of “the whole is more than the sum of the parts”: how the parts interact is not obvious from looking at them in isolation. Emergent phenomena with which we are very familiar include automobile engines, lightbulbs, human beings, and societies.
The principles of self-organization have been systematically studied for close to a century in biology, physics, and cybernetics. While this seems like a long time, very few researchers have in fact made it their primary subject. As a result there are not a lot of fundamental results to discuss in this field, at least not in the sense of strongly affecting other research fields. Some attempts have been made to use discoveries like attractors – the study of which belongs to the field of dynamical systems – to understand and explain the operation of minds, neural networks, social systems, and various other phenomena. But attractors, and in fact many of the ideas and results coming out of research on complexity and self-organization, are descriptive in nature: rather than supplying us with good ideas for how to build systems that achieve thought, for example, this work has not led to any fundamental insights into how to design thinking minds, or into how the human mind works. In general, approaches based on fundamentally descriptive principles can only be used after a system has been built, to compare it to its natural counterpart: they allow us to ask questions about the similarity of the engineered system and the natural system. For example, they allow us to hypothesize that the artificial system, if it correctly implements principles of a human brain, should exhibit the same attractor behavior when subjected to a particular set of inputs, contexts, or training regimes.
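The descriptive character of attractors can be seen in a standard textbook example, the logistic map. For the parameter value chosen below, trajectories started anywhere in the unit interval converge to the same fixed point – a fact that describes the system's behavior perfectly, yet says nothing about how one would design a system to have that behavior. (The logistic map itself is standard; its use here as an illustration is our own.)

```python
# The logistic map x_{n+1} = r*x*(1 - x). For r = 2.8 it has an attracting
# fixed point at x* = 1 - 1/r: different initial states all end up there.

def logistic_trajectory(x0: float, r: float = 2.8, steps: int = 200) -> float:
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

fixed_point = 1.0 - 1.0 / 2.8   # the attractor, approximately 0.642857
for x0 in (0.1, 0.5, 0.9):
    assert abs(logistic_trajectory(x0) - fixed_point) < 1e-6
print(f"all trajectories converge to x* = {fixed_point:.6f}")
```

Identifying the attractor is a purely after-the-fact description: one must already have the map in hand to find it, which is exactly the limitation the text notes for descriptive approaches in general.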
While it is clear that these concepts must be relevant to all efficient complex systems, the specifics of how they relate to AGI are not obvious. In fact, it may be argued that the study of self-organization presents no principles of construction, because an emergentist view of the world represents a descriptive view of systems: given that some particular phenomenon, say the natural phenomenon of intelligence, exhibits features of self-organization and emergence – whether the control of cognitive development stages or the recall of the right things at the right time when performing a task such as playing tennis or assembling furniture – we may be able to describe these features in detail, yet they tell us little if anything about what kinds of designs could produce such emergent properties.
Nevertheless it seems obvious that when we are dealing with a system that must implement self-organizing principles, via complex nested feedback loops at many levels of detail, it would seem irresponsible at best, and quite possibly self-defeating, to ignore the research on these phenomena so far. We should anticipate having to extend the current work on emergence and self-organization to a point where these concepts can become highly useful for comparisons with naturally intelligent systems, and possibly help us in our design efforts. They are especially relevant to issues of bootstrapping, and to control of ongoing auto-catalytic processes and management of self-organized architectural structures.
A key problem with present methodologies, and especially constructionist approaches, is that they blind us to the importance, and perhaps sometimes even the possibility, of self-organizing auto-catalytic loops. Going beyond current traditions of constructionist methodologies requires us to recognize the need for intricate feedback loops and self-organizing principles. A constructionist approach will always try to eliminate complex feedback and force our architecture into a linear, pipelined model. Even blackboard architectures do not help us think in loops, as they tend to emphasize the activity of the agents or daemons using the blackboard, and to hide the fact that knowledge builds up incrementally via feedback loops between the various types of agents. To some extent, however, the blackboard metaphor can be augmented with the insight that agents of various types occupy points along at least two dimensions: level of detail and temporal scope. We could add a third dimension of data types. A spread-spectrum of agents, each occupying a point, or possibly more than one point, along these dimensions, will at runtime implement feedback loops that produce knowledge in opportunistic ways, in accordance with what the world exposes the system to over time.
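The augmented blackboard picture can be sketched as follows. This is a deliberately minimal, hypothetical illustration – the class names, the two dimensions shown, and the single-pass loop are all simplifications invented for the example – but it shows the key point: knowledge on the board is built up incrementally, each agent feeding on what agents at other levels have already posted.

```python
# Minimal blackboard sketch: agents sit at points along two dimensions
# (level of detail, temporal scope) and incrementally build knowledge by
# reading and refining what other agents have posted - a feedback loop
# mediated by the shared board.

class Blackboard:
    def __init__(self):
        self.entries = []          # shared, incrementally growing knowledge

    def post(self, level, scope, item):
        self.entries.append((level, scope, item))

    def read(self, level=None):
        return [e for e in self.entries if level is None or e[0] == level]

class Agent:
    def __init__(self, level, scope):
        self.level, self.scope = level, scope   # point along the two dimensions

    def step(self, bb):
        # Feedback: build on the entries one level of detail below.
        for _, _, item in bb.read(level=self.level - 1):
            bb.post(self.level, self.scope, f"abstraction({item})")

bb = Blackboard()
bb.post(0, "short", "raw-input")              # the world's contribution
agents = [Agent(level=1, scope="short"), Agent(level=2, scope="long")]
for agent in agents:                          # one pass of the loop
    agent.step(bb)
print(bb.read(level=2))  # knowledge built up from level 0 via level 1
```

In a running system the pass over the agents would repeat continuously, and which agents fire would depend on what the world exposes the system to – which is where the opportunistic, loop-driven character of knowledge building comes from.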
© 2012 Kristinn R. Thórisson