Course notes.
A key message so far is that intelligence requires architecture: Intelligence comes about only if we have a set of components, interconnected in a particular way, operating in concert – at least in part as a single unit. Intelligence is also a relative term, in that we can say that any acting entity, in a particular environment, is more or less intelligent to some degree, depending on its relation to that environment, its available knowledge, and its goals. Therefore we must at some point ask ourselves the question “what kinds of system architectures are more likely than others to enable us to build intelligent systems?”
A reasonable hypothesis would be that present methodologies in software engineering, having enabled us to build some of the most advanced and complex systems ever created in the history of mankind, might prove sufficient to realize AGIs. They have certainly been sufficient for making progress in building intelligent systems, and while the systems built so far are clearly not AGIs, they are vastly more capable than prior solutions at handling complex tasks, dealing with missing data, and performing tasks that only humans could solve before.
Not many methodologies have been created within the field of A.I. specifically targeted at building complete AI systems, but a few can be listed. For example, production systems were proposed as a formalism that would enable the creation of general intelligence. Another early idea was blackboard architectures, which presented an architectural principle for building systems with heterogeneous components. Frustrated with the shortcomings of these approaches for building reliable robots, Rodney Brooks proposed the “subsumption architecture”, based around augmented finite state machines. An attempt at synthesizing the best of the prior approaches is the Constructionist Design Methodology. More recently the “belief-desire-intention” (BDI) methodology has made increasing appearances in the literature. All of these proposals are firmly rooted in traditional software design methodologies, and while all of them add ideas specific to A.I. development, they are more or less confined to what can be done with current programming languages on current hardware. They are therefore subject to the same restrictions as those methodologies. This makes the task of evaluating their potential for creating AGIs somewhat easier than it would otherwise be, because the pros and cons of these methodologies are reasonably well documented.
Production systems are based around production rules of the form {P1, P2}, where P1 is an IF part and P2 is a THEN part: if P1 matches a particular piece of data the rule “fires” – P2 is executed (or put on a list for potential subsequent execution). In each cycle the system matches the P1 parts of its rules against the data in working memory; every rule whose P1 matches is placed on the “fire” list, and the matching rules are then executed. Thus for any problem there will be a series of matching and executing production rules, essentially implementing a search in “solution space” for the solution to a particular goal. When a solution to a problem – a path to a goal – has been found, the path can be “compressed” into a more compact format that dismisses the historical facts of the search originally needed to find it. Key aspects of such systems are (a) the design of the production rules – their form and expressive power; (b) the control of the matching process (computational efficiency can be greatly impacted depending on the method chosen); (c) the handling of conflicting production rules; (d) the parallelization of the matching and execution process.
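To make the match-fire cycle concrete, here is a minimal sketch in Python; the Rule class, the set-of-facts working memory, and the fire-the-first-novel-match conflict-resolution policy are illustrative assumptions, not features of any particular production system:

```python
# Minimal production-system sketch (hypothetical rule and fact formats).
# Working memory is a set of facts; each rule pairs an IF part (P1, a
# predicate over working memory) with a THEN part (P2, proposing facts).

class Rule:
    def __init__(self, name, condition, action):
        self.name = name
        self.condition = condition  # P1
        self.action = action        # P2

def run(rules, working_memory, max_cycles=100):
    for _ in range(max_cycles):
        # Match phase: every rule whose P1 matches joins the conflict set.
        conflict_set = [r for r in rules if r.condition(working_memory)]
        # Conflict resolution (point c above): fire the first rule that
        # would actually add something new – a crude form of refraction.
        for rule in conflict_set:
            new_facts = rule.action(working_memory) - working_memory
            if new_facts:
                working_memory |= new_facts
                break
        else:
            return working_memory  # quiescence: no rule changes memory
    return working_memory

# Two rules chaining from the fact "a" toward "goal".
rules = [
    Rule("r1", lambda wm: "a" in wm, lambda wm: {"b"}),
    Rule("r2", lambda wm: "b" in wm, lambda wm: {"goal"}),
]
print(run(rules, {"a"}))  # -> {'a', 'b', 'goal'} (set order may vary)
```

The run of rule firings from “a” to “goal” is exactly the kind of search path that could afterwards be compressed into a single, more compact rule.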
What is typically referred to by the term “subsumption architecture” is really more of a methodology for building architectures than a particular architecture or type of architecture. Subsumption is a method for building layered architectures where each layer is concerned with achieving a particular goal, or a set of goals. The goals are implicit, and therefore rigid and non-inspectable by the system itself. Controllers for robots can be made quite robust using a subsumption-based architecture, because interaction between the implicit goals is made explicit to the human designer via the interconnections and interactions between the layers. Problems related to goal conflicts, goal resolution and goal interaction can therefore be more easily inspected and debugged by the system’s human programmers. The methodology has the downside of being rigid: it comes with no principles for how the layers could be generated automatically (development, growth) or how the system itself could inspect them for purposes of improvement or fault detection. Because such systems tend to be forced into a fundamental modularity at the gross architectural level, with implicit goals, it is typically difficult to combine them with alternative forms of control.
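The layering idea can be sketched as follows. Note that this is a drastic simplification: real subsumption architectures wire augmented finite state machines together with suppression and inhibition links, whereas this hypothetical sketch reduces the idea to fixed-priority arbitration among layers, each carrying one implicit goal:

```python
# Subsumption-style controller sketch (layer names and the sensor format
# are hypothetical). Layers are ordered by priority; each layer either
# proposes an action for the current sensor reading or passes, and a
# higher layer's output subsumes (suppresses) all layers below it.

def avoid(sensors):
    # Highest-priority layer; implicit goal: "don't hit obstacles".
    if sensors["obstacle_distance"] < 0.3:
        return "turn_away"
    return None  # pass control downward

def wander(sensors):
    # Middle layer; implicit goal: "explore".
    if sensors["bored"]:
        return "random_turn"
    return None

def cruise(sensors):
    # Lowest-priority layer; implicit goal: "keep moving".
    return "forward"

LAYERS = [avoid, wander, cruise]  # highest priority first

def control(sensors):
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action  # this layer subsumes everything below it

print(control({"obstacle_distance": 0.1, "bored": True}))   # turn_away
print(control({"obstacle_distance": 2.0, "bored": False}))  # forward
```

Notice that the goals appear nowhere as data the system could read: they live only in the wiring, which is precisely what makes them inspectable to the programmer but not to the system itself.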
The use of concepts like “belief”, “desire” and “intention” has a long history in AI, and the belief-desire-intention (BDI) methodology has recently seen increased use in the AI literature. BDI emphasizes the importance of separating the selection of plans from the execution of plans. The methodology does not say anything about how plans get created in the first place – nor about anything else we might want an intelligent agent to do, such as perceiving, reasoning, coordinating arms and grippers, learning, or creating things.
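A minimal sketch of the selection/execution split might look as follows; the plan-library format and the coffee-making example are hypothetical illustrations, and – true to the methodology – nothing here says where the plans come from:

```python
# Minimal BDI-style loop (hypothetical plan format), illustrating only
# the separation of plan *selection* from plan *execution*.

# Plan library: desire -> list of (context condition, plan body) options.
PLANS = {
    "have_coffee": [
        (lambda b: b.get("machine_works"), ["grind", "brew", "pour"]),
        (lambda b: b.get("cafe_open"),     ["walk_to_cafe", "order"]),
    ],
}

def select_plan(desire, beliefs):
    # Deliberation: pick the first plan applicable under current beliefs.
    for context, body in PLANS.get(desire, []):
        if context(beliefs):
            return body
    return None

def execute(intention):
    # Execution: carry out the committed plan step by step.
    for step in intention:
        print("executing:", step)

beliefs = {"machine_works": False, "cafe_open": True}
intention = select_plan("have_coffee", beliefs)  # selection...
if intention:
    execute(intention)                           # ...then execution
```

The PLANS table is given by hand; the agent can choose among plans and commit to one, but it cannot author a new one – which is exactly the gap noted above.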
Taking a more abstract approach, some researchers have argued that we should consider the mind as a dynamical system. One source of information on what that means, and how to describe and analyze such systems, is “chaos theory”. It is not clear how this field can inform the development of AGI systems because, like psychology, it primarily focuses on providing descriptions of the phenomena it studies: its mathematical methods describe very well phenomena like phase-space attractors, but they say nothing in particular about how these attractors came about or how to build systems that display such characteristics.
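As a small worked example of what such descriptions look like, consider the logistic map – a standard textbook dynamical system, brought in here only for illustration. For r = 2.5 every trajectory started in (0, 1) settles onto the same fixed-point attractor at x = 0.6, yet the mathematics that establishes this says nothing about how to engineer a system that needs such an attractor:

```python
# The logistic map x' = r*x*(1-x); for r = 2.5 iterates converge to the
# fixed point x* = 1 - 1/r = 0.6 from any starting point in (0, 1).

def logistic(x, r=2.5):
    return r * x * (1.0 - x)

for x0 in (0.1, 0.5, 0.9):
    x = x0
    for _ in range(100):  # iterate the map; the transient dies out
        x = logistic(x)
    print(f"start {x0} -> settles near {x:.4f}")  # ~0.6000 for all starts
```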
From an architectural/systems perspective, all methodologies proposed to date for developing AI technologies are based on the assumption that the system is built by hand by human programmers. While some past work has focused on what programming languages for AI should look like, the discussion of programming languages for AI has not been very visible since LISP and Prolog came about. By and large, all research done to date is based on the assumption that the development tools at hand in computer science – what has essentially been used to build the Internet, the World Wide Web, the control systems for the latest Airbus airplanes, traffic control systems, security mechanisms for large corporations, and the latest animated Pixar movies – are sufficient for AI. This goes for AI architecture as well: there is little discussion in mainstream AI questioning whether LISP, C++, Java, or Haskell may have inherent limitations for implementing AI systems. While many in AGI may have doubts, even there such discussion is not very apparent. However, as we will soon see, the current programming languages available to AI researchers do not have the right features to realize AGI.
© 2012 K. R. Thórisson