
Course notes.

What is Constructionist AI?

Main points:

  • The methodology used so far in AI is a constructionist approach
  • The tools used come virtually unmodified from the requirements of standard information technology and engineering practice
  • A hallmark of constructionist AI is that systems are built by hand by human programmers
  • Constructionist methodologies will not be sufficient for building AGIs

Constructionist AI refers to methodologies for creating AI that rely on human programmers to build the majority of the system's knowledge structures and architectural components. This describes the vast majority of AI systems built to date, even those targeting AGI. Some of these methodologies target only a subset of the task of designing intelligent agents, such as BDI and production systems; others, like blackboards and the subsumption architecture, address a larger part of the puzzle or, in the case of blackboards, propose a metaphor for building whole architectures. Constructionist Design Methodology (CDM) was proposed in the early 2000s as a way to combine the best of these into a more comprehensive, all-encompassing methodology for constructing whole agents; unlike many other AI methodologies, it does not exclude perception and action control. When studying the power, as well as the limitations, of constructionist AI methodologies, it is sufficient to scrutinize CDM because (a) it incorporates many of the best ideas from prior efforts at proposing methodologies, and (b) its limitations relevant to AGI are shared fully with all prior constructionist methodologies, and even with some AGI methodologies and approaches as well.

CDM is based around the idea that modules implement particular capabilities in the agent. At design and implementation time, if any module is found to be too isolated or monolithic, or is found to replicate functions needed elsewhere in the system, the designer splits it into multiple smaller modules, each with a lesser capability than the mother module that spawned it. The functionality of the mother module is then re-implemented as a network of interactions between the smaller modules, possibly managed by a new “management” module that did not exist before. Small modules can also be merged into a single one when that simplifies the design. Principles for constructing perception modules, decision modules, and motor modules/mechanisms are part of CDM.

The gross architecture of CDM systems is inspired by the blackboard idea, but instead of forcing the whole system to read and write to a single blackboard, several blackboards are allowed (three are proposed: one for low-level perception and decision, one for high-level perception and decision, and one for behavior requests and behavior scheduling). For any area of expertise imparted to an agent, more blackboards may be instantiated at the system's highest level, the content level. All modules in the system post to and read from at least one blackboard; their trigger pattern is stored in a subscription, which guarantees that they receive data only when data they have expressed interest in is produced. Upon receiving this data they compute, producing output or doing nothing (depending on their internal operation); if they produce output they post it back to one or more blackboards, possibly triggering in turn some other module that takes this data as input. Any complex system built this way will, upon each “cycle”, trigger many modules to process input and generate output. If the modules are implemented as standalone executables, the whole system can essentially run “clockless”, that is, the modules need not run in lock-step.
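To make the publish/subscribe pattern concrete, here is a minimal sketch of blackboard-and-module interaction in Python. The class names, message types, and modules (Blackboard, Module, “audio.raw”, and so on) are illustrative assumptions rather than CDM's actual API, and the sketch delivers messages synchronously in one process instead of running modules as standalone, clockless executables.

```python
from collections import defaultdict

class Blackboard:
    """A shared message space that modules post to and read from via subscriptions."""
    def __init__(self, name):
        self.name = name
        self.subscribers = defaultdict(list)  # message type -> subscribed modules

    def subscribe(self, module, message_type):
        # The subscription stores the module's trigger pattern.
        self.subscribers[message_type].append(module)

    def post(self, message_type, data):
        # Deliver data only to modules that expressed interest in this type.
        for module in self.subscribers[message_type]:
            module.receive(self, message_type, data)

class Module:
    """A single capability: computes on triggered input and may post results back."""
    def __init__(self, name, process):
        self.name = name
        self.process = process  # (message_type, data) -> list of (message_type, data)

    def receive(self, blackboard, message_type, data):
        for out_type, out_data in self.process(message_type, data):
            blackboard.post(out_type, out_data)  # may trigger other modules in turn

def detect_greeting(message_type, data):
    # Produces output only when something of interest appears in the input.
    if "hello" in data:
        return [("speech.greeting", data.upper())]
    return []

def log_message(message_type, data):
    print(f"[log] {message_type}: {data}")
    return []  # produces no further output

# One of the proposed blackboards, with two modules subscribed to it.
perception_bb = Blackboard("low-level-perception")
perception_bb.subscribe(Module("greeting-detector", detect_greeting), "audio.raw")
perception_bb.subscribe(Module("logger", log_message), "speech.greeting")

perception_bb.post("audio.raw", "hello robot")  # triggers the detector, then the logger
```

Posting “audio.raw” triggers the detector, whose output in turn triggers the logger, illustrating how a chain of module activations unfolds from a single piece of data without any module calling another directly.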

The CDM approach mirrors the subsumption approach to some extent because it too uses modularization as a key mechanism. To some extent CDM results in implicit goals being designed into the system from the outset, but by generalizing the kinds of control interconnections between modules into a message bus (rather than strict subsumption) it avoids some of the limitations that subsumption architectures typically come with. Like BDI, CDM proposes separating the execution of plans and actions from their selection, and like BDI it says nothing about how the plans were created in the first place; typically this is up to the system designers to come up with and implement by hand. CDM allows long-term and short-term plans to co-exist with behavior-based (reactive) behavior selection and control by way of priority-layering of perception, action, and behavior control/execution.
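The following is a minimal sketch of how such priority-layering of behavior requests could look, assuming a simple scheduler in which deliberative plan steps and reactive behaviors are posted with different urgencies. The scheduler, its priority values, and the example behaviors are illustrative assumptions, not CDM's actual behavior-scheduling design.

```python
import heapq

class BehaviorScheduler:
    """Executes pending behavior requests in priority order (lower number = more urgent)."""
    def __init__(self):
        self._queue = []    # heap of (priority, arrival order, action)
        self._counter = 0   # tie-breaker that preserves arrival order

    def request(self, priority, action):
        heapq.heappush(self._queue, (priority, self._counter, action))
        self._counter += 1

    def step(self):
        # One scheduling cycle: run the most urgent pending behavior, if any.
        if self._queue:
            _, _, action = heapq.heappop(self._queue)
            action()

scheduler = BehaviorScheduler()
# A step of a long-term plan, posted by a deliberative module (low urgency).
scheduler.request(10, lambda: print("play the next card in the game plan"))
# A reactive behavior, posted by a low-level perception module (high urgency).
scheduler.request(1, lambda: print("withdraw hand: obstacle detected"))

scheduler.step()  # the reactive behavior preempts the planned action
scheduler.step()  # then the plan resumes
```

The point of the layering is visible in the two calls to step(): the reactive request runs first even though the planned step was posted earlier, while the plan itself is not discarded but merely deferred.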

CDM has been used in several systems built to date, including an architecture for the Honda ASIMO humanoid robot, allowing it to play card games with children using gesture and speech in a real-time interaction.

The problem with CDM – and this is shared with all approaches that rely on human design and implementation of the details of a system's knowledge and internal architecture – is that the level of complexity needed for building AGI systems is well beyond what this approach can reasonably be expected to produce.



© 2012 K. R. Thórisson
