The NASA space probe Deep Space One (DS1) is one of history's more interesting AI stories. The probe, which was in fact built and launched into space, where it still circles our sun, was conceived as a testbed for novel technologies, including an ion drive, ultra-compact remote communication devices, new navigation methods, and an on-board AI system. Yes, the slogan could have been “AI on board”. The AI system on board DS1, called the Remote Agent (RA), represented a number of AI ideas from planning, reasoning, and knowledge representation. It was implemented as a coarse-grain architecture consisting of a mission management module, a main planner assisted by several planning experts, and a module called MIR (Mode Identification and Reconfiguration). One key module, called the “smart executive”, or EXEC, served as the AI system's interface between all these systems and the real-time control system for the probe as a whole.
Among the novel ideas in DS1, although far from new to the AI community of the time, was that the RA contained a model of the probe's body. This model, or collection of models, was painstakingly hand-coded by the RA's designers. Written in Lisp, the body of DS1 – its physical components – was described in Lisp statements, in the style of the expert systems of yore. The MIR module held this model, monitored the states of the system, and maintained an up-to-date software model that best matched the observed system status at any point. Even more importantly, MIR used this model of the spacecraft to propose efficient sequences of actions that could be taken to restore the system to a desired state, given perturbations, failures, or other discrepancies from what was required at any point in time. The EXEC was responsible for requesting such plans from MIR.
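To make the idea concrete, here is a minimal sketch of what such a declarative component model might look like – in Python rather than Lisp, with all names (`COMPONENT_MODEL`, `consistent_modes`, the mode and variable names) invented for illustration; the real RA models were far richer:

```python
# Hypothetical sketch of a declarative component model: each component is
# described by its possible modes and the observable effects of each mode.
# (All names here are illustrative, not taken from the actual RA code.)
COMPONENT_MODEL = {
    "solar_panel_switch": {
        "modes": ["on", "off", "stuck"],
        "effects": {
            "on": {"panel_power": True},
            "off": {"panel_power": False},
            "stuck": {"panel_power": False},  # a stuck switch passes no power
        },
    },
}

def consistent_modes(component: str, observation: dict) -> list:
    """Return the modes of `component` whose predicted effects
    match every observed variable."""
    model = COMPONENT_MODEL[component]
    matches = []
    for mode in model["modes"]:
        effects = model["effects"][mode]
        if all(observation.get(k) == v for k, v in effects.items()):
            matches.append(mode)
    return matches

# Observation: the panel produces no power.  Two modes explain this.
print(consistent_modes("solar_panel_switch", {"panel_power": False}))
# → ['off', 'stuck']
```

Because the model is declarative, the same description can be run “forwards” (predicting observations from a mode) and “backwards” (inferring candidate modes from observations), which is what lets MIR track the mode that best matches the observed status.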
This system proved capable of advanced self-recovery. Because it had heuristics, in the form of propositional logic rules, provided by its designers, the MIR module could reason about how to fix even relatively complex problems. As an example, in a series of pre-launch ground tests the system used reasoning to fix a problem with one of the solar panels, which failed to produce electricity when turned on. Of course, several different faults could cause a solar panel not to produce electricity. In a series of interactions between the EXEC and MIR modules, the system concluded that the switch controlling the solar panel had gotten “stuck”: being a physical thing, stiction can prevent it from changing (physical) state. With its primitive knowledge of mechanics, the system reasoned that it should be able to “wiggle” the switch to “unstick” it, and recommended a series of rapid on-off commands to the switch as a potential remedy. The proposed solution worked, and the solar panel was connected to the spacecraft's batteries as planned. Using deduction over its model of self, the system thus solved a unique and unplanned problem – one that had not been explicitly coded beforehand, nor tested in the system before.
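The stuck-switch episode can be caricatured as a two-step diagnose-and-recover loop. The sketch below is a toy rendering of that pattern, not the RA's actual rules; the functions `diagnose` and `recovery_plan` and the mode names are assumptions made for illustration:

```python
# Hypothetical sketch of the diagnose-and-recover loop described above.
# (Function names, fault names, and the toggle sequence are illustrative.)

def diagnose(commanded: str, panel_power: bool) -> str:
    """Infer the most likely switch mode from the command and observation."""
    if commanded == "on" and not panel_power:
        # The switch was commanded on but the panel is dark: it did not
        # change state, which the model attributes to stiction.
        return "stuck"
    return commanded  # observation agrees with the commanded state

def recovery_plan(fault: str) -> list:
    """Propose a command sequence intended to restore the desired state."""
    if fault == "stuck":
        # Rapid toggling to "wiggle" the switch loose, ending in "on".
        return ["off", "on", "off", "on"]
    return []  # nothing to repair

fault = diagnose(commanded="on", panel_power=False)
print(fault, recovery_plan(fault))
# → stuck ['off', 'on', 'off', 'on']
```

The point of the example is the division of labor: the diagnosis step explains the observation in terms of the body model, and the recovery step proposes actions whose modeled effects undo the fault – which is why a repair never explicitly programmed in could nonetheless be derived.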
What this tells us is this: A model of self can help in solving problems, even unforeseen ones. When an agent makes plans for itself it must include itself in those plans, and this is where knowledge of self becomes very valuable. For a spacecraft that will never change shape or acquire new hardware functionality, it is reasonable to assume that a hand-coded model of the body can be provided once and for all, at design time. For systems whose physical attributes change, like animals, or whose mental capacities may grow and change radically over time, like those of human infants, a model provided at design time cannot be the final model. Therefore, self-modeling capabilities for AGIs must be acquired alongside all other cognitive capabilities, skills, and knowledge.
We can learn some other things from this project. The system was written in Lisp. Why, you may ask? Lisp is a fairly high-level language with flexible syntax. Common Lisp has a powerful object-oriented library and is virtually object-oriented from top to bottom (unlike e.g. Java, which mixes object orientation with primitives). Pure object-orientation means that …
© 2012 Kristinn R. Thórisson