Intelligence came about because of environmental pressure: by being lawful in complex ways, nature imposed on active processes a situation in which ensuring survival, to increasing extents, could be achieved by increasing amounts of intelligence. Before this started happening to the extent we see on the planet today, single-cell organisms and very simple multi-cell organisms had already emerged and laid the foundation for what we now call “autonomous agents”. Agents are independently identifiable entities that have a way of making decisions for themselves in light of surrounding information. The more information an agent can take into account when making decisions, the longer the timespans it can consider, and the more quickly it can reach good decisions, the “smarter” the agent is.
There is always an upper bound on what an agent can perceive, decide, and act on: it is determined to a large extent by the implementation of its cognitive mechanisms, and by the particular computations the agent has available for doing the cognitive work. Perception is of course limited by the types of available sensors (in natural or augmented form – many humans use glasses, a form of sensory augmentation) and by the type of body the agent has available to act with in the world. Humans augment their bodies in all kinds of ways, for example with automobiles, telephones, email, exoskeletons (not very common, though), and the various tools we use virtually every day, such as scissors, keys, and light switches.
At the beginning of any natural cognitive system's lifecycle the system is born. It is, of course, born without tools or augmentations – only natural means are provided for perceiving, thinking, and acting. The more general a cognitive agent is intended to be – the more domains or “worlds” it is supposed to be able to adapt to – the more must be learned in the formative stages. For example, if the amount of light available in the world into which it is born is not known a priori, a greater requirement is put on the agent's initial stages to “sample” the world at birth and grow the appropriate set of sensory apparatus that allows it to operate in that world.
We have already discussed that intelligence came about as a way to ensure the survival of individuals (or, as Richard Dawkins has pointed out, the genes that they carry) in a world that obeys laws yet is complex – i.e. has many hidden and hard-to-see causal chains. The real world is thus a world where intelligence is playing catch-up: the world is, and will most likely continue to be, much more complex than any evolved intelligence can or will be. Any intelligent agent in such a world will therefore be limited to perceiving, thinking about, and acting on only a fraction of the possible things it could be attending to. “Embodiment” in this context thus means that the agent must live with limits on the data it can take in through its senses at any point in time, because the world presents vastly greater amounts than the agent could ever attend to per unit time. The same goes for action: of all the things the agent could be doing, an agent with limited perceptual capabilities must limit itself to actions that use that input; to keep the risk of perishing down, the agent's actions are limited to what it has gathered information about. Note that it changes nothing whether the world changes extremely slowly, or whether the agent thinks extremely fast, because our premise is that there will always be more information per unit time available in the world than the agent could possibly attend to.
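This bandwidth asymmetry can be sketched in a few lines of illustrative Python. All names and numbers below are invented for illustration, not drawn from any particular architecture: the world offers far more observable features per unit time than the agent's sensors can sample, and action is restricted to whatever subset was actually perceived.

```python
import random

WORLD_FEATURES = 1_000_000   # observable details per unit time in the world
SENSOR_BANDWIDTH = 100       # details the agent's sensors can sample per tick

def perceive(world, bandwidth=SENSOR_BANDWIDTH):
    """Sample only what the limited sensors allow; the rest stays invisible."""
    return random.sample(world, k=min(bandwidth, len(world)))

def act(percepts):
    """Actions are grounded only in what was actually perceived."""
    return [f"respond_to({p})" for p in percepts]

world = list(range(WORLD_FEATURES))
percepts = perceive(world)
actions = act(percepts)

# The agent attends to a vanishing fraction of the world per tick.
coverage = len(percepts) / len(world)
assert coverage == 0.0001
```

Nothing in the sketch changes if the world state is sampled faster or slower; by assumption `WORLD_FEATURES` always dwarfs `SENSOR_BANDWIDTH`, which is the essay's premise.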
Furthermore, any agent that could practically be envisioned being built in the not-so-distant future will at birth come equipped with a set of primitive actions that it can use to change the world, and in which everything else it does must ultimately be grounded. Think of it this way: a human has a set of primary “actuators” – ways to act on the world: hands, body, arms, and voice – and a set of perceptual apparatus: eyes, ears, touch, and a few others. The human develops in a world in which the processes handling the signals from these sensors also develop; after some years several of the more basic functions are in place, and more advanced ones can grow “on top” of them. While wild science fiction could possibly break out of this format, the reason it will likely hold for future agents with artificial general intelligence is this: the cognitive architecture of an agent must come with some bootstrapping information (a masterplan), this masterplan must take the agent's body into account, and the agent's development must be grounded in it. Therefore, any cognitive agent must have a “lowest common denominator” for perceiving and acting on the world, even though it may later, via its cognitive prowess, invent extensions to its perceptual apparatus, such as glasses, night-vision goggles, audio amplifiers, etc.
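The grounding requirement – that every high-level behavior must bottom out in the agent's fixed primitive action set – can be made concrete with a small hypothetical sketch. The primitive and composite action names here are invented examples, not part of any real system:

```python
# Hypothetical primitive action set the agent is "born" with.
PRIMITIVES = {"grasp", "release", "move", "vocalize"}

# Learned composite actions, defined in terms of primitives or
# other composites (invented examples for illustration).
COMPOSITES = {
    "pick_up":   ["move", "grasp"],
    "hand_over": ["pick_up", "move", "release"],
}

def ground(action):
    """Recursively expand an action until only primitives remain."""
    if action in PRIMITIVES:
        return [action]
    if action in COMPOSITES:
        return [p for step in COMPOSITES[action] for p in ground(step)]
    raise ValueError(f"{action!r} is not grounded in the primitive set")

print(ground("hand_over"))  # → ['move', 'grasp', 'move', 'release']
```

However elaborate the learned repertoire becomes, `ground` either reduces it to the birth-given primitives or fails – which is the sense in which extensions like tools and vehicles remain anchored to the original actuators.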
According to our view, an intelligent agent is thus always embodied: because it lives in a world that is vastly more complex and information-rich than it can ever perceive, think about, and act on, it is constrained by its limited percept-actuo-cognitive capabilities. This is what we mean by embodiment. To see why this must be the relation between these definitions of intelligence and embodiment, imagine an agent whose perceptual capabilities are so enormously powerful that it can perceive everything that happens in its world. While this can be imagined, we are still at a loss when its mind proceeds to deduce implications for the future, e.g. implications of events for its future actions or existence: because the causal relationships and potential implications are likely to be vastly more numerous than the perceivable events in this world, the cognitive apparatus of this agent must be capable of representing in its head a vastly greater number of states than there are possible perceivable details in the world at any point in time. For the real world this would mean that such an agent would need more states than there are atoms in the world, because the world does not represent the future, but this agent would have to, if it is to have any chance of predicting potential threats, or of growing in its cognitive capabilities (to grow it must make predictions based on its own knowledge, and correct and improve that knowledge in light of differences between actual results and its own predictions). Already we see that this scenario is theoretically precluded for the real world. And as if that weren't enough, the actions of this agent would always be limited to its primitive action set, or its own extensions thereof via its own inventions (for humans such extensions include automobiles, tools, telephones, etc.), and if they weren't, we would essentially have a very unique agent on our hands, typically referred to as “God”.
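A back-of-the-envelope calculation makes the state-explosion argument concrete. The numbers are illustrative assumptions (a world of 300 binary features, and the common rough estimate of 10^80 atoms in the observable universe), not figures from the text:

```python
# Illustrative arithmetic only; feature count, branching factor, and the
# atom estimate are assumptions chosen to make the explosion visible.
N_FEATURES = 300                 # binary features fully visible to the agent
states = 2 ** N_FEATURES         # distinct world states it could perceive
ATOMS_IN_OBSERVABLE_UNIVERSE = 10 ** 80   # common rough estimate

# Even perfect perception of a modest world overwhelms available matter.
assert states > ATOMS_IN_OBSERVABLE_UNIVERSE

# Representing futures is worse still: with b possible transitions per
# state per step, reachable futures over T steps multiply further.
b, T = 2, 50
futures = states * b ** T
assert futures > states
```

The point of the sketch is only that prediction requires representing more than the present: any physically realized predictor of such a world runs out of matter long before it runs out of futures.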
This hypothetical agent would be God because it would be omniscient and omnipotent. The only missing component – and probably this could be solved somehow by an ingenious science fiction writer – is the necessary mental capacity to (a) process all the perceptual data and reason about their implications, and (b) make the vastly complex plans for manipulating everything via its actuators.
All in all, this means that when an intelligence is vastly “underpowered” compared to the complexity of the world it inhabits, it is by definition embodied. It is also situated (“embedded”) because it is limited to a snippet in space-time, and such a subset of reality is what we call a “situation”.
© 2012 Kristinn R. Thórisson