Course notes.

What is NARS?

NARS is an AGI-aspiring cognitive architecture. Here we will discuss the key principles underlying this architecture, and the theory it is built on.

The most important principle underlying NARS is P. Wang's significant contribution to AI: the definition of intelligence as “the ability to reason under constraints of insufficient resources and knowledge”.

What is meant by insufficient resources? Resources are the available means, methods, materials, and abilities for carrying out tasks, including mental operations. The term insufficient, in relation to a cognitive agent, means that the tasks the agent must perform would, ideally, demand more resources than are actually available to the agent for those tasks. Insufficient resources thus refers to a relation between an environment and a cognitive agent: The agent is given goals in its environment – e.g. the goal of surviving – for which it has no ready-made set of actions, and it must go through a series of steps to derive ways of achieving them. Instead of being equipped with a set of pre-determined context-action mappings, the agent has cognitive processes that, given some time, can produce actions that are sufficient (though almost never optimal) for achieving the goals, as the sketch below illustrates. The goals can of course also be self-imposed, such as getting a promotion.
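
To make the contrast concrete, here is a minimal sketch. The goals, actions, and backward-chaining rule are invented for illustration; NARS's actual inference and control mechanisms are far richer than this.

<code python>
# Illustrative sketch only: the goals, actions, and backward-chaining rule
# below are invented; NARS's actual inference and control are far richer.

# A pre-determined context->action mapping: instant, but it covers only
# what the designer anticipated.
REFLEXES = {"hungry": "eat", "tired": "sleep"}

# Knowledge the agent can chain together: each action achieves a condition
# and may require another condition to hold first.
ACTIONS = {
    "eat":      {"achieves": "fed",        "requires": "have_food"},
    "buy_food": {"achieves": "have_food",  "requires": "have_money"},
    "work":     {"achieves": "have_money", "requires": None},
}

def derive_plan(goal, plan=None):
    """Derive a sequence of actions for a goal by chaining backwards
    through preconditions, instead of looking the answer up directly."""
    plan = plan or []
    for name, action in ACTIONS.items():
        if action["achieves"] == goal:
            plan = [name] + plan
            if action["requires"] is None:
                return plan
            return derive_plan(action["requires"], plan)
    return None  # no way to achieve the goal with current knowledge

print(REFLEXES.get("hungry"))  # 'eat' -- pre-determined
print(derive_plan("fed"))      # ['work', 'buy_food', 'eat'] -- derived
</code>

The lookup table answers instantly, but only for contexts the designer wrote down; the derivation spends time composing an answer that was never stored anywhere.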

A human being trying to get a promotion is of course pursuing a time-constrained goal, as no human holds the same job forever. This is the other major way in which cognition deals with limited resources: The time an agent has for dealing with any situation on the way to a goal is limited. In fact, if a cognitive system were not constrained by time, the other constraints would not matter, because with infinite time every possible solution, situation, consequence, etc. could simply be evaluated and the best one chosen. So, in a very real sense, intelligence would be unnecessary if we did not have temporal constraints.
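
This temporal aspect is often captured in AI by "anytime" processes, which can be interrupted at any point and still return the best answer found so far. A minimal sketch follows; the toy guessing task and names are ours, and NARS manages its time budget quite differently, but the principle of acting on a deadline is the same.

<code python>
# Illustrative sketch of an "anytime" process: interrupt it whenever the
# time budget runs out and it still returns the best answer found so far.
import random
import time

def anytime_optimize(score, budget_seconds):
    best, best_score = None, float("-inf")
    deadline = time.monotonic() + budget_seconds
    while time.monotonic() < deadline:  # the temporal constraint
        candidate = random.uniform(0, 100)
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best  # sufficient, almost never optimal

# With infinite time we could enumerate everything and pick the optimum;
# with a real deadline we settle for the best found in time.
target = 42.0
print(anytime_optimize(lambda x: -abs(x - target), budget_seconds=0.01))
</code>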

NARS is also based on the key principle that intelligence is, at its core, reasoning. While the field of mathematics has produced various precise definitions of what reasoning means, in the context of NARS the concept is broader, closer to its use in everyday dialogue, as in, for example, “while understandable, it is not rational to want to have the cake and eat it too”. The reason for this more colloquial usage is that the real world does not allow the conceptual crispness, nor the level of certainty, that the world of mathematics does. To take a well-known example: in the real world, having seen numerous white swans, we may still not know whether or not black swans exist. (They do in fact exist.) Once we understand the genetic control of the birds' color we can say whether black swans are possible, but we still do not know whether they exist in nature until we find one. Knowledge about the real world is thus always incomplete. This is another fundamental principle behind NARS: that of experience-based reasoning.
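
NAL, the logic underlying NARS, makes this incompleteness explicit by attaching a two-component truth value to every statement, derived purely from the system's experience. Here is a sketch following the published definitions, frequency f = w+/w and confidence c = w/(w+k); the swan counts and the choice of evidential horizon k = 1 are illustrative.

<code python>
# Sketch of NAL-style experience-grounded truth values, following the
# published definitions: frequency = w+/w, confidence = w/(w+k).
# The swan counts and evidential horizon k = 1 are illustrative choices.
K = 1  # evidential horizon

def truth_value(positive, total):
    """Truth from experience: `positive` of `total` observations support
    the statement. Confidence grows with evidence but never reaches 1."""
    frequency = positive / total
    confidence = total / (total + K)
    return frequency, confidence

# "Swans are white", after observing 50 swans, all of them white:
f, c = truth_value(50, 50)
print(f, round(c, 3))  # 1.0 0.98 -- high frequency, still not certainty
</code>

Note that confidence approaches but never reaches 1: no amount of white swans makes “swans are white” an absolute truth, which is exactly the black-swan lesson above.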

The reasoning in cognitive agents cannot be like mathematical reasoning, because in the real world we never know the “ground truth”. As an example, we now take the existence of atoms for granted, but at one point these were hypothetical entities – and they were thought to be “atomic”, i.e. indivisible, which turned out to be wrong. Mathematical reasoning is based on axioms – “truths” taken as given. Clearly, in systems that operate in the real world, certainty cannot be encoded in this way, and therefore cannot be assumed. If we want a flexible cognitive system that can adapt to a variety of environments, that system must have mechanisms for generating its own “ground truth”; this cannot be given by the designer. Hence, the reasoning mechanisms of such systems must differ from those of axiomatic reasoning systems. That is where the “N” in “NARS” comes from: Non-axiomatic.
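
NAL's revision rule shows what reasoning without axioms looks like in practice: when two beliefs about the same statement come from disjoint bodies of evidence, the evidence is pooled, rather than one belief overriding the other as “the truth”. A sketch in the evidence-count form, with invented swan figures:

<code python>
# Sketch of NAL's revision rule in evidence-count form: beliefs based on
# disjoint evidence are pooled rather than one replacing the other.
# The swan figures are invented for illustration.
K = 1  # evidential horizon, as above

def revise(pos1, tot1, pos2, tot2):
    """Pool the evidence behind two beliefs about the same statement."""
    pos, tot = pos1 + pos2, tot1 + tot2
    return pos / tot, tot / (tot + K)  # (frequency, confidence)

# Our 50 white swans, revised with a report of 10 swans, 8 of them black
# (so only 2 of those 10 count as positive evidence for "swans are white"):
f, c = revise(50, 50, 2, 10)
print(round(f, 3), round(c, 3))  # 0.867 0.984 -- revised, not replaced
</code>

The revised belief is neither of the inputs; it is a new summary of all evidence so far, and it remains open to further revision.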

What does insufficient knowledge refer to? “Knowledge” here means any information that the agent can collect by relatively direct means, as well as any information it can derive from that through various forms of reasoning. The term refers to available information, because insufficient memory resources may render some knowledge inaccessible. A smart agent may invent ways of dealing with the limitations of its own internal memory, such as books, alarm clocks, computers, telephones, etc. – this way it does not need to keep everything in its head at all times, or rely on recalling everything at the right moment every time, which for many tasks, e.g. building skyscrapers, is simply impossible for a human cognitive agent. In this way, knowledge of how to use these technologies replaces the need to train the cognitive system itself for the specific tasks they ultimately serve, e.g. waking up at the right time.

We can now summarize. Intelligence is necessary because of real-world time constraints, and because of resource constraints that derive directly from temporal ones: limited processing power, limited memory, limited mobility (in the case of embodied agents), etc. Intelligence is the activity of achieving goals under temporal and resource constraints.



© 2012 Kristinn R. Thórisson

