Course notes.

AGI Requirements

Before we can answer the question “How do we build an intelligent system?” we must make a fairly big decision: We must decide how high we are aiming. It stands to reason that someone planning to build a small shed in their back yard will have a very different day acquiring materials than someone who is preparing to build a skyscraper. In one case a visit to the lumber yard is all that is needed; in the latter case calls to architects, contractors, city officials, etc. will be the order of the day. A key difference between the two – and the same holds for building intelligent systems – is that the high-rise builder must spend considerable time documenting what requirements the building must meet: if it is over 30 stories high multiple elevators will be needed, which will also be the case if any side of the building is 50 meters long. Thickness of walls, footprint of the ground floors relative to higher-up floors, and complexity of shape will all affect the amount of steel, cement, sand and water needed. All these factors interact in complex ways to affect the final price of the building, and thus of each apartment and office planned – the various prices for raw materials, the cost of handling, and the machines needed will be too much for any single human to figure out in their head. The task calls for a team of experts with their graphing calculators and spreadsheets.

If our aim is human-level intelligence we are in the same boat as the high-rise builder: We must think carefully about (a) the nature and structure of the system we are attempting to build, and (b) what kind of methodology seems most promising for achieving it. To start we will look at the former, i.e. the requirements that the features of a human-level intelligence put on the AI designer.

Before we go any further it might behoove us to stop and ask “Why human-level? – Why not superhuman-like, society-like, nature-like, or alien-like?”. The short answer (it could be made a lot longer) is that, when it comes to calling anything “intelligent”, even the most stubborn and conservative scientist or layman must agree that we can ascribe intelligence to humans, at least some of the time, lest the concept become something entirely different from what it is intended for. This makes human-level intelligence the only baseline that everybody agrees to, more or less, as a point of comparison for our artificial intelligences. By saying human-level A.I. we are saying that in major ways, possibly the most obviously beneficial and powerful ones, our A.I. will be capable of what a human is capable of. Picking human-level intelligence to compare against is useful in many ways. For one, we do not need to spend time arguing about the boundaries of particular smaller-scale examples of (what many would call) intelligences (e.g. thermostats, squirrels – even dogs); we immediately state that we set the aim quite high on many important factors, such as not only the ability to reason or to move about in a complex world, but also the expectation that our A.I. be capable of (human-like) creativity and ingenuity. Humans are also more capable than any other animal of extending their intelligence with various tools, and they are capable of applying their mental capabilities in an enormously wide range of ways, from inventing the toothpick to walking on the moon, from learning multiple languages to inventing mathematical and programming languages, using these to increase their own comfort and survivability and – ultimately – to create artificial minds.

Due to learning speed and limited lifetime, a single human is only capable of learning a subset of the full potential of what any one of us could potentially set our minds to. But what is the potential of each of us to learn? It is surely much greater than that of any other animal (read: intelligence) this planet has ever seen. If we assume that an average human individual – given sufficient time – can train themselves to reach average performance in 1% of all the things that all humans have ever trained themselves in (excluding perhaps the most bizarre contortions historically demonstrated by a few circus people), this is still an enormous range of possible fields, tasks, and know-how that a single individual is in principle capable of learning. (1% may even be an underestimate.) How is it that a human mind can be applied to so many things? What makes it the “general-purpose” processor that it seems in this sense to be?

The capabilities of a human mind can be broadly classified into two kinds along the dimensions of (a) knowing what and (b) knowing how. The former is generally thought of as “facts”, but could also be said to be truth statements about the world; the second has traditionally been connected more with robotics, and is about control. It is when both of these are combined in one system, and properly coordinated, that we get a very powerful system for doing all sorts of real-world tasks. It is in these kinds of systems that we start to see the kind of generality associated with human minds. Of course, as we mentioned already, every intelligent system must have some minimum of “knowing how”, since it must be able to act in the world; for the purposes of painting broad strokes in the present discussion we can ignore such obvious issues inherent in the classification. But some might argue that the classification is bogus, because both are obviously needed. I tend to agree to some extent. A control system that cannot do any kind of reasoning is going to be very limited, as it will probably lose out on the “G” in “AGI”. But conversely, a system that can only do reasoning (as we know it from e.g. academic and scientific work) can never be expected to learn how to control a time-bounded activity in a complex world, as it may never have the control capabilities called for. It is only if we stretch these concepts beyond their “decent” limits, as used in everyday language, that we can agree to either extreme being sufficient for achieving AGI, e.g. saying that inventing a control system for controlling e.g. a hexapod body in the swamps can be done via reasoning only, as long as it is combined with some sort of advanced self-programming capability. This stretches at least my own understanding of what “reasoning” is generally used to mean. Conversely, we might try to argue that implementing advanced control systems capable of some sort of self-description and reasoning could get us away from having to impart reasoning to the system from the outset – in which case we would only have replaced definition with tautology.

According to the preceding analysis it is not sufficient to refer only to reasoning when trying to define what is intelligent and what is not, as reasoning alone will not account for the many necessary control functions that can be found in a human mind – attention being prime among them. Conversely, an advanced control system devoid of reasoning capabilities – the ability to abstract, analyze, and adjust itself – will likely never reach the architectural sophistication required for AGIs. It may seem that by talking about growth we are diverting attention to something unrelated. But no. This discussion actually becomes much simpler by introducing the requirement of growth capability into our AGI-system-to-be: Assuming that any and all AGI systems, to be able to meet the high demands of multiple – a-priori unknown – environments, must be capable of advanced levels of self-reorganization removes the conceptual shortcomings associated with trying to understand (and define) intelligence based only on a particular limited viewpoint. This argument is not very difficult to uphold, as anyone can see that a system that has trained itself to be good at some complex task under some particular conditions will be significantly handicapped if moved to another environment. Think underwater versus desert; jungle versus outer space. While some of the task's root goals may be the same, the majority of sub-goals may in fact be vastly different. The greater the difference between two or more environments and tasks to be learned, the greater the difference between the state of the system before and after it has mastered both/all.

Architectural self-reorganization, a.k.a. self-programming, is in fact a hallmark of intelligence, and it is quite straightforward to map this concept onto a diverse set of systems, such as thermostats (no self-programming) and humans (some self-programming). A system that can get better at some task <m>X</m> is called a learning system, or a system capable of learning. A system that can get better at getting better – in other words, learn to learn – is a system capable of meta-learning. This is a system whose architecture is capable of growth. Humans are an example implementation of such a system. While meta-learning is not strictly necessary for cognitive growth – other routes being, for example, the ability to learn things relatively different from what has been learned before through effective application of analogies, and the ability to continuously grow one's own knowledge – meta-learning is perhaps the most powerful of the functions enabling cognitive growth.
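To make the distinction between learning and meta-learning concrete, here is a minimal sketch. It is purely illustrative: the toy task, the fixed-rate learner, and the rule by which the meta-learner adapts its own learning rate are assumptions made for this example, not part of any particular AGI design.

<code python>
# Contrast a system that learns (gets better at a task) with one that meta-learns
# (gets better at getting better, here by adapting its own learning rate).
# All names and parameters are hypothetical choices for illustration.

def task_error(skill: float) -> float:
    """Toy task: error shrinks as the skill parameter approaches 1.0."""
    return (1.0 - skill) ** 2

def learner(steps: int, lr: float = 0.1) -> list[float]:
    """Plain learning: improve the skill with a fixed learning rate."""
    skill, errors = 0.0, []
    for _ in range(steps):
        skill += lr * (1.0 - skill)          # gradient-like update toward the target
        errors.append(task_error(skill))
    return errors

def meta_learner(steps: int, lr: float = 0.1) -> list[float]:
    """Meta-learning: additionally adjust the learning rate when progress is too slow."""
    skill, errors = 0.0, []
    prev_err = task_error(skill)
    for _ in range(steps):
        skill += lr * (1.0 - skill)
        err = task_error(skill)
        if prev_err - err < 0.5 * prev_err:  # improvement too small -> learn to learn faster
            lr = min(1.0, lr * 1.5)
        prev_err = err
        errors.append(err)
    return errors

print(learner(10)[-1], meta_learner(10)[-1])   # the meta-learner ends with the lower error
</code>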








We are now done discussing what features allow us to comfortably define something as “human-level” – we are now in the realm of what is functionally necessary for an A.I. system to become human-level, or AGI: What are the key functions that a human mind implements that make it so different from – and in some ways more valuable than – other animal intelligences and narrow A.I. systems? The human mind is in some way “general” over and above what animal and narrow A.I. systems are – and it is this we want to achieve in our AGI.

There are some functions that are frequently brought up as candidates for being a hallmark of human-level intelligence, and which it would seem prudent to address. Listing these in no particular order: Creativity, inventiveness, insight, intuition, imagination, reasoning with uncertainty, experimentation, calculated risks, curiosity. Some of these may ultimately be highly desired and necessary features of AGIs, others may become optional or adjustable to varying degrees, depending on what we want to use our AGI for. The way these terms are used in everyday language makes it reasonable to assume that imparting them to an artificial system would be of great value. And there probably are more that are worthy of listing here, but let's stick with these.

“Creativity” is a concept that has been thrown around for centuries and there are at least 20 different definitions available in the literature for this term. In general, being creative is (at least) the ability to come up with non-obvious and novel (to varying degrees) ideas, solutions, suggestions, etc. To be termed “creative” a solution cannot be random – creativity will not be ascribed to a randomization process – and it cannot be obvious either. There are at least two ways to assess obviousness. The first is in light of what other minds from a group of minds have been or are able to come up with – a population-based measure. The other is in light of what can be deduced, or induced with not too much effort, by a single cognitive system, based on its available information and knowledge. One way to quantify an individual mind's progress on the creativity spectrum is to ask whether, other things being equal (the difficulty of the problems being solved and the rate of idea generation), the solutions and ideas the mind produces are improving in quality. Clearly, if we want to build an artificial general intelligence it would behoove us to require it to have at least some minimum ability to come up with non-obvious solutions to problems we present it with: An AGI should be creative. Some might venture to argue that AGI cannot be achieved without creativity. That does not mean, however, that creativity must be “manually imparted” or “force-fed” to the AGI – it could just as well be that creativity is a natural corollary of intelligence, possibly resulting from intelligence and creativity relying on precisely the same underlying mechanisms; in which case it would be a natural impossibility to build a non-creative AGI.

“Inventiveness” can, for the purposes of the present discussion, be deemed a skill that forms a subset of creativity. “Insight” has at least two meanings and uses, one referencing a “depth of understanding”, the other being more closely related to “intuition”. In the former meaning, an ability to “harness” our powers of insight (deep understanding) might give us powers to do what would otherwise be difficult or impossible – as in “with keen insight into the human condition, she resolved the tiffs and brawls that some members of the symphony orchestra had been involved in”. Insight, in this sense, can probably be reduced to understanding – we will return to that concept shortly.

“Imagination” is a cognitive function in which a mind can simulate or emulate impossible, unseen, unexplored, non-existing things, as well as alternative views on possible, previously explored and existing things, events, objects, actions, options, and so on. In general use the term sometimes also alludes to a certain (relatively high) level of creative output – as in “He is very imaginative – he comes up with new ideas all the time”. For this latter meaning we will assume that this conceptualization of cognitive abilities is covered by the terms “creativity” and “inventiveness”.

“Reasoning with uncertainty” has a bit of a different flavor than the other terms. In general, “reasoning” refers to an ability to use logic – in some way – to come up with conclusions based on particular premises. The various types of reasoning, which we will discuss in more detail in a later section, show that there are many more ways to use reasoning than for simple deduction (Socrates is a man; all men are mortal; hence, Socrates is mortal). Deduced knowledge is “inevitable knowledge”, because the conclusions derive directly from the premises. So in some sense deduction is the least interesting use of reasoning. But long deduction chains can have some interesting and unexpected results, and it can be argued that ordinarily people do not do enough deduction in their daily life, as for most people at least one paradox between their views and their behavior can be found every day (take, for example, the person who wants to be “generous” yet supports no third-world fund). Deduction is essentially the only reasoning that one does with full certainty. All other kinds of reasoning involve some uncertainty, to varying extents. Of course, to be useful, an AGI would need to be logical, to the fullest extent possible. Unfortunately it is difficult to say what that extent is or will be. Two kinds of reasoning that we most certainly would want our AGI to be capable of are abduction and induction. The former refers to the ability to infer causes from observations, e.g. “because the grass is wet, it may have rained yesterday”. Induction is essentially the primary basis for scientific inquiry: the ability to generalize from observations. No concrete proposals exist for how to imbue such a skill into an artificial entity, although plenty of ideas have been fielded.
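To make the three modes of inference concrete, here is a minimal sketch built around the wet-grass example. It is purely illustrative – the single-rule representation and the helper names (Rule, deduce, abduce, induce) are assumptions for this example, not a proposal for how an AGI should reason.

<code python>
# Deduction is certain; abduction and induction carry uncertainty.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    cause: str   # e.g. "rain"
    effect: str  # e.g. "wet_grass"

def deduce(rule: Rule, facts: set[str]) -> set[str]:
    """Deduction: given the rule and its cause, conclude the effect (certain)."""
    return {rule.effect} if rule.cause in facts else set()

def abduce(rule: Rule, observations: set[str]) -> set[str]:
    """Abduction: from an observed effect, hypothesize a possible cause (uncertain)."""
    return {rule.cause} if rule.effect in observations else set()

def induce(episodes: list[set[str]], cause: str, effect: str) -> float:
    """Induction: estimate how often the effect follows the cause across episodes,
    i.e. generalize a rule from repeated observations (uncertain)."""
    with_cause = [ep for ep in episodes if cause in ep]
    if not with_cause:
        return 0.0
    return sum(effect in ep for ep in with_cause) / len(with_cause)

rule = Rule("rain", "wet_grass")
print(deduce(rule, {"rain"}))                  # {'wet_grass'} -- certain conclusion
print(abduce(rule, {"wet_grass"}))             # {'rain'}      -- one possible explanation
print(induce([{"rain", "wet_grass"},
              {"rain", "wet_grass"},
              {"sprinkler", "wet_grass"}],
             "rain", "wet_grass"))             # 1.0 -- support for the generalization
</code>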

A primary way to test generalizations derived via induction is experimentation – another key method of modern science. Any generalization will imply non-obvious predictions whose truth value is unknown; by testing these predictions we can confirm or disprove the generalization. Continued failures to disprove a generalization, assuming the generalization is logical, provide support for its usefulness (and in some sense correctness); a single disproving result will of course invalidate the generalization. It may, however, continue to be useful, as can be seen in the continued use of Newtonian physics in spite of Einstein providing a more correct theory of physics which subsumes it. Taking calculated risks lies, in a way, on top of all the prior concepts we have covered: by applying knowledge, reasoning, creativity, inventiveness, and experimentation, an artificial system could be enabled to take “calculated” (really informed) risks. Such behavior is certainly observed in humans, and may become useful for realizing the full potential of AGIs. However, taking calculated risks is a murkier concept than most of the others, and it may be difficult to operationalize; it will certainly be difficult to make it a particular goal of building an AGI – to build an AGI capable of taking calculated risks. For now we will assume that taking calculated risks – in the most general and obvious interpretation of that concept – is likely to be an emergent property of most or all future AGIs, as a function of the fact that they will most likely be asked to do novel things that nobody has done before, and will therefore inherently require behavior of the kind that could be given that label.
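The confirm-or-disprove loop just described can be sketched in a few lines. The sketch below is illustrative only: a “generalization” is reduced to a predicate, an “experiment” to a callable, and the swan example and all function names are hypothetical.

<code python>
# Repeated failures to disprove a generalization lend it support;
# a single disproving observation invalidates it.
from typing import Callable, Iterable

def test_generalization(predict: Callable[[int], bool],
                        observe: Callable[[int], bool],
                        conditions: Iterable[int]) -> tuple[bool, int]:
    """Test a generalization's predictions under each condition.
    Returns (still_unrefuted, number_of_supporting_observations)."""
    support = 0
    for c in conditions:
        if predict(c) != observe(c):
            return False, support      # one disproving observation invalidates it
        support += 1                   # another failed attempt to disprove: more support
    return True, support

# Toy generalization: "all swans are white" -- it predicts every observed swan is white.
observations = ["white", "white", "white"]
observe = lambda i: observations[i] == "white"
print(test_generalization(lambda i: True, observe, range(len(observations))))  # (True, 3)

observations.append("black")           # a single black swan appears...
print(test_generalization(lambda i: True, observe, range(len(observations))))  # (False, 3)
</code>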

Curiosity is likely to exist in animals because all intelligent systems must continuously deal with uncertain information – there is no guarantee, for instance, that the stairs to my attic won't break as I ascend, yet I will not spend all morning checking every single joint, every single fiber in the wood, and every single nail, to make absolutely sure, even though this means I might fall and break my leg in the process. Curiosity helps fill in the gaps of our missing knowledge – why is that reflection so peculiar? we may think, as we walk through a revolving chrome-and-glass door, an act that may very well result in new knowledge that helps us later avoid walking into it and hurting ourselves.

What about emotion? Emotions are certainly a real phenomenon. There are at least two sides to the emotion coin that we must address. First, emotions have an experiential component that most people think of when using the word. The experience of feeling sad, of feeling guilt, pain, despair, anger, frustration – these are typically experienced by every person, to some extent, at least once per year, and in many cases much more often. Second, there is the effect part – the shoe flying towards me as my classmate, his face twisted with anger, takes it out on me. I see his facial expression, I feel the shoe hit my head – but I don't experience his emotion directly. This may of course create other emotions in myself, but these are mine, not his. It can easily be argued – but we won't spend much space on it here – that it is only the latter, effectual part of emotions that is relevant to AGI. As Chalmers has convincingly argued in his thought experiments, it is not difficult to imagine a zombie that feels nothing, yet whose behavior is indistinguishable from that of any human. This is because the only knowledge anyone has of experience is their own experience. When someone tells me they feel pain I have to believe them – I only have their word and their behavior to judge from; I cannot possibly feel their pain, only my own. Therefore, if everyone around me were a really amazing actor, for all I know the only person on the planet who actually experiences pain is me. The role of this experience in actually controlling behavior has been debated for decades; what most agree on, however, is that the effect of emotions on behavior can be cast in a control paradigm: emotions have a role in affecting the way we act, think, and even perceive the world. For the purposes of AGI – since the focus of the present quest is not experience per se but intelligence – we can ignore the experiential part of emotions and focus on the control part. What is the control exerted by emotion in natural cognition? One primary effect that has been discerned is what has been called “focusing of attention” – the steering of our intake of information (and thus what we spend our time thinking about). This effect is often encountered in conditions of stress, frustration and anger. Another is attending to our bodily health – the most obvious example being when we are in physical pain. Emotions also seem to control the consolidation of memories – high emotional states tend to induce stronger memorization than states of relaxation. All of these are undoubtedly useful heuristics for evolving and growing up in nature; whether we will end up wanting our AGI to have all of these, or some of these, remains to be seen. For now we will assume that any reasonably powerful cognitive architecture's control system should be able to realize such functions. Of course, to implement emotion-like control, the architecture must also be capable of reasonably sophisticated contextual recognition, since the rational evocation of emotional control functions always relies on the juxtaposition of an agent with its environment.
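The control side of emotion can be sketched as a simple biasing signal. The example below is a toy illustration only – the arousal variable, the linear attention weighting, and the memory-strength formula are assumptions chosen for clarity, not claims about how natural emotion actually works.

<code python>
# An "emotional" arousal signal biases which percepts get attention and how
# strongly an episode is consolidated in memory. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Percept:
    label: str
    relevance: float      # task relevance, 0..1
    threat: float = 0.0   # contribution to arousal, 0..1

@dataclass
class Agent:
    arousal: float = 0.2                          # current emotional intensity, 0..1
    memory: list[tuple[str, float]] = field(default_factory=list)

    def attend(self, percepts: list[Percept]) -> Percept:
        # High arousal narrows attention toward threat-related percepts;
        # low arousal weights percepts mostly by task relevance.
        score = lambda p: (1 - self.arousal) * p.relevance + self.arousal * p.threat
        return max(percepts, key=score)

    def consolidate(self, episode: str) -> None:
        # Stronger emotional states induce stronger memorization.
        self.memory.append((episode, 0.3 + 0.7 * self.arousal))

agent = Agent(arousal=0.8)
focus = agent.attend([Percept("lecture slide", 0.9), Percept("fire alarm", 0.1, threat=1.0)])
agent.consolidate(f"attended to {focus.label}")
print(focus.label, agent.memory)   # the alarm wins attention; the episode is strongly weighted
</code>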

Is natural language necessary for AGIs? There can be no doubt that natural language has a high level of utility for any artificial system – being able to ask an intelligent system in plain language how it plans to achieve a particular task, to summarize its operations for the past month in a few sentences, or to report on its current status, and get a reply back in concise and chiseled language, could come in handy on many occasions. Whether it is necessary for an AGI to be language-capable is a different question; some go so far as to argue that (natural) language is necessary for any system capable of symbol manipulation and higher-level thinking. This is, in my opinion, an empirical question for which at present there is not sufficient evidence to argue convincingly either way. Certainly the kinds of tasks that some dogs, apes, crows, and horses are capable of have a high cognitive-functional overlap with those of humans, and would seem worthy of being called symbol manipulation under even the most stringent definition of that term. We could probably spend considerable space speculating on the importance of language to social interaction and “thinking in groups”, because to a large extent a human individual's cognitive feats, such as inventing a new mathematics or deciphering self-sustaining processes in living systems, depend on their socio-historic environment. There can be little doubt that had Turing lived in the bronze age he would not have had the tools or societal context to come up with his ideas about computers. The primary way of allowing such effects in society is via natural language – so presumably, if we wanted to replicate similar effects with AGIs, we might have to give them the ability to communicate at levels that are at least as efficient as natural language, and there might be even better ways that we could invent. Suffice it to say at this point that it is possible that certain kinds of thinking are difficult to do, impractical, or even perhaps impossible, without (natural) language – and that there is almost certainly a large set of functions and capabilities that a system might possess which do not – strictly speaking – require language, yet are still both necessary and sufficient for an A.I. to deserve being called AGI.








We can now attempt a bit of summarization. The intelligence of a system is determined by a combination of the system's architecture, its capabilities, and its use of prior experience – in the form of internalized knowledge. A system can access internalized information in a number of ways, each with its own pros and cons, including associatively, via similarity (vision, hearing), or symbolically via language and signs. It can use this knowledge for various ends, anything from taking a single step as part of a series (going for a walk) to getting a promotion, from buying the cheapest lunch to driving a car, from cleaning drinking-water to walking on the moon. For convenience, intelligence is often defined as the ability to do <m>X</m>, where <m>X =</m> {chess, vacuuming the floor, driving a car through the desert, chat, … }. This is convenient, as it is simple, but it brings with it a difficulty – the difficulty of separating human capabilities from those of algorithms or less intelligent animals. The only way to do so is along a continuum of task complexity, but constructing a foolproof continuous scale of task complexity for all tasks relevant to human cognition is a very difficult task indeed.

This common understanding of intelligence as “ability to do <m>X</m>” ignores the ability to learn. To be useful in a complex environment an agent needs to be able to adapt. Also, not everything can be known from the outset. Therefore, the ability to learn is critical to any AGI. Some important parameters related to learning that we can list include:

  • Speed of learning
  • Retention of what has been learned - especially in light of learning new things
  • Ability to apply what has been learned for the achievement of goals
  • Ability to learn from a variety of sources - reading books, learning-by-doing, watching TV, from teachers, etc.

So we could rewrite the above “common” explanation of intelligence as “Intelligence is the ability to perform, and learn to perform, <m>X</m>”, where <m>X =</m> {chess, vacuuming the floor, driving a car through the desert, chat, … }.

Actually, since we are listing the skills here a-priori, this description also has shortcomings, most notably ignoring the ability to learn an open-ended variety of things. Learning a variety of things brings up all sorts of concepts in cognitive functioning that have not been mentioned yet, including:

  • learning transfer - the ability to benefit from having learned one skill when learning another; the amount of transfer is measured by how much is saved in learning time/quality, relative to how different the two skills are (a sketch of this measure follows the list)
  • balancing all sorts of learning goals
  • managing resources via attention
  • making plans for heterogeneous goals, skills, opportunities, environments, etc.
  • the ability to learn to get better at learning!
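Below is a minimal sketch of the savings-based measure of learning transfer mentioned in the first item above. The skills and the step counts are hypothetical numbers invented for the example; the point is only the formula: effort saved divided by effort from scratch.

<code python>
# Learning transfer measured as the relative saving in learning effort
# when a second skill is acquired after a related first one.

def transfer_savings(steps_from_scratch: int, steps_after_related_skill: int) -> float:
    """Classic savings measure: fraction of learning effort saved thanks to prior learning."""
    return (steps_from_scratch - steps_after_related_skill) / steps_from_scratch

# Hypothetical numbers: learning to ride a motorcycle takes 800 practice steps from
# scratch, but only 500 steps for someone who already rides a bicycle.
print(transfer_savings(800, 500))   # 0.375 -> 37.5% of the learning effort was saved
</code>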

Humans not only know a variety of facts and perform a variety of tasks, using a variety of means and a variety of sensors; they can also learn a variety of facts, and a variety of skills, through a variety of means, using a variety of sensors. Humans can also turn skills into facts and facts into skills, generalize from experience, anticipate the future, and draw useful analogies between seemingly completely unrelated things. Possibly all of these are needed to realize creativity and inventiveness. And let's not forget that, perhaps most interestingly, intelligent systems can learn about themselves. Introspection may be one of the marginally necessary features of general intelligence. We will come back to this later.

We must remind ourselves that the many functions of human intelligence are interconnected in intricate ways. Such interlinking and interconnectedness may in fact be part of the explanation of why humans are capable of solving the multitude of challenges that the world around us presents. Some natural and artificial systems – of which intelligent systems are almost certainly a subset – are composed of multiple interconnected functions, each of which, upon removal, will significantly alter the system's operation, and possibly its function and even its nature. We can see this in that if we start to remove any of the many functions of an intelligent system it very quickly stops resembling itself and starts resembling something else. This gives us a new concept related to our prior concept of “marginally necessary” – the concept of holistic necessity. We call such systems holistic systems. Removing the spark plugs from an automobile engine is guaranteed to disable it – spark plugs are holistically necessary for automobile engines. Removing the coolant (or, in the case of some, e.g. the air-cooled VW Beetle, the cooling function) from an engine will not disable it, only make it more difficult to use, as overheating comes too easily and too often. Yet all modern automobile engines have a cooling function – cooling is marginally necessary to the functioning of an automobile engine (notice, however, that it is not necessary, in either sense of those terms, to the definition of an engine or an automobile). Removing the battery from a modern automobile will render it incapable of starting – but most engines, once started, will continue to run without a battery. And engines can be started by rolling them downhill (at least with a stick shift). The battery is thus at the border of holistic and marginal necessity. In this sense intelligent systems are certainly holistic, but they have a number of marginally necessary features and functions as well – that is, some of the “smaller” cognitive functions of a natural intelligence can quite possibly be removed without severely handicapping the system in question or halting it.

We have now looked at some of the more advanced functions that are desirable – and possibly necessary – candidates for implementing human-level artificial intelligence. Some of those discussed may be difficult to classify as holistically or marginally necessary – more experimentation and theorizing is needed. This is partly because of the interconnectedness of intelligent systems: the interdependencies between cognitive functions in natural systems make it difficult to say which functions are a must. To be sure, some of them are certainly holistically necessary to the functioning of a general intelligence. Some, however, may turn out to be only marginally necessary, and a few not necessary at all. This issue will get clearer as we discuss architectures.

We have already sunk too deep into convoluted cognitive concepts; let's not forget the key aspects of intelligent systems. A perception-action loop – an ability to perceive and act – coupled with what we could call “understanding”, seems to nicely and concisely capture what cognitive systems are. No other systems, or system categories, have these properties. These are two critical concepts that must be at the center of our discussion at all times. In the following discussion it is useful to distinguish between terms that describe output (externally observable) behaviors of intelligent systems, versus internal (operational, architectural) functions and features. In the former camp are for example creativity, inventiveness, and understanding. In the latter camp are emotions, insightfulness, and intuition. The former are more useful for when we want to talk about what we expect our AGI to be able to do; the latter are more useful when discussing what kinds of architectures we expect to have to build to create machines capable of the former.



2012©K.R.Thórisson
