
Course notes.

What is Artificial General Intelligence?

One of the most highly developed skills in contemporary Western civilization
is dissection: the split-up of problems into their smallest possible components.
We are good at it. So good, we often forget to put the pieces back together again.
- Alvin Toffler 
[in Prigogine & Stengers, Order Out of Chaos, W. Heinemann Ltd., UK, 1984]



Main points:

  • Research on fundamental questions about general-purpose (read: widely-applicable) intelligence has taken a back seat in the pursuit of AI systems.
  • There are fundamental differences between systems with a narrow focus and those with general cognitive capabilities.
  • The fundamental question of AI – how to implement human-level intelligence in a machine – must be brought back to the forefront of AI research.


There is nothing about the concept of “artificial intelligence” – i.e. a man-made system capable of the many things we consider to call for “intelligent thought” – that prevents a whole field from developing an aversion to addressing autonomy – the capacity of a system to function “by itself”, more or less, without needing constant (or even intermittent) help or input from the outside in the form of tutoring, adjustment, re-alignment, or resetting. Yet autonomy is a critical part of any intelligent system: the more input a system requires from the outside, the less autonomous it is. A truly intelligent system should be able to “figure things out for itself”, right?

It is very difficult to pry the concept of autonomy away from the concept of intelligence – it is difficult to imagine what an intelligent system that was not also, in some significant way, autonomous would look or behave like. Yet it turns out that most A.I. research since the Dartmouth conference in 1956 has focused on isolated slices of the “intelligence pie”.

Intelligence is a natural phenomenon. We see many examples of it – the ability of animals to learn new things and then perform them, to adapt to changes in their tasks, in their surroundings, and in their own abilities. These provide inspiration for an engineering effort we call “artificial intelligence”, which aims to put such capabilities into man-made artifacts. However, it turns out that present knowledge of how to build such systems is not nearly sufficient to come close to replicating even some of the skills exhibited by a three-year-old human child. The pursuit therefore has another side: the research, or scientific, side, whose focus is to uncover the principles behind intelligent systems, so that we may derive engineering principles for how to actually implement them in artificial things. And the list of potential applications is enormous, because intelligence is essentially about control, and the number of man-made and natural systems that humanity would like to control better increases every year – everything from air conditioning and elevators in high-rises, to the latest digital cameras and software applications, from unwanted environmental phenomena like forest fires to the highly desired reduction of carbon dioxide in the air we breathe.

This is the subject of artificial intelligence. In the early days of the field, looking at least as far back as the days of Turing, the inspiration from nature made people wonder how long it would be until we could create an artificial mind that rivaled the human mind. The human mind has, it turns out, a lot of potential for creating solutions to challenges, coming up with new ideas for how to live life, what to do with our leisure time, and so on. Harnessing some – or all – of this potential to our advantage, applying it to complex challenges large and small, seemed a very promising possibility. The prerequisites for this being possible – the fact that thought could arise from the operation of electrical devices (neurons), and the realization that information processing could be made the basis for doing anything from calculations to controlling robot arms – had already been identified and understood well enough to make it seem likely that a human-level artificial mind might not be too far off in the future.

Unfortunately, the challenge of building a “brain” by hand that rivals the human mind – even on only one or two significant dimensions – seems to some as far off as ever. As a result, and possibly for other reasons, researchers on the scientific side have for the most part put aside the dream of a highly general artificial intelligence and pursued goals whose solutions seem closer in time. After numerous decades of thinking that human-level intelligence was only a decade off, this reduction in ambition may seem warranted. And possibly it is. But science has never been considered, at least by those at the forefront of the field, to be at the mercy of the difficulty of its topics. In other words, when choosing to work on a particular topic, question, or domain, science does not ask “How difficult does it seem?” and dismiss it if the answer is “Very” or even “Enormously”. Nor does it ask “How useful does this work seem to be?” as a way to decide what to work on. If that were the case, Boolean logic would not have predated the electronic computer, nor would the airplane have flown before the mathematics of aerodynamics explained the possibility of flight.

And yet the field of AI – engineering practice and scientific inquiry alike – seems to have decided that the pursuit of human-level intelligence is either too far off in the future, too difficult, or both, to justify working on it as a key focus. The mainstream research community seems to have chosen its topics by looking in its toolbox and asking “What can be done with these present tools?” That is why many people demarcate their academic careers within the confines of what can be done with artificial neural networks, or with fuzzy logic, or with classical logic. This is not, I should hasten to say, how other scientific fields decide the bulk of what to work on; they generally pick research questions by how important they seem, how fundamental they appear, or by some other factors closely tied to the phenomenon they have chosen to study. Thus, the question of which biological phenomena to study in biology is not decided by what kinds of processes cellular automata can be used to model, or by the resolution of the latest imaging equipment, but in fact quite the opposite: the biological processes deemed most important, critical, fundamental, or interesting are used to decide which tools – simulation tools (whether cellular automata or something else), imaging equipment, and so on – researchers actually attempt to build.

As a phenomenon to be observed in nature, intelligence comes in many flavors and has many sides and forms of expression. This has perhaps made its study even more difficult – how can we say that “intelligence” is one thing, when it has so many realizations and functions? Well, automobiles also have a number of realizations and functions. Even more so do laptop computers. Yet we have no difficulty saying that something is a “laptop” while something else is not. But the analogy only goes so far: for a laptop we can work on the screen, memory, hard drive, and battery technology separately, trusting that some day we will put the pieces together and make a laptop (the analogy is imperfect, of course, because unlike intelligences, laptops are artifacts that already exist); intelligence is not modularizable in the same way. By reducing intelligence to, for example, the ability to play chess at human grand-master level, several key capabilities of natural intelligences are cut out of the equation. The hypothesis put forth by some of the founding fathers of the field of A.I. – that a machine capable of beating a human grand master at chess would have to be generally intelligent – may have seemed plausible several decades ago, but the evidence is now in: the hypothesis could hardly have been proven more wrong. As a case in point, Deep Blue, the computer/software system that beat grand master and past world champion Garry Kasparov in 1997, was not only found incapable of doing any other task that we generally consider to require intelligence, it was found essentially devoid of any other use whatsoever, no matter how hard the IBM researchers scratched their heads trying to transfer some of the work that went into Deep Blue over to other tasks, fields, and projects. But, you may ask, can these missing ingredients – whatever they may be – not be added in afterwards? The evidence here is almost as conclusive: unfortunately, they cannot. This will become clear when we look further at what the “ingredients” of the “intelligence pie” are.

By now it should be clear that the “G” in “AGI” is an attempt to put the emphasis back on holistic intelligence in the pursuit of artificial intelligence. It is there to re-invigorate the hopes and dreams of the founding fathers of A.I. – Alan Turing, Marvin Minsky, Allen Newell, and others – who thought that it might be possible to challenge human intelligence with a man-made information processing machine. Sure, they got some or most of their methodologies, assumptions, and predictions wrong, but we still agree with them: we still think it is possible. But we must choose our tools wisely and our methodology carefully, and most importantly: we must not be tempted to simplify the thing we are studying – intelligence – so much that it starts to differ from the very phenomenon that got us interested in the first place, or, even worse, turns into something else entirely.



© 2012 K. R. Thórisson
