Course notes.

What is General Machine Intelligence?

One of the most highly developed skills in contemporary Western civilization
is dissection: the split-up of problems into their smallest possible components.
We are good at it. So good, we often forget to put the pieces back together again.
- Alvin Toffler 
[as cited in Prigogine & Stengers, Order Out of Chaos, W. Heinemann Ltd., UK, 1984]



Main points:

  • Research on fundamental questions about general-purpose (read: widely-applicable) intelligence has taken a back seat in the pursuit of AI systems.
  • There are fundamental differences between systems with a narrow focus and those with general cognitive capabilities.
  • To make real progress in AI, its original question – how to make a truly intelligent machine – must be brought back to the forefront of AI research.


When chosen in 1956 as the name of a new research field, “artificial intelligence” referred to any man-made system capable of doing the many things considered to require “intelligent thought”. Unfortunately, the challenge of building a mind that rivals the human mind – even one matching only a few of the many features of its natural counterpart – seems to some as far away as ever.

Intelligence is a natural phenomenon. We see many examples of intelligence in nature – the ability of animals to learn new things and then put them to use, to adapt to changes in their tasks, in their surroundings, and to changes in their own abilities. These provide inspiration for an engineering effort we now call “artificial intelligence”, which aims to put such capabilities into man-made artifacts. However, it turns out that present knowledge of how to build such systems is not nearly sufficient to come close to replicating even some of the skills exhibited by a three-year-old human child. The pursuit therefore has another side – the research, or scientific, side – whose focus is to uncover the principles behind intelligent systems in general, so that we may derive engineering principles for how to actually implement them ourselves in artificial things. Should such an effort succeed, the list of applications is potentially enormous, because intelligence is essentially about control, and the number of man-made and natural systems which humanity would like to control, or control better than we can at present, increases every year – everything from air conditioning and elevators in high-rises, to the latest digital cameras and software applications, social media filtering and recommendations, to unwanted environmental phenomena like forest fires, to the highly desired reduction of carbon dioxide in the air we breathe.

So this is the subject of artificial intelligence. In the early days of the field, looking even further back in time than the days of Turing, the inspiration from nature made people wonder how long it would be until we could create an artificial mind that rivaled the human mind. The human mind, it turns out, has a lot of potential for creating solutions to challenges, coming up with new ideas for how to live life, what to do with our leisure time, and so on. Harnessing some – or all – of this potential to our advantage in a machine, applying it to complex challenges large and small, has seemed a very promising possibility. The prerequisites for this being possible – the fact that thought could arise from appropriately organized electrical devices (neurons), and the realization that information processing could in fact be made the basis for doing anything from calculations to controlling robot arms – had already been identified and understood to a sufficient level to make it seem likely that a human-level artificial mind might not be too far off in the future.

A system that is autonomous can function “by itself”, more or less, without needing constant or intermittent help or input from the outside, whether in the form of tutoring, adjustment, re-alignment, or resetting. It is very difficult to pry the concept of autonomy away from the concept of intelligence – it is difficult to imagine what an intelligent system would look or behave like if it were not also, in some significant way, autonomous – that is, capable of acting on its own. So autonomy is essential to any intelligent system: The more input that is required from the outside, the less autonomous the system is. A truly intelligent system should be able to “figure things out for itself”, right?

For the past 25 years, researchers on the scientific side have, for the most part, put aside the dream of a highly general artificial intelligence, and pursued goals whose solutions seem a bit closer in time. After numerous decades of thinking human-level intelligence was “only a decade off”, a reduction in ambition may have seemed justified. And maybe it was. It just seems so difficult! However, several companies with deep pockets have recently started to discuss this goal again. Of course, the pursuit of scientific knowledge has never been considered by the most forward-looking scientists to be at the mercy of the difficulty of its many and varied topics; otherwise there could hardly be much progress in science! The scientific method has always delivered the best, most reliable knowledge, especially when taking the long view. But it requires dedication, patience, and creativity, and above all, the correct application of scientific principles. When choosing to work on a particular topic, question, or domain, science should not ask “How difficult does it seem?” and dismiss it if the answer is “very” or “enormously”. How could evolution or DNA have been discovered in our search for the origin of the species if that were the case? Neither does science ask “How useful does this knowledge we seek seem to be?” as the main way to decide which topics are worthy of study. If that were the case, Boolean logic would not have predated the electronic calculator (computer), and Einstein's theory of relativity would not have predated space flight.

And yet, the field of AI – engineering practice and scientific inquiry alike – seems to have decided that the pursuit of human-level intelligence is either too far off in the future, too difficult, or both, to justify making it its main focus. Instead, the mainstream research community seems to have chosen the topics it works on by looking in its toolbox and asking “What can be done with these tools?” That is why many people choose their academic career by what fits within the confines of the currently available techniques, be they artificial neural networks, fuzzy logic, Bayesian networks, brute-force genetic algorithms, or simple programming tricks. This is not, I must emphasize, how other scientific fields decide the key questions to work on; they generally try to rank and choose research questions by how important they seem, how fundamental they appear to be, or by some other factors closely tied to the key phenomenon they are interested in. Thus, the question of which phenomena to study in biology is not decided by what kinds of processes cellular automata can model, or by the resolution of the latest imaging equipment, but in fact quite the opposite: The biological processes deemed most important, critical, fundamental or interesting are used to decide what new kinds of tools – simulation tools (whether cellular automata or something else), imaging equipment, etc. – should actually be built.

As a phenomenon to be observed in nature, intelligence comes in many flavors and has many sides and forms of expression. This has perhaps made its study even more difficult – how can we say that “intelligence” is one thing, when it has so many realizations and functions? Well, automobiles also have a number of realizations and functions. Even more so do laptop computers. Yet we have little difficulty in saying that something is a “laptop” while something else is not. To take this analogy further: while a laptop can be built by working on the screen, memory, hard drive, and battery technology separately, trusting that some day we will put it all together and make a laptop, intelligence is not modularizable in the same way (this analogy is not perfect of course, because unlike intelligences, laptops are an artifact that already exists – so bear with me). By reducing intelligence to, say, the ability to play chess at human grandmaster levels, several critical capabilities of natural intelligences are cut out of the equation, to which we will come back later. The hypothesis – put forth by some of the founding fathers of the field of A.I. – that a machine able to beat a human grandmaster at chess would have to be generally intelligent may have seemed plausible several decades ago, but the evidence is now in: This hypothesis could hardly have been proven more wrong. As a case in point, Deep Blue, the computer/software system that beat grandmaster and past world champion Garry Kasparov in 1997, was not only found incapable of doing any other task that we generally consider intelligence necessary for, it was essentially found devoid of any use whatsoever other than playing chess, no matter how hard the IBM researchers scratched their heads in trying to transfer some of the massive work that went into it to other tasks, fields, and projects (a team of experts spent two years and millions of dollars to find something else for Deep Blue to do – with no success). But, you may ask, cannot these missing mental capabilities – whatever they are – be added in afterwards to such a system? In short: No. And the evidence for that is almost as conclusive as the evidence presented by the Deep Blue story, as will become clear when we look further at what the “ingredients” of the “intelligence pie” are.

By now it should be clear that the “g” in “GMI” (general machine intelligence) is an attempt to put the emphasis back on holistic intelligence in the pursuit of artificially intelligent systems. It is there to re-invigorate the hopes and dreams of the founding fathers of A.I., such as Alan Turing, Marvin Minsky, Allen Newell, John McCarthy, and others, who thought that it might be possible to challenge human intelligence with a man-made information processing machine. Sure, they got some or most of their methodologies, assumptions, and predictions wrong, but that is inevitable in the early days of any scientific field. And we still agree with their main vision – that this goal is possible to achieve. However, we must choose our methodology carefully, hone our tools thoughtfully, and most importantly: We must not be tempted to simplify the thing we are studying – intelligence – so much that it starts to differ significantly from the very phenomenon that got us interested in this pursuit in the first place – or, even worse, starts to look like something else entirely.

P.S. Research in AI sits on two pillars, engineering and science. In engineering the goal is to follow a model – to make the world behave according to the blueprint, be it a bridge, a house, a computer, a network, or something else. Science strives to discover the model, which is not known. These approaches work together in AI, but if your main goal is not science – the uncovering of new knowledge – then it is perfectly acceptable to build a system with a practical purpose. If you're a scientist, it is perfectly acceptable to use any and all engineering tools and tricks in your search for knowledge. Just don't confuse the two end goals; doing so will confuse everything and everyone, and while you may be able to get away with it (publish a lot of papers, get awards, get rich even), you may mislead the field you belong to – the youngsters coming into the field seeing your legacy – because you make them think that an unclear focus is the norm. This is, I'm afraid, very much the current state of affairs in the field of AI.



© 2020 K. R. Thórisson
