Course notes.
What is Artificial General Intelligence?
One of the most highly developed skills in contemporary Western civilization is dissection: the split-up of problems into their smallest possible components. We are good at it. So good, we often forget to put the pieces back together again. - Alvin Toffler [as cited in Prigogine & Stengers, Order Out of Chaos, Heinemann Ltd., UK, 1984]
Main points:
- Research on fundamental questions about general-purpose (read: widely-applicable) intelligence has taken a back seat in the pursuit of AI systems.
- There are fundamental differences between systems with a narrow focus and those with general cognitive capabilities.
- To make real progress in AI, its original question – how to make a truly intelligent machine – must be brought back to the forefront of AI research.
When chosen in 1956 as the name of a new research field, “artificial intelligence” referred to any man-made system capable of doing the many things considered to require “intelligent thought”. Unfortunately, the challenge of building a mind that rivals the human mind – even on only a few of the many capabilities of its natural counterpart – seems to some as far away as ever.
Intelligence is a natural phenomenon. We see many examples of intelligence in nature – the ability of animals to learn new things and then perform them, to adapt to changes in their tasks, in their surroundings, and to changes in their own abilities. These provide inspiration for an engineering effort we call “artificial intelligence”, which aims to put such capabilities into man-made artifacts. However, it turns out that present knowledge of how to build such systems is not nearly sufficient to come even close to replicating the skills exhibited by a three-year-old human child. The pursuit therefore has another side, the research or scientific side, whose focus is to uncover the principles behind intelligent systems, so that we may derive engineering principles for how to actually implement them ourselves in artificial things. Should such an effort succeed, the list of potential applications is enormous, because intelligence is essentially about control, and the number of man-made and natural systems which humanity would like to control, or control better, increases every year – counting everything from air conditioning and elevators in high-rises to the latest digital cameras and software applications, from unwanted environmental phenomena like forest fires to the highly desired reduction of carbon dioxide in the air we breathe.
So this is the subject of artificial intelligence. In the early days of the field, looking even further back in time than the days of Turing, the inspiration from nature made people wonder how long it would be until we could create an artificial mind that rivaled the human mind. It turns out the human mind has a lot of potential for creating solutions to challenges, coming up with new ideas for how to live life, what to do with our leisure time, and so on. Harnessing some – or all – of this potential to our advantage, applying it to complex challenges large and small, has seemed a very promising possibility. The prerequisites for this being possible – the fact that thought could be produced through appropriate arrangements of electrical devices (neurons), and the realization that information processing could in fact be made the basis for doing anything from calculations to controlling robot arms – had already been identified and understood well enough to make it seem likely that a human-level artificial mind might not be too far off in the future.
A system that is autonomous can function “by itself”, more or less, without needing constant or intermittent help or input from the outside, whether in the form of tutoring, adjustment, re-alignment, or resetting. It is very difficult to pry the concept of autonomy away from the concept of intelligence: it is hard to imagine what an intelligent system would look like, or behave like, that was not also, in some significant way, autonomous – that is, capable of acting on its own. So autonomy is essential to any intelligent system: the more input that is required from the outside, the less autonomous the system is. A truly intelligent system should be able to “figure things out for itself”, right?
Researchers on the scientific side have, for the most part, put aside the dream of a highly general artificial intelligence and pursued goals whose solutions seem closer in time. After several decades of thinking human-level intelligence was “only a decade off”, a reduction in ambition may seem justified. And possibly it is. But the pursuit of scientific knowledge has never been considered, at least by the foremost scientists, to be at the mercy of the difficulty of its many and varied topics: the scientific method has always delivered the best, most reliable knowledge, especially when taking the long view. In other words, when choosing to work on a particular topic, question, or domain, science does not ask “How difficult does it seem?” and dismiss it if the answer is “very” or “enormously”. If that were the case we would never have uncovered, for example, evolution or DNA in our search for the origin of species. Neither does science ask “How useful does this work seem to be?” as a main way to decide what are worthy topics of study. If that were the case, Boolean logic would not have predated the electronic computer, and Einstein's theory of relativity would not have predated space flight.
And yet the field of AI – engineering practice and scientific inquiry alike – seems to have decided that the pursuit of human-level intelligence is either too far off in the future, too difficult, or both, to justify working on it as its main focus. Instead, the mainstream research community seems to choose the topics it works on by looking in its toolbox and asking “What can be done with these tools?” That is why many people choose their academic career by what fits within the confines of the currently available techniques, be it artificial neural networks, fuzzy logic, Bayesian networks, brute-force genetic algorithms, or simple programming tricks. This is not, I must emphasize, how other scientific fields decide the bulk of what to work on; they generally try to order and choose research questions by how important they seem, how fundamental they appear to be, or by some other factor closely tied to the phenomenon they have chosen to study. Thus, the question of which biological phenomena to study in biology is not decided by what kinds of processes cellular automata can be used to model, or by the resolution of the latest imaging equipment, but in fact quite the opposite: the biological processes deemed most important, critical, fundamental or interesting are used to decide what kinds of tools – simulation tools (whether cellular automata or something else), imaging equipment, etc. – are actually attempted to be built.
As a phenomenon to be observed in nature, intelligence comes in many flavors and has many sides and forms of expression. This has perhaps made its study even more difficult – how can we say that “intelligence” is one thing, when it has so many realizations and functions? Well, automobiles also have a number of realizations and functions. Even more so do laptop computers. Yet we have little difficulty in saying that something is a “laptop” while something else is not. To take an analogy: while for a laptop it makes sense to work on the screen, memory, hard drive, and battery technology separately, thinking that sometime, some day, we will put it all together and make a laptop, intelligence is not modularizable in the same way (this analogy is not perfect of course, because unlike intelligences, laptops are an artifact that already exists – but bear with me). By reducing intelligence to, say, the ability to play chess at human grand-master level, several critical capabilities of natural intelligences are cut out of the equation, to which we will come back later. The hypothesis – put forth by some of the founding fathers of the field of A.I. – that a machine able to beat a human grand master at chess would have to be generally intelligent may have seemed plausible several decades ago, but the evidence is now in: this hypothesis could hardly have been proven more wrong. As a case in point, Deep Blue, the computer/software system that beat grand master and former world champion Garry Kasparov in 1997, was not only found incapable of doing any other task that we generally consider to require intelligence, it was essentially found devoid of any use whatsoever other than playing chess, no matter how hard the IBM researchers scratched their heads trying to transfer some of the massive work that went into it to other tasks, fields, and projects. But, you may ask, cannot these missing mental capabilities – whatever they are – be added in afterwards? In short: no. The evidence for this is almost as conclusive, as will become clear when we look further at what the “ingredients” of the “intelligence pie” are.
By now it should be clear that the “G” in “AGI” is an attempt to put the emphasis back on holistic intelligence in the pursuit of artificially intelligent systems. It is there to re-invigorate the hopes and dreams of the founding fathers of A.I., such as Alan Turing, Marvin Minsky, Allen Newell, and others, who thought that it might be possible to challenge human intelligence with a man-made information processing machine. Sure, they got some or most of their methodologies, assumptions, and predictions wrong, but we still agree with their main hope – that this is possible. But we must choose our tools wisely, our methodology carefully, and most importantly: we must not be tempted to simplify the thing we are studying – intelligence – so much that it starts to differ significantly from the very phenomenon that got us interested in the first place, or even worse, starts to look like something else entirely.
©2018 K. R. Thórisson