A system that is autonomous can function "by itself", more or less, without needing constant or intermittent help or input from the outside, whether in the form of tutoring, adjustment, re-alignment, or resetting. It is very difficult to pry the concept of autonomy away from the concept of intelligence -- it is difficult to imagine what an intelligent system that was //not also//, in some significant way, //autonomous// -- that is, capable of acting on its own -- would look or behave like. So //autonomy// is essential to any intelligent system: The more input that is **required** from the outside, the less autonomous the system is. A truly intelligent system should be able to "figure things out for itself", right?

For the past 25 years, researchers on the scientific side have, for the most part, put aside the dream of a highly general artificial intelligence and pursued goals whose solutions seemed a bit closer in time. After numerous decades of thinking that human-level intelligence was "only a decade off", a reduction in ambition may have seemed justified. And maybe it was. It just seems //so difficult//! However, several companies with deep pockets have recently started to discuss this goal again. Of course, the pursuit of scientific knowledge has never been considered by the most forward-looking scientists to be at the mercy of the //difficulty// of its many and varied topics; otherwise there could hardly be much progress in science! The scientific method has always delivered the best, most reliable knowledge, especially when taking the long view. But it requires dedication, patience, and creativity -- and above all, the correct application of scientific principles. When choosing to work on a particular topic, question, or domain, science should not ask "How difficult does it seem?" and dismiss it if the answer is "very" or "enormously". How could evolution or DNA have been discovered in our search for the origin of the species if that were the case? Neither does science ask "How //useful// does this knowledge we seek seem to be?" as a main way to decide which topics are worthy of study. If that were the case, Boolean logic would not have predated the electronic calculator (computer); Einstein's theory of relativity would not have predated space flight.

And yet, the field of AI -- engineering practice and scientific inquiry alike -- seems to have decided that the pursuit of human-level intelligence is either too far off in the future, too difficult, or both, to justify making it //its main focus//. Instead, the mainstream research community seems to have chosen the topics it works on by looking in its toolbox and asking "What can be done with these tools?" That is why many people shape their academic careers around what fits within the confines of the various currently available //techniques//, be it artificial neural networks, fuzzy logic, Bayesian networks, brute-force genetic algorithms, or just simple programming tricks. This is not, I must emphasize, how other scientific fields decide the key questions to work on; they generally try to order and choose research questions by how important they seem, how fundamental they appear to be, or by some other factors that are closely tied to the key phenomenon they are interested in. Thus, the question of which phenomena to study in biology is not decided by what kinds of processes cellular automata can model, or by the resolution of the latest imagery equipment, but in fact quite the opposite: The biological processes deemed most important, critical, fundamental, or interesting are used to decide which //new kinds of tools// -- simulation tools (whether cellular automata or something else), imagery equipment, etc. -- should actually //be built//.

As a phenomenon to be observed in nature, intelligence comes in many flavors, has many sides and forms of expression. This has perhaps made its study even more difficult -- how can we say that "intelligence" is //one// thing, when it has so many realizations and functions? Well, automobiles also have a number of realizations and functions. Even more so do laptop computers. Yet we have little difficulty in saying that something is a "laptop" while something else is not. So to take this analogy further: while for a laptop we can work on the screen, memory, hard drive, and battery technology separately, thinking that some day we will put it all together and make a laptop, intelligence is not modularizable in the same way (this analogy is not perfect of course, because unlike intelligences, laptops are already an artifact that exists -- so bear with me). By reducing intelligence to, say, the ability to play chess at human grand master levels, several //critical// capabilities of natural intelligences, to which we will come back later, are cut out of the equation. While the hypothesis -- put forth by some of the founding fathers of the field of A.I. -- that a machine able to beat a human grand master at chess would //have to be generally intelligent// may have seemed plausible several decades ago, the evidence is now in: This hypothesis could hardly have been proven //more// wrong. As a case in point, Deep Blue, the computer/software system that beat grand master and past world-champion Garry Kasparov in 1997, was not only found incapable of doing any other task that we generally consider intelligence necessary for, it was essentially found devoid of //any use whatsoever// other than playing chess, no matter how hard the IBM researchers scratched their heads in trying to transfer some of the massive work that went into it to other tasks, fields, and projects (a team of experts spent two years and millions of dollars to find something else for Deep Blue to do -- with **no** success). But, you may ask, cannot these missing mental capabilities -- whatever they are -- be added in afterwards to such a system? In short: No. And the evidence for that is almost as conclusive as the evidence presented by the Deep Blue story, as will be clear when we look further at what the "ingredients" of the "intelligence pie" are.

By now it should be clear that the "g" in "GMI" (general machine intelligence) is an attempt to put the emphasis back on holistic intelligence in the pursuit of artificially intelligent systems. It is there to re-invigorate the hopes and dreams of the founding fathers of A.I., such as Alan Turing, Marvin Minsky, Allen Newell, John McCarthy, and others, who thought that it might be possible to challenge human intelligence with a man-made information processing machine. Sure, they got some or most of their methodologies, assumptions, and predictions wrong, but that is inevitable in the early days of any scientific field. And we still agree with their main vision -- that this goal is possible to achieve. However, we must choose our methodology carefully, hone our tools thoughtfully, and most importantly: We must not be tempted to simplify the thing we are studying -- //intelligence// -- so much so that it starts to differ significantly from the very phenomenon that got us interested in this pursuit in the first place -- or even worse, starts to look like //something else entirely//.