[[/public:t-720-atai:atai-24:main|T-720-ATAI-2024 Main]] \\ [[/public:t-720-atai:atai-24:lecture_notes|Link to Lecture Notes]]

====== INTELLIGENCE ======

\\ \\

==== What is Intelligence? ====

| Why The Question Matters | It is important to know what you're studying and researching! \\ A researcher selects topics in nature; an engineer decides what problem to solve. \\ AI has elements of both approaches. |
| Intelligence is a phenomenon encountered in nature | Intelligence is a natural phenomenon, with good examples found in the natural world, but it may take more forms than those found in nature. \\ The only form that //everyone// agrees to call 'intelligent' is that of **humans**. |

\\ \\

==== Key Terms ====

| Natural Intelligence | Intelligence as it appears in nature. Some kinds of animals are considered "intelligent", or at least some behaviors of individuals of animal species other than humans are deemed indicators of intelligence. |
| Cognitive Science | The study of natural intelligence, in particular human intelligence (and that found elsewhere in nature). |
| Artificial Intelligence | The study of how to make intelligent machines. |
| Intelligent Machines | Systems created by humans, intended to display some (but not necessarily all?) of the features of intelligent beings encountered in nature. |
| How to define 'intelligence' | Many definitions have been proposed. \\ See e.g.: [[http://www.vetta.org/documents/A-Collection-of-Definitions-of-Intelligence.pdf|A Collection of Definitions of Intelligence]] by Legg & Hutter. |
| Definitions: a word of caution | We must be careful when it comes to definitions -- for any complex system there is a world of difference between a decent definition and a //good, accurate, appropriate// definition. |
| \\ Related quote | Aaron Sloman says: "Some readers may hope for definitions of terms like information processing, mental process, consciousness, emotion, love. However, each of these denotes a large and ill-defined collection of capabilities or features. There is no definite collection of necessary or sufficient conditions (nor any disjunction of conjunctions) that can be used to define such terms." (From [[http://www.cs.bham.ac.uk/research/projects/cogaff/Sloman.kd.pdf|Architectural Requirements for Human-like Agents Both Natural and Artificial]] by A. Sloman) |

\\ \\

==== The "Turing Test" ====

| What it is | A **working definition** of intelligence proposed by Alan Turing in 1950. |
| Why it's relevant | Before there were any good proposals for defining the phenomenon of intelligence, it stood as the only proposal for a pragmatic/practical definition of the //concept of intelligence//. |
| Basic idea | A linguistic interrogation of a supposed intelligence (via a terminal, making this the "original 'online chat' interface"), in which the interrogator asks questions about anything and everything. Either the machine's limitations would be exposed, through its demonstrated lack of capacity for comprehending the world, or it would be indistinguishable from a human answering the same questions, which would mean it would have to be intelligent. |
| \\ Method | It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A." We now ask the question, "What will happen when a machine takes the part of A in this game?" \\ (A toy sketch of this setup follows the table below.) |
| \\ Implementations | The [[https://en.wikipedia.org/wiki/Loebner_Prize|Loebner Prize competition]] ran for some decades, offering a large financial prize for the first machine to "pass the Turing Test". No machine competing in such a tournament has offered any significant advances in the field of AI, and most certainly not towards how to make a machine with real intelligence. |
| Pros | The basic idea seems fine. It is difficult to imagine that an honest, collaborative machine playing this game for several days or months could ever fool a human into thinking it was a grown human unless it really understood a great deal. |
| \\ Cons | The idea's usefulness has been watered down by turning it into something much bigger than what it was originally intended to be - or can bear. //"It's important to note that Turing never meant for his test to be the official benchmark as to whether a machine or computer program can actually think like a human"// (- Mark Riedl) \\ Targets evaluation at a single point in time. \\ Anchored in human language, social convention and dialogue. \\ The Loebner Prize competitions (see 'Implementations' above) produced no significant advances in the field of AI, and most certainly none towards AGI. |
| Bottom Line | As a working definition in 1950, it was an interesting idea, but it has run its course of utility. \\ There is little evidence in support of the claim that it has helped with progress in AI research. \\ With the advent of Large Language Models, which can hold fluent human-like conversations, this has become clearer than ever: passing for human in dialogue is not the same as being intelligent. |
| \\ Links | [[https://chatbotsmagazine.com/how-to-win-a-turing-test-the-loebner-prize-3ac2752250f1|2017 Loebner prize article]] \\ [[https://artistdetective.wordpress.com/2019/09/21/loebner-prize-2019/|Blog entry on a Loebner Prize competitor, 2019]] \\ The Loebner Prize competition ended in 2020. Feel free to chat with [[https://www.pandorabots.com/mitsuku/|Kuki]], the 2019 Loebner Prize winner. |
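A toy sketch of the imitation-game setup described above is given below. It only illustrates the three-party protocol -- interrogator, hidden machine, hidden human -- using hypothetical stand-in names and a scripted interrogator; it is not an implementation from Turing's paper or from this course.

<code python>
import random

class ScriptedInterrogator:
    """A stand-in interrogator with a fixed list of questions (hypothetical)."""
    def __init__(self, questions):
        self.questions = list(questions)

    def ask(self):
        return self.questions.pop(0)

    def judge(self, transcript):
        # A real interrogator would reason over the transcript; here we just guess.
        return random.choice(["X", "Y"])

def run_imitation_game(interrogator, machine, human):
    """machine and human are callables mapping a question (str) to an answer (str)."""
    # Hide the two respondents behind the labels X and Y, in random order.
    labels = {"X": machine, "Y": human}
    if random.random() < 0.5:
        labels = {"X": human, "Y": machine}

    transcript = []
    while interrogator.questions:
        q = interrogator.ask()
        transcript.append((q, {label: answer(q) for label, answer in labels.items()}))

    verdict = interrogator.judge(transcript)        # the interrogator names the machine
    actually_machine = "X" if labels["X"] is machine else "Y"
    return verdict == actually_machine              # True: the machine was identified

# Trivial stand-ins, just to show the call structure:
caught = run_imitation_game(
    ScriptedInterrogator(["What is 2+2?", "Describe the smell of rain."]),
    machine=lambda q: "4" if "2+2" in q else "I cannot say.",
    human=lambda q: "Four." if "2+2" in q else "Earthy, a bit like wet dust.")
print("Interrogator identified the machine:", caught)
</code>

Everything interesting in the Turing Test lives outside this plumbing: in how probing the interrogator's questions are, and in what it would take for a machine's answers to hold up over days or months of such questioning.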
\\ \\

==== How Do We Approach the Phenomenon of Intelligence? ====

| Be Careful with Definitions! | By definition, a definition of a phenomenon is useful only if it helps you in your research. So a good definition is fantastic: It advances research and helps with progress towards the goal of understanding a novel phenomenon. But what is the effect of a bad definition? |
| \\ The Challenge | You cannot define something precisely until you understand it! \\ Premature precise definitions may be much worse than loose definitions, or even pretty-bad-but-roughly-approximate definitions: With a bad definition you are very likely to end up researching //something other// than what you set out to research. It will look like you're making progress, and this may very well be the case, but it will be //towards a different goal than what you thought.// |
| \\ \\ What Can We Do? | List the //requirements//. Even a partial list will go a long way towards helping steer the research. \\ Engineers use requirements to guide their building of artifacts. If the artifact doesn't meet the requirements, it is not a valid member of the category that was targeted. \\ In science it is not customary to use requirements to guide research questions, but it works just the same (and equally well!): List the features of the phenomenon you are researching and group them into **essential**, **important but non-essential**, and **other**. Then use these to guide the kinds of questions you try to answer. (See the sketch following this table.) |
| Before Requirements, Look At Examples | To get to a good list it may be necessary to explore the boundaries of your phenomenon. |
| \\ Create a Working Definition | It's called a "//working// definition" because it is supposed to be subjected to scrutiny and revision as soon as possible. \\ A good working definition avoids the problem of entrenchment, which, in the worst case, may result in a whole field being re-defined around something that was supposed to be temporary. \\ One great example of that: The Turing Test. |
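The requirements-driven approach described above can be made concrete with a minimal sketch: features of the phenomenon are grouped into **essential**, **important but non-essential**, and **other**, and a proposed system is checked against them, just as an engineer checks an artifact against its requirements. The particular feature names below are hypothetical placeholders, not a list endorsed in this course.

<code python>
# Hypothetical grouping of features; the names are placeholders for illustration only.
REQUIREMENTS = {
    "essential": {"learns from experience", "operates in real time",
                  "handles novel situations"},
    "important but non-essential": {"explains its own decisions",
                                    "transfers skill between tasks"},
    "other": {"uses natural language"},
}

def evaluate(system_features):
    """Report which requirements a proposed system meets, and whether it
    qualifies as a member of the targeted category (all essentials met)."""
    report = {group: sorted(reqs & system_features)
              for group, reqs in REQUIREMENTS.items()}
    report["qualifies"] = REQUIREMENTS["essential"] <= system_features
    return report

# A chatbot-like system: meets only one essential feature, so it does not qualify.
print(evaluate({"learns from experience", "uses natural language"}))
</code>

The same grouping can steer research: questions about essential features come first, the rest later.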
\\ \\

==== A Modern Working Definition of Intelligence ====

| The (working) \\ Definition of \\ Intelligence \\ Used in This \\ Course | \\ **Adaptation with insufficient knowledge and resources** \\ -- Pei Wang \\ \\ |
| 'Adaptation' | means changing strategically in light of new information. (A toy illustration follows this table.) |
| 'Insufficient' | means that the agent's knowledge and resources cannot be guaranteed, and in fact can never be guaranteed, to be sufficient for achieving its goals. The reason is that an agent in the physical world can never know for //sure// that it has everything needed to achieve its goals. |
| 'Knowledge' | means information structures about target phenomena that allow an agent to predict, explain, model, or achieve goals with respect to those phenomena. |
| 'Resources' | means whatever the agent has available for computation and action (e.g. time, energy, memory); these cannot be guaranteed, and in fact can never be guaranteed, to be sufficient. The reason is that we don't know the 'axioms' of the physical world, and even if we did we could never be sure of it. |
| \\ Another way to say \\ 'Adaptation under Insufficient Knowledge & Resources' | \\ "Discretionarily Constrained Adaptation Under Insufficient Knowledge & Resources" \\ -- K. R. Thórisson \\ \\ Or simply **Figuring out how to get new stuff done**. \\ \\ |
| 'Discretionarily constrained' adaptation | means that an agent can //choose// particular constraints under which to operate or act (e.g. to not consume chocolate for a whole month) -- that the agent's adaptation can be arbitrarily constrained at the discretion of the agent itself (or someone/something else). Extending the term 'adaptation' with the qualifier 'discretionarily constrained' has the benefit of separating this use of the term 'adaptation' from its more common use in the context of natural evolution, where it describes a process fashioned by uniform physical laws. |
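The working definition above can be illustrated with a toy agent that must choose among options whose payoffs it never knows exactly (insufficient knowledge), using only a small fixed budget of trials (insufficient resources), revising its strategy as evidence arrives (adaptation). This is a generic epsilon-greedy sketch for illustration only; it is not Pei Wang's NARS nor any system discussed in this course.

<code python>
import random

def adapt(true_payoffs, budget=50, epsilon=0.2):
    """Pick the seemingly best option under a limited trial budget."""
    estimates = {a: 0.0 for a in true_payoffs}   # the agent's (insufficient) knowledge
    counts = {a: 0 for a in true_payoffs}
    for _ in range(budget):                      # the (insufficient) resource: trials
        if random.random() < epsilon:            # occasionally explore ...
            action = random.choice(list(true_payoffs))
        else:                                    # ... otherwise exploit current beliefs
            action = max(estimates, key=estimates.get)
        reward = random.gauss(true_payoffs[action], 1.0)  # noisy feedback from the world
        counts[action] += 1
        # Incrementally revise the estimate: adaptation in light of new information.
        estimates[action] += (reward - estimates[action]) / counts[action]
    return max(estimates, key=estimates.get)     # best guess when resources run out

print(adapt({"plan_a": 1.0, "plan_b": 2.0, "plan_c": 0.5}))   # usually prints 'plan_b'
</code>

Note how the budget forces the agent to act before its knowledge is anywhere near complete -- the situation Wang's definition takes as the norm rather than the exception.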
\\ \\

==== Q: What is AI? A: A Field of Research ====

| A Field of Research | First and foremost, the field of AI is a field of scientific research, pursuing the topic of intelligence, with the aim of being sufficiently detailed and general as to enable a variety of (partial and full) implementations of intelligence in machines. |
| \\ A Scientific Discipline | As an empirical scientific discipline, AI aims to figure out the general principles of how to implement the phenomenon of intelligence, including any subset thereof. \\ Does empiricism mean there is no theory? Absolutely not! Physics is an empirical science, yet it is the scientific field with the most advanced, accomplished theoretical foundation. \\ What would a proper theory of intelligence contain, if it existed? A //general// theory of intelligence would allow us to implement intelligence, or any subset thereof, successfully on the first try, in a variety of formats meeting a variety of constraints. |
| \\ An Engineering Discipline | What separates AI from e.g. cognitive science? AI explicitly aims at figuring out how to build machines that are intelligent. \\ Note that CogSci may also build intelligent machines, but the goal is not the machine itself, or the principles of its construction, but rather its role as a tool for figuring out how intelligence works. The outcomes of the two kinds of work are unlikely to overlap much, due to the difference in working constraints,* although in the long term the two are likely to converge -- and the two different approaches should be able to help each other. |
| AI spans many scientific fields of study | \\ Psychology, mathematics and computation, neurology, philosophy. |
| Alternative View | Psychology, mathematics & computation, neurology, and philosophy all addressed concepts of high relevance to the study of intelligence sooner than AI did. |
| Is AI a subfield of computer science? | Yes and No. Yes, because computer science is the field with the best and most numerous tools for studying intelligence as a phenomenon. No, because computer science by itself does not address important concepts and features of intelligence. |

//* Like the difference between constructing brick walls to study the stability of rock formations in nature versus the engineering principles of building brick walls: If the principles are well understood (weight distribution and stability), you should be able to build walls out of many materials.//

\\ \\

==== Is a System Intelligent If ... ====

^ Scenario ^ Dimension at issue ^
| ...it can really learn //anything//, but it takes the duration of the universe for it to learn each of those things? | Lifetime |
| ...it can only learn one task, but it can get better at it than any other system or controller in the universe? | Generality |
| ...it can only learn one task at a time, and can only learn something else by forgetting what it knew before? | Generality |
| ...it can respond to anything and everything correctly, but always responds too late? | Response Time |
| ...it can learn to do anything //in principle//, but is in principle impossible to implement? | Implementation |
| ...it can learn to do anything //in principle//, but requires as much energy as is available in the whole universe to run? | Implementation |
| ...it can learn and do anything, but it cannot do anything entirely on its own and always requires help from the outside? | Autonomy |
| ...it can learn anything, but it cannot learn whether or not to trust its own abilities? | Autonomy |
| ...it can learn anything, but cannot handle any variation on what it has learned? | Autonomy |

\\ \\ \\

2024 (c) K. R. Thórisson