T-720-ATAI-2022 Main
Link to Lecture Notes
INTELLIGENCE
Intelligence: A Natural Phenomenon
Intelligence | A phenomenon encountered in nature. Intelligence has good examples in the natural world, but may take more forms than those found in nature. The only example that everyone agrees to call 'intelligent': humans. |
Natural Intelligence | Intelligence as it appears in nature. Some kinds of animals are considered “intelligent”, or at least some behaviors of individuals of animal species other than humans are deemed indicators of intelligence. |
Cognitive Science | The study of natural intelligence, in particular human intelligence (and other intelligence found in nature). |
Artificial Intelligence | The study of how to make intelligent machines. |
Intelligent Machines | Systems created by humans, intended to display some (but not all?) of the features of intelligent beings encountered in nature. |
How to define 'intelligence' | Many definitions have been proposed. See e.g.: A Collection of Definitions of Intelligence by Legg & Hutter. |
Definitions: a word of caution | We must be careful when it comes to definitions – for any complex system there is a world of difference between a decent definition and a good, accurate, appropriate definition. |
Related quote | Aaron Sloman says: “Some readers may hope for definitions of terms like information processing, mental process, consciousness, emotion, love. However, each of these denotes a large and ill-defined collection of capabilities or features. There is no definite collection of necessary or sufficient conditions (nor any disjunction of conjunctions) that can be used to define such terms.” (From Architectural Requirements for Human-like Agents Both Natural and Artificial by A. Sloman) |
Working Definition of Intelligence
The (working) Definition of Intelligence Used in This Course | Adaptation with insufficient knowledge and resources – Pei Wang |
'Adaptation' | means changing strategically in light of new information. |
'Insufficient' | means that the agent's knowledge and resources cannot be guaranteed, and in fact can never be guaranteed, to be sufficient for achieving its goals. The reason is that an agent in the physical world can never know for sure that it has everything needed to achieve its goals. |
'Knowledge' | means information structures (about target phenomena) that allow an agent to predict, explain, model, or achieve goals with respect to those phenomena. |
'Resources' | means the time, energy, memory and other means available to the agent for its operation; these, likewise, cannot be guaranteed, and in fact can never be guaranteed, to be sufficient. The reason is that we don't know the 'axioms' of the physical world, and even if we did we could never be sure of them. |
Another way to say 'Adaptation under Insufficient Knowledge & Resources' | “Discretionarily Constrained Adaptation Under Insufficient Knowledge & Resources” – K. R. Thórisson. Or simply: figuring out how to get new stuff done. |
'Discretionarily constrained' adaptation | means that an agent can choose particular constraints under which to operate or act (e.g. to not consume chocolate for a whole month) – that the agent's adaptation can be arbitrarily constrained at the discretion of the agent itself (or of someone/something else). Extending the term 'adaptation' with the longer qualifier 'discretionarily constrained' has the benefit of separating this use of the term 'adaptation' from its more common use in the context of natural evolution, where it describes a process fashioned by uniform physical laws. (A minimal illustrative sketch of the working definition follows this table.) |
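To make the working definition more concrete, here is a minimal, purely illustrative sketch (in Python) of an agent that adapts under insufficient knowledge and resources. Every name in it (AdaptiveAgent, step_budget_s, etc.) is invented for this example and does not come from Wang's or Thórisson's work; the sketch only juxtaposes the three ingredients: knowledge as a revisable predictive model, resources as a hard per-step deliberation budget, and adaptation as revising the model when predictions fail.

<code python>
# Illustrative sketch only: a toy agent that adapts under insufficient
# knowledge (a revisable predictive model) and insufficient resources
# (a hard per-step deliberation budget). All names are invented here.
import random
import time


class AdaptiveAgent:
    def __init__(self, step_budget_s=0.01):
        # Knowledge: information structures used to predict the environment.
        self.knowledge = {}                  # observed state -> predicted next state
        # Resources: a hard limit on deliberation time per step.
        self.step_budget_s = step_budget_s

    def predict(self, state):
        # Knowledge is insufficient by default: unknown states get a guess.
        if state in self.knowledge:
            return self.knowledge[state]
        return random.choice(["A", "B"])

    def adapt(self, state, observed_next):
        # Adaptation: revise knowledge in light of new information.
        self.knowledge[state] = observed_next

    def step(self, state, observed_next):
        start = time.monotonic()
        if self.predict(state) != observed_next:
            self.adapt(state, observed_next)
        # Report whether this step stayed within the resource budget.
        return time.monotonic() - start <= self.step_budget_s


# Toy world: "A" is usually followed by "B", but not always, so the agent's
# knowledge can never be guaranteed to be sufficient.
agent = AdaptiveAgent()
for _ in range(100):
    next_state = "B" if random.random() < 0.9 else "A"
    agent.step("A", next_state)
print(agent.knowledge)
</code>

The point of the sketch is that none of the three ingredients can be removed without losing the definition: with unlimited resources, or knowledge guaranteed to be complete, the 'insufficient' part of the definition disappears.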
Some Interesting Case Stories of Intelligence
The Crow | One crow was observed, on multiple occasions, to make its own tools. |
The Parrot | One parrot (Alex) was taught multiple concepts in numbers and logic, including the meaning of “not” and the use of multi-dimensional features to dynamically create object groups for the purpose of communicating about multiple objects in single references. Oh, and the parrot participated in dialogue. |
The Ape | One ape (Koko) was taught to use sign language to communicate with its caretakers. It was observed creating compound words that it had never heard, for purposes of clarifying references in space and time. |
Some (Pressing?) Questions
Isn't intelligence 'almost solved'? | Short answer: No! If it's almost solved it's been “almost solved” for over 60 years. And yet we still don't have machines with real intelligence. |
Should we fear AI? | Short answer: No! The threat lies with humans, not with machines – human abuse of knowledge goes back to the stone age. |
Is the Singularity near? | Short answer: Who's to say? Predictions are difficult, especially wrt the future. By the time the course is finished you will be in a good position to make up your own mind about this. |
Historical Concepts
AI | In 1956 there was a workshop at Dartmouth College in the US where many of the field's founding fathers agreed on the term to use for their field, and outlined various topics to be studied within it. |
GOFAI | “Good old-fashioned AI” is a term used nowadays to describe the first 20-30 years of research in the field. |
Cybernetics | Going back to WWII, the field of cybernetics claimed a scope that could easily be said to subsume AI. Many of the ideas associated with information technology came out of this melting pot, including ideas by von Neumann. However, cybernetics has since all but disappeared. Why? |
GMI or AGI | “General machine intelligence” or “artificial general intelligence”: what we call the machine we hope to build, one that could potentially surpass human intelligence at some point in the future – a more holistic take on the phenomenon of intelligence than present mainstream AI research would indicate. Will we succeed? Only time will tell. |
Key Concepts in AI
Perception / Percept | A process (perception) and its product (percept) that is part of the cognitive apparatus of intelligent systems. |
Goal | A desired state to be brought about; the resulting state after a successful change. |
Task | A problem that is assigned to an agent to solve. |
Environment | The world in which an agent operates, including the constraints that may interfere with achieving a goal. |
Plan | A (possibly partial) set of actions that an agent assumes will achieve the goal. |
Planning | The act of generating a plan. |
Knowledge | Information structures that an agent can use for various purposes, e.g. to predict, explain, model, or achieve goals. |
Agent | A system that can sense and act in an environment to perform tasks (see the sketch below). https://en.wikipedia.org/wiki/Intelligent_agent |
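To show how these concepts hang together, here is a second minimal, purely illustrative Python sketch of an agent–environment loop. All names in it (GridEnvironment, SimpleAgent, make_plan, etc.) are invented for this example and are not a standard AI library API; the comments mark where percept, goal, task, environment, plan, and planning appear.

<code python>
# Illustrative sketch only: how percept, goal, task, environment, plan and
# agent relate in a minimal loop. All class and function names are invented.

class GridEnvironment:
    """Environment: the world (and its constraints) the agent operates in."""

    def __init__(self, size=5, start=0):
        self.size = size
        self.position = start

    def percept(self):
        # Percept: the product of sensing, made available to the agent.
        return self.position

    def act(self, action):
        # The environment constrains action: moves outside the grid fail.
        if action == "right" and self.position < self.size - 1:
            self.position += 1
        elif action == "left" and self.position > 0:
            self.position -= 1


class SimpleAgent:
    """Agent: a system that senses and acts in an environment to do tasks."""

    def __init__(self, goal):
        self.goal = goal   # Goal: the state to be brought about.
        self.plan = []     # Plan: actions assumed to achieve the goal.

    def make_plan(self, percept):
        # Planning: the act of generating a plan from percept and goal.
        steps = self.goal - percept
        self.plan = ["right"] * steps if steps > 0 else ["left"] * (-steps)

    def next_action(self):
        return self.plan.pop(0) if self.plan else None


# Task: reach cell 3, starting from cell 0.
env = GridEnvironment()
agent = SimpleAgent(goal=3)
agent.make_plan(env.percept())
while (action := agent.next_action()) is not None:
    env.act(action)
print("Goal reached:", env.percept() == agent.goal)
</code>

In this toy the goal happens to be reachable; in general the environment's constraints may make a plan fail, which is where the working definition's 'insufficient knowledge and resources' re-enters.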
AI is a Broad Field of Research
A Scientific Discipline | As an empirical scientific discipline, AI aims to figure out the general principles of how to implement the phenomenon of intelligence, including any subset thereof. Does empiricism mean there is no theory? Absolutely not! Physics is an empirical science, yet it is the scientific field with the most advanced, accomplished theoretical foundation. What would a proper theory of intelligence contain, if it existed? A general theory of intelligence would allow us to implement intelligence, or any subset thereof, successfully on the first try, in a variety of formats meeting a variety of constraints. |
An Engineering Discipline | What separates AI from e.g. cognitive science? AI explicitly aims at figuring out how to build machines that are intelligent. Note that CogSci may also build intelligent machines, but there the goal is not the machine itself, or the principles of its construction, but rather its role as a tool for figuring out how intelligence works. The outcomes from such work are unlikely to overlap much, due to the difference in working constraints,* although in the long term the two are likely to converge – and the two different approaches should be able to help each other. |
AI spans many fields | Psychology, mathematics and computation, neurology, philosophy. |
Alternative View | Psychology, mathematics & computation, neurology, and philosophy all addressed concepts of high relevance to the study of intelligence sooner than AI did. |
Is AI a subfield of computer science? | Yes and no. Yes, because computer science is the field with the best and most tools for studying intelligence as a phenomenon. No, because computer science does not address important concepts and features of intelligence. |
* Like the difference between constructing brick walls to study the stability of rock formations in nature versus the engineering principles of building brick walls: If the principles are well understood (weight distribution and stability), you should be able to build walls out of many materials.
Terminology
Terminology is important! | The terms we use for phenomena must be shared to work as an effective means of communication. Obsessing about the definition of terms is a good thing! |
Beware of Definitions! | Obsessing over precise, definitive definitions of terms should not extend to the phenomena that the research targets: These are by definition not well understood. It is impossible to define something that is not understood! So beware of those who insist on such things. |
Overloaded Terms | Many key terms in AI tend to be overloaded. Others are very unclear. Examples of the latter include: intelligence, agent, concept, thought. Many terms have multiple meanings, e.g. reasoning, learning, complexity, generality, task, solution, proof. Yet others are both unclear and polysemous, e.g. consciousness. One source of these multiple meanings is the tendency, at the beginning of a new research field, for founders to apply common terms – terms that originally refer to the general concepts in nature they intend to study – to the results of their own work. As time passes those terms then come to refer to work done in the field, instead of to their counterparts in nature. Examples include reinforcement learning (originally studied by Pavlov, Skinner, and others in psychology and biology), machine learning (learning in nature differs from 'machine learning' in many ways), and neural nets (artificial neural nets bear almost no relation to biological neural networks). Needless to say, this regularly makes for some lively but more or less pointless debates on many subjects within the field of AI (and within other fields too, but especially AI). |
So What Is Intelligence?
Why The Question Matters | It is important to know what you're studying and researching! …A researcher selects |
The Challenge | You cannot define something precisely until you understand it! Premature precise definitions may be much worse than loose definitions or even bad-but-rough definitions: You are very likely to end up researching something other than what you set out to research. |
What Can We Do? | List the requirements. Even a partial list will go a long way towards helping steer the research. Engineers use requirements to guide their building of artifacts. If the artifact doesn't meet the requirements it is not a valid member of the category that was targeted. In science it is not customary to use requirements to guide research questions, but it works just the same (and equally well!): List the features of the phenomenon you are researching and group them into essential, important but non-essential, and other. Then use these to guide the kinds of questions you try to answer. |
Before Requirements, Look At Examples | To get to a good list it may be necessary to explore the boundaries of your phenomenon. |
Create a Working Definition | It's called a “working definition” because it is supposed to be subject to scrutiny and revision as soon as possible. A good working definition avoids the problem of entrenchment, which, in the worst case, may result in a whole field being re-defined around something that was supposed to be temporary. One great example of that: the Turing Test. |
Is a System Intelligent If ... ?
Lifetime | …it can really learn anything, but it takes the duration of the universe for it to learn each of those things? |
Generality | …it can only learn one task, but it can get better at it than any other system or controller in the universe? |
Generality | …it can only learn one task at a time, and can only learn something else by forgetting what it knew before? |
Response Time | …it can respond to anything and everything correctly, but always responds too late? |
Implementation | …it can learn to do anything in principle, but is in principle impossible to implement? |
Implementation | …it can learn to do anything in principle, but requires as much energy as is available in the whole universe to run? |
Autonomy | …it can learn and do anything, but it cannot do anything entirely on its own and always requires help from the outside? |
Autonomy | …it can learn anything, but it cannot learn whether or not to trust its own abilities? |
Autonomy | …it can learn anything, but cannot handle any variation on what it has learned? |
2022 © K. R. Thórisson