T-719-NXAI-2025 Main
Links to Lecture Notes
READINGS
- The Future of AI Research: Ten Defeasible 'Axioms of Intelligence' by K. R. Thórisson and H. Minsky
- Cognitive Architecture Requirements for Achieving AGI by J. E. Laird et al.
Artificial intelligence (AI) started as a research field, and it still is one. Just like research results in physics are useful for engineering, results in AI are useful for industry. AI is still in formation, much like computer science. It is a knowledge-generating enterprise funded by the public through universities and competitive research grants; applications of AI are funded by companies and through various other means (including competitive grants for applied research). The knowledge generated in AI research is in part determined by the nature of the enterprise: how it is organized, who the influencers are, what the low-hanging fruit are, and so on.
Requirements for General Autonomous Intelligence
When engineers make an artifact, like a bridge or a space rocket, they start by listing the artifact's requirements. This way, for any proposed implementation, they can check their progress by comparing a prototype's performance against those requirements. The papers below consider which requirements are necessary and sufficient for a machine with real intelligence. (These therefore speak to defining what 'intelligent systems' really means.)
Terminology
Terminology is important! | The terms we use for phenomena must be shared to work as an effective means of communication. Obsessing about the definition of terms is a good thing! |
Beware of Definitions! | Obsessing over precise, definitive definitions of terms should not extend to the phenomena that the research targets: These are by definition not well understood. It is impossible to define something that is not understood! So beware of those who insist on such things. |
Overloaded Terms | Many key terms in AI tend to be overloaded. Others are very unclear. Examples of the latter include: intelligence, agent, concept, thought. Many terms have multiple meanings, e.g. reasoning, learning, complexity, generality, task, solution, proof. Yet others are both unclear and polysemous, e.g. consciousness. One source of the multiple meanings is the tendency, at the beginning of a new research field, for founders to take common terms that originally refer to general phenomena in nature (the very phenomena they intend to study) and apply them to the results of their own work. As time passes, those terms then come to refer to work done in the field instead of their counterparts in nature. Examples include reinforcement learning (originally studied by Pavlov, Skinner, and others in psychology and biology), machine learning (learning in nature differs from 'machine learning' in many ways), and neural nets (artificial neural nets bear almost no relation to biological neural networks). Needless to say, this regularly makes for lively but more or less pointless debates on many subjects within the field of AI (and other fields too, in fact, but especially AI). |
Key Concepts in AI
Perception / Percept | A process (perception) and its product (percept) that is part of the cognitive apparatus of intelligent systems. |
Goal | The state resulting from a successful change (see the 'Goal' section below for a fuller treatment). |
Task | A problem that is assigned to be solved by an agent. |
Environment | The constraints that may interfere with achieving a goal. |
Plan | The partial set of actions that an agent assumes will achieve the goal. |
Planning | The act of generating a plan. |
Knowledge | Information that can be used for various purposes. |
Agent | A system that can sense and act in an environment to do tasks. https://en.wikipedia.org/wiki/Intelligent_agent |
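To make the relationships between these concepts concrete, here is a minimal sketch in Python. The type names and fields are illustrative assumptions for this course page, not definitions from the literature: an agent is a system that senses and acts, a task pairs a goal with an environment, and a plan is the partial set of actions assumed to achieve the goal.

<code python>
from dataclasses import dataclass, field

@dataclass
class Goal:
    """The resulting state to be achieved (see the 'Goal' section below)."""
    target_state: dict  # variable name -> desired value

@dataclass
class Environment:
    """The constraints that may interfere with achieving a goal."""
    constraints: list = field(default_factory=list)

@dataclass
class Task:
    """A problem assigned to an agent: achieve a goal in an environment."""
    goal: Goal
    environment: Environment

@dataclass
class Plan:
    """The partial set of actions an agent assumes will achieve the goal."""
    actions: list = field(default_factory=list)

class Agent:
    """A system that can sense and act in an environment to do tasks."""
    def sense(self, env: Environment) -> dict: ...
    def make_plan(self, task: Task, knowledge: dict) -> Plan: ...
    def act(self, plan: Plan) -> None: ...
</code>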
Some (Pressing?) Questions
Isn't intelligence 'almost solved'? | Short answer: No! If it's almost solved it's been “almost solved” for over 60 years. And yet we still don't have machines with real intelligence. |
Should we fear AI? | Short answer: No! The threat lies with humans, not with machines – human abuse of knowledge goes back to the stone age. |
Is the Singularity near? | Short answer: Who's to say? Predictions are difficult, especially wrt the future. By the time the course is finished you will be in a good position to make up your own mind about this. |
Intelligence: A Natural Phenomenon
Intelligence | A phenomenon encountered in nature, with good examples in the natural world, but one that may take more forms than those examples. The only system that everyone agrees to call 'intelligent': humans. |
Natural Intelligence | Intelligence as it appears in nature. Some kinds of animals are considered “intelligent”, or at least some behaviors of some individuals of animal species other than humans are deemed indicators of intelligence. |
Cognitive Science | The study of natural intelligence, in particular human intelligence. |
Artificial Intelligence | The study of how to make intelligent machines. |
Intelligent Machines | Systems created by humans, intended to display some (but perhaps not all?) of the features of intelligence displayed by beings encountered in nature. |
How to define 'intelligence' | Many definitions have been proposed. See e.g.: A Collection of Definitions of Intelligence by Legg & Hutter. |
Definitions: a word of caution | We must be careful when it comes to definitions: for any complex system there is a world of difference between a decent definition and a good, accurate, appropriate one. |
Related quote | Aaron Sloman says: “Some readers may hope for definitions of terms like information processing, mental process, consciousness, emotion, love. However, each of these denotes a large and ill-defined collection of capabilities or features. There is no definite collection of necessary or sufficient conditions (nor any disjunction of conjunctions) that can be used to define such terms.” (From Architectural Requirements for Human-like Agents Both Natural and Artificial by A. Sloman) |
A Working Definition of Intelligence
The (working) Definition of Intelligence Used in This Course | Adaptation with insufficient knowledge and resources – Pei Wang |
'Adaptation' | means changing strategically in light of new information. |
'Insufficient' | means that the agent's knowledge and resources cannot be guaranteed, and in fact can never be guaranteed, to suffice for achieving its goals. The reason is that an agent in the physical world can never know for sure that it has everything needed to achieve its goals. |
'Knowledge' | means information structures (about target phenomena) that allow an agent to predict, achieve goals, explain, or model (target phenomena). |
'Resources' | means the time, energy, memory, and computation available to the agent; their sufficiency, too, cannot be guaranteed, and in fact can never be guaranteed. The reason is that we don't know the 'axioms' of the physical world, and even if we did we could never be sure of it. |
Another way to say 'Adaptation under Insufficient Knowledge & Resources' | “Discretionarily Constrained Adaptation Under Insufficient Knowledge & Resources” – K. R. Thórisson. Or simply: “Figuring out how to get new stuff done.” |
'Discretionarily constrained' adaptation | means that an agent can choose particular constraints under which to operate or act (e.g. to not consume chocolate for a whole month) – that is, the agent's adaptation can be arbitrarily constrained at the discretion of the agent itself (or of someone/something else). Extending the term 'adaptation' with the longer qualifier 'discretionarily constrained' has the benefit of separating this use of the term 'adaptation' from its more common use in the context of natural evolution, where it describes a process fashioned by uniform physical laws. A toy sketch of such an agent follows this table. |
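As a toy illustration of the working definition, consider an agent that must achieve a goal with a hard energy budget (insufficient resources), beliefs that start out uninformed and are revised from observation (insufficient knowledge, adaptation), and a self-imposed restriction it honors at its own discretion (discretionary constraint). Everything here (the world model, the numbers, the update rule) is an illustrative assumption, not a definitive implementation.

<code python>
import random

# Toy world, hidden from the agent: action -> probability of success.
WORLD = {"a": 0.8, "b": 0.4, "c": 0.9}

def adaptive_agent(goal_reward: int = 3, energy: int = 12,
                   forbidden: frozenset = frozenset({"c"})) -> int:
    """Pursue a goal under a hard energy budget, revising beliefs as it goes.
    'forbidden' is a discretionary constraint: 'c' pays best, but the agent
    has chosen never to take it (cf. not eating chocolate for a month)."""
    beliefs = {a: 0.5 for a in WORLD}        # insufficient knowledge: flat priors
    reward = 0
    while energy > 0 and reward < goal_reward:
        candidates = [a for a in WORLD if a not in forbidden]
        if random.random() < 0.2:            # occasionally explore...
            action = random.choice(candidates)
        else:                                # ...otherwise exploit current beliefs
            action = max(candidates, key=lambda a: beliefs[a])
        success = random.random() < WORLD[action]
        reward += success
        # Adaptation: change strategy in light of new information.
        beliefs[action] += 0.2 * (success - beliefs[action])
        energy -= 1                          # insufficient resources: budget depletes
    return reward

print(adaptive_agent())  # reward achieved before the energy budget ran out
</code>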
Historical Concepts
AI | In 1956 there was a workshop at Dartmouth College in the US where many of the field's founding fathers agreed on the term to use for their field, and outlined various topics to be studied within it. |
GOFAI | “Good old-fashioned AI” is a term used nowadays to describe the first 20-30 years of research in the field. |
Cybernetics | Going back to WWII, the field of cybernetics claimed a scope that could easily be said to subsume AI. Many of the ideas associated with information technology came out of this melting pot, including ideas by von Neumann. However, cybernetics has since all but disappeared. Why? |
GMI or AGI | “Artificial general intelligence” or “general machine intelligence”: What we call the machine that we hope to build that could potentially surpass human intelligence at some point in the future – a more holistic take on the phenomenon of intelligence than the present mainstream AI research would indicate. Will we succeed? Only time will tell. |
AI is a Broad Field of Research
A Scientific Discipline | As an empirical scientific discipline, AI aims to figure out the general principles of how to implement the phenomenon of intelligence, including any subset thereof. Does empiricism mean there is no theory? Absolutely not! Physics is an empirical science, yet it is the scientific field with the most advanced, accomplished theoretical foundation. What would a proper theory of intelligence contain, if it existed? A general theory of intelligence would allow us to implement intelligence, or any subset thereof, successfully on the first try, in a variety of formats meeting a variety of constraints. |
An Engineering Discipline | What separates AI from e.g. cognitive science? AI explicitly aims at figuring out how to build machines that are intelligent. Note that CogSci may also build intelligent machines, but there the goal is not the machine itself, or the principles of its construction, but rather its role as a tool to figure out how intelligence works. The outcomes of such work are unlikely to overlap much, due to the difference in working constraints,* although in the long term the two are likely to converge – and the two approaches should be able to help each other. |
AI spans many fields | Psychology, mathematics and computation, neurology, philosophy. |
Alternative View | Psychology, mathematics & computation, neurology, and philosophy all were sooner than AI to address concepts of high relevance to the study of intelligence. |
Is AI a subfield of computer science? | Yes and no. Yes, because computer science has the best and most tools for studying intelligence as a phenomenon. No, because computer science does not address important concepts and features of intelligence. |
* Like the difference between constructing brick walls to study the stability of rock formations in nature versus the engineering principles of building brick walls: If the principles are well understood (weight distribution and stability), you should be able to build walls out of many materials.
So What Is Intelligence?
Why The Question Matters | It is important to know what you're studying and researching! …A researcher selects which questions to work on. If the topic of research is unclear, well, then … it's “garbage in, garbage out”. |
The Challenge | You cannot define something precisely until you understand it! Premature precise definitions may be much worse than loose definitions or even bad-but-rough definitions: You are very likely to end up researching something other than what you set out to research. |
What Can We Do? | List the requirements. Even a partial list will go a long way towards helping steer the research. Engineers use requirements to guide their building of artifacts. If the artifact doesn't meet the requirements it is not a valid member of the category that was targeted. In science it is not customary to use requirements to guide research questions, but it works just the same (and equally well!): List the features of the phenomenon you are researching and group them into essential, important but non-essential, and other. Then use these to guide the kinds of questions you try to answer. |
Before Requirements, Look At Examples | To get to a good list it may be necessary to explore the boundaries of your phenomenon. |
Create a Working Definition | It's called a “working definition” because it is supposed to be subject to scrutiny and revision (as soon as possible). A good working definition avoids the problem of entrenchment, which, in the worst case, may result in a whole field being re-defined around something that was supposed to be temporary. One great example of that: the Turing Test. |
Is a System Intelligent If ... ?
Lifetime | …it can really learn anything, but it takes the duration of the universe for it to learn each of those things? |
Generality | …it can only learn one task, but it can get better at it than any other system or controller in the universe? |
Generality | …it can only learn one task at a time, and can only learn something else by forgetting what it knew before? |
Response Time | …it can respond to anything and everything correctly, but always responds too late? |
Implementation | …it can learn to do anything in principle, but is in principle impossible to implement? |
Implementation | …it can learn to do anything in principle, but requires as much energy as is available in the whole universe to run? |
Autonomy | …it can learn and do anything, but it cannot do anything entirely on its own and always requires help from the outside? |
Autonomy | …it can learn anything, but it cannot learn whether or not to trust its own abilities? |
Autonomy | …it can learn anything, but cannot handle any variation on what it has learned? |
Goal
What it is | A goal is a set of steady states to be achieved. To be achieved, a goal must be described in an abstraction language; its description must contain sufficient information to verify whether the goal has been achieved or not. A well-defined goal can be assigned to an agent/system/process to be achieved. An ill-defined goal is a goal description with missing elements (i.e., due to some part of its description, information is missing for the process/system/agent that is supposed to achieve it). |
How it may be expressed | $G_{top} = \{ G_{sub_1}, G_{sub_2}, \ldots, G_{sub_n}, G^{-}_{sub_1}, G^{-}_{sub_2}, \ldots, G^{-}_{sub_m} \}$, i.e. a set of zero or more subgoals, where the $G^{-}$ (superscript minus) are “negative goals” (states to be avoided = constraints), and $G = \{ s_1, s_2, \ldots, s_n, R \}$, where each $s_i$ describes a state $s \subseteq S$ of a (subset of a) World and $R$ are the relevant relations between these. |
Components of s | $s = \{ v_1, v_2, \ldots, v_n, R \}$: a set of patterns, expressed as variables with error/precision constraints, that refer to the world. |
What we can do with it | Define a task: task := goal + timeframe + initial world state (see the sketch following this table). |
Why it is important | Goals are needed for concrete tasks, and tasks are a key part of why we would want AI in the first place. For any complex task there will be identifiable sub-goals – talking about these in a compressed manner (e.g. using natural language) is important for learning and for monitoring task progress. |
Historically speaking | Goals have been with the field of AI from the very beginning, but definitions vary. |
What to be aware of | We can assign goals to an AI without the AI having an explicit data structure that we can say matches the goal directly (see e.g. Braitenberg Vehicles - above). These are called implicit goals. We may conjecture that if we want an AI to be able to talk about its goals they will have to be – in some sense – explicit, that is, having a discrete representation in the AI's mind (information structures) that can be manipulated, inspected, compressed / decompressed, and related to other data structures for various purposes, in isolation (without affecting in any unnecessary, unwanted, or unforeseen way, other (irrelevant) information structures). |
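Below is a minimal sketch of the goal and task structures just described: a goal as states to achieve plus negative (avoid) states, each state pattern a variable with an error/precision tolerance, and a task as goal + timeframe + initial world state. The relations R are omitted for brevity, and all names are illustrative assumptions, refining the Goal type from the earlier sketch.

<code python>
from dataclasses import dataclass, field

@dataclass
class StateVar:
    """A pattern v_i referring to the world: a variable with an
    error/precision constraint (tolerance)."""
    name: str
    target: float
    tolerance: float

    def satisfied_by(self, world: dict) -> bool:
        value = world.get(self.name)
        return value is not None and abs(value - self.target) <= self.tolerance

@dataclass
class Goal:
    """G as a set of states to achieve, plus negative goals G^- (states
    to be avoided, i.e. constraints). Relations R omitted for brevity."""
    achieve: list
    avoid: list = field(default_factory=list)

    def achieved_in(self, world: dict) -> bool:
        # Well-defined by construction: the description carries enough
        # information to verify whether the goal has been achieved.
        return (all(s.satisfied_by(world) for s in self.achieve)
                and not any(s.satisfied_by(world) for s in self.avoid))

@dataclass
class Task:
    """task := goal + timeframe + initial world state."""
    goal: Goal
    timeframe: float          # e.g. seconds allowed
    initial_state: dict

# Usage: reach 21 +/- 0.5 degrees while keeping pressure away from 2.0 +/- 0.1.
task = Task(goal=Goal(achieve=[StateVar("temperature", 21.0, 0.5)],
                      avoid=[StateVar("pressure", 2.0, 0.1)]),
            timeframe=60.0,
            initial_state={"temperature": 18.0, "pressure": 1.0})
print(task.goal.achieved_in({"temperature": 21.2, "pressure": 1.0}))  # True
</code>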
What Do You Mean by "Generality"?
Flexibility: Breadth of task-environments | Enumeration of variety. (By 'variety' we mean the discernibly different states that can be sensed and that make a difference to a controller.) If a system X can operate in more diverse task-environments than system Y, system X is more flexible than system Y. |
Solution Diversity: Breadth of solutions | If a system X can reliably generate a larger variation of acceptable solutions to problems than system Y, system X is more powerful than system Y. |
Constraint Diversity: Breadth of constraints on solutions | If a system X can reliably produce acceptable solutions under a higher number of solution constraints than system Y, system X is more powerful than system Y. |
Goal Diversity: Breadth of goals | If a system X can meet a wider range of goals than system Y, system X is more powerful than system Y. |
Generality | Any system X that exceeds system Y on one or more of the above dimensions we say is more general than system Y; typically, though, pushing for increased generality means pushing on all of the above dimensions (a minimal comparison sketch follows this table). |
General intelligence… | …means that less needs to be known up front when the system is created; the system can learn to figure things out and how to handle itself, in light of limited time and energy (LTE). |
And yet: The hallmark of an AGI | A system that can handle novel problems, and can be expected to attempt to address open problems sensibly. The level of difficulty of the problems it solves would indicate its generality. |
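A minimal sketch of the comparison just described, with the four breadth dimensions reduced to plain counts. How breadth is actually measured is left open here; the Profile fields, the counts, and the Pareto-style alternative noted in the comment are illustrative assumptions.

<code python>
from dataclasses import dataclass

@dataclass
class Profile:
    """Measured breadth of a system along the four dimensions above."""
    task_environments: int   # flexibility
    solutions: int           # solution diversity
    constraints: int         # constraint diversity
    goals: int               # goal diversity

    def dims(self) -> tuple:
        return (self.task_environments, self.solutions,
                self.constraints, self.goals)

def more_general(x: Profile, y: Profile) -> bool:
    """Per the table: X is more general than Y if it exceeds Y on one or
    more dimensions. (A stricter, Pareto-style reading would additionally
    require that X falls below Y on none.)"""
    return any(a > b for a, b in zip(x.dims(), y.dims()))

x = Profile(task_environments=5, solutions=3, constraints=4, goals=6)
y = Profile(task_environments=2, solutions=3, constraints=4, goals=6)
print(more_general(x, y))  # True: X exceeds Y on breadth of task-environments
</code>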
2025©K. R. Thórisson