\\

====== INTELLIGENCE: THE PHENOMENON ======

//Key concepts: Goals, Behaviors, Generality, Cognition, Perception, Classification//

\\

====REQUIRED READINGS====

  * [[https://proceedings.mlr.press/v192/thorisson22b/thorisson22b.pdf | The Future of AI Research: Ten Defeasible 'Axioms of Intelligence']] by K. R. Thórisson and H. Minsky
  * [[https://pdfs.semanticscholar.org/4688/e564f16662838938a2d729685baab068751b.pdf?_ga=2.196493103.2038245345.1566469635-1488191987.1566469635|Cognitive Architecture Requirements for Achieving AGI]] by J. E. Laird et al.

Related readings: [[http://cadia.ru.is/wiki/public:t-719-nxai:nxai-25:readings#intelligencethe_phenomenon|Intelligence: The Phenomenon]]
\\

====Requirements for General Autonomous Intelligence====
//When engineers make an artifact, like a bridge or a space rocket, they start by listing the artifact's **requirements**. This way, for any proposed implementation, they can check their progress by comparing the performance of a prototype against those requirements. The papers below consider what the necessary and sufficient requirements are for a machine with **real** intelligence. (These therefore speak to defining what 'intelligent systems' really means.)//

\\
\\

====Terminology====
| Terminology is important! | The terms we use for phenomena must be shared to work as an effective means of communication. Obsessing about the definition of terms is a good thing! |
| Beware of Definitions! | Obsessing over precise, definitive definitions of terms should not extend to the phenomena that the research targets: these are, by definition, not well understood. It is impossible to define something that is not understood! So beware of those who insist on such things. |
| \\ Overloaded Terms | Many key terms in AI tend to be overloaded. Others are very unclear; examples of the latter include //intelligence, agent, concept, thought//. \\ Many terms have multiple meanings, e.g. reasoning, learning, complexity, generality, task, solution, proof. \\ Yet others are both unclear and polysemous, e.g. //consciousness//. \\ One source of the multiple meanings is the tendency, at the beginning of a new research field, for founders to use common terms -- terms that originally refer to general concepts in nature, which they intend to study -- about the results of their own work. As time passes, those terms then come to refer to work done in the field instead of their counterparts in nature. Examples include reinforcement learning (originally studied by Pavlov, Skinner, and others in psychology and biology), machine learning (learning in nature differs from 'machine learning' in many ways), and neural nets (artificial neural nets bear almost no relation to biological neural networks). \\ Needless to say, this regularly makes for some lively but more or less //pointless// debates on many subjects within the field of AI (and other fields, in fact, but especially AI). |

\\
\\

====Key Concepts in AI====
| Perception / Percept | A process (//perception//) and its product (//percept//) that are part of the cognitive apparatus of intelligent systems. |
| Goal | The resulting state after a successful change. |
| Task | A problem that is assigned to be solved by an agent. |
| Environment | The constraints that may interfere with achieving a goal. |
| Plan | The partial set of actions that an agent assumes will achieve the goal. |
| Planning | The act of generating a plan. |
| Knowledge | Information that can be used for various purposes. |
| Agent | A system that can sense and act in an environment to do tasks ([[https://en.wikipedia.org/wiki/Intelligent_agent|Intelligent agent, Wikipedia]]). How these concepts relate is sketched in code below the table. |
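
To make the relationships between these terms concrete, here is a minimal sketch in Python. It is not taken from the course materials; all class and function names (Environment, Agent, perform_task, etc.) are illustrative assumptions.

<code python>
# Minimal illustration of how the terms relate (all names are assumed here,
# not taken from the course): an Agent senses an Environment, plans, and acts
# until a goal (a desired assignment of world variables) is achieved.

class Environment:
    """Holds the world state; its dynamics constrain what the agent can achieve."""
    def __init__(self, state):
        self.state = dict(state)
    def sense(self):                      # perception produces a percept
        return dict(self.state)
    def act(self, variable, value):       # acting changes the world
        self.state[variable] = value

class Agent:
    """A system that can sense and act in an environment to do tasks."""
    def __init__(self, knowledge=None):
        self.knowledge = knowledge or {}  # information usable for various purposes

    def plan(self, percept, goal):
        """Planning: produce the (partial) set of actions assumed to achieve the goal."""
        return [(var, val) for var, val in goal.items() if percept.get(var) != val]

    def perform_task(self, goal, env, timeframe):
        """Task: achieve the goal within the timeframe, starting from the current world state."""
        for _ in range(timeframe):
            percept = env.sense()
            plan = self.plan(percept, goal)
            if not plan:
                return True               # goal state reached
            env.act(*plan[0])             # execute the next planned action
        return False

# Usage: a toy task -- get the door open within 5 steps.
env = Environment({"door": "closed"})
print(Agent().perform_task({"door": "open"}, env, timeframe=5))   # -> True
</code>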

\\
| \\ Another way to say \\ 'Adaptation under Insufficient Knowledge & Resources' | \\ "Discretionarily Constrained Adaptation Under Insufficient Knowledge & Resources" \\ -- K. R. Thórisson \\ \\ Or simply **Figuring out how to get new stuff done**. \\ \\ |
| 'Discretionarily constrained' adaptation | Means that an agent can //choose// particular constraints under which to operate or act (e.g. to not consume chocolate for a whole month) -- that is, the agent's adaptation can be arbitrarily constrained at the discretion of the agent itself (or of someone/something else). Extending the term 'adaptation' with the longer 'discretionarily constrained' has the benefit of separating this use of the term 'adaptation' from its more common use in the context of natural evolution, where it describes a process fashioned by uniform physical laws. A toy illustration in code follows below this table. |
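
A minimal sketch of the idea in Python, assuming a plain list of candidate actions and predicate-style constraints (both are illustrative assumptions, not course material): the agent itself supplies a constraint that rules out otherwise admissible actions.

<code python>
# Toy illustration: the agent *chooses* a constraint that rules out
# otherwise admissible actions (discretionarily constrained adaptation).

def admissible(actions, constraints):
    """Keep only actions that satisfy every (self-imposed or external) constraint."""
    return [a for a in actions if all(c(a) for c in constraints)]

actions = ["eat_chocolate", "eat_apple", "go_for_run"]

# A constraint the agent imposes on itself, e.g. "no chocolate this month":
no_chocolate = lambda action: "chocolate" not in action

print(admissible(actions, [no_chocolate]))   # -> ['eat_apple', 'go_for_run']
</code>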

\\
\\

====Some Interesting Case Stories of Intelligence====
| The Crow | One crow was observed, on multiple occasions, to make its own tools. |
| The Parrot | One parrot (Alex) was taught multiple concepts in numbers and logic, including the meaning of "not" and the use of multi-dimensional features to dynamically create object groups for the purpose of communicating about multiple objects in single references. Oh, and the parrot participated in dialogue. |
| The Ape | One ape (Koko) was taught to use sign language to communicate with its caretakers. It was observed creating compound words that it had never heard, for the purpose of clarifying references in space and time. |

\\
\\

====AI is a Broad Field of Research====
\\
\\

====So What Is Intelligence?====
| \\ Create a Working Definition | It's called a "//working// definition" because it is supposed to be subject to scrutiny and revision as soon as possible. \\ A good working definition avoids the problem of entrenchment, which, in the worst case, may result in a whole field being re-defined around something that was supposed to be temporary. \\ One great example of that: the Turing Test. |

\\
\\

====Goal====
| What it is | A goal is a set of **steady states** to be achieved. \\ A goal must be described in an abstraction language; its description must contain sufficient information to verify whether the goal has been achieved or not. \\ A well-defined goal can be assigned to an agent/system/process to be achieved. \\ An //ill-defined goal// is a goal description with missing elements (i.e. some part of its description lacks information that the process/system/agent that is supposed to achieve it would need). |
| \\ How it may be expressed | G<sub>top</sub> = {G<sub>1</sub>, G<sub>2</sub>, ... G<sub>n</sub>, G<sup>-</sup><sub>1</sub>, G<sup>-</sup><sub>2</sub>, ... G<sup>-</sup><sub>m</sub>}, i.e. a set of zero or more subgoals, where \\ G<sup>-</sup> (read "G to the power of minus") are "negative goals" (states to be avoided = constraints), and \\ G = {s<sub>1</sub>, s<sub>2</sub>, ... s<sub>n</sub>, R}, where each s<sub>i</sub> describes a state s ⊆ S of a (subset of a) World, and \\ R are relevant relations between these. (A code sketch follows below the table.) |
| Components of s | s = {v<sub>1</sub>, v<sub>2</sub>, ... v<sub>n</sub>, R}: a set of //patterns//, expressed as variables with error/precision constraints, that refer to the world. |
| What we can do with it | Define a task: **task := goal + timeframe + initial world state** |
| Why it is important | Goals are needed for concrete tasks, and tasks are a key part of why we would want AI in the first place. For any complex task there will be identifiable sub-goals -- talking about these in compressed manners (e.g. using natural language) is important for learning and for monitoring task progress. |
| Historically speaking | Goals have been with the field of AI from the very beginning, but definitions vary. |
| \\ What to be aware of | We can assign goals to an AI without the AI having an explicit data structure that we can say matches the goal directly (see e.g. [[/public:t-720-atai:atai-20:agents_and_control#braitenberg_vehicle_examples|Braitenberg Vehicles]]). These are called //**implicit goals**//. We may conjecture that if we want an AI to be able to talk about its goals, they will have to be -- in some sense -- //**explicit**//, that is, have a discrete representation in the AI's mind (information structures) that can be manipulated, inspected, compressed / decompressed, and related to other data structures for various purposes, in isolation (without affecting, in any unnecessary, unwanted, or unforeseen way, other (irrelevant) information structures). |
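
The notation above can be rendered concretely. The following is a minimal, hypothetical Python sketch, not taken from the course materials: subgoals and negative goals are checked against a world state, and a task bundles a goal with a timeframe and an initial world state. All class and variable names are assumptions for illustration.

<code python>
# Hypothetical rendering of the notation above (names are illustrative only):
# a top-level goal G_top = {G_1..G_n, G-_1..G-_m}, where negative goals are
# states to be avoided (constraints), and each state is a set of patterns
# (variables with error/precision tolerances).

from dataclasses import dataclass

@dataclass(frozen=True)
class Pattern:
    """A variable with a target value and an error tolerance, referring to the world."""
    variable: str
    value: float
    tolerance: float

    def matches(self, world: dict) -> bool:
        return abs(world.get(self.variable, float("inf")) - self.value) <= self.tolerance

@dataclass(frozen=True)
class Goal:
    """A (sub)goal: patterns that must hold; negative=True means 'state to be avoided'."""
    patterns: tuple
    negative: bool = False

    def satisfied(self, world: dict) -> bool:
        holds = all(p.matches(world) for p in self.patterns)
        return not holds if self.negative else holds

def goal_achieved(top_goal: tuple, world: dict) -> bool:
    """G_top is achieved when every subgoal (positive and negative) is satisfied."""
    return all(g.satisfied(world) for g in top_goal)

@dataclass
class Task:
    """task := goal + timeframe + initial world state."""
    goal: tuple
    timeframe: int
    initial_state: dict

# Usage: reach temperature ~21 C while avoiding pressure near or above 3 bar.
G_top = (
    Goal(patterns=(Pattern("temperature", 21.0, 0.5),)),
    Goal(patterns=(Pattern("pressure", 3.0, 1.0),), negative=True),
)
world = {"temperature": 21.2, "pressure": 1.5}
print(goal_achieved(G_top, world))   # -> True
</code>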

\\
\\

====What Do You Mean by "Generality"?====
| Flexibility: \\ Breadth of task-environments | Enumeration of variety. \\ (By 'variety' we mean the discernibly different states that can be sensed and that make a difference to a controller.) \\ If a system X can operate in more diverse task-environments than system Y, system X is more //flexible// than system Y. |
| Solution Diversity: \\ Breadth of solutions | \\ If a system X can reliably generate a larger variation of acceptable solutions to problems than system Y, system X is more //powerful// than system Y. |
| Constraint Diversity: \\ Breadth of constraints on solutions | \\ If a system X can reliably produce acceptable solutions under a higher number of solution constraints than system Y, system X is more //powerful// than system Y. |
| Goal Diversity: \\ Breadth of goals | If a system X can meet a wider range of goals than system Y, system X is more //powerful// than system Y. |
| \\ Generality | Any system X that exceeds system Y on one or more of the above is said to be more //general// than system Y, but typically pushing for increased generality means pushing on all of the above dimensions. (A small sketch of such a comparison follows below the table.) |
| General intelligence... | ...means that less needs to be known up front when the system is created; the system can learn to figure things out and how to handle itself, in light of **LTE**. |
| And yet: \\ The hallmark of an AGI | A system that can handle novel or **brand-new** problems, and can be expected to attempt to address //open problems// sensibly. \\ The level of difficulty of the problems it solves would indicate its generality. |
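
A minimal, hypothetical sketch of how such a comparison might be scored, assuming each breadth dimension can simply be counted for a system; the dimension names and counting scheme are illustrative assumptions, not a course-defined metric.

<code python>
# Toy comparison of two systems along the four breadth dimensions above.
# Each dimension is a plain count (an assumed simplification).

DIMENSIONS = ("task_environments", "solutions", "constraints", "goals")

def more_general(x: dict, y: dict) -> bool:
    """Per the note above: X is more general than Y if it exceeds Y on one or more dimensions."""
    return any(x[d] > y[d] for d in DIMENSIONS)

# Usage: X handles more task-environments and goals than Y, and no fewer elsewhere.
system_x = {"task_environments": 12, "solutions": 5, "constraints": 4, "goals": 9}
system_y = {"task_environments": 7,  "solutions": 5, "constraints": 4, "goals": 6}
print(more_general(system_x, system_y))   # -> True
</code>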
\\
\\

2025(c)K. R. Thórisson