\\
  
====== INTELLIGENCE: THE PHENOMENON ======

//Key concepts: Goals, Behaviors, Generality, Cognition, Perception, Classification//

\\

====REQUIRED READINGS====
  
  * [[https://proceedings.mlr.press/v192/thorisson22b/thorisson22b.pdf | The Future of AI Research: Ten Defeasible 'Axioms of Intelligence']] by K.R.Thórisson and H. Minsky
  * [[https://pdfs.semanticscholar.org/4688/e564f16662838938a2d729685baab068751b.pdf?_ga=2.196493103.2038245345.1566469635-1488191987.1566469635|Cognitive Architecture Requirements for Achieving AGI]] by J.E. Laird et al.
  
Related readings: [[http://cadia.ru.is/wiki/public:t-719-nxai:nxai-25:readings#intelligencethe_phenomenon|Intelligence: The Phenomenon]]
\\
  
\\
\\
  
====Historical Concepts====
|  \\ Create a Working Definition  | It's called a "//working// definition" because it is supposed to be subject to scrutiny and revision as soon as possible. \\ A good working definition avoids the problem of entrenchment, which, in the worst case, may result in a whole field being re-defined around something that was supposed to be temporary. \\ One great example of that: The Turing Test.     |
  
\\


====Goal====
|  What it is  | A goal is a set of **steady states** to be achieved. \\ A goal must be described in an abstraction language; its description must contain sufficient information to verify whether that goal has been achieved or not. \\ A well-defined goal can be assigned to an agent/system/process to be achieved. \\ An //ill-defined goal// is a goal description with missing elements (i.e., some part of its description lacks information needed by the process/system/agent that is supposed to achieve it).   |
|  \\ How it may be expressed  | G<sup>top</sup> = [ G<sub>1</sub>, G<sub>2</sub>, ... G<sub>n</sub>, G<sup>-</sup><sub>1</sub>, G<sup>-</sup><sub>2</sub>, ... G<sup>-</sup><sub>m</sub> ], i.e. a set of zero or more subgoals, where \\ G<sup>-</sup> (superscript minus) are "negative goals" (states to be avoided = constraints) and \\ G = [ s<sub>1</sub>, s<sub>2</sub>, ... s<sub>n</sub>, R ], where each s<sub>i</sub> describes a state s ⊆ S of a (subset of a) World and \\ R are relevant relations between these.  |
|  Components of s  | s = [ v<sub>1</sub>, v<sub>2</sub>, ... v<sub>n</sub>, R ]: A set of //patterns//, expressed as variables with error/precision constraints, that refer to the world.   |
|  What we can do with it  | Define a task: **task := goal + timeframe + initial world state**  |
|  Why it is important  | Goals are needed for concrete tasks, and tasks are a key part of why we would want AI in the first place. For any complex task there will be identifiable sub-goals -- talking about these in compressed form (e.g. using natural language) is important for learning and for monitoring task progress.   |
|  Historically speaking  | Goals have been part of the field of AI from the very beginning, but definitions vary.   |
|  \\ What to be aware of  | We can assign goals to an AI without the AI having an explicit data structure that directly matches the goal (see e.g. [[/public:t-720-atai:atai-20:agents_and_control#braitenberg_vehicle_examples|Braitenberg Vehicles]]). These are called //**implicit goals**//. We may conjecture that if we want an AI to be able to talk about its goals they will have to be -- in some sense -- //**explicit**//, that is, having a discrete representation in the AI's mind (information structures) that can be manipulated, inspected, compressed / decompressed, and related to other data structures for various purposes, in isolation (without affecting, in any unnecessary, unwanted, or unforeseen way, other (irrelevant) information structures).  |
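The definitions above can be made concrete in a minimal sketch. This is an illustrative Python rendering, not part of the source: the class and field names (''Pattern'', ''Goal'', ''Task'', ''achieved'') are assumptions. It shows a goal as positive patterns (states to achieve) plus negative goals (states to avoid), each a variable with an error tolerance, and a task as goal + timeframe + initial world state. A goal is well-defined here precisely because its description suffices to verify achievement.

```python
from dataclasses import dataclass

# Illustrative sketch only: all names below are hypothetical,
# chosen to mirror the table's definitions.

@dataclass(frozen=True)
class Pattern:
    """A pattern s_i: a world variable with an error/precision constraint."""
    variable: str     # name of a sensed world variable
    target: float     # desired value
    tolerance: float  # acceptable error around the target

    def matches(self, world: dict) -> bool:
        value = world.get(self.variable)
        return value is not None and abs(value - self.target) <= self.tolerance

@dataclass
class Goal:
    """G: patterns to achieve, plus negative goals G^- (states to avoid)."""
    positive: list  # steady states to be achieved
    negative: list  # constraints: states to be avoided

    def achieved(self, world: dict) -> bool:
        # Well-defined: the description suffices to verify achievement.
        return (all(p.matches(world) for p in self.positive)
                and not any(n.matches(world) for n in self.negative))

@dataclass
class Task:
    """task := goal + timeframe + initial world state."""
    goal: Goal
    timeframe: float     # e.g. seconds allowed
    initial_state: dict  # world state when the task is assigned

# Example: reach temperature 20±0.5 while pressure never reaches 2.0±0.1.
goal = Goal(positive=[Pattern("temperature", 20.0, 0.5)],
            negative=[Pattern("pressure", 2.0, 0.1)])
task = Task(goal, timeframe=60.0,
            initial_state={"temperature": 15.0, "pressure": 1.0})

print(goal.achieved({"temperature": 20.2, "pressure": 1.0}))   # True
print(goal.achieved({"temperature": 20.2, "pressure": 2.05}))  # False
```

Note how an //ill-defined// goal falls out of the same sketch: if a pattern's tolerance or variable were missing, ''achieved()'' could not be evaluated.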


\\
\\

====What Do You Mean by "Generality"?====
|  Flexibility: \\ Breadth of task-environments  | Enumeration of variety. \\ (By 'variety' we mean the discernibly different states that can be sensed and that make a difference to a controller.) \\ If a system X can operate in more diverse task-environments than system Y, system X is more //flexible// than system Y.     |
|  Solution Diversity: \\ Breadth of solutions  | \\ If a system X can reliably generate a larger variation of acceptable solutions to problems than system Y, system X is more //powerful// than system Y.  |
|  Constraint Diversity: \\ Breadth of constraints on solutions  | \\ If a system X can reliably produce acceptable solutions under a higher number of solution constraints than system Y, system X is more //powerful// than system Y.  |
|  Goal Diversity: \\ Breadth of goals  | If a system X can meet a wider range of goals than system Y, system X is more //powerful// than system Y.  |
|  \\ Generality  | Any system X that exceeds system Y on one or more of the above dimensions is said to be more //general// than system Y, but typically pushing for increased generality means pushing on all of the above dimensions.   |
|  General intelligence...  | ...means less needs to be known up front when the system is created; the system can learn to figure things out and how to handle itself, in light of **LTE**.   |
|  And yet: \\ The hallmark of an AGI  | A system that can handle novel or **brand-new** problems, and can be expected to attempt to address //open problems// sensibly. \\ The level of difficulty of the problems it solves would indicate its generality.  |
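The table's comparison rule can be sketched in a few lines. This is an illustrative Python sketch, not from the source: the dimension names and the integer "breadth" scores are assumptions (measuring real variety is, of course, much harder than counting).

```python
# Illustrative sketch: the four generality dimensions from the table,
# each crudely scored as a breadth count. All names/values are hypothetical.
DIMENSIONS = ("task_environments", "solutions", "constraints", "goals")

def more_general(x: dict, y: dict) -> bool:
    """Per the table: X is more general than Y if it exceeds Y on one or
    more dimensions. (Typically one pushes on all dimensions at once.)"""
    return any(x[d] > y[d] for d in DIMENSIONS)

system_x = {"task_environments": 12, "solutions": 8, "constraints": 5, "goals": 10}
system_y = {"task_environments": 9,  "solutions": 8, "constraints": 5, "goals": 7}

print(more_general(system_x, system_y))  # True: X exceeds Y on two dimensions
print(more_general(system_y, system_x))  # False: Y exceeds X on none
```

Note that by this informal definition two systems can each be "more general" than the other on different dimensions; the table's closing rows are what break such ties, by looking at how brand-new and open problems are handled.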

\\
\\
  
\\
  
\\
\\
  
2025(c)K. R. Thórisson