=====T-720-ATAI-2018=====
====Lecture Notes, W2: Agents & Control====
2020 (c) Kristinn R. Thórisson
\\
\\
  
|  {{ :public:t-720-atai:abstract-agent.png?250 }}  |
|  An abstraction of a controller: A controller has an input, <m>i_t</m>, selected from a task-environment, a current state <m>S</m>, at least one goal <m>G</m> (implicit or explicit - see table below), an output <m>o_t</m> in the form of atomic actions (selected from a set of possible atomic outputs), and a set of processes <m>P</m>. \\ The internals of a controller for the complex, adaptive control of a situated agent are referred to as its //cognitive architecture//. \\ Any practical controller is //embodied//, in that it interacts with its environment through interfaces whereby its internal computations are turned into //physical actions//: <m>i</m> enters via //measuring devices// ("sensors") and <m>o</m> exits the controller via //effectors//.  |
  
\\
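The abstraction above can be put into a minimal code sketch. This is not any particular architecture - the class, method, and parameter names below are assumptions made only for illustration:

<code python>
# Minimal sketch of the controller abstraction above (illustrative names only):
# input i_t, current state S, goals G, processes P, output o_t (an atomic action).

class Controller:
    def __init__(self, goals, processes):
        self.S = {}           # current state S
        self.G = goals        # at least one goal G (explicit in this sketch)
        self.P = processes    # processes P: each maps (S, G) to an atomic action or None

    def step(self, i_t):
        """One control cycle: i_t arrives via sensors, o_t leaves via effectors."""
        self.S.update(i_t)            # fold the new measurement into the current state
        for process in self.P:        # let each process propose an atomic action
            o_t = process(self.S, self.G)
            if o_t is not None:
                return o_t            # o_t is handed to the effectors
        return None                   # no applicable process: no action this cycle
</code>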
|  Why it is important  | Goals are needed for concrete tasks, and tasks are a key part of why we would want AI in the first place. For any complex task there will be identifiable sub-goals -- talking about these in a compressed manner (e.g. using natural language) is important for learning and for monitoring of task progress.   |
|  Historically speaking  | Goals have been with the field of AI from the very beginning, but definitions vary.   |
|  What to be aware of  | We can assign goals to an AI without the AI having an explicit data structure that we can say matches the goal directly (see [[public:t_720_atai:atai-18:lecture_notes_w2#Example: Braitenberg Vehicles|Braitenberg Vehicles]] below). These are called //**implicit goals**//. We may conjecture that if we want an AI to be able to talk about its goals they will have to be -- in some sense -- //**explicit**//, that is, have a discrete representation in the AI's "mind" that can be manipulated, inspected, compressed / decompressed, and related to other data structures for various purposes.  |
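The distinction can be made concrete with a small sketch (Python; the thermostat example and all names are hypothetical, chosen only to illustrate the point):

<code python>
# Implicit goal: "keep the room at ~20 degrees" exists only in the wiring of the
# rule - there is no goal structure the agent could inspect, report on, or revise.
def implicit_thermostat(temperature):
    return "heater_on" if temperature < 20.0 else "heater_off"

# Explicit goal: the same goal as a discrete data structure that can be
# manipulated, inspected, compressed, and related to other goals.
explicit_goal = {"variable": "temperature", "target": 20.0, "tolerance": 0.5}

def explicit_thermostat(temperature, goal):
    if temperature < goal["target"] - goal["tolerance"]:
        return "heater_on"
    return "heater_off"
</code>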
  
  
  
  
|  Minimal agent  | A single goal; no ability to create sub-goals; sensory data -> decision -> action (see the sketch below the table).  |
|  Perception  | Transducer that turns energy into an information representation.  |
|  Decision  | Computation that uses perceptual data; chooses one alternative over (potentially) many for implementation.  |
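A minimal agent of this kind reduces to a fixed sense-decide-act loop. The sketch below assumes a single light-following goal purely for illustration:

<code python>
# Minimal agent sketch: sensory data -> decision -> action, with a single fixed
# goal and no ability to create sub-goals. All names are illustrative.

def perceive(raw_signal):
    """Perception: transduce a raw measurement into an information representation."""
    return {"light_level": raw_signal}

def decide(percept):
    """Decision: choose one alternative over (potentially) many."""
    return "move_forward" if percept["light_level"] > 0.5 else "turn_left"

def act(action):
    """Action: hand the chosen alternative to the effectors."""
    print("executing:", action)

def minimal_agent_step(raw_signal):
    act(decide(perceive(raw_signal)))
</code>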
====Complexity of Agents====
|  Agent complexity  | Determined by <m>I X P X O</m>, not just <m>P</m>, <m>i</m>, or <m>o</m> (see the toy illustration below).  |
|  Agent action complexity potential  | Potential for <m>P</m> to control the combinatorics of, or change, <m>o</m>, beyond initial <m>i</m> (at "birth").   |
|  Agent input complexity potential  | Potential for <m>P</m> to structure <m>i</m> in post-processing, and to extend <m>i</m>.  |
|  Agent <m>P</m> complexity potential  | Potential for <m>P</m> to acquire, and effectively and efficiently store and access, past <m>i</m> (learning); potential for <m>P</m> to change <m>P</m>.  |
|  Agent intelligence potential  | Potential for <m>P</m> to coherently coordinate all of the above to improve the agent's ability to use its resources, or acquire more resources, in light of its drives (top-level goals).  |
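A toy illustration of the first row (the sizes are made up; this is just one way to read <m>I X P X O</m>, as a product of set sizes):

<code python>
# Toy illustration (made-up sizes): complexity scales with the combination
# |I| x |P| x |O|, not with any single factor alone.
num_inputs = 10     # |I|: distinguishable inputs
num_processes = 5   # |P|: internal processes
num_outputs = 4     # |O|: atomic actions

print(num_inputs * num_processes * num_outputs)  # 200 (i, p, o) combinations
print(num_inputs + num_processes + num_outputs)  # 19 - much smaller than the product
</code>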
  
  
\\
\\
====Reactive Agent Architecture====
|  Architecture  | Largely fixed for the entire lifetime of the agent.  |
|  Why "Reactive"?  | Named "reactive" because there is no prediction - the agent simply reacts to stimuli (sensory data) when/after they occur.  |
|  Super simple  | Sensors connected directly to motors, e.g. Braitenberg Vehicles (see the example and sketch below).  |
|  Simple  | Deterministic connections between components with small memory, e.g. chess engines, Roomba vacuum cleaner.  |
|  Complex  | Grossly modular architecture (< 30 modules) with multiple relationships at more than one level of control detail (LoC), e.g. speech-controlled dialogue systems like Siri and Alexa.   |
|  Super complex  | Large number of modules (> 30) of various sizes, each with multiple relationships to others, at more than one LoC, e.g. some robots using the subsumption architecture.  |
  
\\
\\
  
====Example: Braitenberg Vehicles====
| {{ :public:t-720-atai:love.png?150 }} |
|  Braitenberg vehicle example control scheme: "love". Steers towards (and crashes into) that which its sensors sense.  |
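One wiring that produces the behaviour described in the caption - steering towards whatever is sensed - is sketched below. The crossed connections, gain, and function names are assumptions for illustration, not Braitenberg's exact specification:

<code python>
# Braitenberg-style "love" vehicle sketch: two sensors wired (crossed) to two
# motors, no memory, no prediction - a purely reactive controller.
# Gains and names are illustrative assumptions.

def love_vehicle_step(left_sensor, right_sensor, gain=1.0):
    """Map sensor readings directly to motor speeds."""
    left_motor = gain * right_sensor   # stimulus stronger on the right -> left wheel speeds up
    right_motor = gain * left_sensor   # stimulus stronger on the left  -> right wheel speeds up
    return left_motor, right_motor

# Stimulus stronger on the left: the right wheel spins faster, so the vehicle
# turns left - towards the stimulus (and eventually into it).
print(love_vehicle_step(left_sensor=0.9, right_sensor=0.2))   # (0.2, 0.9)
</code>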