=====T-720-ATAI-2018=====
====Lecture Notes, W2: Agents & Control====
2020 (c) Kristinn R. Thórisson
\\
\\
| Why it is important | Goals are needed for concrete tasks, and tasks are a key part of why we would want AI in the first place. For any complex task there will be identifiable sub-goals -- talking about these in compressed manners (e.g. using natural language) is important for learning and for monitoring of task progress. |
| Historically speaking | Goals have been with the field of AI from the very beginning, but definitions vary. |
| What to be aware of | We can assign goals to an AI without the AI having an explicit data structure that we can say matches the goal directly (see [[public:t_720_atai:atai-18:lecture_notes_w2#Example: Braitenberg Vehicles|Braitenberg Vehicles]] below). These are called //**implicit goals**//. We may conjecture that if we want an AI to be able to talk about its goals, they will have to be -- in some sense -- //**explicit**//, that is, have a discrete representation in the AI's "mind" that can be manipulated, inspected, compressed / decompressed, and related to other data structures for various purposes. |
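As an illustration of what an //explicit// goal might look like, here is a minimal sketch (all names and the task are invented for this example, not taken from the notes): a goal as a discrete structure that the agent can inspect, decompose into sub-goals, and report progress on -- exactly the operations an implicit, hard-wired goal does not support.

```python
# Hedged sketch (names invented): one way an *explicit* goal could look -
# a discrete, inspectable structure, as opposed to an implicit goal
# that exists only in the agent's wiring.

class Goal:
    def __init__(self, description, predicate, subgoals=None):
        self.description = description      # compressible, e.g. into language
        self.predicate = predicate          # testable against a world state
        self.subgoals = subgoals or []      # complex tasks decompose

    def achieved(self, state):
        return self.predicate(state) and all(
            g.achieved(state) for g in self.subgoals)

    def report(self, state):
        """Talk about goal progress - only possible because the goal
        has an explicit representation that can be inspected."""
        status = "done" if self.achieved(state) else "pending"
        return f"{self.description}: {status}"

make_tea = Goal("make tea", lambda s: s["tea_ready"], subgoals=[
    Goal("boil water", lambda s: s["water_hot"]),
    Goal("find cup",   lambda s: s["cup_found"]),
])

state = {"tea_ready": False, "water_hot": True, "cup_found": True}
print(make_tea.report(state))               # -> make tea: pending
print(make_tea.subgoals[0].report(state))   # -> boil water: done
```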
| |
| |
| |
| |
| Minimal agent | Single goal; inability to create sub-goals; sensory data -> decision -> action. |
| Perception | Transducer that turns energy into an information representation. |
| Decision | Computation that uses perceptual data; chooses one alternative over (potentially) many for implementation. |
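The minimal agent's sensory data -> decision -> action pipeline can be sketched in a few lines (the threshold, action names, and "energy" encoding are illustrative assumptions, not part of the notes):

```python
def perceive(energy: float) -> dict:
    """Perception: transduce raw 'energy' into an information representation."""
    return {"intensity": energy}

def decide(percept: dict) -> str:
    """Decision: choose one alternative over (potentially) many."""
    return "approach" if percept["intensity"] > 0.5 else "avoid"

def act(action: str) -> str:
    """Action: implement the chosen alternative."""
    return f"executing: {action}"

# One pass through the minimal agent: sensory data -> decision -> action
print(act(decide(perceive(0.7))))   # -> executing: approach
```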
\\
====Reactive Agent Architecture====
| Architecture | Largely fixed for the entire lifetime of the agent. |
| Why "reactive"? | Named "reactive" because there is no prediction: the agent simply reacts to stimuli (sensory data) when/after they happen. |
| Super simple | Sensors connected directly to motors, e.g. Braitenberg Vehicles. |
| Simple | Deterministic connections between components with small memory, e.g. chess engines, the Roomba vacuum cleaner. |
| Complex | Grossly modular architecture (< 30 modules) with multiple relationships at more than one level of control detail (LoC), e.g. speech-controlled dialogue systems like Siri and Alexa. |
| Super complex | Large number of modules (> 30) of various sizes, each with multiple relationships to others, at more than one LoC, e.g. some robots using the subsumption architecture. |
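A controller in the "simple" class above -- deterministic stimulus-to-response rules with a little fixed structure, and no prediction -- might be sketched like this (the sensor names and actions are made up for illustration; this is not the Roomba's actual control program):

```python
# Reactive control: the agent only responds to the *current* stimulus,
# after it happens - there is no model of what will happen next.
# First matching rule wins; the last rule is the default.
RULES = [
    (lambda s: s["bump"],       "reverse_and_turn"),
    (lambda s: s["dirt"] > 0.5, "spot_clean"),
    (lambda s: True,            "drive_forward"),
]

def reactive_step(sensors: dict) -> str:
    """Map the current sensory state directly to an action."""
    for condition, action in RULES:
        if condition(sensors):
            return action

print(reactive_step({"bump": True, "dirt": 0.0}))   # -> reverse_and_turn
```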
| |
\\
\\
| |
====Example: Braitenberg Vehicles====
| {{ :public:t-720-atai:love.png?150 }} |
| Braitenberg vehicle example control scheme: "love". Steers towards (and crashes into) that which its sensors sense. |
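The "love" scheme in the figure can be sketched as a sensor-to-motor wiring. One wiring that produces the steer-towards behavior described in the caption is crossed excitatory connections (each sensor drives the opposite-side motor); that wiring choice is my assumption for illustration. Note that the vehicle's "goal" of reaching the stimulus is entirely //implicit// -- it exists only in the wiring:

```python
import math

def sensor_reading(sensor_pos, source_pos):
    """Signal strength falls off with distance to the stimulus."""
    d = math.dist(sensor_pos, source_pos)
    return 1.0 / (1.0 + d)

def love_controller(left_signal: float, right_signal: float):
    """Crossed connections: left sensor -> right motor, and vice versa,
    so the wheel far from the stimulus spins faster and the vehicle
    turns toward (and eventually into) the stimulus."""
    left_motor = right_signal
    right_motor = left_signal
    return left_motor, right_motor

# Stimulus to the vehicle's left: the left sensor reads stronger...
lm, rm = love_controller(left_signal=0.9, right_signal=0.4)
# ...so the right wheel turns faster and the vehicle veers left, toward it.
print(lm < rm)   # -> True
```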