[[http://cadia.ru.is/wiki/public:t-720-atai:atai-16:main|T-720-ATAI-2016 Main]]

=====T-720-ATAI-2016=====
====Lecture Notes, F-3 19.01.2016====

\\ \\ \\

====Agents====

| Minimal agent | sensory data -> decision -> action |
| Perception | A transducer that turns energy into an information representation. |
| Decision | Computation that uses perceptual data; chooses one alternative over (potentially) many for implementation. |
| Action | The potential of the Agent to influence its task-environment, e.g. to move its body, grasp an object, utter some words, etc. Decisions turned into Actions produce Behavior. |
| Learning agent | Uses memory to enhance decisions. |

\\ \\

| {{ :public:t-720-atai:abstract-agent.png?250 }} |
| An abstraction of an agent: an agent has input (i, selected from the task-environment), a current state (S), a goal (G, implicit or explicit), output (o) in the form of atomic actions (selected from the possible outputs), and a set of processes (P). |

\\ \\

====Complexity of Agents====

| Agent complexity | Determined, at a minimum, by i x P x o, not by P, i, or o alone. \\ Taking time into account, as we should, an adaptive agent is of course more complex than this minimum (in the case of human-level intelligence, //much// more complex). |
| Agent action complexity potential | The potential for P to control the combinatorics of o, or to change o, beyond the initial i (at "birth"). |
| Agent input complexity potential | The potential for P to structure i in post-processing, and to extend i. |
| Agent P complexity potential | The potential for P to acquire, and effectively and efficiently store and access, past i (learning); the potential for P to change P. |
| Agent intelligence potential | The potential for P to coherently coordinate all of the above to improve the agent's ability to use its resources, or acquire more resources, to achieve top-level goals. |

\\ \\

====Reactive Agent Architecture====

| Architecture | Largely fixed for the entire lifetime of the agent. |
| Super simple | Sensors connected directly to motors, e.g. Braitenberg vehicles (examples and a minimal code sketch below). |
| Simple | Deterministic connections between components with small memory, e.g. chess engines, the Roomba vacuum cleaner. |
| Complex | Grossly modular architecture (< 30 modules) with multiple relationships at more than one level of control (LoC), e.g. speech-controlled dialogue systems like Siri. |
| Super complex | Large number of modules (> 30) of various sizes, each with multiple relationships to others, at more than one LoC, e.g. subsumption architectures (examples and a minimal code sketch below). |

\\ \\

====Braitenberg Vehicle Examples====

| {{ :public:t-720-atai:love.png?150 }} |
| Braitenberg vehicle example control scheme: "love". Steers towards (and crashes into) that which its sensors sense. |

| {{ :public:t-720-atai:hate.png?150 }} |
| Braitenberg vehicle example control scheme: "hate". Avoids that which it senses. |

| {{ :public:t-720-atai:curous.png?150 }} |
| Braitenberg vehicle example control scheme: "curious". The thinner wires carry weighted-down signals, changing the behavior of "love" so that it avoids crashing into things. |

\\ \\

====Subsumption Examples====

| {{ :public:t-720-atai:subsumption-arch-module-1.gif?450 }} |
| Subsumption control architecture building block. |

| {{ :public:t-720-atai:subsumption-arch-2.jpg?450 }} |
| Example subsumption architecture for a robot. |

| {{ :public:t-720-atai:subsump-level0.png?450 }} |
| Subsumption architecture example, level 0. |

| {{ :public:t-720-atai:subsump-level1.png?450 }} |
| Subsumption architecture example, level 1. |

| {{ :public:t-720-atai:subsump-level2.png?450 }} |
| Subsumption architecture example, level 2. |

\\ \\

====Model-Acquiring Agents====

| Model | A model of something is an information structure that behaves in some ways like the thing being modeled. |
| A good model of X | ...allows What-If questions to be asked about X, with the answers correctly predicting what happens to X. |
| Model acquisition | The ability to create models of (observed) phenomena. |
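
To make the reactive schemes concrete, here is a minimal sketch (not from the course materials) of the three Braitenberg control schemes pictured in the Braitenberg Vehicle Examples above: each scheme is nothing more than a fixed weighting of the sensor signals onto the motor signals. The weight values, sensor readings and function names are illustrative assumptions; crossed wiring steering toward a stimulus and straight wiring steering away is the standard reading of "love" and "hate".

<code python>
# Minimal Braitenberg-style vehicles: perception -> decision -> action with no
# memory. Each control scheme is just a 2x2 weight matrix mapping the two
# light-sensor readings onto the two motor speeds (differential drive).
# All numeric values below are made up for illustration.

LOVE    = [[0.0, 1.0],   # left motor driven by right sensor  (crossed wires:
           [1.0, 0.0]]   # right motor driven by left sensor   steer toward)
HATE    = [[1.0, 0.0],   # left motor driven by left sensor   (straight wires:
           [0.0, 1.0]]   # right motor driven by right sensor  steer away)
CURIOUS = [[0.0, 0.4],   # "love" with weighted-down ("thinner") wires:
           [0.4, 0.0]]   # still turns toward things, but more gently

def step(weights, sensor_left, sensor_right):
    """One sense-act cycle: sensor readings in [0,1] -> motor speeds."""
    motor_left  = weights[0][0] * sensor_left + weights[0][1] * sensor_right
    motor_right = weights[1][0] * sensor_left + weights[1][1] * sensor_right
    return motor_left, motor_right

if __name__ == "__main__":
    reading = (0.8, 0.3)   # a stimulus slightly to the vehicle's left
    for name, w in (("love", LOVE), ("hate", HATE), ("curious", CURIOUS)):
        left, right = step(w, *reading)
        turn = "toward the stimulus" if right > left else "away from it"
        print(f"{name:8s} L={left:.2f} R={right:.2f} -> turns {turn}")
</code>

Note that there is no state and no learning here: P is a fixed i-to-o mapping, which is exactly what makes such vehicles purely reactive.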
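
The subsumption idea pictured in the Subsumption Examples can be read, in much-simplified form, as a stack of independent behavior layers in which a higher, later-added layer can suppress (subsume) the output of the layers below it. The sketch below only illustrates that arbitration principle; the layer names, trigger conditions and command strings are placeholders, not the behaviors shown in the figures.

<code python>
# A much-simplified sketch of subsumption-style arbitration: behavior layers
# run independently; the highest active layer suppresses (subsumes) the output
# of every layer below it. Layer contents are placeholders for illustration.

def level0_cruise(sensors):
    """Level 0: lowest layer, always active; just keeps the robot moving."""
    return "drive-forward"

def level1_avoid(sensors):
    """Level 1: fires only when an obstacle is close, overriding level 0."""
    if sensors.get("obstacle_distance", 1.0) < 0.2:
        return "turn-away"
    return None   # inactive: let lower layers act

def arbitrate(layers, sensors):
    """Layers are ordered from level 0 upward; the highest layer that
    produces a command wins, suppressing everything beneath it."""
    command = "stop"
    for behavior in layers:
        proposal = behavior(sensors)
        if proposal is not None:
            command = proposal
    return command

if __name__ == "__main__":
    layers = [level0_cruise, level1_avoid]
    print(arbitrate(layers, {"obstacle_distance": 0.9}))   # -> drive-forward
    print(arbitrate(layers, {"obstacle_distance": 0.1}))   # -> turn-away
</code>

Each layer is itself a simple reactive mapping from sensors to a command, so the architecture as a whole remains reactive; it is simply grossly modular, with more than one level of control.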
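
Finally, a minimal sketch of the model idea in the table above: an information structure that behaves in some ways like the thing being modeled, so that What-If questions can be answered by running the model forward instead of acting in the world. The modeled thing here, a ball moving vertically under gravity, and all of its parameters are illustrative assumptions; model //acquisition// (learning such a structure from observation) is not shown.

<code python>
# A minimal sketch of the "model" idea: an information structure that behaves
# in some ways like the thing being modeled, so What-If questions about the
# thing can be answered by running the model forward. The modeled thing here
# (a ball moving vertically under gravity) and its parameters are illustrative.

GRAVITY = -9.8   # m/s^2; part of the model, i.e. a belief about the world

def model_step(state, dt=0.1):
    """Predict the next (height, velocity) from the current one."""
    height, velocity = state
    return (height + velocity * dt, velocity + GRAVITY * dt)

def what_if(state, steps, dt=0.1):
    """What happens to the ball if we simply wait for `steps` time steps?"""
    for _ in range(steps):
        state = model_step(state, dt)
    return state

if __name__ == "__main__":
    # What if the ball starts 2 m up, thrown upward at 3 m/s, and we wait 1 s?
    height, velocity = what_if((2.0, 3.0), steps=10)
    print(f"predicted: height={height:.2f} m, velocity={velocity:.2f} m/s")
</code>

A good model is one whose what-if answers match what subsequently happens to the real ball.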

\\ \\

====Reinforcement Learning====

Download Jordi's slides {{:public:t-720-atai:atai-16:20160119_reinforcement_learning.pdf|here}}, and see reading and study materials [[http://cadia.ru.is/wiki/public:t-720-atai:atai-16:readings#reinforcement_learning|here]].

\\
2016(c)K. R. Thórisson
\\ \\
//EOF//