| {{ :public:t-720-atai:abstract-agent.png?250 }} |
| An abstraction of a controller: A controller has an input, <m>i_t</m>, selected from a task-environment; a current state <m>S</m>; at least one goal <m>G</m> (implicit or explicit - see table below); an output <m>o_t</m> in the form of atomic actions (selected from a set of possible atomic outputs); and a set of processes <m>P</m>. \\ The internals of a controller for the complex, adaptive control of a situated agent are referred to as a //cognitive architecture//. \\ Any practical controller is //embodied//, in that it interacts with its environment through interfaces whereby its internal computations are turned into //physical actions//: <m>i</m> enters via //measuring devices// ("sensors") and <m>o</m> exits the controller via //effectors//. |
\\
\\
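To make the abstraction concrete, here is a minimal sketch of a controller with state <m>S</m>, an explicit goal <m>G</m>, processes <m>P</m>, input <m>i_t</m> and output <m>o_t</m>. The class, the dictionary-based state, and the thermostat example are illustrative assumptions, not a reference design from the course:

```python
# A minimal sketch of the controller abstraction: input i_t, state S,
# goal G, processes P, output o_t. All names here are illustrative.

class Controller:
    def __init__(self, goal, processes):
        self.S = {}          # current state
        self.G = goal        # explicit goal: desired state values
        self.P = processes   # processes mapping (state, goal) -> action

    def step(self, i_t):
        """One control cycle: ingest input, update state, emit an action."""
        self.S.update(i_t)                  # i enters via "sensors"
        for process in self.P:
            o_t = process(self.S, self.G)   # a process may propose an action
            if o_t is not None:
                return o_t                  # o exits via "effectors"
        return None                         # no applicable action


# Example: a thermostat-like controller with a single process.
def heat_process(S, G):
    if S.get("temp", 0) < G["temp"]:
        return "heater_on"
    return "heater_off"

c = Controller(goal={"temp": 20}, processes=[heat_process])
print(c.step({"temp": 17}))   # heater_on
print(c.step({"temp": 22}))   # heater_off
```

Note that here the goal is an //explicit// data structure (`self.G`) that the controller could in principle inspect and report on; the distinction from implicit goals is taken up in the table below.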
====Goal====
| What it is | A state of a (subset of a) world. |
| What are its components | A set of //patterns//, expressed as variables with error constraints, that refer to the world. |
| What we can do with it | Define a task: task := goal + timeframe + initial world state. |
| Why it is important | Goals are needed for concrete tasks, and tasks are a key part of why we would want AI in the first place. For any complex task there will be identifiable sub-goals -- being able to talk about these in a compressed manner (e.g. using natural language) is important for learning and for monitoring task progress. |
| Historically speaking | Goals have been with the field of AI from the very beginning, but definitions vary. |
| What to be aware of | We can assign goals to an AI without the AI having an explicit data structure that we can say matches the goal directly (see [[//public:t-720-atai:atai-19:lecture_notes_w2?&#braitenberg_vehicles_online_code_example|Braitenberg Vehicles]] - below). These are called //**implicit goals**//. We may conjecture that if we want an AI to be able to talk about its goals, they will have to be -- in some sense -- //**explicit**//, that is, have a discrete representation in the AI's "mind" that can be manipulated, inspected, compressed / decompressed, and related to other data structures for various purposes. |
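An implicit goal can be sketched in the spirit of a Braitenberg vehicle with crossed excitatory wiring (this is an illustrative toy, not the linked online code example): an observer may ascribe the vehicle the goal "approach light", yet no goal appears anywhere in the program as a data structure the vehicle could inspect or report.

```python
# Illustrative sketch of an implicit goal: a Braitenberg-style vehicle
# with crossed excitatory wiring. It reliably turns toward light, so it
# *behaves as if* it has the goal "approach light" -- but the mapping
# below contains no goal representation at all, only wiring.

def braitenberg_step(left_light, right_light):
    """Map sensor readings directly to motor speeds (crossed wiring)."""
    left_motor = right_light    # right sensor drives left motor
    right_motor = left_light    # left sensor drives right motor
    return left_motor, right_motor

# Light is stronger on the right: the left motor spins faster,
# so the vehicle veers right -- toward the light.
print(braitenberg_step(0.2, 0.8))   # (0.8, 0.2)
```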
| |
| |
| |
| {{public:t-720-atai:simple-control-pipeline.png}} |
| A simple control pipeline consists of at least one sensor, at least one control process of some sort, and at least one end effector. The goal resides in the controller. Based on what the Sensor senses and sends to the Controller, the Controller produces (in some way, e.g. via computations) an action plan (if it's really simple it's a bit counterintuitive to call it a "plan", but it's technically a plan since it and its desired effect are already known before it has been performed), and sends it to an end-effector (Act) that executes it. \\ The Controller will keep a copy of what was sent to the end-effector (inner loop a := //efferent copy//) as well as monitor the effect of what the end-effector does to the outside world (outer loop := //afferent copy//). |
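The pipeline above can be sketched as follows. The error-based plan, the stored prediction, and all names are illustrative assumptions about one way such a loop could be written:

```python
# A minimal sketch of the sense -> control -> act pipeline, including the
# efferent copy (inner loop: record of the command sent) and afferent
# monitoring (outer loop: comparing predicted to sensed effect).

class Pipeline:
    def __init__(self, goal):
        self.goal = goal
        self.efferent_copy = None   # inner loop: copy of the last plan sent

    def control(self, sensed):
        """Produce an action plan from the sensed value and the goal."""
        error = self.goal - sensed
        plan = {"command": error, "predicted": sensed + error}
        self.efferent_copy = plan   # keep a copy before acting
        return plan["command"]      # sent to the end-effector (Act)

    def monitor(self, sensed_after):
        """Outer loop: compare the sensed effect against the prediction."""
        return sensed_after - self.efferent_copy["predicted"]

p = Pipeline(goal=10.0)
cmd = p.control(sensed=7.0)          # command = 3.0
print(p.monitor(sensed_after=9.5))   # -0.5: the action fell short
```

Keeping the efferent copy is what lets the controller tell the difference between "the world changed because I acted" and "the world changed on its own".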
| |
\\
| Agent complexity | Determined by <m>I times P times O</m>, not just <m>P</m>, <m>i</m>, or <m>o</m>. |
| Agent action complexity potential | Potential for <m>P</m> to control combinatorics of, or change, <m>o</m>, beyond initial <m>i</m> (at "birth"). |
| Agent input complexity potential | Potential for <m>P</m> to structure <m>i</m> in post-processing, and to extend <m>i</m>. |
| Agent <m>P</m> complexity potential | Potential for <m>P</m> to acquire, and effectively and efficiently store and access, past <m>i</m> (learning); potential for <m>P</m> to change <m>P</m>. |
| Agent intelligence potential | Potential for <m>P</m> to coherently coordinate all of the above to improve its own ability to use its resources and acquire more resources, in light of drives (top-level goals). |