|  Mechanical Controller  | Fuses control mechanism with measurement mechanism via mechanical coupling. Adaptation would require mechanical structure to change. Makes adaptation very difficult to implement.  |
|  Digital Controller  | Separates the stages of measurement, analysis, and control. Makes adaptive control feasible.  |
|  \\ Feedback  | For a variable <m>v</m>, information of its value at time <m>t_1</m> is transmitted back to the controller through a feedback mechanism as <m>v{prime}</m>, where \\ <m>v{prime}(t_2)~=~v(t_1), ~~t_2~>~t_1</m> \\ that is, there is a //latency// in the transmission, which is a function of the speed of transmission (encoding (measurement) time + transmission time + decoding (read-back) time).  |
|  Latency  | A measure of the size of the time lag between <m>v</m> and <m>v{prime}</m>, i.e. <m>t_2~-~t_1</m>.  |
|  Jitter  | The change in Latency over time. Second-order latency.  |
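Below is a minimal, illustrative sketch (not part of the course material) of the three concepts above: a feedback channel that delivers the value of <m>v</m> back to the controller with a transmission delay, the resulting latency, and jitter as the change in that latency over time. All names and numbers are hypothetical.
<code python>
# Minimal sketch, assuming a discrete-time loop and a fixed transmission delay.
from collections import deque

class FeedbackChannel:
    """Delivers v'(t) = v(t - delay): the fed-back value lags the real one."""
    def __init__(self, delay):
        self.delay = delay
        self.buffer = deque([0.0] * delay, maxlen=delay)

    def transmit(self, v_now):
        if self.delay == 0:
            return v_now
        delayed = self.buffer[0]       # value measured `delay` steps ago
        self.buffer.append(v_now)      # oldest sample is discarded automatically
        return delayed

channel = FeedbackChannel(delay=3)
prev_latency = None
for t, v in enumerate([0.0, 1.0, 2.0, 3.0, 4.0, 5.0]):   # v rises by 1 per step
    v_fb = channel.transmit(v)         # what the controller actually sees
    latency = v - v_fb                 # equals the delay (in steps) once the channel is full
    jitter = 0.0 if prev_latency is None else latency - prev_latency
    prev_latency = latency
    print(f"t={t}  v={v}  v'={v_fb}  latency={latency}  jitter={jitter}")
</code>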
|  {{public:t-720-atai:simple-control-pipeline.png}}  |
|  A simple control pipeline consists of at least one sensor, at least one control process of some sort, and at least one end effector. The goal resides in the controller. Based on what the Sensor senses and sends to the Controller, the Controller produces (in some way, e.g. via computations) an action plan and sends it to an end-effector (Act) that executes it. (If the plan is really simple it may seem counterintuitive to call it a "plan", but it is technically a plan since it and its desired effect are known before it is performed.)   |
|  The Controller may keep a copy of what was sent to the end-effector (inner loop **a** := //efferent copy//) as well as monitor the effect of what the end-effector does to the outside world.  |
|  Modern robots have sensors on all actuators; for instance, the Ghost MiniTaur 4-legged robot uses a technique called "torque estimation" that allows using its motors as sensors [[https://www.youtube.com/watch?v=_YrWX9ez3jM|video]].  |
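The following is a minimal sketch (illustrative only; all names and values are hypothetical) of the Sense-Control-Act pipeline above, including an efferent copy: the controller keeps a copy of the command it sent so it can later compare the intended action against the observed effect.
<code python>
# Sketch of a simple control pipeline: one sensor, one controller, one effector.
def sense(world):
    return world["temperature"]            # a single sensor reading

class Controller:
    def __init__(self, goal_temp):
        self.goal = goal_temp              # the goal resides in the controller
        self.efferent_copy = None          # inner loop "a": copy of the last command sent

    def decide(self, reading):
        command = "heat" if reading < self.goal else "idle"
        self.efferent_copy = command       # keep a copy of what was sent to the effector
        return command

def act(world, command):
    if command == "heat":
        world["temperature"] += 1.0        # the end-effector changes the outside world

world = {"temperature": 18.0}
ctrl = Controller(goal_temp=21.0)
for _ in range(5):
    cmd = ctrl.decide(sense(world))
    act(world, cmd)
    # outer loop: monitor the effect of the action on the world and compare
    # it with the efferent copy of the command that was issued
    print(cmd, ctrl.efferent_copy, world["temperature"])
</code>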
  
====What is an (Intelligent) Architecture?====
  
|  \\ What it is  | In CS: the organization of the software that implements a system.  \\ In AI: The total system that has direct and independent control of the behavior of an Agent via its sensors and effectors. \\ The //controller// view in AI means that the //architecture is the controller//  |
|  \\ Why it's important  | The system architecture determines what kind of information processing an agent controller can do, and what the system as a whole is capable of in a particular Task-Environment. \\ A controller view helps us remember that the system exists for a purpose: //To get something done.//    |
|  \\ Key concepts  | - Process types \\ - Process initiation \\ - Information storage \\ - Information flow  |
|  \\ Relevance in AI  | The term "system" not only includes the processing components, the functions these implement, their input and output, and relationships, but also temporal aspects of the system's behavior as a whole. \\ This is important in AI because any controller of an agent is supposed to control it in such a way that its behavior can be classified as being "intelligent" //over time//. \\ So what are the necessary and sufficient components of that behavior set?   |
|  \\ Rationality  | The "rationality hypothesis" models an intelligent agent as a "rational" agent: an agent that would always do the most "sensible" thing at any point in time. \\ The problem with the rationality hypothesis is that, given insufficient resources, including time, the concept of rationality doesn't hold up: it assumes you have time to weigh all alternatives (or, if you have limited time, that you can choose to evaluate the most relevant options and choose among those). But since such decisions are always about the future, and we cannot predict the future perfectly, for most decisions in which we get a choice of how to proceed there is no such thing as a rational choice.   |
|  \\ Satisficing  | Herbert Simon proposed the concept of "satisficing" to replace the concept of "pseudo-optimizing" when talking about intelligent action in a complex task-environment: Actions that meet a particular minimum requirement in light of a particular goal 'satisfy' and 'suffice' for the purposes of that goal. \\ We don't care (and don't have the time) to consider whether an action is "optimal" if it gets the job done in a reasonable way. //(A minimal sketch of this idea follows below this table.)//   |
|  Intelligence is in part a //systemic// phenomenon  | Thought experiment: Take any system we deem intelligent, e.g. a 10-year-old human, and isolate any of his/her skills and features. A machine that implements any //single// one of these is unlikely to seem worthy of being called "intelligent" (viz. chess programs), without further qualification (e.g. "a limited expert in a sub-field"). \\ //"The intelligence resides in the architecture."// - KRTh   |
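As referenced in the Satisficing row above, here is a minimal, illustrative sketch contrasting satisficing with optimizing; the options, scores and threshold are hypothetical.
<code python>
# Satisficing: return the first option that is "good enough" for the goal;
# optimizing: evaluate every option to find the best one.

def satisfice(options, evaluate, threshold):
    for option in options:
        if evaluate(option) >= threshold:     # meets the minimum requirement
            return option                     # stop searching: good enough
    return None                               # nothing met the requirement

def optimize(options, evaluate):
    return max(options, key=evaluate)         # must score every option

plans = ["plan_a", "plan_b", "plan_c", "plan_d"]
score = {"plan_a": 0.4, "plan_b": 0.7, "plan_c": 0.9, "plan_d": 0.8}.get

print(satisfice(plans, score, threshold=0.6))   # plan_b: first plan that clears the bar
print(optimize(plans, score))                    # plan_c: needed to evaluate every plan
</code>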
  
====Reactive Architectures: Levels of Complexity====
|  Super-simple  | Sensors connected directly to motors, e.g. Braitenberg Vehicles.  |
|  Basic  | Deterministic connections between components with small memory (see the sketch after this table).   |
|  Complex  | Grossly modular architecture (< 30 modules) with multiple relationships at more than one level of control detail (LoC). \\  Examples: Speech-controlled dialogue systems like Siri and Alexa.   |
|  Super-complex  | Large number of modules (> 30) at various sizes, each with multiple relationships to others, at more than one LoC. \\ Example: Subsumption architecture.   |
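A minimal sketch of a "Basic" reactive controller, as referenced in the table above: a deterministic mapping from sensor input to action plus a small amount of memory (the last bump direction). The class and action names are purely illustrative.
<code python>
class BumpAndTurn:
    """Deterministic reactive controller with a tiny memory of the last bump."""
    def __init__(self):
        self.last_bump = None                  # the controller's entire memory

    def step(self, bump_left, bump_right):
        if bump_left:
            self.last_bump = "left"
            return "turn_right"
        if bump_right:
            self.last_bump = "right"
            return "turn_left"
        # no bump: keep moving, biased slightly away from the last obstacle
        if self.last_bump == "right":
            return "forward_veer_left"
        if self.last_bump == "left":
            return "forward_veer_right"
        return "forward"

ctrl = BumpAndTurn()
print(ctrl.step(bump_left=False, bump_right=True))    # turn_left
print(ctrl.step(bump_left=False, bump_right=False))   # forward_veer_left
</code>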
====Example of Reactive Control: Braitenberg Vehicles====
| {{ :public:t-720-atai:love.png?100 }} |  {{ :public:t-720-atai:hate.png?100 }}  |  {{ :public:t-720-atai:curous.png?100 }}  |
|  Braitenberg vehicle example control scheme: "love". Steers towards (and crashes into) that which its sensors sense.  |  Braitenberg vehicle example control scheme: "hate". Avoids that which it senses.  |  Braitenberg vehicle example control scheme: "curious". Changing the behavior of "love" by avoiding crashing into things. \\ //(Thinner wires mean weaker signals.)//  |
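A minimal sketch of how such reactive wiring can be expressed in code. Each vehicle has a left and right light sensor and a left and right motor, and a "wire" is just a gain; the wiring below is one plausible way to produce the behaviors described above and may differ from the wiring shown in the course figures.
<code python>
# With a differential drive, the vehicle turns away from the faster wheel.

def love(s_left, s_right):
    # crossed excitatory wires: a stronger reading on one side speeds up the
    # opposite motor, so the vehicle turns toward (and runs into) the stimulus
    return s_right, s_left                       # (left_motor, right_motor)

def hate(s_left, s_right):
    # same-side excitatory wires: the vehicle turns away from the stimulus
    return s_left, s_right

def curious(s_left, s_right, gain=0.4, base=0.3):
    # "love" with weaker (thinner) wires plus a constant drive: it still heads
    # toward the stimulus, but more gently, making head-on crashes less likely
    return base + gain * s_right, base + gain * s_left

# Stimulus to the vehicle's left: s_left > s_right.
print(love(0.9, 0.2))    # right motor faster -> turns left, toward the light
print(hate(0.9, 0.2))    # left motor faster  -> turns right, away from it
</code>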
  
\\
  
====Goal====
|  \\ What it is  | <m>G_{top}~=~{lbrace G_{sub-1}, ~G_{sub-2}, ~... ~G_{sub-n}, G^-_{sub-1},~G^-_{sub-2},~...~G^-_{sub-m} rbrace}</m>, i.e. a set of zero or more subgoals, where \\ <m>G^-</m> are "negative goals" (states to be avoided = constraints) and \\ <m>G~=~{lbrace s_1,~s_2,~...~s_n, ~R rbrace}</m>, where <m>s_n</m> describes a state <m>s~subset~S</m> of (a subset of) a World and \\ <m>R</m> are relevant relations between these.  |
|  \\ Components of <m>s</m>  | <m>s={lbrace v_1, ~v_2 ~... ~v_n,~R  rbrace}</m>: A set of //patterns//, expressed as variables with error constraints, that refer to the world.   |
|  What we can do with it  | Define a task: task := goal + timeframe + initial world state  |
|  Why it is important  | Goals are needed for concrete tasks, and tasks are a key part of why we would want AI in the first place. For any complex task there will be identifiable sub-goals -- talking about these in a compressed manner (e.g. using natural language) is important for learning and for monitoring of task progress.   |
|  Historically speaking  | Goals have been with the field of AI from the very beginning, but definitions vary.   |
|  \\ What to be aware of  | We can assign goals to an AI without the AI having an explicit data structure that we can say matches the goal directly (see [[/public:t-720-atai:atai-20:agents_and_control#braitenberg_vehicle_examples|Braitenberg Vehicles]] - below). These are called //**implicit goals**//. We may conjecture that if we want an AI to be able to talk about its goals they will have to be -- in some sense -- //**explicit**//, that is, having a discrete representation in the AI's "mind" that can be manipulated, inspected, compressed / decompressed, and related to other data structures for various purposes.  |
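An illustrative sketch (not a definition from the course) of how the goal structure above could be represented explicitly as data: patterns are variables with error tolerances, negative subgoals act as constraints, and a task bundles a goal with a timeframe and an initial world state. The relations <m>R</m> are omitted for brevity, and all names and numbers are hypothetical.
<code python>
from dataclasses import dataclass, field

@dataclass
class Pattern:                 # one element v_i of a state s: a variable with an error constraint
    variable: str
    target: float
    tolerance: float

    def matches(self, world):
        return abs(world.get(self.variable, float("inf")) - self.target) <= self.tolerance

@dataclass
class Goal:
    patterns: list                           # the state description s = {v_1 ... v_n}
    negative: bool = False                   # True for G^-: a state to be avoided (a constraint)
    subgoals: list = field(default_factory=list)

    def satisfied(self, world):
        hit = all(p.matches(world) for p in self.patterns)
        return (not hit) if self.negative else hit

@dataclass
class Task:                                  # task := goal + timeframe + initial world state
    goal: Goal
    timeframe: float
    initial_state: dict

# Example: reach 21 +/- 0.5 degrees while avoiding the overheated band 30-34 degrees.
avoid_hot = Goal(patterns=[Pattern("temperature", 32.0, 2.0)], negative=True)
g_top = Goal(patterns=[Pattern("temperature", 21.0, 0.5)], subgoals=[avoid_hot])
task = Task(goal=g_top, timeframe=60.0, initial_state={"temperature": 18.0})
print(g_top.satisfied({"temperature": 21.2}), avoid_hot.satisfied({"temperature": 33.0}))
</code>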
  
  
====Inferred GMI Architectural Features====
|  \\ Large architecture  | From the above we can readily infer that if we want GMI, an architecture that is considerably more complex than systems being built in most AI labs today is likely unavoidable. In a complex architecture the issue of concurrency of processes must be addressed, a problem that has not yet been sufficiently resolved in present software and hardware. This scaling problem cannot be addressed by the usual “we’ll wait for Moore’s law to catch up” because the issue does not primarily revolve around //speed of execution// but around the //nature of the architectural principles of the system and their runtime operation//.  ||
|  Predictable Robustness in Novel Circumstances  | The system must have a robustness in light of all kinds of task-environment and embodiment perturbations, otherwise no reliable plans can be made, and thus no reliable execution of tasks can ever be reached, no matter how powerful the learning capacity. This robustness must be predictable a-priori at some level of abstraction -- for a wide range of novel circumstances it cannot be a complete surprise that the system "holds up". (If this were the case then the system itself would not be able to predict its chances of success in face of novel circumstances, thus eliminating an important part of the "G" from its "AGI" label.)   ||
|  \\ Graceful Degradation  | Part of the robustness requirement is that the system be constructed in a way as to minimize the potential for catastrophic (and unpredictable) failure. A programmer forgets to delimit a command in a compiled program and the whole application crashes; this kind of brittleness is not an option for cognitive systems operating in partially stochastic environments, where perturbations may come in any form at any time (and perfect prediction is impossible).   ||
|  Transversal Functions  | The system must have pan-architectural characteristics that enable it to operate consistently as a whole, to be highly adaptive (yet robust) in its own operation across the board, including metacognitive abilities. Some functions likely to be needed to achieve this include attention, learning, analogy-making capabilities, and self-inspection.   ||