====Goal====
| \\ What it is | <m>G_{top}~=~{lbrace G_{sub-1}, ~G_{sub-2}, ~... ~G_{sub-n}, G^-_{sub-1},~G^-_{sub-2},~...~G^-_{sub-m} rbrace}</m>, i.e. a set of zero or more subgoals, where \\ <m>G^-</m> are "negative goals" (states to be avoided, i.e. constraints) and \\ <m>G~=~{lbrace s_1,~s_2,~...~s_n, ~R rbrace}</m>, where each <m>s_i</m> describes a state <m>s~subset~S</m> of (a subset of) a World and \\ <m>R</m> are the relevant relations between these. |
| \\ Components of <m>s</m> | <m>s={lbrace v_1, ~v_2 ~... ~v_n,~R rbrace}</m>: A set of //patterns//, expressed as variables with error constraints, that refer to the world. |
| What we can do with it | Define a task: task := goal + timeframe + initial world state |
| Why it is important | Goals are needed for concrete tasks, and tasks are a key part of why we would want AI in the first place. For any complex task there will be identifiable sub-goals -- talking about these in a compressed manner (e.g. using natural language) is important for learning and for monitoring task progress. |
| Historically speaking | Goals have been with the field of AI from the very beginning, but definitions vary. |
| \\ What to be aware of | We can assign goals to an AI without the AI having an explicit data structure that we can say matches the goal directly (see [[/public:t-720-atai:atai-20:agents_and_control#braitenberg_vehicle_examples|Braitenberg Vehicles]] - below). These are called //**implicit goals**//. We may conjecture that if we want an AI to be able to talk about its goals they will have to be -- in some sense -- //**explicit**//, that is, have a discrete representation in the AI's "mind" that can be manipulated, inspected, compressed / decompressed, and related to other data structures for various purposes. A minimal code sketch of such an explicit goal representation follows below this table. |
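Below is a minimal, illustrative sketch of how such an explicit goal/task representation could be encoded. It is not drawn from any particular architecture; all names (''Pattern'', ''Goal'', ''Task'', ''tolerance'', ''timeframe_s'', etc.) are assumptions made only for this example.

<code python>
# Minimal sketch of the goal / task structure defined in the table above.
# All names (Pattern, Goal, Task, tolerance, timeframe_s, ...) are
# illustrative assumptions, not part of any particular AI framework.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Pattern:
    """A variable referring to the world, with an error constraint (one of the v_i)."""
    variable: str     # e.g. "gripper.x"
    target: float     # desired value
    tolerance: float  # acceptable error around the target

@dataclass
class Goal:
    """A (sub-)state to be achieved, or avoided if negative=True (a G^-)."""
    patterns: List[Pattern] = field(default_factory=list)    # the s_i
    relations: List[str] = field(default_factory=list)       # R: relations between patterns
    negative: bool = False                                    # True => constraint (G^-)
    subgoals: List["Goal"] = field(default_factory=list)     # G_top as a set of subgoals

@dataclass
class Task:
    """task := goal + timeframe + initial world state."""
    goal: Goal
    timeframe_s: float            # time allowed for completion, in seconds
    initial_state: List[Pattern]

# Example: reach x = 1.0 +/- 0.05 within 10 s, while never letting x reach 2.0.
reach     = Goal(patterns=[Pattern("gripper.x", 1.0, 0.05)])
stay_safe = Goal(patterns=[Pattern("gripper.x", 2.0, 0.0)], negative=True)
top       = Goal(subgoals=[reach, stay_safe])
task      = Task(goal=top, timeframe_s=10.0,
                 initial_state=[Pattern("gripper.x", 0.0, 0.0)])
</code>

Because the goals are explicit data structures, they can be inspected, compared and communicated -- which is exactly what the implicit-goal systems (e.g. Braitenberg Vehicles) cannot do.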
====Inferred GMI Architectural Features ====
| \\ Large architecture | From the above we can readily infer that if we want GMI, an architecture that is considerably more complex than systems being built in most AI labs today is likely unavoidable. In a complex architecture the issue of concurrency of processes must be addressed, a problem that has not yet been sufficiently resolved in present software and hardware. This scaling problem cannot be addressed by the usual “we’ll wait for Moore’s law to catch up” because the issue does not primarily revolve around //speed of execution// but around the //nature of the architectural principles of the system and their runtime operation//. ||
| \\ Predictable Robustness in Novel Circumstances | The system must be robust in the face of all kinds of task-environment and embodiment perturbations; otherwise no reliable plans can be made, and thus no reliable execution of tasks can ever be achieved, no matter how powerful the learning capacity. This robustness must be predictable a priori at some level of abstraction -- for a wide range of novel circumstances it cannot be a complete surprise that the system "holds up". (If it were, the system itself could not predict its chances of success in the face of novel circumstances, eliminating an important part of the "G" from its "AGI" label.) ||
| \\ Graceful Degradation | Part of the robustness requirement is that the system be constructed in a way that minimizes the potential for catastrophic (and unpredictable) failure. A programmer forgets to delimit a command in a compiled program and the whole application crashes; this kind of brittleness is not an option for cognitive systems operating in partially stochastic environments, where perturbations may come in any form at any time (and perfect prediction is impossible). A minimal code sketch contrasting brittle and gracefully degrading behavior follows below this table. ||
| Transversal Functions | The system must have pan-architectural characteristics that enable it to operate consistently as a whole, to be highly adaptive (yet robust) in its own operation across the board, including metacognitive abilities. Some functions likely to be needed to achieve this include attention, learning, analogy-making capabilities, and self-inspection. ||
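To make the graceful-degradation point concrete, the sketch below contrasts a brittle update that crashes on a malformed sensor reading with one that degrades by falling back to its last good estimate. It is only an illustration under assumed names (''read_sensor'', ''last_good_estimate'' and the confidence bookkeeping are hypothetical), not a prescription for how a GMI system would actually be built.

<code python>
# Illustrative contrast between brittle failure and graceful degradation
# when a sensor reading is perturbed. read_sensor, last_good_estimate and
# the confidence bookkeeping are hypothetical stand-ins.

def brittle_update(read_sensor):
    # Any malformed reading raises and takes the whole control loop down.
    return float(read_sensor())

def degrading_update(read_sensor, last_good_estimate, confidence):
    # A malformed reading lowers confidence and reuses the last good
    # estimate instead of crashing the system.
    try:
        value = float(read_sensor())
        return value, min(1.0, confidence + 0.1)
    except (TypeError, ValueError):
        return last_good_estimate, confidence * 0.5

# Example: a sensor that occasionally returns garbage.
readings = iter([1.0, "garbage", 1.2])
estimate, conf = 0.0, 1.0
for _ in range(3):
    estimate, conf = degrading_update(lambda: next(readings), estimate, conf)
</code>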