====Uncertainty in Physical Worlds====
| What it is | In a dynamic world with a large number of elements and processes, presenting infinite combinatorics, knowing everything is impossible and thus predicting everything is also impossible. ||
| Stems From | ------ Unknown Things / Phenomena ------ ||
| | Variable Values | E.g. we know it will eventually rain, but not exactly when. |
| | Variables | E.g. a gust of wind that hits us as we come around a skyscraper's corner. |
| | Goals of Others | E.g. when we meet someone in the street and move to our right, but they also move in that direction (to their left), at which point we move to our left, but they move to their right, etc., in a sequence of synchronized stalemates. |
| | Imprecision in Measurements | E.g. the position of your car on the road relative to other cars and the boundaries of the road. |
| | ------ Unknowable Things / Phenomena ------ ||
| | Chains of Events | E.g. most chains of events that are impossible (or utterly impractical) to measure, over any given time period. |
| | Living Things | E.g. bacteria, before they had been hypothesized and made observable through a microscope. |
| Modeling the World | A fundamental method in engineering is to model dynamic systems as part "signal" and part "noise" -- the former is what we have a good handle on, so we can turn it into a 'signal'; the latter is what we are (currently) unable to model, so it looks random to us (hence 'noise'). |
| Infinite Worlds as 'signal' & 'noise' | In artificial general intelligence it can be useful to think of knowledge in the same way: Anything for which there exist good models (read: useful knowledge) we look at as 'signal', and anything else (which looks more or less random to us) is 'noise' (see the sketch below this table). |
| Mind as 'Engineer' or 'Scientist' | In this view the mind is the engineer trying to get a better handle on the noise in the world, by proposing better (read: more useful) models. \\ Model creation in an intelligent system is essentially //induction//, i.e. the creation of //imagined explanations// for how the world hangs together, resulting in the mind's experience of it. |
| \\ Engineer or Scientist? | In this view, is a general intelligence more like an engineer or a scientist? A scientist produces theories of the world, while an engineer uses theories to make measurements and meet requirements in going about the world. \\ A general intelligence is both rolled into one: It must be able to create theories of the world while also measuring it, sometimes at the same time -- we could say that learning and cognitive development are more like the process of the scientific enterprise, while using your acquired (useful) knowledge is more like the process of engineering. |
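To make the signal/noise split concrete, here is a minimal sketch in Python (the data and the straight-line model are invented for illustration): we fit whatever model we currently have to the observations; what the model captures is treated as 'signal', and the residual -- the part we cannot (yet) model -- is treated as 'noise'. A better model moves variance from the 'noise' term into the 'signal' term.

<code python>
import numpy as np

# Invented observations: a learnable linear trend plus fluctuations
# for which we have no model (yet).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)
observations = 2.0 * t + 1.0 + rng.normal(scale=0.5, size=t.size)

# The 'model' is whatever structure we currently have a handle on --
# here a straight line fitted by least squares.
slope, intercept = np.polyfit(t, observations, deg=1)
signal = slope * t + intercept   # the part we can predict ('signal')
noise = observations - signal    # the residual that still looks random ('noise')

print(f"model: y = {slope:.2f}*t + {intercept:.2f}")
print(f"residual 'noise' std dev: {noise.std():.2f}")  # shrinks as models improve
</code>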
==== Guided Experimentation for New Knowledge Generation ====
| \\ Experimenting on the World | Knowledge-guided experimentation is the process of using one's current knowledge to create more knowledge. When learning about the world, random exploration is by definition the slowest and least effective knowledge creation method; in complex worlds it may even be completely useless due to the world's combinatorics. (If the ratio of complexity to lack of knowledge guidance is too high, no learning can take place.) \\ Strategic experimentation for knowledge generation involves conceiving actions that minimize energy and time while optimizing the exclusion of families of hypotheses about how the world works (see the sketch following this table). |
| Inspecting One's Own Knowledge | Inspection of knowledge happens via //**reflection**// -- the ability to apply learning mechanisms to the processes and content of one's own mind. Reflection enables a learner to set itself a goal, then inspect that goal, producing arguments for and against that goal's features (usefulness, justification, time- and energy-dependence, and so on). In other words, reflection gives a mind a capacity for **//meta-knowledge//**. |
| Cumulative Learning | Learning that is always on and improves knowledge incrementally over time. |
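The idea of excluding families of hypotheses can be sketched in a few lines of Python (all hypotheses, experiments and outcomes below are invented for illustration): each hypothesis predicts an outcome for every candidate experiment, and the learner picks the experiment whose worst-case outcome leaves the fewest hypotheses standing, rather than probing the world at random.

<code python>
from collections import Counter

# Invented hypotheses about falling objects, each predicting the outcome
# of every candidate experiment.
hypotheses = {
    "air resistance explains differences": {"drop_in_air": "unequal", "drop_in_vacuum": "equal",   "paint_red": "no_change"},
    "all masses always fall equally":      {"drop_in_air": "equal",   "drop_in_vacuum": "equal",   "paint_red": "no_change"},
    "heavier is intrinsically faster":     {"drop_in_air": "unequal", "drop_in_vacuum": "unequal", "paint_red": "no_change"},
}
experiments = ["paint_red", "drop_in_air", "drop_in_vacuum"]

def worst_case_survivors(experiment):
    # Group hypotheses by predicted outcome; the worst case for the learner
    # is the largest group (the outcome that excludes the least).
    outcomes = Counter(predictions[experiment] for predictions in hypotheses.values())
    return max(outcomes.values())

for e in experiments:
    print(f"{e}: at worst {worst_case_survivors(e)} of {len(hypotheses)} hypotheses survive")
# 'paint_red' can never exclude anything; either drop experiment is
# guaranteed to eliminate at least one family of hypotheses.
print("chosen experiment:", min(experiments, key=worst_case_survivors))
</code>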
==== Self-Explanation ====
| What It Is | The ability of a controller to explain, after the fact or before, why it did something or intends to do it. |
| 'Explainability' \\ ≠ \\ 'self-explanation' | If an intelligence X can explain a phenomenon Y, Y is 'explainable' by X, through some process chosen by X. \\ \\ In contrast, if an intelligence X can explain itself, its own actions, knowledge, understanding, beliefs, and reasoning, it is capable of self-explanation. The latter is stronger and subsumes the former. |
| \\ Why It Is Important | If a controller does something we don't want it to repeat - e.g. crash an airplane full of people (in simulation mode, hopefully!) - it needs to be able to explain why it did what it did. If it can't, it means it - and //we// - can never be sure of why it did what it did, whether it had any other choice, whether it's an evil machine that actually meant to do it, or how likely it is to do it again. |
| \\ Human-Level AI | Even more importantly, to grow and learn and self-inspect, the AI system must be able to sort out causal chains. If it can't, it will not only be incapable of explaining to others why it is like it is, it will be incapable of explaining to itself why things are the way they are, and thus it will be incapable of sorting out whether something it did is better for its own growth than something else. Explanation is the big black hole of ANNs: In principle ANNs are black boxes, and thus they are in principle unexplainable - whether to themselves or others. \\ One way to address this is by encapsulating knowledge as hierarchical models that are built up over time and can be de-constructed at any time (like AERA does; a sketch of the idea follows below). |
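A toy sketch of such deconstructable knowledge in Python (hypothetical, and not AERA's actual mechanism): every conclusion or action keeps links to the premises and models that produced it, so the controller can answer "why?" after the fact by walking the chain backwards.

<code python>
from dataclasses import dataclass, field

@dataclass
class Step:
    """One node in a causal chain: a claim plus the premises behind it."""
    claim: str
    because: list = field(default_factory=list)   # supporting Step objects

    def explain(self, depth=0):
        # Walk the chain backwards, indenting each level of justification.
        lines = ["  " * depth + self.claim]
        for premise in self.because:
            lines.append(premise.explain(depth + 1))
        return "\n".join(lines)

# Invented chain behind one action.
sensor = Step("altimeter reading dropped 300 m in 2 s")
model  = Step("model: rapid altitude loss predicts a stall", because=[sensor])
goal   = Step("goal: keep the aircraft flying")
action = Step("action: pitched nose down to regain airspeed", because=[model, goal])

print(action.explain())   # action <- model <- sensor reading, and <- goal
</code>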