public:t-713-mers:mers-23:empirical-reasoning-2 (revised 2023/10/29 by thorisson; last modified 2024/04/29, external edit)
  
==== Guided Experimentation for New Knowledge Generation ====
|  \\ Experimenting on the World  | Knowledge-guided experimentation is the process of using one's current knowledge to create more knowledge. When learning about the world, random exploration is by definition the slowest and most ineffective knowledge creation method; in complex worlds it may even be completely useless due to the world's combinatorics. (If the ratio of complexity to lack of knowledge guidance is too high, no learning can take place.) \\ Strategic experimentation for knowledge generation involves conceiving actions that minimize energy and time while optimizing the exclusion of families of hypotheses about how the world works.    |
|  Inspecting One's Own Knowledge  | Inspection of knowledge happens via //**reflection**// -- the ability to apply learning mechanisms to the processes and content of one's own mind. Reflection enables a learner to set itself a goal, then inspect that goal, producing arguments for and against that goal's features (usefulness, justification, time- and energy-dependence, and so on). In other words, reflection gives a mind a capacity for **//meta-knowledge//**.        |
|  Cumulative Learning  | Learning that is always on and improves knowledge incrementally over time.    |
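The idea of strategic experimentation -- choosing the action that best excludes whole families of hypotheses -- can be sketched as a version-space search. Everything below (the toy linear "world", the uniform prior over hypotheses, the entropy-based scoring) is an illustrative assumption for this sketch, not a method prescribed by the course material:

```python
import math
from itertools import product

def expected_information_gain(action, hypotheses, predict):
    """Entropy of the outcome distribution the surviving hypotheses induce
    for this action (uniform prior over hypotheses is assumed)."""
    counts = {}
    for h in hypotheses:
        o = predict(h, action)
        counts[o] = counts.get(o, 0) + 1
    n = len(hypotheses)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def best_experiment(actions, hypotheses, predict):
    # Knowledge-guided choice: pick the action expected to exclude
    # the largest fraction of the remaining hypothesis families.
    return max(actions, key=lambda a: expected_information_gain(a, hypotheses, predict))

def eliminate(hypotheses, action, observed, predict):
    """Exclude every hypothesis whose prediction contradicts the observation."""
    return [h for h in hypotheses if predict(h, action) == observed]

# Toy world: candidate laws are (slope, intercept) pairs y = a*x + b;
# the world's true (hidden) law is y = 2x + 1.
hypotheses = list(product([1, 2, 3], [0, 1, 2]))   # 9 candidate laws
predict = lambda h, x: h[0] * x + h[1]
actions = [0, 1, 2, 3]

experiments = 0
while len(hypotheses) > 1:
    x = best_experiment(actions, hypotheses, predict)
    y = 2 * x + 1                                  # the world answers
    hypotheses = eliminate(hypotheses, x, y, predict)
    experiments += 1

print(hypotheses)   # the single surviving law
```

Note how the scoring prefers the experiment whose outcome discriminates most finely among the candidates, which is why guided search converges where random probing (e.g. always acting with x = 0, which only constrains the intercept) could never isolate the true law.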
  
|  What It Is  | The ability of a controller to explain, after the fact or before, why it did something or intends to do it.   |
|  'Explainability' \\ ≠ \\ 'self-explanation'  | If an intelligence X can explain a phenomenon Y, then Y is 'explainable' by X, through some process chosen by X. \\ \\ In contrast, if an intelligence X can explain itself -- its own actions, knowledge, understanding, beliefs, and reasoning -- it is capable of self-explanation. The latter is stronger and subsumes the former.   |
|  \\ Why It Is Important  | If a controller does something we don't want it to repeat -- e.g. crash an airplane full of people (in simulation mode, hopefully!) -- it needs to be able to explain why it did what it did. If it can't, it means that it -- and //we// -- can never be sure why it did it, whether it had any other choice, whether it is likely to do it again, or whether it's an evil machine that actually meant to do it.     |
|  \\ Human-Level AI  | Even more importantly, to grow, learn, and self-inspect, the AI system must be able to sort out causal chains. If it can't, it will not only be incapable of explaining to others why it is the way it is, it will be incapable of explaining to itself why things are the way they are, and thus incapable of sorting out whether something it did is better for its own growth than something else. Explanation is the big black hole of ANNs: in principle ANNs are black boxes, and thus in principle unexplainable -- whether to themselves or to others. \\ One way to address this is by encapsulating knowledge as hierarchical models that are built up over time and can be de-constructed at any time (as AERA does).   |
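The last point -- hierarchical models that can be de-constructed into an explanation -- can be sketched as follows. The class, the model names, and the `explain` method are all illustrative inventions for this sketch; they are not AERA's actual representation or API:

```python
from dataclasses import dataclass, field

@dataclass
class Model:
    """A piece of knowledge that records which sub-models (premises)
    it was built on, so the causal chain stays recoverable."""
    name: str
    rule: str                                       # human-readable statement
    premises: list = field(default_factory=list)    # sub-models this one builds on

    def explain(self, depth=0):
        """De-construct the model hierarchy into a causal-chain trace,
        one indented line per model in the derivation."""
        lines = ["  " * depth + f"{self.name}: {self.rule}"]
        for p in self.premises:
            lines.extend(p.explain(depth + 1))
        return lines

# Hypothetical example: a decision justified by the models it relied on.
stall = Model("stall-model", "airspeed below v_stall causes loss of lift")
sensor = Model("airspeed-sensor", "pitot reading estimates airspeed")
decision = Model("pitch-down", "pitch down to regain airspeed", [stall, sensor])

print("\n".join(decision.explain()))
```

Because every decision keeps references to the premises that produced it, the controller can answer "why did you do that?" by walking the hierarchy -- exactly the kind of de-construction a black-box function approximator cannot offer.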
  
