public:t_720_atai:atai-20:causation (edited by thorisson 2020/10/09; last modified 2024/04/29)
==== Probability ====
  
|  \\ What It Is  | Probability is a concept that is relevant to a situation where information is missing, which means it is a concept relevant to //knowledge//. \\ A common conceptualization of probability is that it is a measure of the likelihood that an event will occur [[https://en.wikipedia.org/wiki/Probability|REF]]. \\ If it is not known whether event <m>X</m> will be (or has been) observed in situation <m>Y</m>, the //probability// of <m>X</m> is the percentage of time <m>X</m> would be observed if the same situation <m>Y</m> occurred an infinite number of times.   |
|  \\ Why It Is Important \\ in AI  | Probability enters into our knowledge of anything for which the knowledge is //**incomplete**//. \\ As in, //everything that humans do every day in every real-world environment//. \\ With incomplete knowledge it is in principle //impossible to know what may happen//. However, if we have very good models of some //limited// (small, simple) phenomenon, we can expect our predictions of what may happen to be pretty good, or at least //**practically useful**//. This is especially true for knowledge acquired through the scientific method, in which empirical evidence and human reason are systematically brought to bear on the validity of the models.    |
|  How To Compute Probabilities  | The most common method is Bayesian networks, which encode probability interpreted as reasonable expectation representing a state of knowledge, or as quantification of a personal belief [[https://en.wikipedia.org/wiki/Bayesian_probability|REF]]. This makes them useful for representing an (intelligent) agent's knowledge of some environment, task or phenomenon.   |
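A minimal sketch of both views in the table above, in Python. All numbers, variable names and the tiny two-node network are illustrative assumptions, not from the course material: the frequentist definition is approximated by repeating the same situation many times, and the Bayesian view is shown as a one-step belief update.

```python
import random

# -- Frequentist sketch: P(X in situation Y) as the long-run frequency of X.
# Situation Y: roll a fair die; event X: the roll is a six (made-up example).
random.seed(42)
trials = 100_000
estimate = sum(random.randint(1, 6) == 6 for _ in range(trials)) / trials
print(estimate)  # approaches 1/6 ~ 0.1667 as trials grow

# -- Bayesian sketch: probability as a state of knowledge, updated by evidence.
# Two-node network Rain -> WetGrass with made-up conditional probabilities.
p_rain = 0.2                                  # prior belief P(Rain)
p_wet_given_rain = {True: 0.9, False: 0.1}    # P(WetGrass=True | Rain)

# Joint by the chain rule, then condition on the observation WetGrass=True.
joint_wet = {r: (p_rain if r else 1 - p_rain) * p_wet_given_rain[r]
             for r in (True, False)}
posterior_rain = joint_wet[True] / sum(joint_wet.values())
print(round(posterior_rain, 3))  # updated belief in Rain: 0.692
```

Note how the same word "probability" covers both computations: the first is a property of a repeatable situation, the second a property of an agent's knowledge.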
|  \\ \\ In More Detail  | A causal relationship between variables <m>A,B</m> can be defined as a relationship such that changes in <m>A</m> always lead to a change in <m>B</m>, and where the timing relationship is such that the former always happens before the latter, <m>forall delim{lbrace} A, B delim{rbrace}:~~t(A)~<~t(B)</m>. \\ Example: A light switch is designed specifically to //cause// the light to turn on and off. \\ In //a causal analysis// based on **abduction** one may reason that, given that light switches don't tend to flip randomly, a light that was **off** but is now **on** may indicate that someone or something flipped the light switch. (The reverse - a light that was on but is now off - has a larger set of reasonable causes: in addition to someone turning it off, a power outage or bulb burnout.)     |
|  Why It Is Important \\ in Science  | Causation is the foundation of empirical science. Without knowledge about causal relations it is impossible to get anything systematically done.     |
|  \\ Why It Is Important \\ in AI  | The main purpose of intelligence is to //figure out how to get new stuff done, given limited time and energy (LTE)//, i.e. to get stuff done cheaply but well. \\ To get stuff done means knowing how to produce effects. \\ In this light, //reliable// methods for getting stuff done are worth more to an intelligence than unreliable ones. \\ A relationship that approximates a Platonic cause-effect is worth more than one that does not.     |
|  \\ History  | David Hume (1711-1776) is one of the most influential philosophers addressing the topic. From the Internet Encyclopedia of Philosophy: "...advocate[s] ... that there are no innate ideas and that all knowledge comes from experience, Hume is known for applying this standard rigorously to causation and necessity." [[https://www.iep.utm.edu/hume-cau/|REF]] \\ This makes Hume an //empiricist//.   |
|  \\ More Recent History  | Causation was cast by the wayside in statistics for the past 120 years; the claim was instead that all we can say about the relationship of any variables is that they correlate. Needless to say, this has led to significant confusion as to what science can and cannot say about causal relationships, such as whether mobile phones cause cancer. Equally badly, the statistical stance has infected some scientific fields, leading them to view causation as "unscientific".    |
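The timing criterion in the table above (every change in the candidate cause precedes a change in the candidate effect) can be sketched as a check over logged event times. The event streams, timestamps and function name below are illustrative assumptions; the check tests only temporal precedence, which the page treats as necessary, not sufficient:

```python
def consistent_with_causation(t_cause, t_effect):
    """Check the timing criterion t(A) < t(B): every change in the
    candidate cause is paired with a strictly later change in the
    candidate effect."""
    if len(t_cause) != len(t_effect):
        return False
    return all(a < b for a, b in zip(sorted(t_cause), sorted(t_effect)))

switch_flips = [1.0, 5.0, 9.0]    # times (s) the switch changed state
light_changes = [1.1, 5.1, 9.1]   # times (s) the light changed state

print(consistent_with_causation(switch_flips, light_changes))  # True
print(consistent_with_causation(light_changes, switch_flips))  # False
```

Passing in one direction and failing in the other only orders the candidate arrow; it does not rule out a hidden common cause, which is why the abductive reasoning in the table is still needed.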
  
|  Correlation Supports Prediction  | Correlation is sufficient for simple prediction (if <m>A</m> and <m>B</m> correlate highly, then it does not matter if we see an <m>A</m> //OR// a <m>B</m>, we can predict that the other is likely on the scene).    |
|  \\ Knowledge of Causation Supports Action  | We may know that <m>A</m> and <m>B</m> correlate, but if we don't know whether <m>B</m> is a result of <m>A</m> or vice versa, and we want <m>B</m> to disappear, we don't know whether it will suffice to modify <m>A</m>. \\ //Example: The position of the light switch and the state of the light bulb correlate. Only by knowing that the light switch controls the bulb can we go directly to the switch if we want the light to turn on.//    |
|  **Causal Models** \\ Are Necessary To Guide Action  | While correlation gives us an indication of causation, the direction of the "causal arrow" is critically necessary for guiding action. \\ Luckily, which way the arrows point in any large set of correlated variables is usually not too hard to find out by empirical experimentation.   |
|  Judea Pearl   | The most fervent advocate of causality in AI, and the inventor of the Do-Calculus. \\ Cf. [[https://ftp.cs.ucla.edu/pub/stat_ser/r284-reprint.pdf|BAYESIANISM AND CAUSALITY, OR, WHY I AM ONLY A HALF-BAYESIAN]].    |
|  What It Is  | The ability (of anyone or anything) to explain, after the fact or before, why something happened the way it did, how it could have happened differently but didn't (in general and/or this time).   |
|  In AI  | The ability of a controller to explain, after the fact or before, why it did something or intends to do it.   |
|  Explanation Depends On Causation  | It is impossible to explain anything in any useful way without referring to general causal relations; discernible causal structure is a prerequisite for explainability.   |
|  Why It Is Important  | If a controller does something we don't want it to repeat - e.g. crash an airplane full of people (in simulation mode, hopefully!) - it needs to be able to explain why it did what it did. If it can't, it means that it - and //we// - can never be sure why it did what it did, whether it had any other choice, or under what conditions it might do it again.     |
|  \\ Bottom Line for \\ Human-Level AI  | To grow, learn and self-inspect, an AI must be able to sort out causal chains. If it can't, it will not only be incapable of explaining to others why it is the way it is, it will be incapable of explaining to itself why things are the way they are, and thus incapable of sorting out whether something it did is better for its own growth than something else. Explanation is the big black hole of ANNs: in principle ANNs are black boxes, and thus they are in principle unexplainable - whether to themselves or to others. \\ One way to address this is by encapsulating knowledge as hierarchical models that are built up over time and can be de-constructed at any time (as AERA does).   |
  
  
