====REQUIRED READINGS====

  * [[https://cis-linux1.temple.edu/~pwang/Publication/learning.pdf|The Logic of Learning]] by P. Wang
  * [[https://www.iiim.is/wp/wp-content/uploads/2011/05/wang-agisp-2011.pdf|Behavioral Self-Programming by Reasoning]] by P. Wang
  * [[https://philosophynow.org/issues/106/Critical_Reasoning|Critical Reasoning]] by M. Talbot

Related readings:
\\

====Probability====
| \\ Why It Is Important \\ in AI | Probability enters into our knowledge of anything for which the knowledge is //**incomplete**//. \\ As in, //everything that humans do every day in every real-world environment//. \\ With incomplete knowledge it is in principle //impossible to know what may happen//. However, if we have very good models for some //limited// (small, simple) phenomenon, we can expect our predictions of what may happen to be pretty good, or at least //**practically useful**//. This is especially true for knowledge acquired through the scientific method, in which empirical evidence and human reason are systematically brought to bear on the validity of the models. |
| How To Compute Probabilities | The most common method is Bayesian networks, which encode probability interpreted as reasonable expectation representing a state of knowledge, or as quantification of a personal belief [[https://en.wikipedia.org/wiki/Bayesian_probability|REF]]. This makes them useful for representing an (intelligent) agent's knowledge of some environment, task or phenomenon. |
| How It Works | P(a#b) = {P(b#a) P(a)} / {P(b)} \\ where '#' means 'given' (the usual vertical bar cannot be used here, as it would be read as a table separator). |
| Judea Pearl | Most Fervent Advocate (and self-proclaimed inventor) of Bayesian Networks in AI [[http://ftp.cs.ucla.edu/pub/stat_ser/R246.pdf|REF]]. |
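The formula above can be illustrated with a minimal numerical sketch. All numbers here are made up for illustration only: read 'a' as some hypothesis (e.g. "agent has a condition") and 'b' as an observation (e.g. "a test comes out positive").

```python
# Bayes' theorem: P(a|b) = P(b|a) * P(a) / P(b)
# All probabilities below are hypothetical, chosen only for illustration.

def bayes(p_b_given_a: float, p_a: float, p_b: float) -> float:
    """Posterior P(a|b) from likelihood, prior, and evidence."""
    return p_b_given_a * p_a / p_b

p_a = 0.01               # prior: hypothesis true 1% of the time
p_b_given_a = 0.95       # likelihood of the observation if the hypothesis is true
p_b_given_not_a = 0.05   # likelihood of the observation if it is false

# Evidence by total probability: P(b) = P(b|a)P(a) + P(b|~a)P(~a)
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

posterior = bayes(p_b_given_a, p_a, p_b)
print(round(posterior, 3))   # prints 0.161
```

Note how a strong likelihood (0.95) still yields a modest posterior (about 0.16) because the prior is low — exactly the kind of correction an agent's incomplete knowledge needs.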
====Causation & AI====
| Correlation Supports Prediction | Correlation is sufficient for simple prediction: if A and B correlate highly, then it does not matter whether we see an A //or// a B, we can predict that the other is likely on the scene. |
| \\ Knowledge of Causation Supports Action | We may know that A and B correlate, but if we don't know whether B is a result of A or vice versa, and we want B to disappear, we don't know whether it will suffice to modify A. \\ //Example: The position of the light switch and the state of the light bulb correlate. Only by knowing that the light switch controls the bulb can we go directly to the switch if we want the light to turn on.// |
| **Causal Models** \\ Are Necessary To Guide Action | While correlation gives us an indication of causation, the direction of the "causal arrow" is critically necessary for guiding action. \\ Luckily, which way the arrows point in any large set of correlated variables is usually not too hard to find out through empirical experimentation. |
| Judea Pearl | Most Fervent Advocate of causality in AI, and the inventor of the Do Calculus. \\ Cf. [[https://ftp.cs.ucla.edu/pub/stat_ser/r284-reprint.pdf|BAYESIANISM AND CAUSALITY, OR, WHY I AM ONLY A HALF-BAYESIAN]]. |
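The asymmetry between observation and intervention in the light-switch example can be sketched in a few lines of Python. This is a toy simulation in the spirit of Pearl's do-operator, not his formal Do Calculus; the setup (switch causes bulb) and all names are illustrative:

```python
import random

# Toy causal system: the switch causes the bulb (as in the example above).
random.seed(0)

def observe(n=10_000):
    """Observational data: the switch is set at random; the bulb follows it."""
    rows = []
    for _ in range(n):
        switch = random.random() < 0.5
        bulb = switch                     # causal mechanism: switch -> bulb
        rows.append((switch, bulb))
    return rows

def do_bulb(value, n=10_000):
    """Interventional data, do(bulb=value): force the bulb, leave the switch alone."""
    rows = []
    for _ in range(n):
        switch = random.random() < 0.5    # the switch keeps its own mechanism
        rows.append((switch, value))      # the bulb is forced; its causes are cut
    return rows

obs = observe()
# Observation: seeing the bulb on predicts the switch perfectly (P = 1).
p_switch_given_bulb = sum(s for s, b in obs if b) / sum(1 for _, b in obs if b)

intv = do_bulb(True)
# Intervention: forcing the bulb on tells us nothing about the switch (P near 0.5),
# because the causal arrow points from switch to bulb, not the other way.
p_switch_given_do = sum(s for s, _ in intv) / len(intv)

print(p_switch_given_bulb, p_switch_given_do)
```

The two probabilities differ even though the correlation is perfect — which is exactly why a causal model, and not correlation alone, is needed to guide action.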