==== Probability ====
| \\ What It Is | Probability is a concept that is relevant to a situation where information is missing, which means it is a concept relevant to //knowledge//. \\ A common conceptualization of probability is that it is a measure of the likelihood that an event will occur [[https://en.wikipedia.org/wiki/Probability|REF]]. \\ If it is not known whether event <m>X</m> will be (or has been) observed in situation <m>Y</m>, the //probability// of <m>X</m> is the percentage of time <m>X</m> would be observed if the same situation <m>Y</m> occurred an infinite number of times (see the frequency-estimate sketch below this table). |
| \\ Why It Is Important \\ in AI | Probability enters into our knowledge of anything for which that knowledge is //**incomplete**//. \\ As in, //everything that humans do every day in every real-world environment//. \\ With incomplete knowledge it is in principle //impossible to know what may happen//. However, if we have very good models of some //limited// (small, simple) phenomenon, we can expect our predictions of what may happen to be pretty good, or at least //**practically useful**//. This is especially true for knowledge acquired through the scientific method, in which empirical evidence and human reason are systematically brought to bear on the validity of the models. |
| How To Compute Probabilities | The most common method is Bayesian networks, which build on an interpretation of probability as reasonable expectation representing a state of knowledge, or as quantification of a personal belief [[https://en.wikipedia.org/wiki/Bayesian_probability|REF]]. This makes them useful for representing an (intelligent) agent's knowledge of some environment, task or phenomenon (see the Bayesian update sketch below this table). |
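\\ A minimal illustrative sketch of the frequency interpretation above: we cannot repeat situation <m>Y</m> an infinite number of times, but a large number of simulated repetitions approximates the probability of event <m>X</m>. The names here (''estimate_probability'', the dice example) are made up for this sketch, not taken from any library.

<code python>
import random

def estimate_probability(event, situation, trials=100_000):
    """Estimate P(event | situation) as the fraction of trials in which
    the event is observed - a finite stand-in for the 'infinite number
    of times' in the frequency definition above."""
    hits = sum(event(situation()) for _ in range(trials))
    return hits / trials

# Toy situation Y: roll two six-sided dice.
def roll_two_dice():
    return random.randint(1, 6) + random.randint(1, 6)

# Toy event X: the sum is at least 10.
print(estimate_probability(lambda total: total >= 10, roll_two_dice))
# Converges toward 6/36 = 0.1667 as the number of trials grows.
</code>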
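\\ And a minimal sketch of the Bayesian view mentioned in the table - probability as a degree of belief that gets updated by evidence. This is a single application of Bayes' rule, not a full Bayesian network; the scenario and numbers are invented for illustration.

<code python>
def bayes_update(prior, likelihood, evidence_prob):
    """Posterior belief: P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence_prob

# Toy example: an agent's belief that a sensor is faulty (H),
# updated after observing one implausible reading (E).
p_h = 0.01              # prior belief: P(faulty)
p_e_given_h = 0.90      # P(implausible reading | faulty)
p_e_given_not_h = 0.05  # P(implausible reading | working)

# Law of total probability: P(E) = P(E|H)P(H) + P(E|not H)P(not H)
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

print(bayes_update(p_h, p_e_given_h, p_e))  # ~0.154
# One observation raises the agent's belief from 1% to about 15%.
</code>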

==== Explanation & Explainability ====

| In AI | The ability of a controller to explain, after the fact or beforehand, why it did something or intends to do it. |
| Explanation Depends On Causation | It is impossible to explain anything in any useful way without referring to general causal relations: discernible causal structure is a prerequisite for explainability. |
| Why It Is Important | If a controller does something we don't want it to repeat - e.g. crash an airplane full of people (in simulation mode, hopefully!) - it needs to be able to explain why it did what it did. If it can't, it means that it - and //we// - can never be sure why it did what it did, whether it had any other choice, or under what conditions it might do it again. |
| \\ Bottom Line for \\ Human-Level AI | To grow, learn, and self-inspect, an AI must be able to sort out causal chains. If it can't, it will not only be incapable of explaining to others why it is the way it is, it will be incapable of explaining to itself why things are the way they are, and thus it will be incapable of sorting out whether something it did is better for its own growth than something else. Explanation is the big black hole of ANNs: in principle ANNs are black boxes, and thus they are in principle unexplainable - whether to themselves or to others. \\ One way to address this is by encapsulating knowledge as hierarchical models that are built up over time and can be de-constructed at any time, as AERA does (see the sketch below this table). |
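\\ To make the last point concrete, here is an illustrative sketch - not AERA's actual implementation - of knowledge kept as small causal models that retain references to the models they were built from, so that the hierarchy can be de-constructed into an explanation on demand. All names (''Model'', ''explain'', the autopilot example) are hypothetical.

<code python>
from dataclasses import dataclass, field

@dataclass
class Model:
    """A small causal model: 'cause' leads to 'effect', possibly resting
    on sub-models acquired earlier. Because every model keeps references
    to what it was built from, the hierarchy can be de-constructed into
    an explanation at any time."""
    cause: str
    effect: str
    supports: list = field(default_factory=list)  # sub-models this one rests on

    def explain(self, depth=0):
        # Walk the causal chain top-down, printing each link.
        print("  " * depth + f"{self.effect} because {self.cause}")
        for sub in self.supports:
            sub.explain(depth + 1)

# Toy causal chain for an autopilot decision:
stall = Model("airspeed dropped below stall speed", "lift was lost")
pitch = Model("lift was lost", "nose was pitched down", supports=[stall])
pitch.explain()
# nose was pitched down because lift was lost
#   lift was lost because airspeed dropped below stall speed
</code>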