public:t-720-atai:atai-22:causation (created 2022/09/16 by thorisson; last edited 2024/04/29)
| \\ More Recent History | Causation has been cast by the wayside in statistics for the past 120 years; the field instead holds that all we can claim about the relationship between any variables is that they correlate. Needless to say, this has led to significant confusion as to what science can and cannot say about causal relationships, such as whether mobile phones cause cancer. Equally badly, the statistical stance has led some scientific fields to view causation as "unscientific". |
| Spurious Correlation | Non-zero correlation due to complete coincidence. |
| \\ Causation & Correlation | What is the relation between causation and correlation? \\ There is no (non-spurious) correlation without causation. \\ There is no causation without correlation. \\ However, correlation between two variables does not necessitate one of them to be the cause of the other: they can have a shared (possibly hidden) //common cause//. |
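A shared hidden common cause is easy to demonstrate in simulation. The sketch below is a hypothetical illustration (not part of the course material; all variable names are invented): a hidden cause Z drives both X and Y, so X and Y correlate strongly even though neither causes the other.

```python
import random

# Hypothetical illustration: a hidden common cause Z drives both X and Y.
# X and Y then correlate, although there is no X->Y or Y->X causal link.
random.seed(0)

n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]           # hidden common cause
x = [zi + random.gauss(0, 0.5) for zi in z]          # X = Z + independent noise
y = [zi + random.gauss(0, 0.5) for zi in z]          # Y = Z + independent noise

def corr(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    sa = sum((ai - ma) ** 2 for ai in a) ** 0.5
    sb = sum((bi - mb) ** 2 for bi in b) ** 0.5
    return cov / (sa * sb)

# Strongly positive (about 0.8 here), despite the absence of any direct causal link.
print(corr(x, y))
```

Intervening on X (setting it independently of Z) would destroy the X-Y correlation, which is exactly the causal information that observed correlation alone cannot reveal.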
\\
| Explanation Depends On Causation | \\ It is impossible to explain anything in any useful way without referring to general causal relations. |
| Why It Is Important | If a controller does something we don't want it to repeat - e.g. crash an airplane full of people (in simulation mode, hopefully!) - it needs to be able to explain why it did what it did. If it can't, it means it - and //we// - can never be sure of why it did what it did, whether it had any other choice, or under what conditions it might do it again. |
| Explanation Depends on Causation | No explanation is without reference to causes; discernible causal structure is a prerequisite for explainability. |
| \\ Bottom Line for \\ Human-Level AI | To grow, learn, and self-inspect, an AI must be able to sort out causal chains. If it can't, it will not only be incapable of explaining to others why it is like it is, it will be incapable of explaining to itself why things are the way they are, and thus it will be incapable of sorting out whether something it did is better for its own growth than something else. Explanation is the big black hole of ANNs: in principle ANNs are black boxes, and thus they are in principle unexplainable - whether to themselves or others. \\ One way to address this is by encapsulating knowledge as hierarchical models that are built up over time, and can be de-constructed at any time (as AERA does). |