[[http://cadia.ru.is/wiki/public:t-720-atai:atai-16:main|T-720-ATAI-2016 Main]]

=====T-720-ATAI-2016=====
====Lecture Notes, F-12 04.03.2016====
\\ \\ \\ \\

==== What is Reasoning? ====

| Definition | Reasoning is a computation that makes use of given regularities to reach conclusions that have not been stated explicitly. |
| Reasoning is a computation | Given some general rules, reasoning allows us to use those rules to answer questions for which answers have not been provided explicitly. |
| Reasoning is an ability of the (human) mind | We can also say that reasoning is, first and foremost, a cognitive ability - an ability of minds to draw inferences (generate absent information). |
| Traditional theories of reasoning | - The only inference rule is deduction. \\ - The meaning of a compound term is completely determined by the meaning of its components and the operator that joins the components. \\ - A statement is either true or false. \\ - The truth value of a statement does not change over time. \\ - A contradiction leads to the “proof” of any arbitrary conclusion. \\ - Inference processes follow algorithms, which makes them deterministically predictable, and any conclusion can be accurately reproduced. \\ - Every inference process has a prespecified goal, and the process stops whenever its goal is achieved. \\ Source: [[http://cis-linux1.temple.edu/~pwang/Publication/cognitive_mathematical.pdf|Cognitive Logic versus Mathematical Logic]] by P. Wang |
| Arguments against the traditional mathematical approach | Pei Wang has convincingly argued that //**non-axiomatic reasoning**//, i.e. reasoning where no a-priori universals can be provided, is the only kind relevant to an AGI, and that such reasoning follows different rules than axiomatic reasoning. \\ Incidentally, he is the only AI researcher (that I am aware of, besides myself) who argues this point -- everyone else is going with some well-known form of reasoning as a model for AI and human reasoning. |

\\ \\

==== Main Types of Reasoning ====

| Deduction | The premises make the conclusion inevitable; no additional knowledge is required. Can be performed mechanically. |
| Deduction example | All men are mortal. Socrates is a man. Therefore, Socrates is mortal. |
| Abduction | Inferring causes from effects. Requires domain knowledge to be performed. |
| Abduction example | The grass is wet. Hence, it may have rained. |
| Induction | Generalization from examples. Requires methods for pattern extraction. |
| Induction example | All the swans I have observed are white. Hence, all swans are white. |
| Analogy | Comparing one pattern to another, given certain constraining prerequisites, noting similarities (and sometimes differences). Analogies can emphasize certain aspects of the topic in focus and hide others. They can also help us produce models for novel phenomena. |
| Analogy example | "Getting a man to the moon is like threading a needle." - //How// getting a man to the moon is like threading a needle is the crux of the analogy, and helps highlight certain aspects of the former by comparison to the latter. This analogy indicates that getting a man to the moon is difficult. Another analogy, e.g. "getting a man to the moon is like farting in the wind", would do the opposite, since farting in the wind is generally not considered to require any focus of attention, special skill, or training. |
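The first three patterns are mechanical enough to sketch in a few lines. Below is a toy illustration (my own encoding, not from the lecture or from any particular reasoning system; statements are simple (subject, predicate) pairs, and all names are assumptions made for illustration):

<code python>
# Toy sketch (illustrative assumption, not from the lecture notes):
# the three classic inference patterns over (subject, predicate) pairs.

def deduce(rule, fact):
    """Deduction: 'all M are P' + 'S is an M' => 'S is P' (inevitable)."""
    m, p = rule                      # rule: all m are p
    s, m2 = fact                     # fact: s is an m2
    return (s, p) if m2 == m else None

def abduce(rule, observed_effect):
    """Abduction: 'C causes E' + 'E observed' => 'perhaps C' (a fallible guess)."""
    cause, effect = rule
    return cause if observed_effect == effect else None

def induce(observations):
    """Induction: if every observed instance of one kind has the same
    property, tentatively generalize 'all instances of that kind have it'."""
    kinds = {k for k, _ in observations}
    props = {p for _, p in observations}
    if len(kinds) == 1 and len(props) == 1:
        return (kinds.pop(), props.pop())
    return None

print(deduce(("man", "mortal"), ("Socrates", "man")))    # ('Socrates', 'mortal')
print(abduce(("rain", "wet grass"), "wet grass"))        # 'rain' - one possible cause
print(induce([("swan", "white"), ("swan", "white")]))    # ('swan', 'white') - fallible
</code>

Note that only ''deduce'' is truth-preserving; ''abduce'' and ''induce'' return hypotheses that must be weighed against further evidence - the thread picked up under non-axiomatic reasoning below.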
\\ \\

==== Anatomy of Automatic Reasoning Systems ====

| Knowledge representation | A formal language, typically first-order predicate logic. Together with the semantics and inference rules, this is called "a logic" or "the logical part". |
| Meaning and truth | Formal semantics. |
| Deriving new knowledge | Inference rules. |
| Knowledge store | A memory. Part of the implementation of the logical part; based on computability theory. |
| Selection of premises and rules | An automatic control mechanism; the other part of the implementation of the logical part. |

\\ \\

==== Non-Axiomatic Reasoning ====

| P. Wang | Cognitive logic and mathematical logic are fundamentally different. |
| Finiteness | The system has a fixed, constant information-processing capacity. |
| Real-time | All tasks have time constraints attached to them. |
| Openness | No constraint is put on the content of the experience that the system may have, as long as it is representable in the interface language. |
| For a system to work under the above assumptions, it should have mechanisms to handle the following situations: | - A new processor is required when all processors are occupied; \\ - Extra memory is required when all memory is already full; \\ - A task comes up when the system is busy with something else; \\ - A task comes up with a time constraint, so exhaustive processing is not affordable; \\ - New knowledge conflicts with previous knowledge; \\ - A question is presented for which no sure answer can be deduced from available knowledge; \\ ...etc. |
| Source | [[http://cis-linux1.temple.edu/~pwang/Publication/cognitive_mathematical.pdf|Cognitive Logic versus Mathematical Logic]] by P. Wang |
| Traditional reasoning | Purpose: derive conclusions from given premises. |
| AGI reasoning | Purpose: learn to operate in the system's task-environment, for which few or no premises can be given a priori. The assumption of insufficient knowledge and resources is built in. Models are created from experience, which by definition will never be "complete" or provide "ground truth" (axiomatic knowledge). The truth value of a statement is therefore determined by the available evidence. |
| Experience-grounded semantics | A language L used by an agent relates to the environment via experience. Experience is a stream of "sentences" in L. The agent treats terms and sentences in L not solely according to their syntax but also according to their relations to the environment, which is where their meaning stems from. What is therefore needed to accurately describe the logic of AGI systems is an experience-grounded semantics (see the sketch below). \\ Source: [[http://cis-linux1.temple.edu/~pwang/Publication/cognitive_mathematical.pdf|Cognitive Logic versus Mathematical Logic]] by P. Wang |
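To make "the truth value of a statement is determined by available evidence" concrete, here is a minimal sketch in the spirit of Wang's non-axiomatic logic, where truth is a (frequency, confidence) pair computed from accumulated evidence. The constant ''K'' and all names are my illustrative assumptions; see the cited paper for the actual definitions.

<code python>
# Minimal sketch of experience-grounded truth values in the spirit of
# Wang's non-axiomatic logic (the constant K and all names here are
# illustrative assumptions; see the cited paper for the real definitions).

K = 1.0  # evidential horizon: how much future evidence is allowed for

def truth_value(w_plus, w_minus):
    """Summarize evidence as (frequency, confidence) instead of true/false."""
    w = w_plus + w_minus                      # total evidence
    frequency = w_plus / w if w > 0 else 0.5  # proportion of positive evidence
    confidence = w / (w + K)                  # grows toward 1, never reaches it
    return frequency, confidence

# "All the swans I have observed are white": 10 positive cases, 0 negative.
print(truth_value(10, 0))   # (1.0, ~0.91): sure so far, but not axiomatically certain

# A black swan appears: the belief is revised by new evidence, not broken.
print(truth_value(10, 1))   # (~0.91, ~0.92)
</code>

Contrast this with the traditional assumptions listed above: here a contradicting observation lowers the frequency of a belief instead of licensing arbitrary conclusions.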
\\ \\

==== Wason Selection Task ====

| Wason Selection Task | The Wason selection task (or four-card problem) is a logic puzzle devised by Peter Cathcart Wason in 1966 to study deductive reasoning. \\ Four cards are placed on a table, each of which has a number on one side and a colored patch on the other side. The visible faces of the cards show 3, 8, red and brown. \\ [[https://en.wikipedia.org/wiki/Wason_selection_task|Wikipedia]] |
| The claim | //"If a card shows an even number on one face, then its opposite face is red."// \\ The rule to be tested is (forall x) (Even(x) → Red(x)). |
| Your task | Which card(s) must you turn over in order to test the truth of the claim? |
| Evaluation | A response that identifies a card that need not be turned over, or that fails to identify a card that needs to be turned over, is incorrect. The logically correct answer is "8 and brown". |
| Common answer | "8" |
| Common interpretation | "Human reasoning is not logical." |
| P. Wang's explanation | In experience-based intelligences the frequency of occurrence is a strong indicator of a pattern. Here the claim becomes a stand-in for a generalization model - a heuristic - of a particular relationship. Looking for confirmations of one's model is a rational and cognitively logical thing to do, because certainty is not the norm, so looking for certainty does not make sense. \\ [[http://cis-linux1.temple.edu/~pwang/Publication/evidence.pdf|Wason's Cards: What is Wrong?]] by P. Wang |
| Drinking example | Experiments have shown that when the cards become people, and the colors and numbers become people's ages and whether they are drinking alcohol, people reliably choose to inspect the drink of the 19-year-old to verify the rule "If a person is drinking alcohol, they are older than 20". |

\\ \\

==== How Reasoning Can Help Build, Test, and Select Models ====

| Exclusion by reasoning | Given a set of competing models of the same thing, reasoning can be used to exclude some of them. |
| Miniature experiments | Reasoning can also be used to propose interventions in the world, producing a miniature experiment of sorts that helps remove alternative competing models - see the sketch below. |
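A minimal sketch of both ideas (my own construction, not from the notes): each competing model predicts the outcome of a proposed intervention, and performing the intervention excludes every model whose prediction fails.

<code python>
# Illustrative sketch (my construction, not from the lecture notes):
# excluding competing models of "the lamp does not light" by proposing
# an intervention and discarding models whose predictions fail.

models = {
    "dead bulb":       {"try_bulb_in_other_lamp": "stays dark"},
    "tripped breaker": {"try_bulb_in_other_lamp": "lights up"},
    "broken switch":   {"try_bulb_in_other_lamp": "lights up"},
}

def exclude(models, intervention, observed):
    """Keep only the models whose prediction matches the observed outcome."""
    return {name: predictions for name, predictions in models.items()
            if predictions.get(intervention) == observed}

# Miniature experiment: move the bulb to a known-good lamp; it lights up.
print(exclude(models, "try_bulb_in_other_lamp", "lights up"))
# "dead bulb" is excluded; a further probe (e.g. inspecting the breaker)
# is needed to decide between the two surviving models.
</code>

\\ \\ \\
//EOF//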