| Why Empirical? | The concept 'empirical' refers to the physical world: We (humans) live in a physical world, which is to some extent governed by rules, some of which we know something about. |
| Why Reasoning? | For interpreting, managing, understanding, creating and changing **rules**, logic-governed operations are highly efficient and effective. We call such operations 'reasoning'. Since we want to make machines that can operate more autonomously (e.g. in the physical world), reasoning skills are among the features that such systems should be provided with. |
| \\ Why \\ Empirical Reasoning? | The physical world is uncertain because we only know part of the rules that govern it. \\ Even where we have good rules, like the fact that heavy things fall down, applying such rules is a challenge, especially when faced with the passage of time. \\ The term **'empirical'** refers to the fact that the reasoning needed by intelligent agents in the physical world is - at all times - subject to limitations in **energy**, **time**, **space** and **knowledge** (also called the "assumption of insufficient knowledge and resources" (AIKR) by AI researcher Pei Wang). //See the sketch below.// |
\\
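To make the resource limits concrete: below is a minimal, hypothetical Python sketch of 'anytime' answering under a time budget -- the reasoner returns the best conclusion found so far when time runs out, instead of waiting for a provably best one. The function and variable names are illustrative, not from any particular reasoning system.

<code python>
import time

def best_answer_under_deadline(candidates, score, deadline_s):
    """Anytime evaluation under AIKR-style limits: examine candidate
    conclusions until the time budget runs out, then return the best
    one found so far (which later evidence may still defeat)."""
    best, best_score = None, float("-inf")
    start = time.monotonic()
    for candidate in candidates:
        if time.monotonic() - start > deadline_s:
            break  # out of time: settle for the best conclusion so far
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best, best_score

# Example: pick the highest-scoring hypothesis within a 10 ms budget.
hypotheses = range(1_000_000)
print(best_answer_under_deadline(hypotheses, lambda h: -abs(h - 42), 0.01))
</code>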
| Agent | A system that can **sense** and **act** in an environment to do tasks. Sensing and actuating are done via the agent's **transducers**, which are part of its **embodiment**. https://en.wikipedia.org/wiki/Intelligent_agent //(See the sketch after these definitions.)// |
| World / Environment | We call a particular implementation of a set of processes, variables and relationships, such that certain values of variables are possible and others are not, a **World**. \\ An //Environment// in a World is a subset of the World, where the list of what is possible is shorter. |
| Perception | A process that is part of the cognitive system of intelligent systems and whose purpose is to measure and produce outcomes of measurements, in a format that can be used by the control apparatus of which it is part. |
| Percept | The product of perception -- the produced outcomes of measurements. |
| Goal | A substate of a World. A (steady-)state that could be achieved by an agent, if assigned. |
| Sub-Goal | A substate of a World that can serve as an intermediate state towards achieving a (higher-level) goal. |
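The definitions above can be tied together in a short sketch. The Python fragment below is illustrative only -- the class and field names are ours, not from any standard library: an //Agent// senses and acts through transducers (modeled as plain callables), a //percept// is the product of perception, and a //Goal// is a target substate of the World.

<code python>
from dataclasses import dataclass
from typing import Callable, Dict

State = Dict[str, float]  # a substate of the World: variable -> value

@dataclass
class Agent:
    """Minimal agent per the definitions above: it senses and acts in
    an Environment via transducers, modeled here as plain callables."""
    sense: Callable[[], State]             # sensor transducer -> percept
    act: Callable[[str], None]             # actuator transducer
    policy: Callable[[State, State], str]  # (percept, goal) -> action

    def step(self, goal: State) -> bool:
        percept = self.sense()  # perception produces a percept
        if all(abs(percept.get(k, 0.0) - v) < 1e-6 for k, v in goal.items()):
            return True         # the goal substate has been reached
        self.act(self.policy(percept, goal))
        return False
</code>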
^ **TOPIC** ^ **MATHEMATICAL REASONING** ^ **EMPIRICAL REASONING** ^
| \\ Target Use | Specify/define a complete ruleset/system for closed worlds. \\ Intended for use with necessary and sufficient info. \\ Meant for dealing with mathematical domains. | Figure out how to get new things done in open worlds. \\ Intended for use with incomplete and insufficient info. \\ Meant for dealing with physical domains. |
| World Assumption | Closed and certain. \\ Axioms fully known and enumerated. \\ Axiomatic and Platonic (hypothetical). | Open and uncertain. \\ At least one unknown axiom exists at all times; \\ every known axiom is defeasible<sup>1</sup> (not guaranteed). |
| Energy, Time, Space | Independent of energy, space and time \\ (unless specifically put into focus). | Limited by time, energy and space; \\ LEST (limited energy, space and time) is a central concept. |
| Source of Data | Mostly hand-picked by humans from a pre-defined World. | Mostly measured by the reasoning system itself, \\ from a mostly undefined World. |
| Human-Generated Info | Large ratio of human- to machine-generated info (>1). Human-generated info is detailed and targets specific topics and tasks. | Small ratio of human- to machine-generated info (<<1). Human-generated info is provided in a small 'seed' and targets general bootstrapping. |
| Data Availability | Most data is available. No hidden data. | Most data is unavailable and/or hidden. |
| Data Types | Known a-priori. Statements always syntactically correct; pre-defined syntax. | Mostly not known; tiny dataset provided a-priori. |
| Permitted Values | Primarily Bool (True, False). | Highly variable combinations of Bool, N, Z, Q, R, C, \\ **as well as 'uncertain' and 'not known'.** |
| Information Amount | Inevitably sparse (due to being fully known). | Always larger than available processing capacity - overwhelming. |
| Statements | Clear, clean and complete. | Most statements are incomplete; rarely clear and clean. |
| Incorrect Statements | Guaranteed to be identifiable. | Cannot be guaranteed to be identifiable. |
| Deduction | Safe<sup>2</sup> and complete<sup>3</sup> (due to complete and clean data and semantics). | Defeasible \\ (//always//, due to incomplete data and semantics). \\ //See the sketch after this table.// |
| Abduction | Safe and complete \\ (always, due to complete knowledge). | Defeasible \\ (always, due to incomplete knowledge). |
| Induction | Defeasible \\ (always, due to incomplete data). | Defeasible \\ (always, due to incomplete data and semantics). |
| Analogy | Complete \\ (always, due to complete knowledge of data and semantics). | Defeasible \\ (always, due to incomplete data and semantics). |
| |||
| <sup>1</sup> By 'defeasible' is meant that it //may// be found to be incorrect, at any time, given additional data, reconsideration of background assumptions, or discovery of logic errors. |||
| <sup>2</sup> By 'safe' is meant that the output of a reasoning process is provably correct and can be trusted. |||
| <sup>3</sup> By 'complete' is meant that the output of a reasoning process leaves nothing unprocessed. |||
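To illustrate the //Deduction// row above: in Pei Wang's Non-Axiomatic Logic (NAL), statements carry a frequency/confidence truth value rather than a plain True/False, and deduction weakens both, so every conclusion remains revisable. The sketch below follows NAL's published deduction truth function, but treat it as a minimal illustration, not a complete implementation.

<code python>
from dataclasses import dataclass

@dataclass
class Truth:
    f: float  # frequency: ratio of positive evidence, in [0, 1]
    c: float  # confidence: stability of f under new evidence, in [0, 1)

def deduction(t1: Truth, t2: Truth) -> Truth:
    """NAL deduction truth function (after Pei Wang): from 'S -> M' and
    'M -> P', conclude 'S -> P' with weakened frequency and confidence.
    The conclusion stays defeasible, unlike Boolean deduction in a
    closed, axiomatic world."""
    f = t1.f * t2.f
    c = t1.c * t2.c * f
    return Truth(f, c)

# 'Ravens are birds' <0.9, 0.9> and 'birds fly' <0.8, 0.9>:
print(deduction(Truth(0.9, 0.9), Truth(0.8, 0.9)))
# -> roughly Truth(f=0.72, c=0.58): weaker than either premise, revisable.
</code>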
| |