|  Why Empirical?  | The concept 'empirical' refers to the physical world: We (humans) live in a physical world, which is to some extent governed by rules, some of which we know something about.  |
|  Why Reasoning?  | For interpreting, managing, understanding, creating and changing **rules**, logic-governed operations are highly efficient and effective. We call such operations 'reasoning'. Since we want to make machines that can operate more autonomously (e.g. in the physical world), reasoning skills are among the features that such systems should be provided with.     |
|  \\ Why \\ Empirical Reasoning?  | The physical world is uncertain because we only know part of the rules that govern it. \\ Even where we have good rules, like the fact that heavy things fall down, applying such rules is a challenge, especially when faced with the passage of time. \\ The term **'empirical'** refers to the fact that the reasoning needed by intelligent agents in the physical world is - at all times - subject to limitations in **energy**, **time**, **space** and **knowledge** (also called the "Assumption of Insufficient Knowledge and Resources" (AIKR) by AI researcher Pei Wang). A rough illustration follows below this table.     |
\\
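To make the AIKR/LEST idea more concrete, here is a minimal, hypothetical Python sketch (not part of the original course material; all names are illustrative): a reasoner that derives what it can from incomplete observations and rules, stops when its time or cycle budget runs out, and marks every conclusion as defeasible rather than certain.

<code python>
import time

# Illustrative sketch of reasoning under AIKR / LEST (assumed example):
# knowledge is incomplete, and the reasoner must stop when its time or
# cycle budget runs out, returning its best (revisable) conclusions so far.

def bounded_reason(observations, rules, time_budget_s=0.01, max_cycles=1000):
    """Derive conclusions until the time or cycle budget is exhausted."""
    beliefs = set(observations)              # incomplete by assumption
    deadline = time.monotonic() + time_budget_s
    for _ in range(max_cycles):              # limited cycles (energy proxy)
        if time.monotonic() >= deadline:     # limited time
            break
        new = {head for body, head in rules if body <= beliefs} - beliefs
        if not new:                          # nothing more derivable from current knowledge
            break
        beliefs |= new
    # Every conclusion is defeasible: later data may overturn it.
    return {belief: "defeasible" for belief in beliefs}

# Rules are (premise-set, conclusion) pairs; both lists are tiny and incomplete.
rules = [
    (frozenset({"heavy", "unsupported"}), "falls"),
    (frozenset({"falls", "fragile"}), "may_break"),
]
print(bounded_reason({"heavy", "unsupported", "fragile"}, rules))
</code>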
  
^  **TOPIC**  ^  **MATHEMATICAL REASONING**  ^  **EMPIRICAL REASONING**  ^
|  \\ Target Use  |  Specify/define a complete ruleset/system for closed worlds. \\ Intended for use with necessary and sufficient info. \\ Meant for dealing with mathematical domains.  |  Figure out how to get new things done in open worlds. \\ Intended for use with incomplete and insufficient info. \\ Meant for dealing with physical domains.     |
|  World Assumption  |  Closed and certain. \\ Axioms fully known and enumerated. \\ Axiomatic and Platonic (hypothetical).  |  Open and uncertain. \\ At least one unknown axiom exists at all times; \\ every known axiom is defeasible<sup>1</sup> (not guaranteed).   |
|  Energy, Time, Space  |  Independent of energy, space and time \\ (unless specifically put into focus).  |  Limited by time, energy and space; \\ LEST (limited energy, space and time) is a central concept.  |
|  Source of Data  |  Mostly hand-picked by humans from a pre-defined World.    |  Mostly measured by the reasoning system itself, \\ from a mostly undefined World.   |
|  Human-Generated Info  |  Large ratio of human- to machine-generated info (>1). Human-generated info is detailed and targets specific topics and tasks.  |  Small ratio of human- to machine-generated info (<<1). Human-generated info is provided in a small 'seed' and targets general bootstrapping.   |
|  Data Availability  |  Most data is available. No hidden data.  |  Most data is unavailable and/or hidden.   |
|  Data Types  |  Known a priori. Statements always syntactically correct; pre-defined syntax.  |  Mostly not known; tiny dataset provided a priori.  |
|  Permitted Values  |  Primarily Bool (True, False).  |  Highly variable combinations of Bool, N, Z, Q, R, C, \\ **as well as 'uncertain' and 'not known'.**  |
|  Information Amount  |  Inevitably sparse (due to being fully known).  |  Always larger than available processing capacity; overwhelming.   |
|  Statements  |  Clear, clean and complete.  |  Most statements are incomplete; rarely clear and clean.   |
|  Incorrect Statements  |  Guaranteed to be identifiable.  |  Cannot be guaranteed to be identifiable.  |
|  Deduction  |  Safe<sup>2</sup> and complete<sup>3</sup> (due to complete and clean data and semantics).  |  Defeasible \\ (//always//, due to incomplete data and semantics).   |
|  Abduction  |  Safe and complete \\ (always, due to complete knowledge).  |  Defeasible \\ (always, due to incomplete knowledge).   |
|  Induction  |  Defeasible \\ (always, due to incomplete data).  |  Defeasible \\ (always, due to incomplete data and semantics).   |
|  Analogy  |  Complete \\ (always, due to complete knowledge of data and semantics).  |  Defeasible \\ (always, due to incomplete data and semantics).   |
| |||
| <sup>1</sup> By 'defeasible' is meant that it //may// be found to be incorrect, at any time, given additional data, reconsideration of background assumptions, or discovery of logic errors (see the sketch below this table).  |||
| <sup>2</sup> By 'safe' is meant that the output of a reasoning process is provably correct and can be trusted.  |||
| <sup>3</sup> By 'complete' is meant that the output of a reasoning process leaves nothing unprocessed.  |||
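As a rough, hypothetical illustration of the 'safe' vs. 'defeasible' distinction above (not part of the original course material; all names are illustrative), the Python sketch below contrasts classical deduction in a closed, fully axiomatized world, where a derived statement never needs revision, with defeasible deduction in an open world, where a later observation can defeat an earlier conclusion.

<code python>
# Assumed, simplified example: closed-world ('safe') deduction vs.
# open-world (defeasible) deduction. In the closed world all axioms are
# known, so derived facts are never retracted; in the open world a new
# observation can defeat a previously drawn conclusion.

def deduce_closed(axioms, rules):
    """Classical deduction: conclusions are safe and never revised."""
    facts = set(axioms)
    changed = True
    while changed:
        new = {head for body, head in rules if body <= facts} - facts
        changed = bool(new)
        facts |= new
    return facts

def deduce_open(beliefs, rules, defeaters):
    """Defeasible deduction: a conclusion holds only until a known defeater is observed."""
    conclusions = {}
    for body, head in rules:
        if body <= beliefs:
            # 'True for now', unless some defeater of this conclusion is also believed.
            conclusions[head] = not (defeaters.get(head, set()) & beliefs)
    return conclusions

rules = [(frozenset({"bird"}), "flies")]

# Closed world: 'bird' is the whole story, so 'flies' is safe.
print(deduce_closed({"bird"}, rules))                      # {'bird', 'flies'}

# Open world: the same rule, but a later observation ('penguin') defeats it.
defeaters = {"flies": {"penguin"}}
print(deduce_open({"bird"}, rules, defeaters))             # {'flies': True}
print(deduce_open({"bird", "penguin"}, rules, defeaters))  # {'flies': False}
</code>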
  