\\
======NEXT-GENERATION AI: Cause-Effect Knowledge, Cumulative Learning, Empirical Reasoning, Reflection, Autonomy, Trustworthiness ======
\\
\\
\\
==== Cognitive Growth ====
| What it is | Changes in the cognitive controller (the core "thinking" part) over and beyond basic learning: after a growth burst of this kind the controller can learn differently/better/new things, especially new //categories// of things //(see the sketch below)//. |
| Why it Matters | In humans, cognitive growth seems to be nature's method for ensuring safety when knowledge is extremely lacking. Instead of allowing a human baby to walk around and do things, nature makes human babies start with extremely primitive cognitive abilities that grow over time under the guidance of a competent caretaker. |
| Human example | [[https://m.youtube.com/watch?v=TRF27F2bn-A|Piaget's Stages of Development (youtube video)]] |
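The following toy sketch (Python) is meant only to make concrete the difference between ordinary learning and a //growth burst// that changes what can be learned at all. It is **not** an implementation of any particular cognitive architecture; all names (''GrowthController'', ''Stage'') are hypothetical, and the Piaget-flavored stage labels are illustrative only.

<code python>
# Toy sketch (hypothetical, not any specific architecture): a controller whose
# learning machinery itself changes at discrete "growth bursts", so that after
# a burst it can acquire entirely new *categories* of knowledge.
from dataclasses import dataclass, field

@dataclass
class Stage:
    name: str
    learnable_categories: set          # kinds of things learnable at this stage

@dataclass
class GrowthController:
    stages: list                       # ordered developmental stages
    stage_index: int = 0
    knowledge: dict = field(default_factory=dict)   # category -> learned items

    def can_learn(self, category: str) -> bool:
        return category in self.stages[self.stage_index].learnable_categories

    def learn(self, category: str, item) -> bool:
        """Ordinary learning: only works for categories the current stage supports."""
        if not self.can_learn(category):
            return False               # the item is simply not learnable yet
        self.knowledge.setdefault(category, []).append(item)
        return True

    def growth_burst(self):
        """Cognitive growth: the controller itself changes, unlocking new categories."""
        if self.stage_index < len(self.stages) - 1:
            self.stage_index += 1

# Usage (stage labels loosely inspired by Piaget, purely illustrative):
controller = GrowthController(stages=[
    Stage("sensorimotor", {"object-permanence"}),
    Stage("concrete", {"object-permanence", "conservation"}),
    Stage("formal", {"object-permanence", "conservation", "hypotheticals"}),
])
print(controller.learn("conservation", "water level stays the same"))  # False: too early
controller.growth_burst()
print(controller.learn("conservation", "water level stays the same"))  # True: category unlocked
</code>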
\\
\\
=====Empirical Reasoning Types=====
| \\ Deduction | Figuring out the implication of facts (or predicting what may come). \\ General -> Specific. \\ Producing implications from premises. \\ The //premises// are given; the work involves everything else. \\ Conclusion is unavoidable given the premises (in a deterministic, axiomatic world). |
| \\ Abduction | Figuring out how things came to be the way they are (or how particular outcomes could be made to come about, or how particular outcomes could be prevented). \\ The //outcome// is given; the work involves everything else. \\ Sherlock Holmes is a genius abducer. |
| \\ Induction | Figuring out the general case. \\ Specific -> General. \\ Making general rules from a (small) set of examples, e.g. 'the sun has risen in the east every morning up until now; hence, the sun will also rise in the east tomorrow'. |
| \\ Analogy | Figuring out how things are similar or different. \\ Making inferences about how something X may be (or is) through a comparison to something else Y, where X and Y share some observed properties. |
| Axiomatic Reasoning | When the above methods are used for data and situations where all the rules and data are known and certain, the reasoning is //axiomatic//. \\ This form of reasoning has a long history in mathematics and logic. |
| Non-Axiomatic Reasoning | In the physical world, the "rules" are **not** all known and are **not** all certain. This calls for a version of the above reasoning that is //**defeasible**//, that is, where any datum, rule, or conclusion **could** turn out to be incorrect and may have to be revised //(see the sketch below)//. |
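A minimal sketch (Python) of the reasoning types above, done in a //defeasible// (non-axiomatic) style where every belief, rule and conclusion carries a confidence below 1.0 and may be revised later. The rule format and the confidence arithmetic are illustrative assumptions only -- loosely inspired by non-axiomatic reasoning, not an implementation of NARS or any other existing system -- and analogy is omitted for brevity.

<code python>
# Toy sketch: deduction, abduction and induction over one toy rule, with
# confidences < 1.0 so every conclusion stays revisable (defeasible).
# Rule format and confidence arithmetic are illustrative assumptions.
from collections import Counter

# A rule "A => B" with a confidence, e.g. ("raining", "street_wet", 0.9)
rules = [("raining", "street_wet", 0.9)]

def deduce(fact, fact_conf, rules):
    """Deduction: from 'A' and 'A => B', conclude 'B' (general -> specific)."""
    return [(b, fact_conf * c) for (a, b, c) in rules if a == fact]

def abduce(outcome, outcome_conf, rules):
    """Abduction: from outcome 'B' and 'A => B', hypothesize 'A' as a possible cause.
    Weaker than deduction, so the confidence is discounted further."""
    return [(a, outcome_conf * c * 0.5) for (a, b, c) in rules if b == outcome]

def induce(observations):
    """Induction: from repeated co-occurrences (A, B), propose the rule 'A => B'
    with a confidence that grows with supporting cases but never reaches 1.0."""
    counts = Counter(observations)
    return [(a, b, n / (n + 1)) for (a, b), n in counts.items()]

# Usage
print(deduce("raining", 0.95, rules))       # street_wet, confidence ~0.86
print(abduce("street_wet", 1.0, rules))     # raining, confidence ~0.45 -- a hypothesis only
print(induce([("sun_rises", "east")] * 5))  # rule sun_rises => east, confidence ~0.83
</code>

In this defeasible style even a deduction's conclusion is not final: if the rule's confidence is later revised, everything derived from it must be revised as well.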
\\
| Why Reasoning? | For interpreting, managing, understanding, creating and changing **rules**, logic-governed operations are highly efficient and effective. We call such operations 'reasoning'. Since we want to make machines that can operate more autonomously (e.g. in the physical world), reasoning skills are among the features such systems should be provided with. |
| \\ Why \\ Empirical Reasoning? | The physical world is uncertain because we only know part of the rules that govern it. \\ Even where we have good rules, like the fact that heavy things fall down, applying such rules is a challenge, especially when faced with the passage of time. \\ The term **'empirical'** refers to the fact that the reasoning needed by intelligent agents in the physical world is - at all times - subject to limitations in **energy**, **time**, **space** and **knowledge** (also called the "assumption of insufficient knowledge and resources" (AIKR) by AI researcher Pei Wang) //(see the sketch below)//. |
| Trustworthy Reasoning | Because the physical world is not deterministic, and nobody knows all the relevant rules (not even in everyday activities) -- and can in fact //never// know them -- reasoning in the physical world requires a special type of reasoning: non-axiomatic reasoning. Achieving trustworthiness in non-axiomatic reasoning is a huge challenge that AI has not come to grips with yet. |
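A minimal sketch (Python) of reasoning under AIKR: the reasoner forward-chains through its rules but must stop when a step or time budget runs out, and returns whatever it has derived so far. The budget model and the rule format are illustrative assumptions, not a prescription for how such a system must be built.

<code python>
# Toy sketch of reasoning under insufficient knowledge and resources (AIKR):
# answer with whatever has been derived when the budget expires, rather than
# assuming all consequences of all rules can ever be enumerated.
import time

def bounded_forward_chain(facts, rules, max_steps=100, deadline_s=0.01):
    """Forward-chain with confidences, stopping at the step or time budget."""
    derived = dict(facts)                 # belief -> confidence
    start = time.monotonic()
    steps = 0
    changed = True
    while changed and steps < max_steps and (time.monotonic() - start) < deadline_s:
        changed = False
        for (a, b, c) in rules:
            steps += 1
            if a in derived:
                new_conf = derived[a] * c
                if derived.get(b, 0.0) < new_conf:
                    derived[b] = new_conf # keep the best-supported estimate so far
                    changed = True
    return derived                        # possibly incomplete -- and that is the point

# Usage (hypothetical rules): results are usable even if the budget cut reasoning short.
rules = [("raining", "street_wet", 0.9), ("street_wet", "slippery", 0.8)]
print(bounded_forward_chain({"raining": 1.0}, rules, max_steps=3))
</code>

The point is not this particular loop but the contract it illustrates: the answer must be usable whenever energy, time or knowledge runs out.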
\\
\\
====Cumulative Learning====
| What it Is | Unifies several separate research tracks in a coherent form easily relatable to AGI requirements: Multitask learning, lifelong learning, transfer learning and few-shot learning. |
| \\ Multitask Learning | The ability to learn more than one task, either at once or in sequence. \\ The cumulative learner's ability to generalize, investigate, and reason will affect how well it implements this ability. \\ //Subsumed by cumulative learning because knowledge is contextualized as it is acquired, meaning that the system has a place and a time for every tiny bit of information it absorbs.// |
| \\ Online Learning | The ability to learn continuously, uninterrupted, and in real-time from experience as it comes, and without specifically iterating over it many times. \\ //Subsumed by cumulative learning because new information, which comes in via experience, is **integrated** with prior knowledge at the time it is acquired, so a cumulative learner is **always learning** as it is doing other things (see the sketch below).// |
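A minimal sketch (Python) of the cumulative/online idea: each experience is integrated with prior knowledge the moment it arrives (no second pass over stored raw data), is tagged with when it was acquired, and the resulting knowledge is shared across tasks. The class and its fields are hypothetical, chosen only to illustrate the definitions above.

<code python>
# Toy sketch of cumulative / online learning: integrate each observation as it
# arrives, contextualize it (timestamp), and reuse the knowledge for any task.
# Names are illustrative assumptions, not an existing API.
import time
from collections import defaultdict

class CumulativeLearner:
    def __init__(self):
        # (context, cause) -> {effect: count}; shared across tasks, so knowledge
        # acquired for one task can be reused for another (transfer).
        self.cooccurrence = defaultdict(lambda: defaultdict(int))
        self.provenance = []              # when each piece of knowledge was acquired

    def observe(self, context, cause, effect):
        """Online update: integrate one experience immediately, never revisit it."""
        self.cooccurrence[(context, cause)][effect] += 1
        self.provenance.append((time.time(), context, cause, effect))

    def predict(self, context, cause):
        """Use the accumulated knowledge for whatever task needs it."""
        effects = self.cooccurrence.get((context, cause))
        if not effects:
            return None                   # few-shot regime: nothing known yet
        total = sum(effects.values())
        best = max(effects, key=effects.get)
        return best, effects[best] / total    # most frequent effect + relative support

# Usage: learning happens *while* doing, one observation at a time.
learner = CumulativeLearner()
learner.observe("kitchen", "drop_cup", "cup_breaks")
learner.observe("kitchen", "drop_cup", "cup_breaks")
learner.observe("kitchen", "drop_cup", "cup_bounces")
print(learner.predict("kitchen", "drop_cup"))   # ('cup_breaks', ~0.67)
</code>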
\\
====Trustworthiness====
| What It Is | The ability of a machine's owner to trust that the machine will do what it is supposed to do. |
| Why It Is Important | Any machine created by humans is created for a purpose. The more reliably it does its job (and nothing else), the more trustworthy it is. Trusting simple machines like thermostats involves mostly durability, since they have very few open variables (unbound variables at time of manufacture). |
| Human-Level AI | To make human-level AI trustworthy is very different from creating simple machines because so many variables are unbound at manufacture time. What does trustworthiness mean in this context? We can look at human trustworthiness: Numerous methods exist for ensuring trustworthiness (license to drive, air traffic controller training, certification programs, etc.). We can have the same certification programs for all humans because their principles of operation are shared at multiple levels of detail (biology, sociology, psychology). For an AI this is different because the variability in the makeup of the machines is enormous. This makes trustworthiness of AI robots a complex issue. |
| To Achieve Trustworthiness | Requires **reliability** and **predictability** at multiple levels of operation. Trustworthiness can be ascertained through special certification programs geared directly at the **kind of robot/AI system in question** (somewhat like certifying a particular horse as safe for a particular circumstance and purpose, e.g. horseback riding for kids) //(see the sketch below)//. |
| Trustworthiness Methods | Methods for ensuring and certifying the trustworthiness of AI systems are still in their infancy. |
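As a very rough illustration of what checking **reliability** and **predictability** could look like, the sketch below (Python) runs the same task many times and measures the success rate and the spread of behavior across runs. The thresholds, the metrics and the agent interface are hypothetical assumptions; actual certification of an AI system would involve far more than this.

<code python>
# Toy sketch of a "certification"-style check for one specific robot/AI system:
# reliability    = how often the task succeeds across repeated trials,
# predictability = how little its behavior varies between those trials.
# Thresholds, metrics and the agent interface are hypothetical assumptions.
import random
import statistics

def certify(run_trial, n_trials=100, min_reliability=0.99, max_variability=0.05):
    """run_trial() is assumed to return (succeeded: bool, behavior_score: float)."""
    results = [run_trial() for _ in range(n_trials)]
    successes = [ok for ok, _ in results]
    scores = [score for _, score in results]
    reliability = sum(successes) / n_trials
    variability = statistics.pstdev(scores)       # spread of behavior across runs
    passed = reliability >= min_reliability and variability <= max_variability
    return {"reliability": reliability, "variability": variability, "certified": passed}

# Usage with a stand-in agent that always succeeds and behaves almost identically:
print(certify(lambda: (True, 1.0 + random.uniform(-0.01, 0.01))))
</code>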
\\
\\
\\
\\
//2024(c)K.R.Thórisson//