DCS-T-709-AIES-2024 Main
Link to Lecture Notes





NEXT-GENERATION AI



What is Needed for Cognitive Autonomy


Selection
Autonomous selection of variables: Very few if any existing learning methods can decide for themselves whether, from a set of variables with potential relevance to their learning, any one of them (a) is relevant, (b) if so, how much, and (c) in what way. (A minimal code sketch of this requirement follows this list.)
Autonomous selection of processes: Very few if any existing learning methods can decide for themselves what kind of learning algorithm to employ (learning to learn).
Goal-Generation: Very few if any existing learning methods can generate their own (sub-)goals. Of those that might be said to be able to, none can do so freely for any topic or domain.
Control of Resources: By “resources” we mean, at the very least, computing power (think time), time, and energy.
Few if any existing learning methods are any good at (a) controlling their resource use, (b) planning for it, (c) assessing it, or (d) explaining it.
Novelty: To handle novelty autonomously, a system needs autonomous hypothesis creation related to variables, relations, and transfer functions.
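
To make the variable-selection requirement concrete, here is a minimal sketch, invented for these notes rather than taken from any existing method: a learner scores each candidate variable against a prediction target and reports (a) whether it appears relevant, (b) how strongly, and (c) in what way (the direction of the relationship). The correlation measure and the 0.3 threshold are arbitrary assumptions.

<code python>
# Illustrative sketch (not an existing method): a learner decides, for each
# candidate variable, (a) whether it is relevant to a target, (b) how much,
# and (c) in what way (sign of the relationship).
import random
import statistics

def correlation(xs, ys):
    """Pearson correlation of two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def select_variables(candidates, target, threshold=0.3):
    """Return {name: (relevant?, strength, direction)} for each candidate variable."""
    report = {}
    for name, values in candidates.items():
        r = correlation(values, target)
        report[name] = (abs(r) >= threshold,                    # (a) relevant?
                        round(abs(r), 2),                       # (b) how much
                        "positive" if r >= 0 else "negative")   # (c) in what way
    return report

# Toy data: 'temp' drives the target, 'noise' does not.
random.seed(0)
temp = [random.uniform(0, 30) for _ in range(50)]
noise = [random.uniform(0, 1) for _ in range(50)]
target = [2.0 * t + random.gauss(0, 3) for t in temp]
print(select_variables({"temp": temp, "noise": noise}, target))
</code>

A cognitively autonomous learner would, in addition, have to choose the relevance measure and the threshold for itself; that choice is exactly what current methods leave to the designer.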


Four Dimensions of Control System Autonomy

Figure: “Autonomy comparison framework focusing on mental capabilities. Embodiment is not part of the present framework, but is included here for contextual completeness.”
From Thórisson & Helgason 2012.



Empirical Reasoning Types


Deduction
Figuring out the implications of facts (or predicting what may come).
General → Specific.
Producing implications from premises.
The premises are given; the work involves everything else.
Conclusion is unavoidable given the premises (in a deterministic, axiomatic world).

Abduction
Figuring out how things came to be the way they are (or how particular outcomes could be made to come about, or how particular outcomes could be prevented).
The outcome is given; the work involves everything else.
Sherlock Holmes is a genius abducer.

Induction
Figuring out the general case.
Specific → General.
Making general rules from a (small) set of examples, e.g. 'the sun has risen in the east every morning up until now; hence, the sun will also rise in the east tomorrow.'

Analogy
Figuring out how things are similar or different.
Making inferences about how something X may be (or is) through a comparison to something else Y, where X and Y share some observed properties. (A toy code sketch contrasting all four reasoning types follows at the end of this section.)
Axiomatic Reasoning: When the above methods are used on data and in situations where all the rules and data are known and certain, the reasoning is axiomatic.
This form of reasoning has a long history in mathematics and logic.
Non-Axiomatic Reasoning: In the physical world, the “rules” are not all known and not all certain. This calls for a defeasible version of the above reasoning, that is, one in which any datum, rule, or conclusion could turn out to be incorrect and must therefore remain open to revision.
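
A minimal sketch contrasting the four reasoning types on a toy rule base. The rules (“rain causes wet grass”, etc.) and all names are assumptions made up for this illustration; real abduction, induction, and analogy are of course far less trivial than these lookups.

<code python>
# Toy contrast of the four empirical reasoning types over a tiny rule base.
# Everything here is an illustrative assumption, not a real reasoning system.
RULES = {"rain": "wet_grass", "sprinkler": "wet_grass", "sun": "dry_grass"}

def deduce(fact):
    """Deduction: the premises are given; derive the implication that follows."""
    return RULES.get(fact)

def abduce(observation):
    """Abduction: the outcome is given; hypothesize what could have produced it."""
    return [cause for cause, effect in RULES.items() if effect == observation]

def induce(examples):
    """Induction: from specific (cause, effect) examples, propose a general rule."""
    causes, effects = zip(*examples)
    if len(set(effects)) == 1:
        return f"whenever one of {sorted(set(causes))} occurs, expect {effects[0]}"
    return None

def analogize(x_properties, y_properties, y_extra_property):
    """Analogy: X shares observed properties with Y, so X may also have Y's extra property."""
    return y_extra_property if x_properties & y_properties else None

print(deduce("rain"))                                              # wet_grass
print(abduce("wet_grass"))                                         # ['rain', 'sprinkler']
print(induce([("rain", "wet_grass"), ("sprinkler", "wet_grass")]))
print(analogize({"has_wings"}, {"has_wings", "has_feathers"}, "can_fly"))
</code>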



Considerations for Empirical Reasoning

Why Empirical? The concept 'empirical' refers to the physical world: We (humans) live in a physical world, which is to some extent governed by rules, some of which we know something about.
Why Reasoning? For interpreting, managing, understanding, creating and changing rules, logic-governed operations are highly efficient and effective. We call such operations 'reasoning'. Since we want to make machines that can operate more autonomously (e.g. in the physical world), reasoning skill is one of the features such systems should be provided with.

Why Empirical Reasoning?
The physical world is uncertain because we only know part of the rules that govern it.
Even where we have good rules, like the fact that heavy things fall down, applying such rules is a challenge, especially when faced with the passage of time.
The term 'empirical' refers to the fact that the reasoning needed by intelligent agents in the physical world is - at all times - subject to limitations in energy, time, space and knowledge (also called the “assumption of insufficient knowledge and resources” (AIKR) by AI researcher Pei Wang).
Trustworthy Reasoning: Because the physical world is not deterministic, and nobody knows all the relevant rules (not even in everyday activities) – and can in fact never know them – operating intelligently in the physical world requires a special type of reasoning: non-axiomatic reasoning. Achieving trustworthiness in non-axiomatic reasoning is a huge challenge that AI has not yet come to grips with.
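
One way to make defeasibility concrete is to attach an evidence-based truth value to every belief, loosely in the spirit of Pei Wang's non-axiomatic logic. The formulas and the constant K below are simplified assumptions for illustration, not Wang's actual system; the point is only that confidence grows with accumulated evidence yet never reaches 1, so every belief remains revisable.

<code python>
# Sketch of defeasible (non-axiomatic) beliefs: each statement carries evidence
# counts instead of a fixed truth value. The formulas and K are illustrative
# simplifications, loosely inspired by Pei Wang's NARS.
from dataclasses import dataclass

K = 1.0  # "evidential horizon": how much further evidence we always expect

@dataclass
class Belief:
    positive: float = 0.0   # evidence supporting the statement
    total: float = 0.0      # all evidence observed so far

    def observe(self, supports, weight=1.0):
        """Revise the belief with one new piece of (possibly contradicting) evidence."""
        if supports:
            self.positive += weight
        self.total += weight

    @property
    def frequency(self):    # how often the statement has held so far
        return self.positive / self.total if self.total else 0.5

    @property
    def confidence(self):   # approaches but never reaches 1.0: always revisable
        return self.total / (self.total + K)

sun_rises_east = Belief()
for _ in range(1000):
    sun_rises_east.observe(supports=True)    # a thousand mornings of evidence
sun_rises_east.observe(supports=False)       # one contradicting report is absorbed, not fatal
print(round(sun_rises_east.frequency, 3), round(sun_rises_east.confidence, 3))
</code>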



Cumulative Learning

What it Is: Cumulative learning unifies several separate research tracks in a coherent form that is easily relatable to AGI requirements: multitask learning, online learning, lifelong learning, robust knowledge acquisition, transfer learning, and few-shot learning.

Multitask Learning
The ability to learn more than one task, either at once or in sequence.
The cumulative learner's ability to generalize, investigate, and reason will affect how well it implements this ability.
Subsumed by cumulative learning because knowledge is contextualized as it is acquired, meaning that the system has a place and a time for every tiny bit of information it absorbs.

Online Learning
The ability to learn continuously, uninterrupted, and in real time from experience as it comes, without having to iterate over it many times.
Subsumed by cumulative learning because new information, which comes in via experience, is integrated with prior knowledge at the time it is acquired, so a cumulative learner is always learning as it's doing other things.
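
A minimal sketch of learning from experience as it comes: the model below makes one small update per observation and never revisits old data. The linear model, the learning rate, and the toy stream are assumptions chosen purely for illustration.

<code python>
# Minimal online-learning sketch: one small update per observation,
# no stored dataset, no repeated passes over old data.
# The model, learning rate, and data stream are illustrative assumptions.
import random

class OnlineLinearModel:
    def __init__(self, lr=0.1):
        self.w, self.b, self.lr = 0.0, 0.0, lr

    def predict(self, x):
        return self.w * x + self.b

    def update(self, x, y):
        """One stochastic-gradient step on the squared error of this single (x, y)."""
        error = self.predict(x) - y
        self.w -= self.lr * error * x
        self.b -= self.lr * error

random.seed(1)
model = OnlineLinearModel()
for _ in range(2000):          # experience arrives one observation at a time
    x = random.random()
    y = 3.0 * x + 1.0          # the hidden regularity to be picked up
    model.update(x, y)         # learn from it once, then move on
print(round(model.w, 2), round(model.b, 2))   # approaches 3.0 and 1.0
</code>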

Lifelong Learning
Means that an AI system keeps learning and integrating knowledge throughout its operational lifetime: learning is “always on”.
However this is measured, we expect at a minimum that the 'learning cycle' – alternating learning and non-learning periods – is free from designer tampering or intervention at runtime. Provided this, the smaller those periods become (relative to the shortest perception-action cycle, for instance), to the point of being considered virtually or completely continuous, the better the “learning always on” requirement is met.
Subsumed by cumulative learning because the continuous online learning is steady and ongoing all the time – why switch it off?

Robust Knowledge Acquisition
Its antithesis is brittle learning, where new knowledge results in catastrophic perturbation of prior knowledge (and behavior).
Subsumed by cumulative learning because new information is integrated continuously online: the increments are frequent and small, inconsistencies in prior knowledge get exposed in the process, and, because the learning is lifelong, opportunities for fixing small inconsistencies are also frequent. New information is therefore highly unlikely to result in, e.g., catastrophic forgetting.

Transfer Learning
The ability to build new knowledge on top of old in a way that the old knowledge facilitates learning the new. While interference/forgetting should not occur, knowledge should still be defeasible: the physical world is non-axiomatic, so any knowledge could be proven incorrect in light of contradictory evidence.
Subsumed by cumulative learning because new information is integrated with old information, which may result in exposure of inconsistencies, missing data, etc., which is then dealt with as a natural part of the cumulative learning operations.

Few-Shot Learning
The ability to learn something from very few examples or very little data. Common variants include one-shot learning, where the learner only needs to be told (or to experience) something once, and zero-shot learning, where the learner can infer it without needing to experience it or be told at all.
Subsumed by cumulative learning because prior knowledge transfers to new information, meaning that (theoretically) only the delta between what has previously been learned and what the new information requires needs to be learned.
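
A toy sketch of the common thread in the items above: each incoming piece of information is integrated with prior knowledge at the moment it is acquired, stamped with when and from where it came, and compared against what is already known, so only the delta triggers new learning. All names and the contradiction handling below are assumptions invented for this illustration, not a description of an actual cumulative learner.

<code python>
# Toy sketch of cumulative knowledge integration. Everything here is an
# illustrative assumption: each incoming fact is contextualized (time, source),
# compared against prior knowledge, and only the new or conflicting part
# triggers further work.
import time

class CumulativeStore:
    def __init__(self):
        self.facts = {}   # statement -> {"value", "time", "source"}

    def integrate(self, statement, value, source):
        """Integrate one piece of experience as it arrives (online, lifelong)."""
        prior = self.facts.get(statement)
        if prior is None:
            status = "new"                                  # few-shot: only the delta is learned
        elif prior["value"] == value:
            status = "already known"                        # nothing to relearn
        else:
            status = f"revised (was {prior['value']!r})"    # defeasible, not catastrophic
        self.facts[statement] = {"value": value,
                                 "time": time.time(),       # a time for every bit of information
                                 "source": source}          # ...and a place
        return status

store = CumulativeStore()
print(store.integrate("grass is", "green", source="vision"))    # new
print(store.integrate("grass is", "green", source="teacher"))   # already known
print(store.integrate("grass is", "yellow", source="vision"))   # revised (was 'green')
</code>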



Cognitive Growth

What it is: Changes in the cognitive controller (the core “thinking” part) over and above basic learning: after a growth burst of this kind the controller can learn differently, better, or entirely new things, especially new categories of things.
Why it Matters: In humans, cognitive growth seems to be nature's method for ensuring safety when knowledge is extremely lacking. Instead of allowing a human baby to walk around and do things, nature makes human babies start with extremely primitive cognitive abilities that grow over time under the guidance of a competent caretaker.
Human Example: Piaget's Stages of Development (YouTube video)





2024©K.R.Thórisson

