
T-720-ATAI-2019

Lecture Notes: Curiosity, Creativity





Reasoning

What It Is: The establishment of axioms for the world and the application of logic to them.
But The World Is Non-Axiomatic! Yes. But there is no way to apply logic unless we hypothesize some pseudo-axioms. The only difference between this and mathematics is that in science we must accept that the so-called “laws” of physics may be only conditionally correct (or possibly even completely incorrect, in light of our goal of figuring out the “ultimate” truth about how the universe works).
Deduction: Deriving a conclusion that follows with logical necessity from statements (premises) taken to be true.
Example: If it's true that all swans are white, and Joe is a swan, then Joe must be white.
Abduction: Reasoning from conclusions to (likely) causes.
Example: If the light is now on, but it was off just a minute ago, someone must have flipped the switch.
Note that in the reverse case different abductions may be entertained, because of the way the world works: if the light is off now, and it was on just a minute ago, someone may have flipped the switch OR a fuse may have blown.
Induction: Generalization from observation.
Example: All the swans I have ever seen have been white, hence I hypothesize that all swans are white.
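
A minimal illustrative sketch of the three inference modes, using the swan and light-switch examples above (the tiny dictionary of candidate causes is an assumed toy knowledge base, not from any real system):

<code python>
# Deduction: the conclusion follows necessarily from the premises.
all_swans_are_white = True
joe_is_a_swan = True
if all_swans_are_white and joe_is_a_swan:
    print("Joe is white")  # necessarily true, given the premises

# Abduction: reason backwards from an observation to (likely) causes.
# Several hypotheses may explain the same observation.
def abduce(observation):
    candidate_causes = {
        ("light", "off->on"): ["someone flipped the switch"],
        ("light", "on->off"): ["someone flipped the switch", "a fuse blew"],
    }
    return candidate_causes.get(observation, ["unknown cause"])

print(abduce(("light", "on->off")))  # two candidate explanations

# Induction: generalize from observed instances to a hypothesis.
observed_swans = ["white", "white", "white"]
if all(color == "white" for color in observed_swans):
    print("hypothesis: all swans are white")  # falsifiable by one black swan
</code>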



Meaning

What It Is: Something of great importance to people. Meaning seems to be “extracted” from other people's actions, utterances, attitudes, etc. It is generally considered to require intelligence.
Why It Is Important: Meaning seems to enter almost every aspect of cognition.
My Theory: Meaning is generated when a causal-relational model is used to compute the implications of some action, state, event, etc. Any agent that does so will extract meaning when those implications interact with its goals in some way.
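
A minimal sketch of this theory (the names and the toy causal model below are assumed for illustration): implications are computed from a causal-relational model, and “meaning” is the subset of implications that touches the agent's goals.

<code python>
# Toy causal-relational model: event -> implications it predicts.
causal_model = {
    "light turned off": ["room is dark", "reading is impossible"],
    "door opened": ["someone may enter"],
}

goals = {"keep reading"}

def implications_of(event):
    return causal_model.get(event, [])

def meaning_of(event, goals):
    # Meaning = implications that interact with the agent's goals; here a
    # crude word overlap stands in for the model's actual relations.
    return [imp for imp in implications_of(event)
            if any(word in imp for goal in goals for word in goal.split())]

print(meaning_of("light turned off", goals))  # ['reading is impossible']
</code>

On this account an agent with no goals touching the event would extract no meaning from it: the same implications are computed, but none of them matter.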



Common Sense in AI

Status of Understanding in AI: Since the 1970s the concept of understanding has been relegated to the fringes of research. The only AI contexts in which it regularly appears are “language understanding”, “scene understanding” and “image understanding”.
What Took Its Place: What took the place of understanding in AI is common sense. Unfortunately, the concept of common sense does not capture at all what we generally mean by “understanding”.
Projects: The best-known project on common sense is the CYC project, which started in the 1980s and is apparently still going. It is the best-funded, longest-running AI project in history.
Main Methodology: The foundation of CYC is formal logic, represented in predicate-logic statements and structures (a toy illustration follows at the end of this section).
Key Results: Results from the CYC project are similar to those of the expert systems of the 1980s: the systems are brittle and unpredictable.
Apparently the CYC system is being commercialized by a company called Lucid REF.
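
As a rough illustration of the formal-logic methodology (this is not actual CycL syntax; the facts and the single rule are invented), predicate-logic knowledge bases pair ground assertions with inference rules that derive new assertions:

<code python>
# Toy forward chaining over predicate-logic-style facts.
facts = {("isa", "Fido", "Dog"), ("isa", "Dog", "Mammal")}

def forward_chain(facts):
    # Single rule: isa(x, y) & isa(y, z) => isa(x, z)
    changed = True
    while changed:
        changed = False
        for (_, x, y) in list(facts):
            for (_, y2, z) in list(facts):
                if y == y2 and ("isa", x, z) not in facts:
                    facts.add(("isa", x, z))
                    changed = True
    return facts

print(forward_chain(facts))  # derives ('isa', 'Fido', 'Mammal')
</code>

The brittleness noted above shows up when a query falls outside what the hand-coded facts and rules cover: such a system has no graceful fallback.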



Understanding

What It Is: A concept that people use all the time about each other's cognition. With respect to achieving a task, where the target of the understanding is all or some aspects of that task, more understanding is generally considered better than less.
Why It Is Important: Understanding seems to be connected to “real intelligence”: when a machine does X reliably and repeatedly we say that it is “capable” of doing X, but qualify this with “… but it doesn't 'really' understand what it's doing”.
What Does It Mean? No well-known scientific theory exists.
Normally we do not hand control of anything over to anyone who doesn't understand it. All other things being equal, this is a recipe for disaster.
Evaluating Understanding: Understanding of any X can be evaluated along four dimensions: 1. being able to predict X, 2. being able to achieve goals with respect to X, 3. being able to explain X, and 4. being able to “re-create” X (“re-create” here means e.g. creating a simulation that produces X and many or all of its side-effects).
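
A sketch of how such an evaluation might be wired up (all class and method names here are hypothetical, chosen only to mirror the four dimensions; how each probe is implemented depends entirely on X):

<code python>
class DummyAgent:
    """Stand-in agent; each probe returns a score in [0, 1]."""
    def predict_score(self, x):  return 0.9   # 1. how well it predicts x
    def goal_score(self, x):     return 0.7   # 2. how well it achieves goals w.r.t. x
    def explain_score(self, x):  return 0.5   # 3. quality of its explanations of x
    def recreate_score(self, x): return 0.2   # 4. fidelity of its re-creation of x

def evaluate_understanding(agent, x):
    return {"predict": agent.predict_score(x),
            "achieve": agent.goal_score(x),
            "explain": agent.explain_score(x),
            "recreate": agent.recreate_score(x)}

print(evaluate_understanding(DummyAgent(), "light switch"))
</code>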



Kris' Theory of Understanding

What It Is: A way to talk about understanding in the context of AGI.
Why It Is Important: It is the only theory of understanding in the field of AI.
What Does It Mean? Normally we do not hand control of anything over to anyone who doesn't understand it; all other things being equal, that is a recipe for disaster. We need to build systems that we can trust, and we cannot trust an agent that doesn't understand what it's doing or the context it's in.
My Theory: Understanding involves the manipulation of causal-relational models (e.g. those in the AGI-aspiring AERA architecture).
Phenomenon, Model: Phenomenon <m>\Phi</m>: any group of inter-related variables in the world, some or all of which can be measured.
Models <m>M</m>: a set of information structures that reference the variables of <m>\Phi</m> and their relations <m>R</m>, such that they can be used, by applying processes <m>P</m> that manipulate <m>M</m>, to (a) predict, (b) achieve goals with respect to, (c) explain, and (d) (re-)create <m>\Phi</m>.
Definition of Understanding: An agent understands a phenomenon <m>\Phi</m> to some level <m>L</m> when it possesses a set of models <m>M</m> and relevant processes <m>P</m> such that it can use <m>M</m> to (a) predict, (b) achieve goals with respect to, (c) explain, and (d) (re-)create <m>\Phi</m>. Insofar as the nature of the relations between the variables in <m>\Phi</m> determines their behavior, the level <m>L</m> to which the agent understands <m>\Phi</m> is determined by the completeness and accuracy with which <m>M</m> matches the variables and their relations in <m>\Phi</m>.
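
A minimal instantiation of this definition (illustrative only; AERA's actual causal-relational models are far richer): <m>\Phi</m> is a two-variable toy phenomenon, <m>M</m> a single relation over it, and <m>P</m> four small processes that use <m>M</m>.

<code python>
# Phi: inter-related, measurable variables (a switch drives a light).
phi = {"switch": "on", "light": "on"}

# M: structures referencing Phi's variables and their relation R.
model = {"cause": "switch", "effect": "light",
         "relation": lambda s: "on" if s == "on" else "off"}

# P: processes that manipulate M to...
def predict(model, switch_state):            # (a) predict Phi
    return model["relation"](switch_state)

def achieve(model, desired_light):           # (b) achieve goals w.r.t. Phi
    return {"set": model["cause"], "to": desired_light}

def explain(model):                          # (c) explain Phi
    return f"{model['effect']} follows {model['cause']} via R"

def recreate(model, switch_state):           # (d) (re-)create Phi (simulate)
    return {model["cause"]: switch_state,
            model["effect"]: model["relation"](switch_state)}

print(predict(model, "off"), achieve(model, "on"),
      explain(model), recreate(model, phi["switch"]))
</code>

The level <m>L</m> then corresponds to how completely and accurately the model captures the real relations; here the match is perfect only because the toy phenomenon has a single deterministic relation.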
REF: About Understanding by Thórisson et al.





2019©K.R.Thórisson
EOF