[[http://cadia.ru.is/wiki/public:t-720-atai:atai-19:main|T-720-ATAI-2019 Main]] \\ [[http://cadia.ru.is/wiki/public:t-720-atai:atai-19:Lecture_Notes|Links to Lecture Notes]]

=====T-720-ATAI-2019=====
==== Lecture Notes: Curiosity, Creativity ====
\\
\\
\\
\\
====Reasoning====
| What It Is | The establishment of axioms about the world and the application of logic to them. |
| But The World Is Non-Axiomatic! | Yes. But there is no way to apply logic unless we hypothesize some pseudo-axioms. The only difference between this and mathematics is that in science we must accept that the so-called "laws" of physics may be only conditionally correct (or possibly even completely incorrect, in light of our goal of figuring out the "ultimate" truth about how the universe works). |
| Deduction | Deriving a conclusion that follows necessarily from premises (statements assumed to be true). \\ //Example: If it is true that all swans are white, and Joe is a swan, then Joe must be white.// |
| Abduction | Reasoning from an observed conclusion to its (likely) causes. \\ //Example: If the light is now on, but it was off just a minute ago, someone must have flipped the switch.// \\ Note that in the reverse case different abductions may be entertained, because of the way the world works: //If the light is off now, and it was on just a minute ago, someone may have flipped the switch OR a fuse may have blown.// |
| Induction | Generalization from observation. \\ //Example: All the swans I have ever seen have been white, hence I hypothesize that all swans are white.// \\ (A small illustrative code sketch of these three reasoning modes appears after the Meaning section below.) |
\\
\\
====Meaning====
| What It Is | Something of great importance to people. Meaning seems to be "extracted" from other people's actions, utterances, attitudes, etc. It is generally considered to require intelligence. |
| Why It Is Important | Meaning seems to enter almost every aspect of cognition. |
| My Theory | Meaning is generated when a causal-relational model is used to compute the //implications// of some action, state, event, etc. Any agent that does so will extract meaning when those implications interact with its goals in some way. (A minimal code sketch of this idea follows below.) |
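\\
The following is a minimal, purely illustrative Python sketch of the theory of meaning stated above: a toy causal-relational model is used to compute the //implications// of an event, and the event acquires meaning for an agent insofar as those implications interact with the agent's goals. The model contents, the goal structure and all identifiers (''CAUSAL_MODEL'', ''implications'', ''meaning_for'') are assumptions made for this illustration only; this is not code from AERA or from the referenced papers.

<code python>
# Illustrative sketch only (hypothetical names, not AERA code):
# meaning = the implications of an event, computed with a causal-relational
# model, interacting with the agent's goals.

# A toy causal-relational model: event -> set of implied consequences.
CAUSAL_MODEL = {
    "switch flipped": {"light on"},
    "light on":       {"room visible"},
    "fuse blown":     {"light off"},
    "light off":      {"room dark"},
}

def implications(event, model):
    """Follow causal links transitively to collect everything the event implies."""
    implied, frontier = set(), {event}
    while frontier:
        nxt = set()
        for e in frontier:
            for c in model.get(e, ()):
                if c not in implied:
                    implied.add(c)
                    nxt.add(c)
        frontier = nxt
    return implied

def meaning_for(agent_goals, event, model):
    """The event 'means' something to the agent to the extent that its
    computed implications support or threaten the agent's goals."""
    implied = implications(event, model)
    return {
        "implications":    implied,
        "supports_goals":  implied & agent_goals["desired"],
        "threatens_goals": implied & agent_goals["avoid"],
    }

goals = {"desired": {"room visible"}, "avoid": {"room dark"}}
print(meaning_for(goals, "switch flipped", CAUSAL_MODEL))
# implications: {'light on', 'room visible'}; supports: {'room visible'}; threatens: none
print(meaning_for(goals, "fuse blown", CAUSAL_MODEL))
# implications: {'light off', 'room dark'}; supports: none; threatens: {'room dark'}
</code>

The same machinery also hints at two of the dimensions used in the Understanding sections below: computing implications is a form of prediction, and checking them against goals is a prerequisite for achieving goals with respect to the phenomenon.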
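\\
Below is a minimal, purely illustrative Python sketch of the three reasoning modes listed in the Reasoning table above, using the swan and light-switch examples. The identifiers (''Rule'', ''deduce'', ''abduce'', ''induce'') are hypothetical and the rules are toy pseudo-axioms; this is a sketch of the concepts, not an implementation of any particular reasoner.

<code python>
# Illustrative sketch only (hypothetical names): deduction, abduction and
# induction over a handful of toy facts and rules.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    """If `antecedent` holds, then `consequent` holds."""
    antecedent: str
    consequent: str

def deduce(rules, facts):
    """Deduction: conclusions that necessarily follow from the facts and rules."""
    conclusions = set(facts)
    changed = True
    while changed:
        changed = False
        for r in rules:
            if r.antecedent in conclusions and r.consequent not in conclusions:
                conclusions.add(r.consequent)
                changed = True
    return conclusions - set(facts)

def abduce(rules, observation):
    """Abduction: candidate causes that would explain the observation."""
    return {r.antecedent for r in rules if r.consequent == observation}

def induce(observations):
    """Induction: generalize a rule from repeated observations.
    Here: if every observed swan was white, hypothesize 'swan -> white'."""
    if observations and all(color == "white" for _, color in observations):
        return Rule("is a swan", "is white")
    return None

rules = [
    Rule("is a swan", "is white"),               # pseudo-axiom: all swans are white
    Rule("switch was flipped", "light is on"),   # flipping the switch turns the light on
    Rule("switch was flipped", "light is off"),  # ... or off, if it was on before
    Rule("fuse has blown", "light is off"),
]

print(deduce(rules, {"is a swan"}))   # {'is white'}  -> Joe must be white
print(abduce(rules, "light is off"))  # {'switch was flipped', 'fuse has blown'}
print(induce([("swan1", "white"), ("swan2", "white")]))  # hypothesized rule: swan -> white
</code>

Note how abduction returns more than one candidate cause for ''light is off'', mirroring the point made in the Abduction row: abduced causes are only //likely//, not necessary.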
\\
\\
====Common Sense in AI====
| Status of Understanding in AI | Since the 70s the concept of //understanding// has been relegated to the fringes of research. The only AI contexts in which it regularly appears are "language understanding", "scene understanding" and "image understanding". |
| What Took Its Place | What took the place of understanding in AI is //common sense//. Unfortunately the concept of common sense does not capture at all what we generally mean by "understanding". |
| Projects | The best-known project on common sense is the CYC project, which started in the 80s and is apparently still going. It is the best-funded, longest-running AI project in history. |
| Main Methodology | The foundation of CYC is formal logic, represented in predicate logic statements and structures. |
| Key Results | Results from the CYC project are similar to those of the expert systems of the 80s: the systems are brittle and unpredictable. \\ Apparently the CYC system is being commercialized by a company called Lucid [[https://www.technologyreview.com/s/600984/an-ai-with-30-years-worth-of-knowledge-finally-goes-to-work/|REF]]. |
\\
\\
====Understanding====
| What It Is | A concept that people use all the time about each other's cognition. With respect to achieving a task, where the target of the understanding is some or all aspects of that task, more understanding is generally considered better than less. |
| Why It Is Important | Understanding seems connected to "real intelligence": when a machine does X reliably and repeatedly we say that it is "capable" of doing X, but qualify it with "... but it doesn't 'really' understand what it's doing". |
| What Does It Mean? | No well-known scientific theory exists. \\ Normally we do not hand control of anything over to anyone who doesn't understand it. All other things being equal, doing so is a recipe for disaster. |
| Evaluating Understanding | Understanding of any X can be evaluated along four dimensions: \\ 1. being able to predict X, \\ 2. being able to achieve goals with respect to X, \\ 3. being able to explain X, and \\ 4. being able to "re-create" X ("re-create" here means e.g. creating a simulation that produces X and many or all of its side-effects). |
\\
\\
====Kris' Theory of Understanding====
| What It Is | A way to talk about understanding in the context of AGI. |
| Why It Is Important | The only theory of understanding in the field of AI to date. |
| What Does It Mean? | Normally we do not hand control of anything over to anyone who doesn't understand it. All other things being equal, this is a recipe for disaster. We need to build systems that we can trust, and we cannot trust an agent that doesn't understand what it's doing or the context it's in. |
| My Theory | Understanding involves the manipulation of causal-relational models (e.g. those in the AERA AGI-aspiring architecture). |
| Phenomenon, Model | Phenomenon Phi: Any group of inter-related variables in the world, some or all of which can be measured. \\ Models M: A set of information structures that reference the variables of Phi and their relations R such that they can be used, by applying processes P that manipulate M, to (a) predict, (b) achieve goals with respect to, (c) explain, and (d) (re-)create Phi. |
| Definition of Understanding | An agent **understands** a phenomenon Phi to some level L when it possesses a set of models M and relevant processes P such that it can use M to (a) predict, (b) achieve goals with respect to, (c) explain, and (d) (re-)create Phi. Insofar as the nature of the relations between the variables in Phi determines their behavior, the level L to which the agent understands Phi is determined by the //completeness// and //accuracy// with which M matches the variables and their relations in Phi. |
| REF | [[http://alumni.media.mit.edu/~kris/ftp/AGI16_understanding.pdf|About Understanding]] by Thórisson et al. |
\\ \\ \\ \\ 2019(c)K.R.Thórisson \\ //EOF//