
T-720-ATAI-2019

Lecture Notes: Knowledge Representation, Reasoning, Understanding, Meaning







Concepts

Data: Measurements.
Information: Data that is, or can be, used or formatted for a purpose.
Knowledge: A set of interlinked information that can be used to plan, produce action, and interpret new information.
Thought: The drive- and goal-driven processes of a situated knowledge-based system.



Representation


What it is
A way to encode data/measurements.
A representation is what you have when you pick something to stand for something else, as when the lines forming the word “cup”, used in particular contexts, are taken to represent (implicate, point to) an object with certain features and properties.
All knowledge used for intelligent action must have a representation.

What it Involves
A particular process (computation, thought) is given a particular pattern (e.g. the text “cup”, the word “cup” uttered, or simply the form of the light falling on a retina at a particular time in a particular context) that acts as a “pointer” to an internal representation: an information structure rich enough to answer questions about the particular phenomenon that this “pointer” pattern points to, without having to perform any other action than to manipulate that information structure in particular ways.
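As a minimal, hypothetical sketch of this idea (the names and attributes below are illustrative assumptions, not part of the course material): a perceived pattern such as “cup” acts as a pointer into a richer internal structure, which can then answer questions by manipulation alone.

<code python>
# Minimal sketch: a perceived pattern ("cup") acts as a pointer to a richer
# internal information structure that can answer questions without any
# further perception or action. All names and attributes are illustrative.

representations = {
    "cup": {
        "is_container": True,
        "typical_volume_ml": 250,
        "graspable_with_one_hand": True,
    }
}

def answer(pattern, question):
    """Answer a question by manipulating the structure the pattern points to."""
    rep = representations.get(pattern)
    if rep is None:
        return "no representation for this pattern"
    return rep.get(question, "unknown")

print(answer("cup", "is_container"))       # True
print(answer("cup", "typical_volume_ml"))  # 250
</code>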

Why it is Important
Mathematically:
With the amount of information in the physical world vastly exceeding what any system could store in a lookup table, methods for information storage and retrieval with greater compression are needed.
Historically:
- The founding fathers of AI spoke frequently of representations in the first three decades of AI research.
- Skinnerian psychology and Brooksian AI – both “representation-free” methodologies – largely banished the concept of representation from AI research from the mid-80s onward.
- Post 2000s: The rise of ANNs has helped continue this trend.

Good Regulator Theorem
Meanwhile, Conant & Ashby's Good Regulator Theorem proved (yes, proved) that

every good controller (“regulator”) of a system must be a model of that system.

Good Regulator Paper
Why That Matters A model is by definition a representation (of the thing that it is a model of).
Bottom Line Referring to the last table on this page, AGI is unlikely to be achieved without sophisticated methods for representation of complex things, and sophisticated methods for their creation, manipulation, and management.
This is the role of a cognitive architecture.



Meaning


What It Is
Something of great importance to people.
Meaning seems to be “extracted” from other people's actions, utterances, attitudes, etc.
Proper handling of meaning is generally considered to require intelligence.
Why It Is Important Meaning seems to enter almost every aspect of cognition.
My Theory Meaning is generated when a causal-relational model is used to compute the implications of some action, state, event, etc. Any agent that does so will extract meaning when the implications interact with its goals in some way.
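A toy sketch of this theory (the events, rules, and goals below are made up for illustration): a causal-relational model computes the implications of an event, and the “meaning” of the event for the agent is the subset of implications that interacts with its goals.

<code python>
# Toy sketch of the theory above: implications of an event are computed from
# simple causal rules; the "meaning" of the event for this agent is the set
# of implications that touch its goals. All rules and goals are illustrative.

causal_rules = {
    "rain": ["ground_wet", "picnic_cancelled"],
    "ground_wet": ["slippery_path"],
}

def implications(event):
    """Transitively expand an event through the causal rules."""
    found, frontier = set(), [event]
    while frontier:
        e = frontier.pop()
        for consequence in causal_rules.get(e, []):
            if consequence not in found:
                found.add(consequence)
                frontier.append(consequence)
    return found

goals = {"have_picnic", "stay_dry"}
goal_relevant = {"picnic_cancelled": "have_picnic", "ground_wet": "stay_dry"}

meaning = {i for i in implications("rain") if i in goal_relevant}
print(meaning)  # the implications of "rain" that matter to this agent's goals
</code>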



Symbols & Meaning


What are Symbols?
Peirce's Theory of Semiotics (signs) proposes 3 parts to a sign: a sign/symbol, an object, and an interpretant.
Example of a symbol: an arbitrary pattern, e.g. a written word (with acceptable error ranges whose thresholds determine when it becomes either 'uninterpretable' or 'inseparable from other symbols').
Example of an object: an automobile (a clustering of atoms in certain ways).
Example of an interpretant: Your mind as it experiences something in your mind's eye when you read the word “automobile”. The interpretant is the most complex of the three, because what you see and what I see when we read the word “automobile” are probably not exactly the same.
“Symbol” Peirce used various terms for this, including “sign”, “representamen”, “representation”, and “ground”. Others have suggested “sign-vehicle”. What is meant in all cases is a pattern that can be used to stand for something else, and thus requires an interpretation to be used as such.
Peirce's Innovation Detaching the symbol/sign from the object signified, and introducing the interpretation process as a key entity. This makes it possible to explain why people misunderstand each other, and how symbols and meaning can grow and change in a culture.
Meaning Philosophers are still grappling with the topic of “meaning”, and it is far from settled. It is highly relevant to AI, especially AGI - an AGI that cannot extract the meaning of a joke, threat, promise, or explanation is hardly worth its label.
Current Approach Meaning stems from two main sources. Firstly, acquired and tested models form a graph of relations; the comprehensiveness of this graph determines the level of understanding that the models can support with respect to a particular phenomenon. Meaning is not possible without (some level of) understanding. Secondly, meaning comes from the context of the usage of symbols, where the context is provided by (a) who/what uses the symbols, (b) in what particular task-environment, using (c) what particular syntactic constraints.
Prerequisites for using symbols Prerequisites for communication are a shared interpretation method, a shared interpretation of syntax (context), and shared knowledge (object).
Where the Symbols “are” When we use the term “symbol” in daily conversation we typically are referring to its meaning, not its form (sign). The meaning of symbols emerges from the interpretation process which is triggered by the contextual use of a sign: A sign's relation to forward models, in the pragmatic and syntactic context, produces a meaning - that which is signified. Thus, more than being “stored in a database”, symbols are continuously and dynamically being “computed based on knowledge”.
Models & Symbols Both are representations - but models contain more than symbols; if symbols are pointers, models are machines.



So, What Are Models?

Model A model of something is an information structure that behaves in some ways like the thing being modeled.
‘Model’ here means exactly the same as the word means in the vernacular; look up any dictionary definition and that is what it means. A model of something is not the thing itself; it is in some way a ‘mirror image’ of it, typically with some unimportant details removed, and represented in a way that allows for various manipulations for the purpose of making predictions (answering questions), where the forms of allowed manipulation are particular to the representation of the model and the questions to be answered.
Example A model of Earth sits on a shelf in my daughter’s room. With it I can answer questions about the gross layout of continents, and names assigned to various regions as they were around 1977 (because that’s when I got it for my confirmation :-) ). A model requires a process for using it. In this example that process is humans that can read and manipulate smallish objects.
Computational Models A typical type of question to be answered with computational (mathematical) models is the what-if question, and a typical method of manipulation is running simulations (producing deductions). Along with this we need an appropriate computational machine.
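As a hedged illustration (constants and step size are arbitrary), here is a what-if question answered by running a simulation on a simple computational model of a falling object:

<code python>
# Sketch: answering a what-if question by running a simulation on a
# computational model. Constants and step size are chosen for illustration.

def fall_time(height_m, dt=0.001, g=9.81):
    """Simulate free fall (no drag) and return the time to reach the ground."""
    t, h, v = 0.0, height_m, 0.0
    while h > 0:
        v += g * dt   # velocity update
        h -= v * dt   # position update
        t += dt
    return t

# What if we drop the ball from 20 m instead of 5 m?
print(round(fall_time(5.0), 2))   # roughly 1.0 s
print(round(fall_time(20.0), 2))  # roughly 2.0 s
</code>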

Model (again)
A 'model' in this conception has a target phenomenon that it applies to, and it has a form of representation, comprehensiveness, and level of detail; these are the primary features that determine what a model is good for. A computational model of the world in raw machine-readable form is not very efficient for quickly identifying all the countries adjacent to Switzerland - for that a traditional globe is much better.
Model Acquisition The ability to create models of (observed) phenomena.
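A minimal sketch of model acquisition, under the simplifying assumption that the observed phenomenon is well captured by a straight line (the least-squares fit and the data are chosen purely for illustration): the agent turns observations into a model it can then use for prediction.

<code python>
# Minimal sketch of model acquisition: build a predictive model from
# observations of a phenomenon. A least-squares line is used purely as an
# illustration; the observations below are made up.

observations = [(0, 1.1), (1, 2.9), (2, 5.2), (3, 6.8)]  # (input, observed output)

n = len(observations)
sx = sum(x for x, _ in observations)
sy = sum(y for _, y in observations)
sxx = sum(x * x for x, _ in observations)
sxy = sum(x * y for x, y in observations)

slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

def model(x):
    """The acquired model: predicts the phenomenon for inputs never observed."""
    return slope * x + intercept

print(round(model(4), 2))  # prediction for an unseen input
</code>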



System & Architectural Requirements for Using Models

Effectiveness Creation of models must be effective - otherwise a system will spend too much time creating useless or bad models.
Making the model creation effective may require e.g. parallelizing the execution of operations on them.
Efficiency Operations on models must be efficient lest they interfere with the normal operation of the system / agent.
One way to achieve temporal efficiency is to parallelize their execution, and make them simple.
Scalability For any moderately interesting / complex environment, a vast number of models may be entertained and considered at any point in time, and thus a large set of potential models must be manipulatable by the system / agent.



Problems with Feedback-Only Controllers



Thermostat
A cooling thermostat has a built-in, super-simple model of its task-environment, one that is sufficient for it to do its job. It consists of a few variables, an on-off switch, two thresholds, and two simple rules that tie these together: the sensed temperature variable, the upper threshold for when to turn the cooling on, and the lower threshold for when to turn the cooling off. The thermostat never has to decide which model is appropriate; it is “baked into it” by the thermostat's designer. It is not a predictive (forward) model; this is a strict feedback model (a minimal code sketch of such a model is given at the end of this section).
The thermostat cannot change its model; that can only be done by the user opening it and twiddling some thumbscrews.
Limitation Because the system designer knows beforehand which signals cause perturbations in <m>o</m> and can hard-wire these from the get-go in the thermostat, there is no motivation to create a model-creating controller (it is much harder!).
Other “state of the art” systems The same is true for expert systems, subsumption robots, and general game playing machines: their model is too tightly baked into their architecture by the designer. Yes, there are some variables in these that can be changed automatically “after the machine leaves the lab” (without designer intervention), but they are parameters inside a (more or less) already-determined model.
What Can We Do? Feed-forward control! Which requires models.
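A minimal sketch of the thermostat's fixed, feedback-only model described above (thresholds and variable names are illustrative assumptions): it reacts to what it senses and predicts nothing.

<code python>
# Sketch of the thermostat's baked-in feedback model: one sensed variable,
# two thresholds, two rules. It cannot change the model, and it predicts
# nothing. Thresholds and variable names are illustrative.

UPPER = 24.0   # turn the cooling on above this temperature (designer-chosen)
LOWER = 21.0   # turn the cooling off below this temperature (designer-chosen)

def control(sensed_temp, cooling_on):
    """Pure feedback rule: react only to the current measurement."""
    if sensed_temp > UPPER:
        return True      # rule 1: too warm, switch cooling on
    if sensed_temp < LOWER:
        return False     # rule 2: cool enough, switch cooling off
    return cooling_on    # between thresholds: keep the current state

state = False
for temp in [20.0, 23.0, 25.0, 22.5, 20.5]:
    state = control(temp, state)
    print(temp, "->", "cooling on" if state else "cooling off")
</code>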



Benefits of Combined Feed-forward + Feedback Controllers

Ability to Predict With the ability to predict comes the ability to deal with events that happen faster than the perception-action cycle of the controller, as well as the ability to anticipate events far into the future.

Greater Potential to Learn
A machine that is free to create, select, and evaluate models operating on observable and hypothesized variables has the potential to learn anything (within the confines of the algorithms it has been given for these operations), because as long as the range of possible models is reasonably broad and general, the topics, tasks, domains, and worlds it could (in theory) handle become vastly larger than for systems where a particular model is given a priori. (I say ‘in theory’ because other factors, e.g. the ergodicity of the environment and resource constraints, must also be favorable to e.g. the system's speed of learning.)
Greater Potential for Cognitive Growth A system that can build models of its own model creation, selection, and evaluation has the ability to improve its own nature. This is in some sense the ultimate AGI (depending on the original blueprint, original seed, and some other factors of course) and therefore we only need two levels of this, in theory, for a self-evolving potentially omniscient/omnipotent (as far as the universe allows) system.
Bottom Line AGI without both feed-forward and feedback mechanisms is fairly unthinkable.
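A sketch of combining the two (the forward model, gain, and scenario are all illustrative assumptions): the feed-forward term acts on a prediction before any error appears, and the feedback term corrects whatever the prediction missed.

<code python>
# Sketch of combined feed-forward + feedback control. The forward model,
# gain, and scenario are illustrative assumptions.

def forward_model(planned_heat_sources):
    """Predict the temperature rise the known, upcoming heat sources will cause."""
    return 0.5 * planned_heat_sources  # assumed degrees C per source

def control_signal(target, sensed, planned_heat_sources, k_feedback=0.8):
    feedforward = -forward_model(planned_heat_sources)  # act before the error occurs
    feedback = k_feedback * (target - sensed)           # correct the remaining error
    return feedforward + feedback                       # negative = more cooling

# A room at 23 C, target 21 C, four machines about to be switched on:
print(control_signal(target=21.0, sensed=23.0, planned_heat_sources=4))
</code>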



Reasoning

What It Is Establishing axioms for the world and applying logic to them.
But The World Is Non-Axiomatic! Yes. But there is no way to apply logic unless we hypothesize some pseudo-axioms. The only difference between this and mathematics is that in science we must accept that the so-called “laws” of physics may be only conditionally correct (or possibly even completely incorrect, in light of our goal of figuring out the “ultimate” truth about how the universe works).
Deduction Deriving a conclusion from premises; if the premises are true, the conclusion is necessarily true.
Example: If it's true that all swans are white, and Joe is a swan, then Joe must be white.
Abduction Reasoning from conclusions to causes.
Example: If the light is on, and it was off just a minute ago, someone must have flipped the switch.
Induction Generalization from observation.
Example: All the swans I have ever seen have been white, hence I hypothesize that all swans are white.
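A toy sketch of the three forms of reasoning above, using the swan and light-switch examples (deliberately simplistic, purely for illustration):

<code python>
# Toy sketch of deduction, abduction, and induction (deliberately simplistic).

# Deduction: if all swans are white and Joe is a swan, Joe is necessarily white.
all_swans_white = True
joe_is_swan = True
print("deduction:", all_swans_white and joe_is_swan)  # Joe is white

# Abduction: from an observed effect, hypothesize a plausible cause.
causes_of = {"light_on": ["someone_flipped_switch", "power_surge"]}
observed_effect = "light_on"
print("abduction:", causes_of[observed_effect][0])  # pick the most plausible cause

# Induction: generalize from the observed instances.
observed_swans = ["white", "white", "white"]
if all(color == "white" for color in observed_swans):
    print("induction: hypothesis - all swans are white")
</code>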



Understanding

What It Is A concept that people use all the time about each other's cognition. With respect to achieving a task, where the target of the understanding is some or all aspects of the task, more understanding is generally considered better than less.
Why It Is Important Seems to be connected to “real intelligence” - when a machine does X reliably and repeatedly we say that it is “capable” of doing X, but qualify it with “… but it doesn't 'really' understand what it's doing”.
What Does It Mean? No well-known scientific theory exists.
Normally we do not hand control of anything over to anyone who doesn't understand it. All other things being equal, this is a recipe for disaster.
My Theory Understanding involves the manipulation of causal-relational models (like we discussed in the context of the AERA AGI-aspiring architecture).
Evaluating Understanding Understanding any X can be evaluated along four dimensions: 1. Being able to predict X, 2. being able to achieve goals with respect to X, 3. being able to explain X, and 4. being able to “re-create” X (“re-create” here means e.g. creating a simulation that produces X and many or all its side-effects.)



Important Concepts for AGI

Autonomy The ability to do tasks without interference / help from others / outside.
Reasoning The application of logical rules to knowledge.
Attention The management of processing, memory, and sensory resources.
Meta-Cognition The ability of a system to reason about itself.

Understanding
The phenomenon of “understanding” has been neglected in AI and AGI. Modern AI systems do not understand.
Yet the concept seems crucial when talking about human intelligence; the concept holds explanatory power - we do not assign responsibilities for a task to someone or something with a demonstrated lack of understanding of the task. Moreover, the level of understanding can be evaluated.
Understanding of a particular phenomenon <m>phi</m> is the potential to perform actions and answer questions with respect to <m>phi</m>. Example: Is an automobile heavier or lighter than a human?

Explanation
When performed by an agent, the ability to transform knowledge about X from a formulation primarily (or only) good for execution with respect to X into a formulation good for being communicated (typically involving some form of linearization and the incremental introduction of concepts and issues, in light of an intended receiving agent with particular a-priori knowledge).
Is it possible to explain something that you don't understand?

Learning
Acquisition of information in a form that enables more successful completion of tasks. We call information in such a form “knowledge” or “practical knowledge”. (There is also the notion of “impractical knowledge”, which people sometimes equate with “useless trivia” that seems good for nothing, but which can in fact turn out to be useful at any point, for instance for wowing others with one's knowledge of trivia.)
Life-long Learning Incremental acquisition of knowledge throughout a (non-trivially long) lifetime.
Transfer Learning The ability to transfer what has been learned in one task to another.
Imagination The ability to evaluate potential contingencies. Also used to describe the ability to predict.
Relies on reasoning and understanding.
Creativity A measure for the uniqueness of solutions to problems produced by an agent; or the ability of an agent to produce solution(s) where other agents could not. Also used as a synonym of intelligence.
An emergent property of an intelligent agent that relies on several of the above features.





2019©K. R. Thórisson

EOF
