[[http://cadia.ru.is/wiki/public:t-720-atai:atai-16:main|T-720-ATAI-2016 Main]] =====T-720-ATAI-2016===== ====Lecture Notes F-6 29.01.2016==== \\ \\ \\ ====Important Concepts for AGI==== | Attention | The management of processing, memory, and sensory resources. | | Meta-Cognition | The ability of a system to reason about itself. | | Reasoning | The application of logical rules to knowledge. | | Creativity | A measure of the uniqueness of Solutions to Problems produced by an Agent, or the ability of an Agent to produce Solution(s) where other Agents could not. Also used as a synonym for intelligence. | | Imagination | The ability to evaluate potential contingencies. Also used to describe the ability to predict. | | Understanding | The ability of an Agent to achieve Goals with respect to a phenomenon Phi, referring to a Goal state G = S subset V. \\ An agent A's //level of understanding// of a phenomenon Phi is the size of the set G of dynamically assigned goals \\ { G_i(phi subset Phi) | forall i: G_i != G_{i+1} } \\ with respect to phenomenon Phi which A can achieve. \\ Understanding is crucial but has been neglected in AI and AGI; modern AI systems do not understand; "intelligence", as the concept is typically used, does not need to -- and is not intended to -- imply understanding. | | Explanation | When performed by an Agent, the ability to transform knowledge about X from a formulation primarily (or only) suited for execution with respect to X into a formulation suited for communication (typically involving some form of linearization and incremental introduction of concepts and issues, tailored to an intended receiving Agent with particular a-priori knowledge). | | Learning | Acquisition of information in a form that enables more successful completion of tasks. We call information in such a form "knowledge" or "practical knowledge". 
(There is also the notion of "impractical knowledge": "useless trivia" that seems good for nothing, yet can turn out to be useful at any point, if only for impressing others with one's command of trivia.) | | Life-long learning | Incremental acquisition of knowledge throughout a (non-trivially long) lifetime. | | Transfer learning | The ability to transfer what has been learned in one task to another. | | Autonomy | The ability to do tasks without interference / help from others. | \\ \\ ====*Understanding==== | Aaron Sloman on understanding \\ [[http://www.cs.bham.ac.uk/research/projects/cogaff/Sloman.ijcai85.pdf|Ref]] | "Filing cabinets contain information but understand nothing. Computers are more active than cabinets, but so are copiers and card-sorters, which understand nothing. Is there a real distinction between understanding and mere manipulation? Unlike cabinets and copiers, suitably programmed computers appear to understand. They respond to commands by performing tasks; they print out answers to questions; they paraphrase stories or answer questions about them. Does this show they attach meanings to symbols? Or are the meanings ‘derivative’ on OUR understanding them, as claimed by Searle ([10])? Is real understanding missing from simulated understanding just as real wetness is missing from a simulated tornado? Or is a mental process like calculation: if simulated in detail, it is replicated?" | | | // [10] Searle, J.R., ‘Minds, Brains, and Programs’, with commentaries by other authors and Searle’s reply, in The Behavioural and Brain Sciences Vol 3 no 3, 417-457, 1980. // | ====*Attention==== | What it is | Resource management, plain and simple. | | Resources | Processing speed, information storage size. | | Resource management | The organization of behavior so as to make the agent more efficient and effective. 
| | Often ignored | Very few cognitive systems or AI architectures address this issue explicitly. This is partly because time is not treated as important in information architectures: computer science curricula rarely address it, the subject is still rather immature, and that in large part traces back to Alan Turing's model of computation completely ignoring time. | | Attention is most obviously needed | ...when, for any Agent A with perceptors (perceptual processes) p, the maximum number of variables V in a World/Environment (W/E) providing simultaneous inputs to p is vastly smaller than the total number of V in (W/E) \\ that //could// act as inputs. | \\ \\ ====*Meta-Cognition==== | What it is | "Thinking about thinking". Intuitively it is what we do when we ponder our own behavior, understanding, interpretation, logic, etc. | | Why it is important | Meta-cognition is necessary for any system that intends to improve its own fundamental cognitive functions. | \\ \\ ====*Reasoning==== | What it is | Collective name for a set of logical operations that can be performed on data. | | Why it's important | For any World that is non-random, reasoning provides a way to build models of it. \\ Since the main difference between humans and other animals can be said to be language and logic, logic is important if we are interested in human-level intelligence (and beyond). | \\ \\ ====Learning==== | What it is | The acquisition of information in order to improve performance with respect to some Goal or set of Goals. | | Learning from experience | A method for learning. Also called "learning by doing": An Agent A does action a to phenomenon p in context c and uses the result to improve its ability to act on Goals involving p. All higher-level Earth-bound intelligences learn from experience. | | Learning by observation | A method for learning. 
An Agent A learns how to achieve Goal G by receiving real-time information about some other Agent A' achieving Goal G by doing action a. | | Learning from reasoning | A method for learning. Using deduction, induction and abduction to simulate, generalize, and infer, respectively, new information from acquired information. | | Multi-objective learning | Learning while aiming to achieve more than one Goal. | | Transfer learning | A method for learning faster. Applying already-acquired knowledge to a new or newish Problem. | | *System-wide ampliative learning | What we could call a combination of all of the above. Requires most or all cognitive faculties marked with a * on this page. | \\ \\ ====*Life-Long Learning==== | What it is | Colloquially: The learning that happens throughout a lifetime. \\ In AI: A particular focus of learning research targeting how systems can change their learning over //long periods// of time. "Long" doesn't refer to a particular number of hours or years but rather reflects the expectation that the engineered system keep learning over long periods of time, relative to prior such machine learners. | | Why is it important | Systems without it could hardly be considered to have general intelligence. | \\ \\ ====*Cognitive Autonomy==== | What it is | A term we use to refer to the independence of agents: the more independent they are (of their designers, of outside aid, etc.) the more autonomous they are. | | Why is it important | Systems without it could hardly be considered to have general intelligence. | \\ \\ | {{public:t-720-atai:autonomy-dimensions1.png?800|Autonomy Dimensions.}} | | //"Autonomy comparison framework focusing on mental capabilities. Embodiment is not part of the present framework, but is included here for contextual completeness."// | | Thórisson, K. R. & H. P. Helgason (2012). Cognitive Architectures & Autonomy: A Comparative Review. //Journal of Artificial General Intelligence,// **3**(2):1-30. 
[[http://xenia.media.mit.edu/~kris/ftp/AutonomyCogArchReview-ThorissonHelgason-JAGI-2012.pdf|PDF]] | \\ \\ | {{public:t-720-atai:belew-evolution-learning-culture1.png|Evolution, Learning & Culture}} | | A metaphor for the interaction between evolutionary programs, cultural influences, and individual learning. | | From [[http://www.complex-systems.com/pdf/04-1-2.pdf|R. K. Belew '89]] | \\ \\ \\ 2016 (c) K. R. Thórisson \\ \\ //EOF//
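\\ \\ The notion of attention as resource management, described above, can be illustrated in code: an Agent whose perceptors can sample far fewer variables than the World/Environment offers must choose which inputs to process each cycle. Below is a minimal sketch of that selection step only; the function and variable names are illustrative, not from the course material.

```python
import heapq

def attend(inputs, capacity):
    """Pick the `capacity` highest-priority inputs; everything else
    in the World/Environment is ignored this cycle.
    `inputs` maps variable name -> priority (relevance to current Goals)."""
    return heapq.nlargest(capacity, inputs, key=inputs.get)

# A World/Environment with vastly more variables than the Agent's
# perceptors can handle simultaneously:
world = {"v%d" % i: (i * 37) % 100 for i in range(1000)}

attended = attend(world, capacity=5)  # the Agent's perceptual bandwidth
```

A real attention mechanism would also re-estimate priorities over time (top-down, Goal-driven) and react to unexpected salient events (bottom-up); the point here is only that attention reduces to deciding how to spend limited processing and storage resources.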