[[public:t-720-atai:atai-20:main|T-720-ATAI-2020 Main]]
\\
\\
======Learning / Evaluation======
\\
\\
=====Evaluation of Intelligent Systems=====
\\
\\
====The Challenge of Evaluating Intelligence====
| \\ Without Evaluation... | ...there can be no comparison. \\ Without comparison there can be no indication of direction. \\ Without direction there can be no systematic scientific (or otherwise) effort to deepen understanding. |
| Without Definition ... | ...there can be no evaluation. |
| The Challenge | 'Intelligence' is an ill-defined concept. |
| What Can Be Done? | We must create/select a proper //working definition// of intelligence, one that allows us to measure //something// we consider to be of importance to the concept of intelligence. |
| How Can That Be Done? | That is the question. If we don't know what we're measuring, we will not know what our data means. |
\\
\\
====What Are We Trying to Evaluate?====
| Proposed Definitions | "Intelligence" as a concept must be broken into smaller parts. \\ "Adaptation" seems too broad. \\ "Behavior" is difficult to measure unless it's codified in domain-dependent methods (e.g. verbal, motor, ...). |
| \\ Alternatives | What if we could avoid definitions? Competitions (e.g. games, robo-football, specific single-goal tasks) have been proposed in its place. \\ Turing proposed the 'imitation game' ("Turing Test") as a placeholder for a definitive definition (the Turing Test is most correctly seen as a working definition). |
| \\ Shortcomings | Mostly single-goal (the physical world (PW) requires multiple simultaneous goals). \\ Mostly easily measurable goals (the PW often has ill-defined goals). \\ Mostly toy-like (no noise; the PW has lots of noise). \\ Mostly limited-count variables (the PW has an infinite number of variables). |
| Current Status | Scientists are still working on how to properly measure learning and intelligence. |
\\
====Sources of Potential Evaluation Methods====
| **Psychology** | Uses tests based on a single measure at a single point in time. Produces a single "IQ" score. ||
| | Method | A large set of test items is administered to a sample pool of people at various ages; each item is measured on its ability to distinguish individuals from each other (diversity). A subset of test items with the "largest discriminatory power" is then selected and normalized for age groups. |
| | Pros | Well established method for human intelligence. |
| | Cons | Present and future AI systems are very different from human intelligence. Worse, the normalization of standard psychometrics for humans isn't possible for AIs because they are not likely to consist of populations of similar AI systems. Even if they did, these methods only provide relative measurements. Another serious problem is that they rely heavily on a subject's prior knowledge and training. |
| **AI** | Board games, robo-football, a handful of toy problems (e.g. mountain car, diving for gold). ||
| | Method | Standard board games that humans play are used unmodified or in simplified versions to distinguish between the best AI systems capable of playing these board games. |
| | Pros | Simple tests with a single measure provide unequivocal scores that can be compared. Relatively easy to implement and administer. |
| | Cons | A single dimension to measure intelligence on is too simplistic, subject to the same problems that IQ tests are subject to. All systems in the first 40 years of AI could only play a single board game (the General Game Playing Competition was intended to address this limitation). |
| **AGI** | Turing Test, Piaget-MacGyver Room, Lovelace Test, Toy-Box Problem ||
| | Method | Human-like conditions extended to apply to intelligent machines. |
| | Pros | Better than single-measure methods in many ways. |
| | Cons | Measure intelligence at a single point in time. Many are difficult to implement and administer. |
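The psychometric item-selection procedure described above can be sketched in a few lines. This is a minimal illustration with made-up data, not an actual psychometric instrument: items are ranked by how well they correlate with the total test score ("discriminatory power"), the best are kept, and raw scores within an age group are normalized to the conventional IQ scale (mean 100, standard deviation 15).

```python
from statistics import mean, pstdev

def discrimination(item_scores, total_scores):
    # Correlation of one item's scores with the total test scores across
    # test-takers: a crude stand-in for psychometric "discriminatory power".
    mi, mt = mean(item_scores), mean(total_scores)
    cov = mean((i - mi) * (t - mt) for i, t in zip(item_scores, total_scores))
    si, st = pstdev(item_scores), pstdev(total_scores)
    return cov / (si * st) if si and st else 0.0

def select_items(named_items, total_scores, k):
    # Keep the k items with the largest discriminatory power.
    ranked = sorted(named_items,
                    key=lambda it: discrimination(it[1], total_scores),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

def iq_scale(raw_scores):
    # Normalize raw scores within one age group to mean 100, std. dev. 15.
    m, s = mean(raw_scores), pstdev(raw_scores)
    return [100 + 15 * (x - m) / s for x in raw_scores]
```

Note that, as the Cons row points out, this only yields a relative measurement: the score of any individual is meaningful only against the normalization population.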
\\
\\
====The "Turing Test"====
| What it is | A test for intelligence proposed by Alan Turing in 1950. |
| Why it's relevant | Proposed as a way to get a pragmatic/working definition of the //concept of intelligence//. \\ The first proposal for how to evaluate an intelligent machine. |
| \\ Method | It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A." We now ask the question, "What will happen when a machine takes the part of A in this game?" |
| Pros | It is difficult to imagine that an honest, collaborative machine playing this game for several days or months could ever fool a human into thinking it was a grown human, unless it really understood a great deal. |
| Cons | Targets evaluation at a single point in time. Anchored in human language, social convention and dialogue. |
| \\ Implementations | The Loebner Prize competition has been running for some decades, offering a large financial prize for the first machine to "pass the Turing Test". None of the competing machines has thus far offered any significant advances in the field of AI, and most certainly not to AGI. |
| Bottom Line | //"It's important to note that Turing never meant for his test to be the official benchmark as to whether a machine or computer program can actually think like a human"// (Mark Riedl) |
| \\ Links | [[https://chatbotsmagazine.com/how-to-win-a-turing-test-the-loebner-prize-3ac2752250f1|2017 Loebner Prize article]] \\ [[https://artistdetective.wordpress.com/2019/09/21/loebner-prize-2019/|Blog entry on a Loebner Prize competitor, 2019]] \\ [[https://www.pandorabots.com/mitsuku/|Feel free to chat with Mitsuku, 2019 Loebner Prize winner]] |
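The imitation-game protocol described in the Method row can be sketched as a simple loop. The `interrogator` and player objects below and their methods are hypothetical stand-ins invented for illustration, not from any real library:

```python
import random

def imitation_game(interrogator, player_a, player_b, n_questions=10):
    # Hide the two players behind the labels X and Y, in random order.
    labels = {"X": player_a, "Y": player_b}
    if random.random() < 0.5:
        labels = {"X": player_b, "Y": player_a}
    # The interrogator questions both hidden players...
    transcript = []
    for _ in range(n_questions):
        q = interrogator.ask()
        transcript.append({lbl: p.answer(q) for lbl, p in labels.items()})
    # ...then declares "X is A" or "Y is A"; return whether the guess is right.
    return labels[interrogator.guess(transcript)] is player_a
```

Turing's question is then what happens to the interrogator's success rate when a machine takes the part of player A.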
\\
\\
====Piaget-MacGyver Room====
| What it is | A proposal for evaluating the intelligence of an agent. |
| Short Description | [W]e define a room, the Piaget-MacGyver Room (PMR), which is such that, an [information-processing] artifact can credibly be classified as general-intelligent if and only if it can succeed on any test constructed from the ingredients in this room. No advance notice is given to the engineers of the artifact in question as to what the test is going to be. |
| Why it's relevant | One of the first attempts at explicitly getting away from a specific test or test suite for testing intelligence. |
| \\ Pros | Being very open-ended, the evaluation method prevents specific targeted skills from being pre-built into the AI to be evaluated. \\ Targeting the physical world means perception must be integrated into the cognition. \\ Could also be constructed virtually. |
| \\ Cons | Perhaps too open-ended. \\ Leaves almost everything undefined. \\ Requires further gradients on the "level of difficulty" to be provided by the evaluators. |
| REF | [[http://kryten.mm.rpi.edu/Bringsjord_Licato_PAGI_071512.pdf|Bringsjord & Licato]] |
\\
\\
====The Toy Box Problem====
| What it is | A proposal for evaluating the intelligence of an agent. |
| Short Description | Based on a box filled with toys of various kinds that will be the subject of evaluation, either directly or in reference to new unseen objects that only bear a resemblance to them. |
| Why it's relevant | One of several new and novel methods proposed for this purpose; focuses on variety, novelty and exploration. |
| \\ Method | A robot is given a box of previously unseen toys. The toys vary in shape, appearance and construction materials. Some toys may be entirely unique, some toys may be identical, and yet other toys may share certain characteristics (such as shape or construction materials). The robot has an opportunity to play and experiment with the toys, but is subsequently tested on its knowledge of the toys. It must predict the responses of new interactions with toys, and the likely behavior of previously unseen toys made from similar materials or of similar shape or appearance. Furthermore, should the toy box be emptied onto the floor, it must also be able to generate an appropriate sequence of actions to return the toys to the box without causing damage to any toys (or itself). |
| Pros | Includes perception and action explicitly. Specifically designed as a stepping stone towards general intelligence; a solution to the simplest instances should not require universal or human-like intelligence. |
| Cons | Limited to a single instance in time. Somewhat too limited to dexterity guided by vision, missing out on reasoning, creativity, and many other factors. |
| REF | [[http://agi-conf.org/2010/wp-content/uploads/2009/06/paper_54.pdf|Johnston]] |
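The prediction part of the Toy Box task can be illustrated with a toy similarity heuristic: predict an unseen toy's behavior from the most similar previously explored toy. All feature names and data here are made up for illustration; a real solution would of course need far richer perception and world modeling:

```python
def similarity(toy_a, toy_b):
    # Number of feature values (shape, material, ...) the two toys share.
    shared = (toy_a.keys() & toy_b.keys()) - {"behavior"}
    return sum(toy_a[k] == toy_b[k] for k in shared)

def predict_behavior(unseen_toy, known_toys):
    # Nearest-neighbor prediction: copy the behavior of the most similar toy.
    best = max(known_toys, key=lambda t: similarity(unseen_toy, t))
    return best["behavior"]

# Toys the robot has already played with, with observed behaviors:
known_toys = [
    {"shape": "ball", "material": "rubber", "behavior": "bounces"},
    {"shape": "cube", "material": "wood",   "behavior": "slides"},
]
# A previously unseen toy sharing some characteristics with a known one:
foam_ball = {"shape": "ball", "material": "foam"}
```

Here `predict_behavior(foam_ball, known_toys)` generalizes from the rubber ball because the two share a shape, which is exactly the kind of transfer the test probes.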
\\
\\
====Lovelace Test 2.0====
| What it is | A proposal for evaluating creativity. |
| Short Description | Replacing that which is to be tested - intelligence - with the related concept of creativity. |
| Why it's relevant | The only test focusing explicitly on creativity. |
| Method | Artificial agent //a// is challenged as follows: \\ //a// must create an artifact //o// of type //t//; \\ //o// must conform to a set of constraints //C//, where each criterion c_i ∈ C is expressible in natural language; \\ a human evaluator //h//, having chosen //t// and //C//, is satisfied that //o// is a valid instance of //t// and meets //C//; and \\ a human referee //r// determines the combination of //t// and //C// to not be unrealistic for an average human. |
| Pros | Brings creativity to the forefront of intelligence testing. |
| Cons | Narrow focus on creativity. Too restricted to human experience and knowledge (last point). |
| REF | [[http://arxiv.org/pdf/1410.6142v3.pdf|Riedl]] |
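Riedl's four conditions can be read as a simple pass/fail protocol. The sketch below uses hypothetical stub interfaces for the agent, evaluator and referee; none of these class or method names come from the paper:

```python
def lovelace_2_0(agent, artifact_type, constraints, evaluator, referee):
    # Referee r: the combination of type t and constraints C must not be
    # unrealistic for an average human.
    if not referee.is_realistic(artifact_type, constraints):
        return False
    # Agent a creates an artifact o of type t.
    artifact = agent.create(artifact_type, constraints)
    # Evaluator h (who chose t and C): o must be a valid instance of t
    # and meet every criterion c_i in C.
    return (evaluator.is_valid_instance(artifact, artifact_type)
            and all(evaluator.meets(artifact, c) for c in constraints))
```

Because the constraints are arbitrary natural-language criteria chosen by a human, the machine cannot pass by pre-building one narrow skill, which is the point of the test.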
\\
\\
====Requirements for Evaluation: Features That Evaluators Should Be Able To Control====
| Determinism | Both full determinism and partial stochasticity (for realism regarding, e.g. noise, stochastic events, etc.) must be supported. |
| Ergodicity | The reachability of (aspects of) states from others determines the degree to which the agent can undo things and get second chances. |
| Continuity | For evaluation to be relevant to e.g. robotics, it is critical to appropriately represent continuous spatial and temporal (and other) variables. The degree to which continuity is approximated (discretization granularity) should be changeable for any variable. |
| Asynchronicity | Any action in the task-environment, including sensors and controls, may operate on arbitrary time scales and interact at any time, letting an agent respond when it can. |
| Dynamism | A static task-environment's state only changes in response to the AI's actions. The simplest are step-lock, where the agent makes one move and the environment responds with another (e.g. board games). More complex environments can be dynamic to various degrees in terms of the speed and magnitude of change, which may be caused by interactions between environmental factors, or simply by the passage of time. |
| Observability | Task-environments can be partially observable to varying degrees, depending on the type, range, refresh rate, and precision of available sensors, affecting the difficulty and general nature of the task-environment. |
| Controllability | The control that the agent can exercise over the environment to achieve its goals can be partial or full, depending on the capability, type, range, inherent latency, and precision of available actuators. |
| Multiple Parallel Causal Chains | Any generally intelligent system in a complex environment is likely to be trying to meet multiple objectives that can be co-dependent in various ways through any number of causal chains in the task-environment. Actions, observations, and tasks may occur sequentially or in parallel (at the same time). This is needed to implement real-world clock environments. |
| Periodicity | Many structures and events in nature are repetitive to some extent, and therefore contain a (learnable) periodic cycle – e.g. the day-night cycle or blocks of identical houses. |
| Repeatability | Both fully deterministic and partially stochastic environments must be fully repeatable, for traceable transparency. |
| REF | [[http://alumni.media.mit.edu/~kris/ftp/AGIEvaluationFlexibleFramework-ThorissonEtAl2015.pdf|Thorisson, Bieger, Schiffel & Garrett]] |
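As a concrete reading of this list, the controllable features could be collected into a single task-environment configuration object that an evaluation framework exposes. A sketch; the field names and defaults are my own shorthand for the rows above, not taken from the paper:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TaskEnvironmentConfig:
    noise_level: float = 0.0          # 0.0 = fully deterministic
    ergodic: bool = True              # earlier states re-reachable (second chances)
    time_granularity: float = 1.0     # discretization step; smaller approximates continuity
    asynchronous: bool = False        # sensors/controls on independent time scales
    dynamic: bool = False             # state changes even without agent actions
    observability: float = 1.0        # fraction of state visible to sensors (< 1.0 = partial)
    controllability: float = 1.0      # fraction of state the agent's actuators can affect
    parallel_causal_chains: int = 1   # number of concurrent objectives / causal chains
    period: Optional[float] = None    # cycle length of repeating regularities, if any
    random_seed: Optional[int] = None # fixing the seed makes stochastic runs repeatable
```

The last field illustrates how the repeatability requirement interacts with stochasticity: a partially stochastic environment stays repeatable as long as its randomness is driven by a seeded generator.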
\\
\\
====Requirements for Evaluation: Settings That Must Be Obtainable====
| Complexity | Environment is complex with diverse interacting objects. |
| Dynamicity | Environment is dynamic. |
| Regularity | Task-relevant regularities exist at multiple time scales. |
| Task Diversity | Tasks can be complex, diverse, and novel. |
| Interactions | Agent/environment/task interactions are complex and limited. |
| Computational limitations | Agent computational resources are limited. |
| Persistence | Agent existence is long-term and continual. |
| REF | [[http://www.atlantis-press.com/php/download_paper.php?id=1900|Laird et al.]] |
\\
\\
====Example Frameworks for Evaluating AI Systems====
| \\ \\ Merlin | A significant problem facing researchers in reinforcement and multi-objective learning is the lack of good benchmarks. Merlin (for Multi-objective Environments for Reinforcement LearnINg) is a software tool and method for enabling the creation of random problem instances, including multi-objective learning problems, with specific structural properties. Merlin provides the ability to control task features in predictable ways allowing researchers to build a more detailed understanding about what features of a problem interact with a given learning algorithm, improving or degrading its performance. | [[http://alumni.media.mit.edu/~kris/ftp/Tunable-generic-Garrett-etal-2014.pdf|Paper]] by Garrett et al. |
| AI Gym | Gym is a toolkit developed by OpenAI for developing and comparing reinforcement learning algorithms. It supports teaching agents everything from walking to playing games like Pong or Pinball. | [[https://gym.openai.com|Link]] to Website. |
| \\ SAGE | Framework that allows modular construction of simulated physical task-environments for evaluating intelligent control systems. A proto-task theory on which the framework is built aims for a deeper understanding of tasks in general, with a future goal of providing a theoretical foundation for all resource-bounded real-world tasks. Tasks constructed in the framework can be rooted in physics, to varying desired degrees, allowing their execution to analyze the performance of control systems in terms of expended time and energy. SAGE is intended for evaluating both narrow AI and AGI systems on numerous easily-constructed tasks. | \\ [[http://alumni.media.mit.edu/~kris/ftp/SAGE-EberdingEtAl-AGI-2020.pdf|Paper]] by Eberding et al. |
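Frameworks like Gym standardize evaluation around a reset/step episode interface. The sketch below imitates that interface in plain Python on a made-up toy task; it does not use OpenAI Gym itself (whose exact `reset`/`step` signatures vary between versions):

```python
import random

class ToyEnv:
    # A made-up task: drive an integer state to a target within a step budget.
    def __init__(self, target=5, horizon=20, seed=0):
        self.target, self.horizon = target, horizon
        self.rng = random.Random(seed)   # seeded => repeatable episodes

    def reset(self):
        # Start a new episode; return the first observation.
        self.state, self.t = self.rng.randint(-5, 5), 0
        return self.state

    def step(self, action):
        # Apply an action (-1, 0 or +1); return (obs, reward, done, info).
        self.state += action
        self.t += 1
        done = self.state == self.target or self.t >= self.horizon
        reward = 1.0 if self.state == self.target else 0.0
        return self.state, reward, done, {}

# The standard evaluation loop, here with a trivial hand-coded policy:
env = ToyEnv()
obs, total, done = env.reset(), 0.0, False
while not done:
    action = 1 if obs < env.target else -1
    obs, reward, done, info = env.step(action)
    total += reward
```

Because every environment exposes the same loop, any agent can be scored on any task, which is what makes such frameworks useful for comparison, even if (as noted above) a single reward score is a very thin measure of intelligence.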
\\
\\
====State of the Art====
| \\ Summary | Practically all proposals to date for evaluating intelligence leave out some major aspects of intelligence. Virtually no proposals exist for evaluating knowledge transfer, attentional capabilities, knowledge acquisition, knowledge capacity, knowledge retention, multi-goal learning, social intelligence, creativity, reasoning, cognitive growth, and meta-learning / integrated cognitive control -- all of which are quite likely vital to achieving general intelligence on par with humans. |
| What is needed | A theory of intelligence that allows us to construct adequate, thorough, and comprehensive tests of intelligence and intelligent behavior. |
| \\ What can be done | In lieu of such a theory (which is still not forthcoming after over 100 years of psychology and 60 years of AI) we could use a multi-dimensional "Lego" kit for exploring various means of measuring intelligence and intelligent performance, so as to be able to evaluate the pros and cons of various approaches, methods, scales, etc. \\ Some sort of kit meeting part or all of the requirements listed above would go a long way toward bridging the gap, and possibly generate some ideas that could speed up theoretical development. |
\\
\\
\\
\\
2020(c)K.R.Thórisson \\
//EOF//