TASK THEORY - MAIN PAGE
The evaluation of intelligent systems is a complex task, due to (among other things) the fact that they adapt and change over time, and operate in environments that do the same. When a system has *general* intelligence, the task gets even harder. What is needed is a better theoretical basis for both learning/adaptation and task construction/decomposition. Task Theory focuses on the latter - hopefully easier - part of the challenge: creating a mathematical framework around task-environments, grounded in physics, that can (ultimately) be used to predict and explain the differences and similarities between task-environments in physical terms. Among the things we would like to look at are the time taken to do a task, how it could be discretized, what energy consumption it requires, how it can be decomposed, etc.
The highest-level goals of this effort are:
- Given a learner and a task, to say whether and how well the learner can learn the task, to identify which parts it will have trouble with, if any, and to remove parts to make the task simpler or harder for the learner, etc. - all without doing any experiments.
- Given two or more tasks and two or more learners, to enumerate their similarities and differences in a way that relates to how the learners would perform when learning and performing the tasks, in physical parameters including time, energy, best-case/worst-case/average-case performance, etc. - again without doing any physical experiments.
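As a toy illustration of the second goal - comparing task-environments by physical parameters rather than by experiment - here is a minimal sketch. All names, fields, and numbers are invented for illustration; this is not a formalism from the cited papers.

```python
from dataclasses import dataclass

# Hypothetical sketch: a task-environment summarized by a few physical
# parameters (time, energy, decomposability), as motivated above.
@dataclass
class TaskEnv:
    name: str
    duration_s: float   # expected time to complete the task, in seconds
    energy_j: float     # energy required, in joules
    num_subtasks: int   # a crude proxy for how the task decomposes

def compare(a: TaskEnv, b: TaskEnv) -> dict:
    """Return ratios of physical parameters between two task-environments."""
    return {
        "time_ratio": a.duration_s / b.duration_s,
        "energy_ratio": a.energy_j / b.energy_j,
        "subtask_ratio": a.num_subtasks / b.num_subtasks,
    }

# Two made-up task-environments, compared without running either one.
pendulum = TaskEnv("invert-pendulum", duration_s=5.0, energy_j=20.0, num_subtasks=2)
assembly = TaskEnv("assemble-part", duration_s=50.0, energy_j=400.0, num_subtasks=8)
print(compare(assembly, pendulum))
```

A real Task Theory would of course go far beyond fixed scalar ratios - capturing dynamics, discretization, and learner-relative difficulty - but the sketch shows the kind of experiment-free, physically grounded comparison the goals above describe.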
This will ultimately require two theories, one about tasks and one about learners, and they will have to be compatible. But we have to start somewhere, and making a theory of learners and intelligence seems harder than making a theory of tasks, for at least two reasons:
- Tasks have more easily-observable, measurable, and clear-cut parameters than intelligent behavior.
- Tasks have physical properties that can be related directly to physics, which is approximately 2000 years ahead of psychology and AI as a science.
While we won’t get there any time soon, we have already started. We need to keep moving, because I think real progress towards a theory of intelligence requires proper methods for testing it, comparing it, measuring it, and evaluating it.
- A New AI Evaluation Cosmos: Ready to Play the Game? by J. Hernandez-Orallo et al.
  - This paper gives you a good idea of how most researchers are currently addressing the evaluation of AI systems. https://www.aaai.org/ojs/index.php/aimagazine/article/view/2748/2650
- Evaluation of General-Purpose Artificial Intelligence: Why, What & How by J. Bieger et al.
  - This paper provides a good overview of our approach to what we call Task Theory - an attempt at creating a more detailed framework that is not only practical but takes some steps in the direction of a theory of physical work. http://alumni.media.mit.edu/~kris/ftp/EGPAI_2016_paper_9.pdf
- Towards Flexible Task-Environments For Comprehensive Evaluation of Artificial Intelligent Systems & Automatic Learners by K. R. Thórisson et al.
  - Lists the many desired features of a flexible task-environment framework that can be used to evaluate intelligent systems. http://alumni.media.mit.edu/~kris/ftp/AGIEvaluationFlexibleFramework-ThorissonEtAl2015.pdf
- Why Artificial Intelligence Needs a Task Theory — And What It Might Look Like by K. R. Thórisson et al.
  - Some initial ideas for a mathematical framework. http://alumni.media.mit.edu/~kris/ftp/AGI16_task_theory.pdf
- FraMoTEC: Modular Task-Environment Construction Framework for Evaluating Adaptive Control Systems by Thorarensen et al.
  - The first framework that attempted an actual implementation addressing the aims of Task Theory. http://alumni.media.mit.edu/~kris/ftp/EGPAI_2016_paper_8.pdf
Papers that might give us some more ideas for how to think about this:
- The Theory of Hybrid Automata by T. A. Henzinger.
- Hybrid Systems: Generalized Solutions and Robust Stability by R. Goebel et al.