====== Task Theory ======

Differences

This shows you the differences between two versions of the page.

Link to this comparison view

Both sides previous revisionPrevious revision
Next revision
Previous revision
public:t-713-mers:mers-25:task_theory [2025/08/26 10:49] leonardpublic:t-713-mers:mers-25:task_theory [2025/08/26 13:41] (current) – [Discussion Prompts] leonard
Line 29: Line 29:
==== Intricacy & Difficulty ====

|  Intricacy (Observer)  | Structural complexity of a task, derived from the number of variables, their couplings, and the constraints in {V, F, C}. Defined **independently of the agent**. |
|  Effective Intricacy (Agent)  | How complicated the task **appears to an agent**, given its sensors, prior knowledge, reasoning, and precision. For a perfect agent, effective intricacy → 0. |
|  Intricacy of Tasks  | Based on (at least) three dimensions: |
|  | The minimal number of causal-relational models needed to represent the relations of the causal structure related to the goal(s). |
|  | The number, length, and type of mechanisms in the causal chains that affect observable variables on a causal path to at least one goal. |
|  | The number of hidden confounders influencing causal structures related to the goal. |
|  Difficulty  | A relation: **Difficulty(T, Agent) = f(Intricacy(T), Agent Capacities)**. The same task can be easy for one agent and impossible for another. |
|  Example  | Catching a ball: the observer sees physical intricacy (variables: position, velocity, gravity, timing). A human child has low effective intricacy after learning; a simple robot has very high effective intricacy. |
|  Connection to ERS  | Difficulty is the bridge between **objective task description** (for observers) and **empirical performance measures** (for agents). ERS requires both views: tasks must be defined **in the world** (observer) but evaluated **through agent behavior**. |
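
To make the observer/agent split concrete, here is a minimal Python sketch of a task as the triple {V, F, C} and of difficulty as a relation between structural intricacy and agent capacity. The class names, the count-based intricacy measure, and the ratio form of ''difficulty'' are illustrative assumptions layered on the definitions above, not definitions from the literature.

<code python>
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    """A task as the triple {V, F, C} (sketch)."""
    variables: set[str]           # V: state variables
    rules: dict[str, Callable]    # F: transformation rules (dynamics)
    constraints: list[Callable]   # C: predicates that must hold

def intricacy(task: Task) -> int:
    # Observer-side structural complexity: a simple count over {V, F, C},
    # defined without reference to any particular agent.
    return len(task.variables) + len(task.rules) + len(task.constraints)

@dataclass
class Agent:
    capacity: float  # aggregate of sensors, prior knowledge, reasoning, precision

def difficulty(task: Task, agent: Agent) -> float:
    # Difficulty(T, Agent) = f(Intricacy(T), Agent Capacities); here f is a
    # ratio, so higher capacity drives effective intricacy toward 0.
    return intricacy(task) / agent.capacity

# Same task, two agents: easy for one, much harder for the other.
ball = Task({"position", "velocity", "gravity", "timing"},
            {"fall": lambda state: state}, [])
print(difficulty(ball, Agent(capacity=50.0)))  # practiced human child: 0.1
print(difficulty(ball, Agent(capacity=0.5)))   # simple robot: 10.0
</code>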

\\
==== Example of a Task with Different Intricacy ====

{{ :public:t-713-mers:tasktheoryflowchart.png?nolink&700 |}}
Taken from [[https://www.researchgate.net/profile/Kristinn-Thorisson/publication/357637172_About_the_Intricacy_of_Tasks/links/620d1c8fc5934228f9701333/About-the-Intricacy-of-Tasks.pdf|About the Intricacy of Tasks]] by L.M. Eberding et al.

\\
|  Periodicity  | Whether the environment exhibits cycles or repeating structures that can be exploited for prediction. |
|  Repeatability  | Whether experiments in the environment can be repeated under the same conditions, producing comparable results (see the sketch below). |
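
As a toy illustration of the repeatability property, the sketch below re-runs a noisy experiment under identical conditions (modeled, as an assumption, by a fixed random seed) and checks that the results are comparable. The environment model is made up for illustration.

<code python>
import random

def run_experiment(seed: int, steps: int = 10) -> list[float]:
    # A toy stochastic environment: the fixed seed stands in for
    # "repeating the experiment under the same conditions".
    rng = random.Random(seed)
    state, trajectory = 0.0, []
    for _ in range(steps):
        state += rng.gauss(0.0, 1.0)   # noisy dynamics
        trajectory.append(state)
    return trajectory

# Repeatable: identical conditions produce identical (hence comparable) results.
assert run_experiment(seed=42) == run_experiment(seed=42)
</code>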

\\
==== Levels of Detail in Task Theory ====

|  What it is  | Tasks can be described at different levels of detail, from coarse abstract goals to fine-grained physical variables. The chosen level shapes both evaluation (observer) and execution (agent). |
|  Observer’s Perspective  | The observer can choose how finely to specify variables, transformations, and constraints. A higher level of detail allows precise measurement but may make analysis intractable. |
|  Agent’s Perspective  | The agent perceives and reasons at its own level of detail, often coarser than the environment’s “true” detail. A mismatch between the observer’s definition and the agent’s accessible level creates difficulty. |
|  Coarse Level  | Only abstract goals and broad categories of variables are specified. Example: “Deliver package to location.” |
|  Intermediate Level  | Includes some measurable variables and causal relations. Example: “Move package from x to y using navigation map.” |
|  Fine Level  | Explicit representation of detailed physical dynamics, constraints, and noise. Example: “Motor torque, wheel slip, GPS error bounds, battery usage.” |
|  Implications for ERS  | Enables systematic scaling of task complexity in experiments. \\ Supports fair comparison: two agents can be tested at the same or different levels of detail. \\ Clarifies where errors originate: poor reasoning vs. inadequate detail in the task definition. |
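
The three levels can be contrasted with a small sketch: the same delivery goal is written down as a {V, F, C}-style description at coarse, intermediate, and fine levels of detail. All variable names and counts are hypothetical; the point is only that finer descriptions enlarge the structure the observer must analyze and the agent must handle.

<code python>
# One goal ("deliver package"), three levels of detail (illustrative only).
coarse = {"V": {"package_location"}}

intermediate = {
    "V": {"package_pos", "agent_pos", "map"},
    "F": ["move(agent, from, to)"],
}

fine = {
    "V": {"motor_torque", "wheel_slip", "gps_error",
          "battery_level", "package_pos", "agent_pos"},
    "F": ["torque -> wheel_speed", "wheel_speed + slip -> displacement"],
    "C": ["gps_error <= bound", "battery_level > 0"],
}

for name, spec in [("coarse", coarse), ("intermediate", intermediate), ("fine", fine)]:
    size = sum(len(spec.get(k, ())) for k in ("V", "F", "C"))
    print(f"{name:12s} -> {size} elements in {{V, F, C}}")
</code>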

\\

==== Intricacy and Level of Detail ====

|  Maximum Intricacy  | Any agent that is constrained by resources (time, energy, computation power, etc.) has a maximal intricacy of tasks it can solve. |
|  Problem  | Even simple tasks like walking to the bus station, if defined at the finest level of detail (every motor command, etc.), have massive intricacy attached. Planning through every step is computationally infeasible. |
|  Changing the Task  | If a task is too intricate to be performed, it must be adjusted to fit the agent's capabilities. However, we still want to get the task done! |
|  Changing the Level of Detail  | The only way to change the task, and thus its intricacy, without losing the goal of the task (see the sketch below). |
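
A minimal sketch of this idea, under the same assumptions as the previous snippets: given one goal described at increasing levels of detail, a resource-bounded agent adopts the finest description whose intricacy still fits its budget, rather than abandoning the task.

<code python>
def structural_size(spec: dict) -> int:
    # Count-based stand-in for intricacy over {V, F, C} (an assumption).
    return sum(len(spec.get(k, ())) for k in ("V", "F", "C"))

def finest_feasible(descriptions: list[dict], budget: int) -> dict | None:
    # descriptions share one goal and are ordered coarse -> fine;
    # return the finest one the agent can still plan over.
    feasible = [d for d in descriptions if structural_size(d) <= budget]
    return feasible[-1] if feasible else None

levels = [
    {"V": {"at_bus_station"}},                                   # coarse
    {"V": {"pos", "route"}, "F": ["walk(step)"]},                # intermediate
    {"V": {"joint_angles", "torques", "balance", "terrain"},
     "F": ["motor_cmd"] * 4, "C": ["stay_upright"]},             # fine
]
print(finest_feasible(levels, budget=5))  # -> the intermediate description
</code>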
  
\\
==== Discussion Prompts ====

^ Question ^ Observer Angle ^ Agent Angle ^
| How is a "task" different from a "problem" in classical AI? | Problem = symbolic puzzle; task = measurable transformation in a world. | Must act in the world to achieve it. |
| Why must tasks be agent-independent? | To compare systems systematically. | Otherwise evaluation collapses into "how this agent did". |
| Can you think of a task with low intricacy but high difficulty for humans? | Observer: low variable count. | Agent: limited memory/attention makes it hard (e.g., memorizing 200 digits). |
| What role does causality play in defining tasks? | Observer: the rules F define the dynamics. | Agent: must infer/approximate causal relations from data. |
| How does a variable-task simulator (like SAGE) help ERS? | Observer: controls task parameters systematically. | Agent: experiences a wide range of tasks, supporting empirical generality tests. |