==== Intricacy and Level of Detail ====

|  Intricacy (Observer)  | Structural complexity of a task, derived from the number of variables, their couplings, and constraints in {V, F, C}. Defined **independently of the agent**.  |
|  Effective Intricacy (Agent)  | How complicated the task **appears to an agent**, given its sensors, prior knowledge, reasoning, and precision. For a perfect agent, effective intricacy → 0.  |
|  Intricacy of Tasks  | Based on (at least) three dimensions (sketched in code after the figure below):  |
|  | The minimal number of causal-relational models needed to represent the relations of the causal structure related to the goal(s).  |
|  | The number, length, and type of mechanisms of the causal chains that affect observable variables on a causal path to at least one goal.  |
|  | The number of hidden confounders influencing causal structures related to the goal.  |
|  Difficulty  | A relation: **Difficulty(T, Agent) = f(Intricacy(T), Agent Capacities)** (see the sketch after this table). The same task can be easy for one agent and impossible for another.  |
|  Example  | Catching a ball: the observer sees the physical intricacy (variables: position, velocity, gravity, timing); for the agent, a human child has low effective intricacy after learning, while a simple robot has very high effective intricacy.  |
|  Connection to ERS  | Difficulty is the bridge between **objective task description** (for observers) and **empirical performance measures** (for agents). ERS requires both views: tasks must be defined **in the world** (observer) but evaluated **through agent behavior**.  |
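
To make the observer/agent split concrete, here is a minimal Python sketch of a task as a triple {V, F, C} and of Difficulty(T, Agent) as a relation. The class layout, the counting-based intricacy score, and the ratio chosen for f are illustrative assumptions, not definitions from the course material:

<code python>
from dataclasses import dataclass

@dataclass
class Task:
    """Observer-side description of a task as a triple {V, F, C}."""
    variables: set     # V: task-relevant variables
    rules: dict        # F: couplings, mapping a variable to the variables it depends on
    constraints: list  # C: constraints a solution must satisfy

    def intricacy(self) -> int:
        """Toy structural-complexity score: counts variables, couplings,
        and constraints. Agent-independent by construction."""
        couplings = sum(len(deps) for deps in self.rules.values())
        return len(self.variables) + couplings + len(self.constraints)

def difficulty(task: Task, capacity: float) -> float:
    """Difficulty(T, Agent) = f(Intricacy(T), Agent Capacities); f is assumed
    here to be a simple ratio, so high capacity drives the task's effective
    intricacy, and with it the difficulty, toward 0."""
    return task.intricacy() / capacity

# Catching a ball, as described by the observer:
catch = Task(
    variables={"position", "velocity", "gravity", "timing"},
    rules={"position": {"velocity", "gravity"}, "timing": {"position", "velocity"}},
    constraints=["hand intersects ball trajectory before ground contact"],
)

print(difficulty(catch, capacity=100.0))  # practiced human child: low difficulty
print(difficulty(catch, capacity=0.5))    # simple robot: high difficulty
</code>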

\\
==== Example of a Task with different Intricacy ====

{{ :public:t-713-mers:tasktheoryflowchart.png?nolink&700 |}}
Taken from [[https://www.researchgate.net/profile/Kristinn-Thorisson/publication/357637172_About_the_Intricacy_of_Tasks/links/620d1c8fc5934228f9701333/About-the-Intricacy-of-Tasks.pdf|About the Intricacy of Tasks]] by L.M. Eberding et al.
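
The three dimensions listed in the table above can be read off a causal graph. Below is a rough, hand-built sketch for the ball-catching example; the graph structure, the node names, and the choice of ''wind'' as a hidden confounder are invented for illustration, and dimension 1 (the minimal number of causal-relational models) is not modeled:

<code python>
# Hypothetical causal graph: edges point from cause to effect.
edges = {
    "wind": ["velocity"],     # assumed hidden confounder (unobserved)
    "gravity": ["velocity"],
    "velocity": ["position"],
    "position": ["catch"],    # "catch" is the goal variable
    "timing": ["catch"],
}
hidden = {"wind"}
goal = "catch"

def chains_to_goal(node, path=()):
    """Enumerate causal chains from `node` to the goal. Dimension 2 looks at
    their number and lengths; mechanism *type* is not modeled here."""
    path = path + (node,)
    if node == goal:
        yield path
    for child in edges.get(node, []):
        yield from chains_to_goal(child, path)

# Root causes: variables not caused by anything else in the graph.
roots = set(edges) - {c for children in edges.values() for c in children}
chains = [chain for r in roots for chain in chains_to_goal(r)]

print(len(chains))                                  # number of causal chains to the goal
print([len(chain) - 1 for chain in chains])         # their lengths, in edges
print(sum(chain[0] in hidden for chain in chains))  # chains rooted in a hidden confounder (dimension 3)
</code>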
  
\\ \\
==== Discussion Prompts ====

^  Question  ^  Observer Angle  ^  Agent Angle  ^
|  How is a "task" different from a "problem" in classical AI?  | Problem = symbolic puzzle; task = measurable transformation in a world.  | Must act in the world to achieve it.  |
|  Why must tasks be agent-independent?  | To compare systems systematically.  | Otherwise evaluation collapses into “how this agent did”.  |
|  Can you think of a task with low intricacy but high difficulty for humans?  | Observer: low variable count.  | Agent: limited memory/attention makes it hard (e.g., memorizing 200 digits).  |
|  What role does causality play in defining tasks?  | Observer: the rules F define the dynamics.  | Agent: must infer/approximate causal relations from data.  |
|  How does a variable-task simulator (like SAGE) help ERS?  | Observer: controls task parameters systematically.  | Agent: experiences a wide range of tasks, supporting empirical generality tests.  |