====== Lab 1 - Agents ======
The robot is equipped with a dust sensor and a touch sensor. If there is dirt at the current location of the robot, the agent will sense "DIRT". If the robot bumps into an obstacle or a wall, the agent will sense "BUMP".
The goal is to clean all cells and return to the initial location before turning the robot off. Note that a full charge of the robot's battery will only last for a limited number of actions.
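
For illustration, a purely reactive mapping from these percepts to actions could look like the Java sketch below. The class name and the action strings ("SUCK", "TURN_RIGHT", "GO") are assumptions made for this sketch and may differ from the interface of the provided program.

<code java>
import java.util.Collection;

/*
 * Minimal sketch of a simple reflex rule for the vacuum robot.
 * Percept and action names are assumptions for illustration only
 * and may not match the provided Java program.
 */
public class ReflexSketch {

    /** Map the current percept directly to an action. */
    public String nextAction(Collection<String> percepts) {
        if (percepts.contains("DIRT")) {
            return "SUCK";         // clean the current cell first
        }
        if (percepts.contains("BUMP")) {
            return "TURN_RIGHT";   // wall or obstacle ahead: change direction
        }
        return "GO";               // otherwise keep moving forward
    }
}
</code>

A rule like this only reacts to the current percept; developing a strategy that cleans every cell and gets home in time typically requires some internal state as well.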
To make this a bit easier you can use the following assumptions:
  * The room is rectangular (not necessarily square). It has only 4 straight walls that meet at right angles, and there are no obstacles in the room. That is, the strategy "go until you bump into a wall, then turn right and repeat" will make the agent walk straight to a wall and then around the room along the wall.
  * The room is fairly small, so 100 actions are enough to visit every cell, suck all the dirt and return home, given a halfway decent algorithm (at least for the small environments; for the big one you may need between 100 and 200 actions). A sketch of an agent that keeps track of this action budget is shown below the list.
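
The sketch below extends the reflex rule with a small piece of internal state: a counter of actions used, so the agent can stop in time with respect to the assumed battery budget. Again, all names and the concrete budget are assumptions for illustration, and navigating back to the initial cell is not implemented here.

<code java>
import java.util.Collection;

/*
 * Sketch of an agent with internal state: it counts the actions it
 * has spent so it can stop cleaning before the (assumed) battery
 * budget of 100 actions runs out. Percept/action names and the
 * budget are assumptions for illustration, not taken from the
 * provided program; returning home is left as an exercise.
 */
public class BudgetAwareSketch {
    private static final int ACTION_BUDGET = 100;  // assumed battery limit
    private static final int RESERVE = 20;         // actions kept for the way home
    private int actionsUsed = 0;

    public String nextAction(Collection<String> percepts) {
        actionsUsed++;

        if (actionsUsed > ACTION_BUDGET - RESERVE) {
            // A complete agent would navigate back to the initial cell
            // here before switching off; that part is omitted in this sketch.
            return "TURN_OFF";
        }
        if (percepts.contains("DIRT")) {
            return "SUCK";         // clean the current cell first
        }
        if (percepts.contains("BUMP")) {
            return "TURN_RIGHT";   // follow the walls clockwise
        }
        return "GO";               // keep moving forward
    }
}
</code>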
  
===== Tasks =====
  - Characterise the environment (is it static or dynamic, deterministic or stochastic, ...) according to all 6 properties mentioned on slide 13 (Agents) or section 2.3.2 in the book.
  - Develop a strategy for the agent such that it cleans every cell and outline the agent function.
  - Implement the missing parts of the vacuum cleaner Java program (see below) such that it encodes your agent function.
  - Test your program with all three provided environments. Record the number of steps it takes to finish each environment and the resulting points.
  - Is your agent rational? Justify your answer.
  
===== Submit =====
  * Answers to questions 1, 2, 4 and 5 (as doc, txt or pdf) and your source code in a zip archive on MySchool.
  
===== Material =====