Table of Contents

Lab 1 - Agents

Note: For this lab, you can work together in teams of up to 4 students. You can use Piazza to find teammates and discuss problems.

You will need a Java Development Kit (JDK) and a Java IDE (a text editor should do as well).

Problem Description

Implement the control program for a vacuum cleaner agent. The environment is a rectangular grid of cells. Some cells may contain dirt. The agent is located somewhere in this grid and faces one of four directions: north, south, east, or west. The agent can execute the following actions:

The robot is equipped with a dust sensor and a touch sensor. If there is dirt at the robot's current location, the agent will sense “DIRT”. If the robot bumps into an obstacle or wall, the agent will sense “BUMP”. The goal is to clean all cells and return to the initial location before turning the robot off. Note that a full battery charge will only last for a limited number of actions.
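The sensing rules above can be sketched as a simple reflex layer that maps the current percept to an action. This is only an illustration: the percept strings follow the description above, but the action names used here are assumptions, not necessarily the names expected by the provided framework.

```java
// Minimal reflex sketch. Percepts "DIRT" and "BUMP" come from the lab
// description; the action names ("SUCK", "TURN RIGHT", "MOVE") are
// hypothetical placeholders for whatever the framework actually uses.
public class ReflexRules {
    /** Suck when on dirt, turn when bumping, otherwise move forward. */
    public static String nextAction(String percept) {
        if ("DIRT".equals(percept)) {
            return "SUCK";        // clean the current cell first
        }
        if ("BUMP".equals(percept)) {
            return "TURN RIGHT";  // obstacle ahead: change direction
        }
        return "MOVE";            // default: keep exploring
    }

    public static void main(String[] args) {
        System.out.println(nextAction("DIRT"));  // SUCK
        System.out.println(nextAction("BUMP"));  // TURN RIGHT
        System.out.println(nextAction("NONE"));  // MOVE
    }
}
```

A pure reflex agent like this will not meet the goal on its own (it cannot guarantee full coverage or return home within the battery limit), which is exactly why the tasks below ask for a proper strategy.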

To make this a bit easier you can use the following assumptions:

Tasks

  1. Characterise the environment (is it static or dynamic, deterministic or stochastic, …) according to all 6 properties mentioned on slide 13 (Agents) or section 2.3.2 in the book.
  2. Develop a strategy for the agent such that it cleans every cell and outline the agent function.
  3. Implement the missing parts of the vacuum cleaner Java program (see below) such that it encodes your agent function.
  4. Test your program with all three provided environments. Record the number of steps it takes to finish each environment and the resulting points.
  5. Is your agent rational? Justify your answer.
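For task 2, note that the robot has no position sensor, so returning to the initial location requires the agent to track its own pose by dead reckoning, i.e. by simulating the effect of each action it executes. A minimal sketch of such pose tracking is shown below; the coordinate conventions (start at the origin, north = +y) and the action names are assumptions for illustration.

```java
// Dead-reckoning sketch: the agent tracks its believed position and
// orientation relative to the starting cell. Conventions are assumed:
// (0,0) is the start, dir 0 = north (+y), 1 = east (+x), 2 = south, 3 = west.
public class Pose {
    int x, y;  // offset from the initial cell
    int dir;   // 0 = north, 1 = east, 2 = south, 3 = west

    /** Update the believed pose after an action succeeded (no BUMP sensed). */
    void apply(String action) {
        switch (action) {
            case "MOVE":
                if (dir == 0) y++;
                else if (dir == 1) x++;
                else if (dir == 2) y--;
                else x--;
                break;
            case "TURN RIGHT":
                dir = (dir + 1) % 4;
                break;
            case "TURN LEFT":
                dir = (dir + 3) % 4;  // +3 mod 4 == -1 mod 4
                break;
            default:
                // SUCK, TURN ON, TURN OFF do not change the pose
        }
    }

    boolean atStart() {
        return x == 0 && y == 0;
    }
}
```

Caveat: if a MOVE results in a BUMP, the robot did not actually move, so the agent must not apply that MOVE to its pose. Handling this correctly is part of your agent function.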

Submit

Material

The zip file contains code for implementing an agent in the src directory. The agent is actually a server process: it listens on some port and waits for the real robot or a simulator to send a message, then replies with the next action the robot is supposed to execute.
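The agent-as-server loop described above can be sketched as follows. This is only schematic: the real message format and networking code are defined by the provided framework, so the plain line-based protocol here is an assumption for illustration, and a throwaway client thread stands in for the simulator so the sketch is self-contained.

```java
import java.io.*;
import java.net.*;

// Schematic agent-as-server loop: listen on a port, read a percept
// message, reply with the next action. The real protocol is defined by
// the provided framework; this line-based exchange is an assumption.
public class AgentServerSketch {
    /** Placeholder decision function; your agent function goes here. */
    public static String handle(String percept) {
        return "DIRT".equals(percept) ? "SUCK" : "MOVE";
    }

    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(0)) {  // 0 = any free port
            int port = server.getLocalPort();

            // Stand-in for the simulator: connects, sends one percept,
            // and prints the agent's reply.
            Thread client = new Thread(() -> {
                try (Socket s = new Socket("localhost", port);
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true);
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(s.getInputStream()))) {
                    out.println("DIRT");
                    System.out.println("agent replied: " + in.readLine());
                } catch (IOException e) {
                    e.printStackTrace();
                }
            });
            client.start();

            try (Socket conn = server.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(conn.getInputStream()));
                 PrintWriter out = new PrintWriter(conn.getOutputStream(), true)) {
                String percept = in.readLine();  // e.g. "DIRT"
                out.println(handle(percept));    // reply with the next action
            }
        }
    }
}
```

In the actual lab you only fill in the decision logic; the provided framework already handles the networking and message parsing.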

The zip file also contains the description of three example environments (vacuumcleaner*.gdl) and a simulator (gamecontroller-gui.jar). To test your agent:

You can see here what the example environment looks like. Of course, you shouldn't assume any fixed grid size, initial location, or dirt locations in your implementation. This is just an example environment.

Hints

For implementing your agent:

As a general hint: “Days of programming can save you minutes of thinking.” Think of a strategy, the rules to implement it, and the information you need to decide on actions before you start implementing.