Lab 1 - Agents
Problem Description
Implement the control program for a vacuum cleaner agent. The environment is a rectangular grid of cells, each of which may contain dirt or an obstacle. The agent occupies one of these cells and faces one of the four directions: north, south, east, or west. The agent can execute the following actions:
- TURN_ON: initialises the robot; it has to be executed first.
- TURN_RIGHT, TURN_LEFT: rotates the robot 90° clockwise or counter-clockwise.
- GO: the agent attempts to move to the next cell in the direction it is currently facing.
- SUCK: sucks up the dirt in the current cell.
- TURN_OFF: turns the robot off. Once turned off, it can only be turned on again after the dust container has been emptied manually.
The robot is equipped with a dust sensor and a touch sensor. If there is dirt at the robot's current location, the agent senses “DIRT”. If the robot bumps into an obstacle, the agent senses “BUMP”. The goal is to clean all cells and return to the initial location before turning the robot off. Note that a full charge of the robot's battery only lasts for 100 actions.
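The battery constraint means that the whole plan, including TURN_ON and TURN_OFF, must fit into a budget of 100 actions. One way to respect it is to count actions as they are issued; the class and method names below are illustrative, not part of the provided framework:

```java
// Sketch of an action budget for the 100-action battery limit described
// above. The names ActionBudget/spend/remaining are assumptions.
public class ActionBudget {
    public static final int BATTERY = 100; // full charge, per the problem text
    private int used = 0;

    // Record one issued action and return how many actions remain.
    int spend() {
        used++;
        return BATTERY - used;
    }

    int remaining() {
        return BATTERY - used;
    }
}
```

An agent could consult `remaining()` before committing to a long detour, and head back to the start cell while enough budget is left.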
Tasks
- Develop a simple reflex vacuum cleaner agent. Is it rational? (Admittedly, this is a bit of a trick task: you should find that it is not actually possible to implement this with a simple reflex agent.)
- Develop a model-based reflex agent that is able to clean every cell. Is it rational?
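A model-based agent needs internal state that a simple reflex agent lacks: at minimum, the robot's pose relative to its start cell, updated from the last action and the BUMP percept. A minimal sketch of such dead reckoning, with illustrative names not taken from the provided framework, could look like this:

```java
// Tracks the robot's pose relative to its start cell. Assumes the agent
// feeds in each executed action together with whether the following
// percepts contained BUMP.
public class PoseTracker {
    // Directions in clockwise order: 0 = north, 1 = east, 2 = south, 3 = west.
    private static final int[] DX = {0, 1, 0, -1};
    private static final int[] DY = {1, 0, -1, 0};

    int x = 0, y = 0; // position relative to the start cell
    int dir = 0;      // facing "north" initially (an arbitrary assumption;
                      // the agent only needs a consistent relative frame)

    void update(String action, boolean bumped) {
        if (action.equals("TURN_RIGHT")) {
            dir = (dir + 1) % 4;
        } else if (action.equals("TURN_LEFT")) {
            dir = (dir + 3) % 4;
        } else if (action.equals("GO") && !bumped) {
            // A BUMP means the robot hit an obstacle and stayed in place.
            x += DX[dir];
            y += DY[dir];
        }
        // TURN_ON, SUCK and TURN_OFF do not change the pose.
    }
}
```

On top of this pose, the agent can build a map of visited cells and discovered obstacles, which is exactly the state needed to decide when everything is clean and how to get home.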
Material
The zip file contains code for implementing an agent in the src directory. The agent is actually a server process: it listens on some port and waits for the real robot or a simulator to send a message, then replies with the next action the robot is supposed to execute.
The zip file also contains the description of an example environment (vacuumcleaner.gdl) and a simulator (gamecontroller-gui.jar). To test your agent:
- Start the simulator (run gamecontroller-gui.jar, either by double-clicking it or with “java -jar gamecontroller-gui.jar” on the command line).
- Set up the simulator as shown in this picture:
Hints
For implementing your agent:
- Add a new class that implements the “Agent” interface. Look at RandomAgent.java to see how this is done.
- You have to implement the method “nextAction”, which receives a Collection of percepts as input and returns the next action the agent is supposed to execute.
- Before you start programming a complicated strategy, think it through. The things your agent has to do are:
- execute TURN_ON
- visit every cell and suck up any dirt it finds on the way
- return to the initial location
- TURN_OFF
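The checklist above maps naturally onto a small phase machine inside “nextAction”. The sketch below assumes percepts arrive as strings and the next action is returned as a string, following the description in the hints; the exact types in the provided Agent interface may differ, and the exploration and return logic are left as stubs:

```java
import java.util.Collection;

// Skeleton agent organised around the four steps in the checklist.
// Class, enum, and phase names are illustrative.
public class SkeletonVacuumAgent {
    private enum Phase { OFF, CLEANING, RETURNING, DONE }
    private Phase phase = Phase.OFF;

    public String nextAction(Collection<String> percepts) {
        if (phase == Phase.OFF) {        // the very first action must be TURN_ON
            phase = Phase.CLEANING;
            return "TURN_ON";
        }
        if (percepts.contains("DIRT")) { // always clean the cell we are on
            return "SUCK";
        }
        if (phase == Phase.CLEANING) {
            // TODO: systematic exploration; switch to RETURNING once the
            // internal map says every reachable cell has been visited.
            return "GO";
        }
        if (phase == Phase.RETURNING) {
            // TODO: follow a planned path back to the initial cell; once
            // there, fall through and turn off.
            phase = Phase.DONE;
        }
        return "TURN_OFF";
    }
}
```

Keeping the phase logic separate from the low-level percept handling makes it easy to check that TURN_ON is issued exactly once and that TURN_OFF only happens back at the start cell.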