# Center for Analysis and Design of Intelligent Agents


# Lab 1 - Agents

Note: For this lab, you can work together in teams of up to 4 students. You can use Piazza to find teammates and discuss problems.

You will need a Java Development Kit (JDK) and a Java IDE (a text editor should do as well).

## Problem Description

Implement the control program for a vacuum cleaner agent. The environment is a rectangular grid of cells, some of which may contain dirt. The agent is located somewhere in this grid, facing one of the four directions: north, south, east or west. The agent can execute the following actions:

• TURN_ON: initialises the robot; this action has to be executed first.
• TURN_RIGHT, TURN_LEFT: rotates the robot 90° clockwise/counter-clockwise.
• GO: the robot attempts to move to the next cell in the direction it is currently facing.
• SUCK: sucks up the dirt in the current cell.
• TURN_OFF: turns the robot off. Once turned off, the robot can only be turned on again after its dust container has been emptied manually.
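The actions above map naturally onto a Java enum. The following is only an illustrative sketch; the provided project may already define its own action type with different names:

```java
// Hypothetical enum for the robot actions described above; the actual
// project code may represent actions differently (e.g., as strings).
public enum Action {
    TURN_ON,     // initialises the robot; must be executed first
    TURN_RIGHT,  // rotate 90° clockwise
    TURN_LEFT,   // rotate 90° counter-clockwise
    GO,          // attempt to move one cell forward
    SUCK,        // suck up the dirt in the current cell
    TURN_OFF     // ends the run; restart requires emptying the container
}
```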

The robot is equipped with a dust sensor and a touch sensor. If there is dirt at the robot's current location, the agent will sense “DIRT”. If the robot bumps into an obstacle or a wall, the agent will sense “BUMP”. The goal is to clean all cells and return to the initial location before turning the robot off. Note that a full battery charge will only last for a limited number of actions.

To make this a bit easier you can use the following assumptions:

• The room is rectangular (not necessarily square). It has only 4 straight walls that meet at right angles, and there are no obstacles in the room. That is, the strategy “go until you bump into a wall, then turn right and repeat” will make the agent walk straight to a wall and then around the room along the walls.
• The room is fairly small, so that 100 actions are enough to visit every cell, suck up all the dirt and return home, given a halfway decent algorithm (at least for the small environments; for the big one you may need between 100 and 200 actions).
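The naive strategy quoted in the first assumption boils down to a one-line decision rule. The sketch below is illustrative only: it ignores SUCK, the battery limit, and returning home, so it is a starting point rather than a complete agent:

```java
// Illustrative sketch of the naive strategy from the assumption above:
// go straight until you bump into a wall, then turn right and repeat.
// It deliberately ignores dirt, the step budget, and the way home.
public class NaiveStrategy {
    public static String wallFollowStep(boolean bump) {
        return bump ? "TURN_RIGHT" : "GO";
    }
}
```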

1. Characterise the environment (is it static or dynamic, deterministic or stochastic, …) according to all 6 properties mentioned on slide 13 (Agents) or in section 2.3.2 of the book.
2. Develop a strategy for the agent such that it cleans every cell and outline the agent function.
3. Implement the missing parts of the vacuum cleaner Java program (see below) such that it encodes your agent function.
4. Test your program with all three provided environments. Record the number of steps it takes to finish each environment and the resulting points.
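For step 3, an agent class typically keeps internal state and returns one action per simulator message. The skeleton below is a hedged sketch: the class name, method signature, and percept strings are assumptions, so adapt them to the interface actually defined in the provided src directory:

```java
import java.util.Collection;

// Hypothetical agent skeleton; the provided project may use a different
// interface, so treat the names and signature here as placeholders.
public class MyVacuumAgent {
    private boolean turnedOn = false;
    private int steps = 0; // counts actions toward the battery limit

    // Called once per simulator message; returns the next action.
    public String nextAction(Collection<String> percepts) {
        steps++;
        if (!turnedOn) {        // TURN_ON must be the first action
            turnedOn = true;
            return "TURN_ON";
        }
        if (percepts.contains("DIRT")) return "SUCK";
        if (percepts.contains("BUMP")) return "TURN_RIGHT";
        // TODO: replace with a systematic sweep that covers every cell,
        // tracks the path back home, and issues TURN_OFF in time.
        return "GO";
    }
}
```

A simple reflex rule like this will not earn points on its own, since scoring requires cleaning everything, returning home, and turning off within the step limit.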

## Submit

• Answers to questions 1, 2, 4 and 5 (as doc, txt or pdf) and your source code in a zip archive on MySchool.

## Material

The zip file contains code for implementing an agent in the src directory. The agent is actually a server process which listens on some port and waits for the real robot or a simulator to send a message. It then replies with the next action the robot is supposed to execute.

The zip file also contains the description of three example environments (vacuumcleaner*.gdl) and a simulator (gamecontroller-gui.jar). To test your agent:

• Start the simulator (run gamecontroller-gui.jar either with a double-click or with the command “java -jar gamecontroller-gui.jar” on the command line).
• Set up the simulator as shown in this picture:

• If there is a second role called “RANDOM” in the game, leave it as “Random”.
• Run the “Main” class in the project. If you added your own agent class, make sure that it is used in the main method of Main.java. You can also run “ant run” on the command line if you have Ant installed.
• The output of the agent should say “NanoHTTPD is listening on port 4001”, which indicates that your agent is ready and waiting for messages to arrive on the specified port.
• Now push the “Start” button in the simulator and your agent should get some messages and reply with the actions it wants to execute. At the end, the output of the simulator tells you how many points your agent got: “Game over! results: 0”. In the given environment you will only get non-zero points if you manage to clean everything, return to the initial location, and turn off the robot within 100 steps. If the output of the simulator contains any line starting with “SEVERE”, something is wrong. The two most common problems are the network connection (e.g., due to a firewall) between the simulator and the agent or the agent sending illegal moves.
• The output of the simulator shows the true state of the environment (which your robot cannot see). Use that for debugging, e.g., to see whether you managed to return to the home position.

The picture here shows what the example environment looks like. Of course, you shouldn't assume any fixed size, initial location or locations of the dirt in your implementation; this is just an example environment.