Table of Contents

Programming Assignment 1 - Search

Use the forum if you have any questions or problems with the assignment. Start early, so that you still have time to ask in case of problems!

Problem Description

Find a good plan for the vacuum cleaner agent. The environment is still the same as in the first lab: it is a rectangular grid of cells, each of which may contain dirt. However, the grid may now also contain obstacles. The agent is located in this grid, facing one of the four directions: north, south, east or west. The agent can execute the following actions:

However, the agent now has complete information about the environment. That is, the agent knows where it starts, how big the environment is, where the obstacles are, and which cells are dirty. The goal is to clean all dirty cells, return to the initial location, and turn off the robot.

Your actions have the following costs:

Tasks

  1. Develop a model of the environment. What constitutes a state of the environment? What is the successor state resulting from executing an action in a certain state? Which actions are legal under which conditions? Maybe you can abstract from certain aspects of the environment to make the state space smaller.
  2. Implement the model:
    • Create data structures for states.
    • Implement methods to compute the legal moves in a state and the successor state resulting from executing an action.
    • Implement the goal test, i.e., a method telling you whether a certain state fulfills all goal conditions.
  3. Assess the following blind search algorithms with respect to their completeness, optimality, and space and time complexity in the given environment: Depth-First Search, Breadth-First Search, Uniform-Cost Search. If one of the algorithms is not complete, how could you fix it? Note: do this step before you implement the algorithms, so that you know what to expect when you run them. Otherwise you might be very surprised.
  4. Implement the three algorithms and make sure to keep track of the number of state expansions and the maximum size of the frontier. Run all the given environments with the three algorithms and compare the results with respect to:
    • number of state expansions (time complexity)
    • maximum size of the frontier (space complexity)
    • quality (cost) of the found solution
    • computation time for finding a solution.
  5. Think of a good (admissible) heuristic function for estimating the remaining cost given an arbitrary state of the environment. Make sure that your heuristic is really admissible!
  6. Implement A* search with an evaluation function that uses your heuristic. Run your agent with all the given environments and compare the results with those of the blind search methods.
  7. Post the results (number of state expansions, maximum size of the frontier, cost of the found solution) here.
  8. Try to improve your heuristic, but make sure that it does not increase the overall runtime of the search algorithm. You can easily create a heuristic that predicts the optimal cost exactly but is as expensive to compute as a blind search. Try to avoid this.
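Since tasks 1 and 2 leave the concrete representation open, here is one possible sketch in Java. It assumes a state consists of the agent's position, its orientation, the set of remaining dirty cells, and an "off" flag; all names (VacuumState, Dir, cell, etc.) are illustrative and not part of the provided skeleton:

```java
import java.util.*;

// Hypothetical direction type; the provided skeleton may represent this differently.
enum Dir { NORTH, EAST, SOUTH, WEST }

// Immutable state sketch: position, orientation, remaining dirt, off flag.
final class VacuumState {
    final int x, y;
    final Dir facing;
    final Set<Long> dirt;   // dirty cells, encoded via cell(x, y)
    final boolean off;

    VacuumState(int x, int y, Dir facing, Set<Long> dirt, boolean off) {
        this.x = x; this.y = y; this.facing = facing;
        this.dirt = Collections.unmodifiableSet(dirt);
        this.off = off;
    }

    // Pack a cell coordinate into one long so it can live in a HashSet.
    static long cell(int x, int y) {
        return ((long) x << 32) | (y & 0xffffffffL);
    }

    // Goal test: everything clean, back at the start cell, switched off.
    boolean isGoal(int startX, int startY) {
        return off && dirt.isEmpty() && x == startX && y == startY;
    }

    // Successor for a SUCK action: the current cell is no longer dirty.
    VacuumState suck() {
        Set<Long> rest = new HashSet<>(dirt);
        rest.remove(cell(x, y));
        return new VacuumState(x, y, facing, rest, off);
    }

    // equals/hashCode matter: graph search must recognize revisited states.
    @Override public boolean equals(Object o) {
        if (!(o instanceof VacuumState)) return false;
        VacuumState s = (VacuumState) o;
        return x == s.x && y == s.y && facing == s.facing
                && off == s.off && dirt.equals(s.dirt);
    }

    @Override public int hashCode() {
        return Objects.hash(x, y, facing, off, dirt);
    }
}
```

Successors for turning and moving follow the same pattern; a move is legal only if the target cell is inside the grid and not an obstacle. Note that the grid size and obstacle map never change, so they need not be part of the state; keeping them out is exactly the kind of abstraction task 1 hints at.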

Material

The files in the archive are similar to those in the first lab.

The archive contains code for implementing an agent in the src directory. The agent is actually a server process which listens on some port and waits for the real robot or a simulator to send a message. It will then reply with the next action the robot is supposed to execute.

The zip file also contains the description of some example environments (vacuumcleaner*.gdl) and a simulator (gamecontroller-gui.jar). To test your agent:

  1. Start the simulator (run gamecontroller-gui.jar either by double-clicking it or by executing “java -jar gamecontroller-gui.jar” on the command line).
  2. Set up the simulator as shown in this picture:

  3. If there is a second role called “Random” in the game, leave it as “Random”.
  4. Run the “Main” class in the project. If you added your own agent class, make sure that it is used in the main method of Main.java. You can also execute “ant run” on the command line if you have Ant installed. The output of the agent should say “NanoHTTPD is listening on port 4001”, which indicates that your agent is ready and waiting for messages to arrive on the specified port.
  5. Now push the “Start” button in the simulator, and your agent should receive some messages and reply with the actions it wants to execute. At the end, the output of the simulator tells you how many points your agent got: “Game over! results: 0”. Those points correspond roughly to negated costs; that is, more points means the solution had lower cost.
  6. If the output of the simulator contains any line starting with “SEVERE”, something is wrong. The two most common problems are a broken network connection (e.g., due to a firewall) between the simulator and the agent, or the agent sending illegal moves.

Of course, you shouldn't assume any fixed grid size, initial location, or dirt locations in your implementation. All of this information will be given to your agent, but it is not fixed.

Hints

For implementing your agent:
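One way to structure the implementation (an illustrative sketch, not part of the provided code; the names Search and Node are hypothetical) is a single best-first search routine that already records the statistics required in task 4. With a zero heuristic it behaves like Uniform-Cost Search; plugging in an admissible heuristic turns it into A*:

```java
import java.util.*;
import java.util.function.*;

// Search node: a state plus the cost g of the cheapest known path to it.
final class Node<S> {
    final S state;
    final double g;
    Node(S state, double g) { this.state = state; this.g = g; }
}

// Hypothetical best-first search skeleton; all names are illustrative.
final class Search<S> {
    long expansions = 0;   // number of state expansions (time complexity)
    int maxFrontier = 0;   // maximum frontier size (space complexity)

    // successors maps a state to its successor states with step costs;
    // h is the heuristic (use s -> 0.0 for Uniform-Cost Search).
    // Returns the cost of the cheapest solution found, or -1 if none exists.
    double run(S start, Predicate<S> isGoal,
               Function<S, Map<S, Double>> successors,
               ToDoubleFunction<S> h) {
        PriorityQueue<Node<S>> frontier = new PriorityQueue<>(
                Comparator.comparingDouble((Node<S> n) -> n.g + h.applyAsDouble(n.state)));
        Map<S, Double> bestG = new HashMap<>();
        frontier.add(new Node<>(start, 0));
        bestG.put(start, 0.0);
        while (!frontier.isEmpty()) {
            maxFrontier = Math.max(maxFrontier, frontier.size());
            Node<S> n = frontier.poll();
            if (n.g > bestG.getOrDefault(n.state, Double.POSITIVE_INFINITY))
                continue;                 // stale entry: a cheaper path was found later
            if (isGoal.test(n.state)) return n.g;
            expansions++;
            for (Map.Entry<S, Double> e : successors.apply(n.state).entrySet()) {
                double g2 = n.g + e.getValue();
                if (g2 < bestG.getOrDefault(e.getKey(), Double.POSITIVE_INFINITY)) {
                    bestG.put(e.getKey(), g2);
                    frontier.add(new Node<>(e.getKey(), g2));
                }
            }
        }
        return -1;
    }
}
```

To recover the actual action sequence for the robot, store the action taken and a reference to the parent node in each Node instead of only the cost.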

For developing an admissible heuristic:
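As an example of the kind of estimate meant here (a hypothetical sketch that assumes, for illustration, that moving one cell and sucking each cost at least 1; re-check the bound against the actual action costs): every remaining dirty cell still needs one SUCK, and for any dirty cell d the agent must still travel at least the Manhattan distance to d plus the Manhattan distance from d back home. Manhattan distance ignores obstacles and turning, so it never overestimates, and the maximum of these bounds stays admissible:

```java
import java.util.*;

// Hypothetical admissible heuristic sketch; assumes moving one cell and
// sucking each cost at least 1 (adjust to the actual action costs).
final class Heuristic {
    // Lower bound on remaining cost from the agent's position (ax, ay),
    // with home = the initial cell and dirt = remaining dirty cells {x, y}.
    static int estimate(int ax, int ay, int homeX, int homeY, Collection<int[]> dirt) {
        // The agent must at least return home.
        int travel = Math.abs(ax - homeX) + Math.abs(ay - homeY);
        // For every dirty cell d: must reach d at some point, then end at home.
        for (int[] d : dirt) {
            int via = Math.abs(ax - d[0]) + Math.abs(ay - d[1])
                    + Math.abs(d[0] - homeX) + Math.abs(d[1] - homeY);
            travel = Math.max(travel, via);
        }
        return travel + dirt.size();   // plus one SUCK per dirty cell
    }
}
```

Turning costs are ignored entirely here; a lower bound on required turns could strengthen the estimate further, but only if it provably never overestimates.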

Handing In

Please hand in a zip file containing:

To create the zip file, you can just edit the “student” property in build.xml and call the “zip” target using Ant (execute “ant zip” in the directory containing build.xml).

The deadline is Monday, 11.02.2013. If you hand in after the deadline, your grade will be lowered by 0.5 for each day your submission is late.