Programming Assignment 1 - Search
Use the forum if you have any questions or problems with the assignment. Start early, so that you still have time to ask in case of problems!
Problem Description
Find a good plan for the vacuum cleaner agent. The environment is still the same as in the first lab: it is a rectangular grid of cells, each of which may contain dirt. Only now the grid may also contain obstacles. The agent is located in this grid and is facing one of the four directions: north, south, east or west. The agent can execute the following actions:
- TURN_ON: This action initialises the robot and has to be executed first.
- TURN_RIGHT, TURN_LEFT: lets the robot rotate 90° clockwise/counter-clockwise
- GO: lets the agent attempt to move to the next cell in the direction it is currently facing.
- SUCK: sucks up the dirt in the current cell
- TURN_OFF: turns the robot off. Once turned off, it can only be turned on again after emptying the dust-container manually.
However, the agent now has complete information about the environment. That is, the agent knows where it is initially, how big the environment is, where the obstacles are and which cells are dirty. The goal is to clean all dirty cells, return to the initial location and turn off the robot.
Your actions have the following costs:
- 1 + 15*D, if you TURN_OFF the robot in the home location and there are D dirty cells left
- 100 + 15*D, if you TURN_OFF the robot, but not in the home location and there are D dirty cells left
- 5 for SUCK, if the current location of the robot does not contain dirt
- 1 for SUCK, if the current location of the robot contains dirt
- 1 for all other actions
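For reference, this cost model can be written down almost literally. The following is a minimal sketch; the Action enum and the State methods used here (isHome, countDirtyCells, currentCellDirty) are hypothetical names for your own model, not part of the provided code.

```java
// Hypothetical sketch of the cost model above; Action, State and its methods
// (isHome, countDirtyCells, currentCellDirty) are illustrative names only.
int cost(Action action, State state) {
    if (action == Action.TURN_OFF) {
        // 1 at home, 100 elsewhere, plus 15 per dirty cell left behind
        return (state.isHome() ? 1 : 100) + 15 * state.countDirtyCells();
    }
    if (action == Action.SUCK) {
        // 1 if the current cell is dirty, 5 if it is already clean
        return state.currentCellDirty() ? 1 : 5;
    }
    return 1; // TURN_ON, TURN_LEFT, TURN_RIGHT, GO
}
```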
Tasks
- Develop a model of the environment. What constitutes a state of the environment? What is the successor state resulting from executing an action in a certain state? Which actions are legal under which conditions? Maybe you can abstract from certain aspects of the environment to make the state space smaller.
- Implement the model:
- Create a data structure for states.
- Implement methods to compute the legal moves in a state and the successor state resulting from executing an action.
- Implement the goal test, i.e., a method telling you whether a certain state fulfills all goal conditions.
- Assess the following blind search algorithms wrt. their completeness, optimality, space and time complexity in the given environment: Depth-First Search, Breadth-First Search, Uniform-Cost Search. If one of the algorithms is not complete, how could you fix it? Note: Do this step before you implement the algorithms, so you know what to expect when you run the algorithms. Otherwise you might be very surprised.
- Implement the three algorithms and make sure to keep track of the number of state expansions and the maximum size of the frontier (a sketch of this bookkeeping follows after this list). Run all the given environments with the three algorithms and compare the results wrt.
- number of state expansions (time complexity)
- maximum size of the frontier (space complexity)
- quality (cost) of the found solution
- computation time for finding a solution.
- Think of a good (admissible) heuristic function for estimating the remaining cost given an arbitrary state of the environment. Make sure that your heuristic is really admissible!
- Implement A*-Search with an evaluation function that uses your heuristic. Run your agent with all the given environments and compare the results with those of the blind search methods.
- Post the results (number of state expansions, maximum size of the frontier, cost of the found solution) here.
- Try to improve your heuristic, but make sure that it does not increase the overall runtime of the search algorithm. You can easily create a heuristic that tells you the optimal cost but is as expensive to compute as a blind search. Try to avoid this.
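The following rough sketch shows how the bookkeeping for state expansions and frontier size could look; it is not part of the provided code. Node, Action and the State methods used here (isGoal, getLegalMoves, successor, cost) are assumed to belong to your own model (see the hints below). With f(node) = path cost this is uniform-cost search; adding your heuristic turns it into A*. Breadth-first and depth-first search follow the same pattern with a FIFO queue or a stack as frontier.

```java
import java.util.*;
import java.util.function.ToIntFunction;

// Rough sketch only: Action and the State methods (isGoal, getLegalMoves,
// successor, cost) are assumed names for your own model, not provided code.
class Node {
    final State state; final Node parent; final Action action; final int pathCost;
    Node(State s, Node p, Action a, int g) { state = s; parent = p; action = a; pathCost = g; }
}

class Search {
    int expansions = 0;   // number of state expansions (time complexity)
    int maxFrontier = 0;  // maximum size of the frontier (space complexity)

    // f(node) = node.pathCost                        -> uniform-cost search
    // f(node) = node.pathCost + heuristic(node.state) -> A* search
    Node bestFirst(State start, ToIntFunction<Node> f) {
        PriorityQueue<Node> frontier = new PriorityQueue<>(Comparator.comparingInt(f));
        Set<State> explored = new HashSet<>();        // needs equals/hashCode on State
        frontier.add(new Node(start, null, null, 0));
        while (!frontier.isEmpty()) {
            maxFrontier = Math.max(maxFrontier, frontier.size());
            Node node = frontier.poll();
            if (node.state.isGoal()) return node;     // plan = actions along the parent links
            if (!explored.add(node.state)) continue;  // this state was already expanded
            expansions++;
            for (Action action : node.state.getLegalMoves()) {
                State next = node.state.successor(action);
                frontier.add(new Node(next, node, action,
                                      node.pathCost + node.state.cost(action)));
            }
        }
        return null; // no solution found
    }
}
```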
Material
The files in the archive are similar to those in the first lab.
The archive contains code for implementing an agent in the src directory. The agent is actually a server process which listens on some port and waits for the real robot or a simulator to send a message. It will then reply with the next action the robot is supposed to execute.
The zip file also contains the description of some example environments (vacuumcleaner*.gdl) and a simulator (gamecontroller-gui.jar). To test your agent:
- Start the simulator (execute gamecontroller-gui.jar either by double-clicking it or by running “java -jar gamecontroller-gui.jar” on the command line).
- Set up the simulator as shown in this picture:
- If there is a second role called “Random” in the game, leave it as “Random”.
- Run the “Main” class in the project. If you added your own agent class, make sure that it is used in the main method of Main.java. You can also execute “ant run” on the command line if you have Ant installed. The output of the agent should say “NanoHTTPD is listening on port 4001”, which indicates that your agent is ready and waiting for messages to arrive on the specified port.
- Now push the “Start” button in the simulator and your agent should get some messages and reply with the actions it wants to execute. At the end, the output of the simulator tells you how many points your agent got: “Game over! results: 0”. Those points correspond more or less to the negated cost of your solution. That is, more points mean the solution had a lower cost.
- If the output of the simulator contains any line starting with “SEVERE”, something is wrong. The two most common problems are a blocked network connection between the simulator and the agent (e.g., due to a firewall) and the agent sending illegal moves.
Of course, you shouldn't assume any fixed grid size, initial location or dirt locations in your implementation. All this information will be given to your agent, but it is not fixed.
Hints
For implementing your agent:
- Add a new class that implements the “Agent” interface. Look at RandomAgent.java to see how this is done.
- You have to implement the methods “init” and “nextAction”. “init” is called once at the start and should be used to run the search, while “nextAction” gets a collection of percepts as input (which you do not actually need here) and has to return the next action the agent is supposed to execute. So, if you found a solution in “init”, store it somewhere and then execute it step by step in “nextAction” (see the sketch after this list).
- Implement a class “State” that contains all the information about a state that you need to keep track of. The State class should have a method that returns a list of all moves that are legal in the state and a method that takes a move and returns the state that results from executing the move.
- Distinguish between nodes of the search tree and states! Each node has an associated state, but there may be several nodes that have the same state associated. In addition to a state, a node contains a reference to its parent (unless it is the root node), the move that was taken to get to the node and (if necessary) the path cost for getting from the root node to the node.
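Putting these hints together, the control flow could look roughly like the sketch below. The real Agent interface and the exact signatures of “init” and “nextAction” are defined in the provided archive (check RandomAgent.java); the version here only illustrates the pattern and uses assumed parameter and return types.

```java
import java.util.*;

// Illustration of the pattern only: the real Agent interface and the exact
// signatures of init and nextAction come with the archive (see
// RandomAgent.java); adapt this sketch accordingly.
public class SearchAgent implements Agent {

    private final Deque<String> plan = new ArrayDeque<>(); // actions still to execute

    public void init(/* environment description passed by the framework */) {
        // Run the search here (e.g., the bestFirst sketch above), then walk
        // the parent links of the returned goal node and add the actions to
        // the plan in the right order.
    }

    public String nextAction(Collection<String> percepts) {
        // The percepts are not needed for this assignment: simply replay the
        // plan computed in init, one action per call, ending with TURN_OFF.
        return plan.pollFirst();
    }
}
```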
For developing an admissible heuristic:
- Remember: The cost of the optimal solution of a relaxation of the problem is an admissible heuristic. How could you relax the problem such that you instantly know the cost of the optimal solution?
- The maximum of two admissible heuristics is again admissible. The same holds for consistent heuristics. So, if you have two admissible heuristics, you can simply take the maximum value of both and you will get an admissible heuristic that is at least as good as either one.
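As a concrete and deliberately simple example of these hints, here is one estimate that is admissible under the cost model above, sketched with hypothetical State methods: every dirty cell costs at least 1 (it is either sucked for 1 or incurs the 15-point penalty at shutdown), TURN_OFF itself costs at least 1, and a robot that is not at home must either walk back (at least the Manhattan distance in GO actions) or pay the 99-point surcharge for switching off elsewhere. These three parts count disjoint cost components, so they can be added without overestimating.

```java
// Hypothetical sketch; countDirtyCells, robotPosition, homePosition and the
// Position class are assumed names for your own model. The estimate assumes
// the robot is already switched on and not yet switched off (for a switched-
// off state the estimate should be 0).
int heuristic(State s) {
    int dirt = s.countDirtyCells();                               // >= 1 per dirty cell
    int home = Math.min(99, manhattan(s.robotPosition(), s.homePosition()));
    return dirt + 1 + home;                                       // + 1 for TURN_OFF itself
}

int manhattan(Position a, Position b) {
    return Math.abs(a.x - b.x) + Math.abs(a.y - b.y);
}
```

This estimate ignores turns, the cost of reaching the dirty cells and obstacles, so there is plenty of room for improvement; just keep it admissible and cheap to compute.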
Handing In
Please hand in a zip file containing:
- a short description of your heuristic and an explanation of why it is admissible
- the results of the experiments and which conclusions you draw from them
- the source code and an executable jar file for your agent
To create the zip file you can just edit the “student” property in the build.xml and call the “zip” target using ant (execute “ant zip” in the directory containing the build.xml).
The deadline is Monday, 11.02.2013. If you hand in after the deadline, your grade will be lowered by 0.5 for each day your submission is late.