public:t-622-arti-15-1:prog1: created 2015/01/22 16:28 by stephan; current version 2024/04/29 13:33 (external edit by 127.0.0.1)
====== Programming Assignment 1 - Search ======
  
**Use the [[https://piazza.com/ru.is/spring2015/t622arti|Piazza page]] if you have any questions or problems with the assignment. Start early, so you still have time to ask in case of problems!**
  
You can work on this assignment in a group (up to 4 students per group).
The environment is very similar to the one in [[public:t-622-arti-15-1:lab_1_-_agents|the first lab]]:
It is a rectangular grid of cells, each of which may contain dirt. Only now the grid may also contain obstacles. The agent is located in this grid and faces one of the four directions: north, south, east or west.

[[http://ggpserver.general-game-playing.de/ggpserver/public/view_state.jsp?matchID=vacuumcleaner_obstacles_3.1360524880799&stepNumber=2&role=AGENT|Here]] you see an example of what the environment might look like.

The agent can execute the following actions:
  * TURN_ON: This action initialises the robot and has to be executed first.
  
Your actions have the following costs:
  * 1 + 50*D, if you TURN_OFF the robot in the home location and there are D dirty cells left
  * 100 + 50*D, if you TURN_OFF the robot, but not in the home location and there are D dirty cells left
  * 5 for SUCK, if the current location of the robot does not contain dirt
  * 1 for SUCK, if the current location of the robot contains dirt
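As a sanity check for your implementation, the cost rules above can be encoded directly. This is only an illustrative sketch: the class name, method names and parameters below are assumptions, not part of the assignment's codebase.

```java
// Direct encoding of the cost rules listed above. Class/method names and
// the boolean/int parameters are illustrative assumptions, not a given API.
public class CostModel {

    /** Cost of TURN_OFF: 1 at home, 100 elsewhere, plus 50 per remaining dirty cell. */
    public static int turnOffCost(boolean atHome, int dirtyCellsLeft) {
        return (atHome ? 1 : 100) + 50 * dirtyCellsLeft;
    }

    /** Cost of SUCK: 1 on a dirty cell, 5 on a clean one. */
    public static int suckCost(boolean cellIsDirty) {
        return cellIsDirty ? 1 : 5;
    }
}
```

Encapsulating the cost rules in one place like this makes it easy to unit-test them and to reuse them both in the successor function and in the heuristic.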
    * Implement methods to compute the legal moves in a state and the successor state resulting from executing an action.
    * Implement the goal test, i.e., a method telling you whether a certain state fulfils all goal conditions.
  - Estimate the size of the state space assuming the environment has width W, length L and D dirty spots.
  - Assess the following blind search algorithms wrt. their completeness, optimality, space and time complexity in the given environment: Depth-First Search, Breadth-First Search, Uniform-Cost Search. If one of the algorithms is not complete, how could you fix it? **Note: Do this step before you implement the algorithms, so you know what to expect when you run the algorithms. Otherwise you might be very surprised.**
  - Implement the three algorithms and make sure to keep track of the number of state expansions and the maximum size of the frontier. Run all the given environments with the three algorithms and compare the results wrt.
    * maximum size of the frontier (space complexity)
    * quality (cost) of the found solution
    * computation time (in seconds) for finding a solution.
  - Think of a good (admissible) heuristic function for estimating the remaining cost given an arbitrary state of the environment. Make sure that your heuristic is really admissible!
  - Implement A*-Search with an evaluation function that uses your heuristic. Run your agent with all the given environments and compare the results with those of the blind search methods.
  - Post the results (number of state expansions, maximum size of the frontier, cost of the found solution) on Piazza to see how well you are doing compared to the other students.
  - Optionally (up to 10% bonus points): Implement detection of revisited states in A*-Search. Re-run the experiments and comment on the difference in the results.
  - Try to improve your heuristic while keeping it admissible, but make sure that it does not increase the overall runtime of the search algorithm. You can easily create a heuristic that tells you the optimal costs but is as expensive as a blind search. Try to avoid this.
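The bookkeeping asked for above (counting state expansions and the maximum frontier size) fits naturally into the main search loop. Below is a minimal uniform-cost search sketch on a toy weighted graph standing in for the assignment's state space; the class, the graph and all names are illustrative assumptions, not the required design.

```java
import java.util.*;

// Minimal uniform-cost search on a toy weighted graph (a stand-in for the
// assignment's state space), tracking the two statistics the tasks ask for:
// number of state expansions and maximum frontier size. All names here are
// illustrative assumptions.
public class UcsSketch {

    static final class Entry {
        final String node;
        final int cost;
        Entry(String node, int cost) { this.node = node; this.cost = cost; }
    }

    public static int expansions;
    public static int maxFrontier;

    /** Returns the cheapest path cost from start to goal, or -1 if unreachable. */
    public static int search(Map<String, Map<String, Integer>> graph,
                             String start, String goal) {
        expansions = 0;
        maxFrontier = 0;
        PriorityQueue<Entry> frontier =
            new PriorityQueue<>((a, b) -> Integer.compare(a.cost, b.cost));
        Map<String, Integer> bestCost = new HashMap<>();
        frontier.add(new Entry(start, 0));
        bestCost.put(start, 0);
        while (!frontier.isEmpty()) {
            maxFrontier = Math.max(maxFrontier, frontier.size());
            Entry e = frontier.poll();
            if (e.cost > bestCost.getOrDefault(e.node, Integer.MAX_VALUE))
                continue; // stale queue entry: a cheaper path was found already
            if (e.node.equals(goal)) return e.cost; // goal test on expansion
            expansions++;
            for (Map.Entry<String, Integer> s :
                     graph.getOrDefault(e.node, Map.of()).entrySet()) {
                int c = e.cost + s.getValue();
                if (c < bestCost.getOrDefault(s.getKey(), Integer.MAX_VALUE)) {
                    bestCost.put(s.getKey(), c);
                    frontier.add(new Entry(s.getKey(), c));
                }
            }
        }
        return -1;
    }

    /** Toy demo: cheapest cost from A to D in a four-node graph (1+1+2 = 4). */
    public static int demo() {
        Map<String, Map<String, Integer>> g = new HashMap<>();
        g.put("A", Map.of("B", 1, "C", 4));
        g.put("B", Map.of("C", 1));
        g.put("C", Map.of("D", 2));
        return search(g, "A", "D");
    }
}
```

Swapping the priority (path cost alone vs. path cost plus a heuristic) turns this same loop into A*-Search, which is one reason to keep the algorithms behind a common interface.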
  
===== Material =====
  * Implement a class "State" that contains all the information about a state that you need to keep track of. The State class should have a method that returns a list of all moves that are legal in the state and a method that takes a move and returns the state that results from executing the move.
  * Distinguish between nodes of the search tree and states! Each node has an associated state, but several nodes may have the same associated state. In addition to a state, a node contains a reference to its parent (unless it is the root node), the move that was taken to get to the node and (if necessary) the path cost for getting from the root node to the node.
  * Keep the code for all the algorithms and make it easy to switch between them (e.g., with a command line parameter or by changing a single line of code).
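The node/state separation described above can be sketched as follows. The "state" is just a String placeholder here (in the real agent it would be your State class), and the path costs in the demo are made up; everything is an illustrative assumption.

```java
import java.util.LinkedList;

// Sketch of the node/state separation from the hints above. The state is a
// String placeholder; names and costs are illustrative assumptions.
public class NodeSketch {

    /** A search-tree node: a state plus the bookkeeping the search needs. */
    static final class Node {
        final String state;  // placeholder for a real State object
        final Node parent;   // null for the root node
        final String move;   // move that led to this node, null for the root
        final int pathCost;  // g(n): cost of the path from the root

        Node(String state, Node parent, String move, int pathCost) {
            this.state = state;
            this.parent = parent;
            this.move = move;
            this.pathCost = pathCost;
        }
    }

    /** Follow parent pointers to recover the move sequence from root to node. */
    public static String extractPlan(Node node) {
        LinkedList<String> plan = new LinkedList<>();
        for (Node n = node; n.parent != null; n = n.parent)
            plan.addFirst(n.move);
        return String.join(",", plan);
    }

    /** Tiny demo: a two-step path built by hand (costs are made up). */
    public static String demo() {
        Node root = new Node("initial", null, null, 0);
        Node on = new Node("switched-on", root, "TURN_ON", 1);
        Node clean = new Node("cell-cleaned", on, "SUCK", 2);
        return extractPlan(clean);
    }
}
```

Because the plan is recovered by walking parent pointers, the search itself never needs to store full paths in the frontier, only nodes.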
  
For developing an admissible heuristic:
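As a hedged illustration of what a very simple admissible candidate might look like under the stated cost model (this is a sketch for orientation, not the heuristic you are asked to develop): each remaining dirty cell costs at least 1 in any plan, since it is either cleaned with one SUCK on dirt (cost 1) or left dirty and charged 50 at TURN_OFF, and TURN_OFF itself costs at least 1.

```java
// A deliberately simple candidate heuristic under the stated cost model.
// This is an illustrative sketch, not the assignment's required heuristic:
// every remaining dirty cell costs at least 1 (one SUCK on dirt, or 50 per
// cell at TURN_OFF if left dirty), and TURN_OFF costs at least 1, so
// D + 1 never overestimates while the robot is still switched on.
public class SimpleHeuristic {
    public static int estimate(boolean robotIsOn, int dirtyCellsLeft) {
        if (!robotIsOn) return 0;      // robot already off: nothing left to pay
        return dirtyCellsLeft + 1;     // D cleanups (>= 1 each) + TURN_OFF (>= 1)
    }
}
```

This candidate ignores all movement costs, so it is admissible but weak; tightening it (e.g., by also accounting for distances) without losing admissibility or making it as expensive as the search itself is exactly the challenge of the heuristic tasks.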
===== Handing In =====
Please hand in a zip file containing:
  * the source code and an executable jar file for your agent
  * a short report (pdf or txt file) with:
    * answers to questions 3 and 4
    * a short description of your heuristic and why you think it is admissible
    * the results of the experiments and the conclusions you draw from those results (tasks 5 and 7)

To create the zip file you can just edit the "student" property in the build.xml, put the report in the same directory as the build.xml, and call the "zip" target using ant (execute "ant zip" in the directory containing the build.xml).
  
The deadline is Tuesday, 03.02.2015. If you hand in after the deadline, your grade will be lowered by 1 for each day your submission is late.

===== Grading =====

  * 70% - implementation (correct implementation of the model, algorithms and heuristics, quality/readability of the code)
  * 30% - report (answers to questions and report on results)

Bonus points:
  * up to 5% for the agents that find the optimal solution for the most environments fastest
  * up to 10% for (correctly) implementing detection of revisited states and reporting on those results (task 9)