====== Programming Assignment 1 - Search ======

**Use the [[http://ruclasses.proboards.com/index.cgi?board=arti2012&action=display&thread=104|forum]] or email me if you have any questions or problems with the assignment. Start early, so you still have time to ask in case of problems!**

===== Problem Description =====
  * 100 + 15*D, if you TURN_OFF the robot, but not in the home location and there are D dirty cells left
  * 5 for SUCK, if the current location of the robot does not contain dirt
  * 1 for SUCK, if the current location of the robot contains dirt
  * 1 for all other actions (see the cost sketch below)
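A minimal sketch of how the cost cases listed above could be encoded. The action names and state accessors used here ("isHome()", "dirtyCellsLeft()", "currentCellDirty()") are assumptions for illustration, not part of the provided java project, and only the cases shown in this list are covered.

<code java>
// Hypothetical cost sketch covering only the cases listed above.
// Action names and the State accessors are assumptions, not the project's API.
public class CostSketch {

    enum Action { TURN_OFF, SUCK, OTHER }

    interface State {
        boolean isHome();            // is the robot in its home location?
        boolean currentCellDirty();  // does the current cell contain dirt?
        int dirtyCellsLeft();        // number of dirty cells left (D)
    }

    static int cost(Action action, State state) {
        if (action == Action.TURN_OFF && !state.isHome()) {
            return 100 + 15 * state.dirtyCellsLeft();   // 100 + 15*D
        }
        if (action == Action.SUCK) {
            return state.currentCellDirty() ? 1 : 5;    // 1 on dirt, 5 otherwise
        }
        // all other actions cost 1; the TURN_OFF-in-home case from the
        // full cost list is not repeated in this sketch
        return 1;
    }
}
</code>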
  
      * Implement methods to compute the legal moves in a state and the successor state resulting from executing an action.
      * Implement the goal test, i.e., a method telling you whether a certain state fulfills all goal conditions.
  - Assess the following blind search algorithms wrt. their completeness, optimality, space and time complexity in the given environment: Depth-First Search, Breadth-First Search, Uniform-Cost Search. If one of the algorithms is not complete, how could you fix it? **Note: Do this step first, before you implement the algorithms, so you know what to expect when you run them. Otherwise you might be very surprised.**
  - Implement the three algorithms and make sure to keep track of the number of state expansions and the maximum size of the frontier (see the search sketch below). Run all the given environments with the three algorithms and compare the results wrt.
    * number of state expansions (time complexity)
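The following is a minimal sketch (not the provided code base) of how the bookkeeping for breadth-first search could look; the "State" and "Node" classes and their methods are assumptions based on the implementation steps above. Depth-first search differs only in using the frontier as a stack, and uniform-cost search in using a priority queue ordered by path cost.

<code java>
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of breadth-first search with the required statistics.
// The State interface and Node class are assumptions, not the project's API.
public class BreadthFirstSketch {

    interface State {
        List<String> legalMoves();       // legal moves in this state
        State successor(String move);    // state resulting from executing a move
        boolean isGoal();                // goal test
        // State must implement equals()/hashCode() for the visited set to work
    }

    static class Node {
        final State state;
        final Node parent;    // null for the root node
        final String move;    // move that led to this node, null for the root
        Node(State state, Node parent, String move) {
            this.state = state; this.parent = parent; this.move = move;
        }
    }

    static Node breadthFirstSearch(State initial) {
        Deque<Node> frontier = new ArrayDeque<>();
        Set<State> visited = new HashSet<>();
        frontier.addLast(new Node(initial, null, null));
        visited.add(initial);

        int expansions = 0;                   // number of state expansions
        int maxFrontier = frontier.size();    // maximum size of the frontier

        while (!frontier.isEmpty()) {
            Node node = frontier.removeFirst();            // FIFO -> breadth-first
            if (node.state.isGoal()) {
                System.out.println("expansions=" + expansions
                        + ", maxFrontier=" + maxFrontier);
                return node;   // follow the parent links to extract the solution
            }
            expansions++;
            for (String move : node.state.legalMoves()) {
                State next = node.state.successor(move);
                if (visited.add(next)) {                   // skip already seen states
                    frontier.addLast(new Node(next, node, move));
                }
            }
            maxFrontier = Math.max(maxFrontier, frontier.size());
        }
        return null;   // no solution found
    }
}
</code>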
===== Material =====
  * {{:public:t-622-arti-12-1:vacuumcleaner_utility_agent.zip|Java project for the agent development}}
  * {{:public:t-622-arti-12-1:bigger_worlds.zip|some bigger test environments}} (updated 14.02.2012; see the worlds.txt file for the sizes of the state spaces)
  
The files in the archive are similar to those in the first lab.
  - Set up the simulator as shown in this picture:
    {{:public:t-622-arti-12-1:gamecontroller-settings.png?nolink&|}}
  - If there is a second role called "Random" in the game, leave it as "Random".
  - Run the "Main" class in the project. If you added your own agent class, make sure that it is used in the main method of Main.java. You can also execute "ant run" on the command line, if you have [[http://ant.apache.org/|Ant]] installed. The output of the agent should say "NanoHTTPD is listening on port 4001", which indicates that your agent is ready and waiting for messages to arrive on the specified port.
  - Now push the "Start" button in the simulator and your agent should get some messages and reply with the actions it wants to execute. At the end, the output of the simulator tells you how many points your agent got: "Game over! results: 0". Those points correspond more or less to negated costs. That is, more points means the solution had lower costs.
  * Add a new class that implements the "Agent" interface. Look at RandomAgent.java to see how this is done.
  * You have to implement the methods "init" and "nextAction". "init" will be called once at the start and should be used to run the search, while "nextAction" gets a collection of percepts as input (which you do not actually need here) and has to return the next action the agent is supposed to execute. So, if you found a solution in "init", you have to store it somewhere and then execute it step by step in "nextAction" (see the sketch below).
  * Implement a class "State" that contains all the information about a state that you need to keep track of. The State class should have a method that returns a list of all moves that are legal in the state and a method that takes a move and returns the state that results from executing the move.
  * Distinguish between nodes of the search tree and states! Each node has an associated state, but there may be several nodes with the same associated state. In addition to a state, a node contains a reference to its parent (unless it is the root node), the move that was taken to get to the node and (if necessary) the path cost for getting from the root node to the node.
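A sketch of this init/nextAction split; the method signatures are placeholders, so use the parameter and return types actually declared by the "Agent" interface in the project (see RandomAgent.java).

<code java>
import java.util.Collection;
import java.util.LinkedList;
import java.util.Queue;

// Sketch of the init/nextAction split described above.
// The method signatures are placeholders; use the ones declared by the
// project's "Agent" interface (see RandomAgent.java).
public class SearchAgentSketch /* implements Agent */ {

    // solution found in init(), replayed step by step in nextAction()
    private final Queue<String> plan = new LinkedList<>();

    public void init(/* parameters as declared by the Agent interface */) {
        // run the search here and store the resulting move sequence, e.g.:
        // plan.addAll(searchForSolution());   // searchForSolution() is hypothetical
    }

    public String nextAction(Collection<?> percepts) {
        // the percepts are not needed for this assignment; just replay the plan
        String next = plan.poll();
        return next != null ? next : "NOOP";   // "NOOP" is a placeholder fallback
    }
}
</code>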

For developing an admissible heuristic:
  * Remember: The cost of the optimal solution of a relaxation of the problem is an admissible heuristic. How could you relax the problem such that you instantly know the cost of the optimal solution?
  * The maximum of two admissible heuristics is again admissible. The same holds for consistent heuristics. So, if you have two admissible heuristics, you can simply take the maximum value of both and you will get an admissible heuristic that is at least as good (see the sketch below).
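A sketch of the second hint (the "Heuristic" and "State" types and the "estimate" method are placeholders): the pointwise maximum of two admissible heuristics is admissible and at least as informative as either one alone, so combining two relaxation-based estimates this way never hurts.

<code java>
// Sketch of combining two admissible heuristics by taking their maximum.
// The Heuristic and State types are placeholders, not the project's API.
public class MaxHeuristicSketch {

    interface State { }   // placeholder for your State class

    interface Heuristic {
        int estimate(State state);   // admissible estimate of the remaining cost
    }

    // The pointwise maximum of two admissible heuristics is admissible
    // and dominates each of them.
    static Heuristic max(Heuristic h1, Heuristic h2) {
        return state -> Math.max(h1.estimate(state), h2.estimate(state));
    }
}
</code>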
  
===== Handing In =====
To create the zip file, you just edit the "student" property in the build.xml and call the "zip" target using ant (execute "ant zip" in the directory containing the build.xml).
  
The deadline is Tuesday, 14.02.2012. You can still hand in after the deadline, but your grade will be lowered by 0.5 for each day your submission is late.