====== Programming Assignment 1 - Search ======

**Use the [[https://

You can work on this assignment in a group (up to 4 students per group).
The environment is very similar to the one in [[public:

It is a rectangular grid of cells, each of which may contain dirt. Only now the grid may also contain obstacles. The agent is located in this grid and faces in one of the four directions: north, south, east or west.

[[http://

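Since several of the tasks below depend on comparing states (the goal test, and optionally revisited-state detection), it helps to make the state immutable with value equality. The following is only a minimal sketch; the class and member names (''State'', ''Orientation'', ''dirt'') are illustrative assumptions, not prescribed by the assignment, and dirty cells are encoded as ''"x,y"'' strings purely for brevity.

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

// Illustrative sketch of an immutable state for the vacuum world.
public final class State {
    public enum Orientation { NORTH, EAST, SOUTH, WEST }

    final int x, y;               // agent position on the grid
    final Orientation facing;     // direction the agent is facing
    final Set<String> dirt;       // remaining dirty cells, encoded as "x,y"
    final boolean turnedOn;       // whether TURN_ON has been executed

    public State(int x, int y, Orientation facing, Set<String> dirt, boolean turnedOn) {
        this.x = x;
        this.y = y;
        this.facing = facing;
        this.dirt = Collections.unmodifiableSet(new HashSet<>(dirt));
        this.turnedOn = turnedOn;
    }

    // Value equality lets search algorithms detect that two nodes share a state.
    @Override public boolean equals(Object o) {
        if (!(o instanceof State)) return false;
        State s = (State) o;
        return x == s.x && y == s.y && facing == s.facing
                && turnedOn == s.turnedOn && dirt.equals(s.dirt);
    }

    @Override public int hashCode() {
        return Objects.hash(x, y, facing, dirt, turnedOn);
    }
}
```

Keeping the state immutable means the successor function returns a fresh ''State'' instead of mutating the current one, which avoids subtle bugs when the same state sits in several tree nodes.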
The agent can execute the following actions:
  * TURN_ON: This action initialises the robot and has to be executed first.
Your actions have the following costs:
  * 1 + 50*D, if you TURN_OFF the robot in the home location and there are D dirty cells left
  * 100 + 50*D, if you TURN_OFF the robot, but not in the home location and there are D dirty cells left
  * 5 for SUCK, if the current location of the robot does not contain dirt
  * 1 for SUCK, if the current location of the robot contains dirt
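The cost model above fits in a single function. This is a hedged sketch: the TURN_OFF and SUCK costs are taken from the list above, while the default cost of 1 for the remaining actions (movement and turning) is an assumption of this sketch — check the complete cost list in the assignment.

```java
// Hedged sketch of the action-cost model described above.
public final class Costs {
    public static int cost(String action, boolean atHome, boolean onDirt, int dirtLeft) {
        switch (action) {
            case "TURN_OFF":
                // 1 at home, 100 elsewhere, plus 50 per dirty cell left behind
                return (atHome ? 1 : 100) + 50 * dirtLeft;
            case "SUCK":
                // 1 on a dirty cell, 5 on a clean one
                return onDirt ? 1 : 5;
            default:
                // Assumed cost for the remaining actions (movement/turning).
                return 1;
        }
    }
}
```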
  * Implement methods to compute the legal moves in a state and the successor state resulting from executing an action.
  * Implement the goal test, i.e., a method telling you whether a certain state fulfils all goal conditions.
  - Estimate the size of the state space assuming the environment has width W, length L and D dirty spots.
  - Assess the following blind search algorithms wrt. their completeness,
  - Implement the three algorithms and make sure to keep track of the number of state expansions and the maximum size of the frontier. Run all the given environments with the three algorithms and compare the results wrt.
  * maximum size of the frontier (space complexity)
  * quality (cost) of the found solution
  * computation time (in seconds) for finding a solution
  - Think of a good (admissible) heuristic function for estimating the remaining cost given an arbitrary state of the environment. Make sure that your heuristic is really admissible!
  - Implement A*-Search with an evaluation function that uses your heuristic. Run your agent with all the given environments and compare the results with those of the blind search methods.
  - Post the results (number of state expansions, maximum size of the frontier, cost of the found solution) on Piazza, to see how well you are doing compared to the other students.
  - Optionally (up to 10% bonus points): Implement detection of revisited states in A*-Search. Re-run the experiments and comment on the difference in the results.
  - Try to improve your heuristic, but make sure that it does not increase the overall runtime of the search algorithm. You can easily create a heuristic that tells you the optimal costs but is as expensive as a blind search. Try to avoid this.
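The blind algorithms and A* differ only in how the frontier orders nodes, so one skeleton can serve all of them while tracking the two statistics asked for above (state expansions and maximum frontier size). The sketch below is illustrative, not a required interface: with f(n) = g(n) it behaves as uniform-cost search, and with f(n) = g(n) + h(n) as A*. It deliberately omits revisited-state detection, which is the optional task above.

```java
import java.util.*;
import java.util.function.*;

// Illustrative best-first search skeleton; the evaluation function f decides
// which algorithm it implements (uniform-cost, A*, ...).
public final class BestFirst<S, A> {
    public int expansions = 0;     // number of expanded nodes
    public int maxFrontier = 0;    // peak size of the frontier

    public static final class Node<S, A> {
        public final S state;
        public final Node<S, A> parent;  // null for the root
        public final A action;           // action that led here, null for the root
        public final double g;           // path cost from the root
        Node(S s, Node<S, A> p, A a, double g) {
            state = s; parent = p; action = a; this.g = g;
        }
    }

    public Node<S, A> search(S start,
                             Function<S, List<A>> legalMoves,
                             BiFunction<S, A, S> successor,
                             BiFunction<S, A, Double> stepCost,
                             Predicate<S> goalTest,
                             ToDoubleFunction<Node<S, A>> f) {
        PriorityQueue<Node<S, A>> frontier =
                new PriorityQueue<>(Comparator.comparingDouble(f));
        frontier.add(new Node<>(start, null, null, 0.0));
        while (!frontier.isEmpty()) {
            maxFrontier = Math.max(maxFrontier, frontier.size());
            Node<S, A> node = frontier.poll();
            if (goalTest.test(node.state)) return node;  // goal test on expansion
            expansions++;
            for (A a : legalMoves.apply(node.state)) {
                S next = successor.apply(node.state, a);
                frontier.add(new Node<>(next, node, a,
                        node.g + stepCost.apply(node.state, a)));
            }
        }
        return null; // frontier exhausted, no solution
    }
}
```

A toy usage: on the problem "reach 5 from 0 by adding 1 or 2, each step costing 1", uniform-cost search (f(n) = g(n)) finds a cheapest plan of cost 3.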
===== Material =====
  * Implement a class "
  * Distinguish between nodes of the search tree and states! Each node has an associated state, but several nodes may have the same associated state. In addition to a state, a node contains a reference to its parent (unless it is the root node), the move that was taken to get to the node and (if necessary) the path cost of getting from the root node to the node.
  * Keep the code for all the algorithms and make it easy to switch between them (e.g., with a command line parameter or by changing a single line of code).
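The node structure described in the hints above can be sketched as follows (the names are illustrative, not prescribed): the plan is recovered by following parent references from the goal node back to the root.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Illustrative search-tree node: a state plus bookkeeping, as described above.
public final class SearchNode<S, A> {
    public final S state;
    public final SearchNode<S, A> parent;  // null for the root node
    public final A move;                   // move taken to reach this node
    public final double pathCost;          // cost from the root to this node

    public SearchNode(S state, SearchNode<S, A> parent, A move, double pathCost) {
        this.state = state;
        this.parent = parent;
        this.move = move;
        this.pathCost = pathCost;
    }

    // Walks the parent references back to the root and returns the move
    // sequence from the root to this node.
    public List<A> extractPlan() {
        Deque<A> plan = new ArrayDeque<>();
        for (SearchNode<S, A> n = this; n.parent != null; n = n.parent) {
            plan.addFirst(n.move);
        }
        return new ArrayList<>(plan);
    }
}
```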
For developing an admissible heuristic:
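As a hedged example of an admissible estimate: counting the remaining dirty cells never overestimates the remaining cost, because each dirty cell contributes at least 1 to any completing plan — either a SUCK of cost 1 when it is cleaned, or a 50-point penalty per cell at TURN_OFF if it is left dirty. This is only a baseline to improve on, not a suggested final answer.

```java
import java.util.Set;

// Hedged example of a simple admissible heuristic: the number of remaining
// dirty cells. Each dirty cell costs at least 1 before the plan can end
// (SUCK costs 1; leaving it dirty costs 50 at TURN_OFF), so this is a
// lower bound on the remaining cost.
public final class DirtCountHeuristic {
    public static int estimate(Set<String> dirtyCells) {
        return dirtyCells.size();
    }
}
```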
===== Handing In =====

Please hand in a zip file containing:
  * the source code and an executable jar file for your agent
  * a short report (pdf or txt file) with:
    * answers to questions 3 and 4
    * a short description of your heuristic and why you think it is admissible
    * the results of the experiments and the conclusions you draw from those results (tasks 5 and 7)

To create the zip file you can just edit the "
The deadline is Tuesday, 03.02.2015. If you hand in after the deadline, your grade will be lowered by 1 for each day your submission is late.

===== Grading =====

  * 70% - implementation (correct implementation of the model, algorithms and heuristics, quality/
  * 30% - report (answers to questions and report on results)

Bonus points:
  * up to 5% for the agents that find the optimal solution for the most environments fastest
  * up to 10% for (correctly) implementing detection of revisited states and reporting on those results (task 9)