public:t-622-arti-12-1:lab_1_-_agents — revised 2012/01/19 17:16 by stephan; current version 2024/04/29 13:33 (external edit)
====== Lab 1 - Agents ======

===== Problem Description =====
Implement the control program for a vacuum cleaner agent.
The environment is a rectangular grid of cells which may contain dirt or an obstacle. The agent is located in this grid and faces one of the four directions: north, south, east or west.
The agent can execute the following actions:
  * TURN_ON: initialises the robot; this action has to be executed first.
  * TURN_RIGHT, TURN_LEFT: lets the robot rotate 90° clockwise/counter-clockwise.
  * GO: lets the agent attempt to move to the next cell in the direction it is currently facing.
  * SUCK: sucks up the dirt in the current cell.
  * TURN_OFF: turns the robot off. Once turned off, it can only be turned on again after emptying its dust container manually.

The robot is equipped with a dust sensor and a touch sensor. If there is dirt at the robot's current location, the agent will sense "DIRT". If the robot bumps into an obstacle, the agent will sense "BUMP".
The goal is to clean all cells and return to the initial location before turning the robot off. Note that a full charge of the robot's battery will only last for 100 actions.
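The action and percept vocabulary above can be written down before any strategy work starts. The following sketch models it with plain Java enums; the names `VacuumTypes`, `Direction`, `right()` and `left()` are illustrative assumptions, not the project's actual API:

```java
// Sketch: the six actions and two percepts from the description as Java enums.
// These names are assumptions for illustration; the actual project may
// represent actions and percepts differently (e.g., as strings).
public class VacuumTypes {
    public enum Action { TURN_ON, TURN_RIGHT, TURN_LEFT, GO, SUCK, TURN_OFF }
    public enum Percept { DIRT, BUMP }

    // The four facings in clockwise order, so TURN_RIGHT is one step
    // forward and TURN_LEFT is one step backward through the values.
    public enum Direction {
        NORTH, EAST, SOUTH, WEST;
        public Direction right() { return values()[(ordinal() + 1) % 4]; }
        public Direction left()  { return values()[(ordinal() + 3) % 4]; }
    }
}
```

Encoding the directions in clockwise order makes the turn actions a single modular increment instead of a four-way case distinction.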
  
===== Tasks =====
  - Develop a simple reflex vacuum cleaner agent. Is it rational? (Ok, this is a bit of a trick task. You should find out that it is not actually possible to implement this with a simple reflex agent.)
  - Develop a model-based reflex agent that is able to clean every cell. Is it rational?
  
To make it a bit easier you can use the following assumptions:
  * The room is rectangular. It has only 4 straight walls that meet at right angles, and there are no obstacles in the room. That is, the strategy "go until you bump into a wall, then turn right and repeat" will make the agent walk straight to a wall and then around the room along the walls.
  * The room is fairly small, so that 100 actions are enough to visit every cell, suck up all the dirt and return home, given a halfway decent algorithm.
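The quoted strategy can be simulated in a few lines to see what it does and does not cover. The sketch below uses made-up names and a made-up grid representation; it shows that from the centre of a 3x3 room the rule happens to reach every cell, but in larger rooms it only traces the walls, so the tasks above need a more systematic traversal:

```java
// Tiny simulation of "go until you bump into a wall, then turn right and
// repeat" on an obstacle-free square grid. All names and the coordinate
// scheme are illustrative assumptions, not part of the lab project.
public class WallFollowSim {
    static final int[] DX = {0, 1, 0, -1};   // x offset for N, E, S, W
    static final int[] DY = {1, 0, -1, 0};   // y offset for N, E, S, W

    // Runs the rule for `steps` moves starting at (x, y) facing north on a
    // size-by-size grid and returns how many distinct cells were visited.
    public static int run(int x, int y, int size, int steps) {
        boolean[][] visited = new boolean[size][size];
        visited[x][y] = true;
        int count = 1;
        int dir = 0;                          // 0 = north
        for (int i = 0; i < steps; i++) {
            int nx = x + DX[dir], ny = y + DY[dir];
            if (nx < 0 || nx >= size || ny < 0 || ny >= size) {
                dir = (dir + 1) % 4;          // bumped into a wall: turn right
            } else {
                x = nx; y = ny;               // moved into the next cell
                if (!visited[x][y]) { visited[x][y] = true; count++; }
            }
        }
        return count;
    }
}
```

From the centre of a 3x3 grid this covers all 9 cells, but from the centre of a 5x5 grid it reaches only 18 of 25: interior cells away from the start column are never visited.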
  
===== Material =====
  * {{:public:t-622-arti-12-1:vacuumcleaner.zip|Java project for the agent development}}
  * {{:public:t-622-arti-12-1:gdls.zip|some environment descriptions (one fixed and two with a random initial location)}}
  
The zip file contains code for implementing an agent in the src directory. The agent is actually a server process that listens on some port and waits for the real robot or a simulator to send a message. It then replies with the next action the robot is supposed to execute.
  
The zip file also contains the description of an example environment (vacuumcleaner.gdl) and a simulator (gamecontroller-gui.jar). To test your agent:
  - Start the simulator (execute gamecontroller-gui.jar either by double-clicking it or with the command "java -jar gamecontroller-gui.jar" on the command line).
  - Set up the simulator as shown in this picture:
    {{:public:t-622-arti-12-1:gamecontroller-settings.png?nolink&|}}
  - Run the "Main" class in the project. If you added your own agent class, make sure that it is used in the main method of Main.java. You can also execute "ant run" on the command line if you have [[http://ant.apache.org/|Ant]] installed.
    The output of the agent should say "NanoHTTPD is listening on port 4001", which indicates that your agent is ready and waiting for messages to arrive on the specified port.
  - Now push the "Start" button in the simulator and your agent should get some messages and reply with the actions it wants to execute. At the end, the output of the simulator tells you how many points your agent got: "Game over! results: 0". In the given environment you will only get non-zero points if you manage to clean everything, return to the initial location, and turn off the robot within 100 steps. If the output of the simulator contains any line starting with "SEVERE", something is wrong. The two most common problems are the network connection between the simulator and the agent (e.g., blocked by a firewall) or the agent sending illegal moves.
  
You can see [[http://130.208.241.192/ggpserver/public/view_state.jsp?matchID=vacuum_cleaner_1.1326993828477&stepNumber=2&role=RANDOM|here]] what the example environment looks like. Of course, you shouldn't assume any fixed size, initial location or locations of the dirt in your implementation. This is just an example environment.
  
[[http://ruclasses.proboards.com/index.cgi?action=gotopost&board=arti2012&thread=103&post=853|Here]] I described how to visualise what your agent is doing.
  
===== Hints =====
For implementing your agent:
  * Add a new class that implements the "Agent" interface. Look at RandomAgent.java to see how this is done.
  * You have to implement the method "nextAction", which gets a collection of percepts as input and has to return the next action the agent is supposed to execute.
  * Before you start programming a complicated strategy, think about it. The things your agent has to do are:
     - execute TURN_ON
     - visit every cell and suck up any dirt it finds on the way
     - return to the initial location
     - execute TURN_OFF
  * For this your agent needs an internal model of the world. Figure out what you need to remember about the current state of the world. Ideally, you create an object "State" that contains everything you need to remember, and update this object depending on which action you executed and which percepts you got. Then you can implement rules that say which action should be executed depending on the properties of the current state.
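The last hint can be sketched as a minimal "State" class. The representation below is an assumption for illustration (the real project's Agent interface and percept types differ): it tracks the pose relative to the start cell and updates it from the executed action and whether a BUMP was perceived.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of an internal world model, as suggested in the hints above.
// Field and method names are made up; adapt them to the project's
// actual Agent interface and percepts.
public class State {
    public int x = 0, y = 0;   // position relative to the initial cell
    public int dir = 0;        // 0 = north, 1 = east, 2 = south, 3 = west
    public Set<String> visited = new HashSet<>();

    // Update the model after executing `action`, given whether the
    // resulting percepts contained BUMP.
    public void update(String action, boolean bump) {
        switch (action) {
            case "TURN_RIGHT": dir = (dir + 1) % 4; break;
            case "TURN_LEFT":  dir = (dir + 3) % 4; break;
            case "GO":
                if (!bump) {               // a bump means we did not move
                    if (dir == 0) y++;
                    else if (dir == 1) x++;
                    else if (dir == 2) y--;
                    else x--;
                }
                break;
            default: break;                // TURN_ON, SUCK, TURN_OFF: no move
        }
        visited.add(x + "," + y);
    }

    // True when the agent is back at its initial cell, so it may TURN_OFF.
    public boolean atHome() { return x == 0 && y == 0; }
}
```

Note that the position is tracked relative to the start cell, since the agent never learns its absolute coordinates; "return to the initial location" then simply means returning to (0, 0) in this relative frame.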
  
As a last and general hint: "Days of programming can save you minutes of thinking." Think of a strategy, the rules to implement it, and the information you need to decide on the actions **before** you start implementing it.
  