Lab 2: Comparing Agent Programs (WARNING: work in progress)
In this lab session we will finish lab 1 and write a new kind of agent program that maintains a state.
- Getting Started:
Start by getting the file agents2.lisp. This is largely the same file as we had for the last lab session, except for some fixes to accommodate stateful agents and code to run experiments.- To run the experiments later, you can reuse both functions from lab 1, random-agent-program and random-agent-measure (that means you can copy them into the new file agents2.lisp).
- You only have to change the declaration of random-agent-program to accommodate a small change in the way the function is called during experiments:
;; OLD declaration
(defun random-agent-program (percept)
;; NEW declaration
(defun random-agent-program (agent percept)
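For example, the updated stateless program could look as follows. This is only a sketch: the action symbols are assumptions, so use whatever action names your lab 1 version returned; the only real change is the extra AGENT parameter, which a stateless program simply ignores.

```lisp
;; Sketch of the updated stateless program. The actions listed here
;; (UP, DOWN, LEFT, RIGHT, SUCK) are placeholders -- keep the ones
;; your lab 1 program used.
(defun random-agent-program (agent percept)
  (declare (ignore agent percept))   ; stateless: both arguments unused
  (let ((actions '(up down left right suck)))
    (nth (random (length actions)) actions)))
```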
- Creating a Stateful Agent Program:
Fill in the function RANDOM-AGENT-WITH-STATE-PROGRAM, found in agents2.lisp. This should be the program of an agent that:- Uses the STATE field of the agent struct to remember which squares it has already visited (and possibly cleaned). Remember you can use this field in this way:
;; HINT: getting the value of the state field in the agent structure
(agent-state agent)
;; HINT: setting a new value for the state field
(setf (agent-state agent) <new value>)
- The agent should walk randomly, but it should avoid revisiting squares it has already visited (and that are therefore already clean).
- The agent's objective is to clean the whole environment.
- If the agent is sure everything is clean, it should return the action 'idle.
- Be careful not to be too strict: if you make it absolutely impossible for the agent to travel through clean squares, can it get into trouble so that it is unable to meet its objective? If so, find a solution.
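One possible shape for the stateful program is sketched below. It makes several assumptions you should check against agents2.lisp: the percept is a list (LOCATION DIRTY-P) where LOCATION is an (X Y) pair, the STATE field starts as NIL, and the visited squares are stored as a list in the state field. The minimal defstruct is included only to make the sketch self-contained; use the real struct from the lab file. Note the sketch prefers unvisited neighbours but falls back to any move (so it cannot trap itself), and it does not yet detect that everything is clean and return 'idle -- that part depends on the grid size and is left for you.

```lisp
;; Minimal stand-in for the struct defined in agents2.lisp, included
;; only so this sketch runs on its own.
(defstruct agent program state)

(defun neighbors (loc)
  ;; Assumed coordinate scheme: LOC is a list (X Y).
  ;; Returns (ACTION . LOCATION) pairs for the four moves.
  (destructuring-bind (x y) loc
    (list (cons 'up    (list x (1- y)))
          (cons 'down  (list x (1+ y)))
          (cons 'left  (list (1- x) y))
          (cons 'right (list (1+ x) y)))))

(defun random-agent-with-state-program (agent percept)
  (destructuring-bind (location dirty-p) percept
    ;; remember the current square in the state field
    (pushnew location (agent-state agent) :test #'equal)
    (if dirty-p
        'suck                          ; clean before moving on
        (let* ((moves (neighbors location))
               ;; moves leading to squares we have not seen yet
               (fresh (remove-if (lambda (m)
                                   (member (cdr m) (agent-state agent)
                                           :test #'equal))
                                 moves)))
          ;; prefer a fresh square, but fall back to any move so the
          ;; agent can still cross clean squares and cannot get stuck
          (car (nth (random (length (or fresh moves)))
                    (or fresh moves)))))))
```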
- Use the functions at the bottom of agents2.lisp to run experiments and evaluate your agent. To try your agent on all possible 3×3 environments with two dirty squares, enter this in the REPL:
(run-experiment (generate-environments 3 3 2)
                (lambda () (make-agent :program #'random-agent-with-state-program))
                #'perf-measure
                1000)
This will return the number of experiments performed, and the average performance value.
- Try different agents (e.g. stateless random walk) and different performance measures. Does the stateful agent behave better than the others under any particular performance measure? You can add print commands to the experiment functions to get more information about the agents' behaviour if you like.
- BONUS If you finish the stateful agent and the experiments, try creating an agent that moves through the environment in some orderly pattern. Make sure it visits all squares and cleans the dirty ones. Is the orderly agent more efficient than the random one?
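One orderly pattern is a boustrophedon sweep: go right along even rows, left along odd rows, stepping down between them. The sketch below assumes 0-indexed (X Y) locations with the agent starting in the top-left corner, a percept of the form ((X Y) DIRTY-P), and hard-coded placeholder grid dimensions; adapt all of these to the actual environment in agents2.lisp.

```lisp
;; Placeholder grid dimensions -- replace with the real environment size.
(defparameter +width+ 3)
(defparameter +height+ 3)

(defun orderly-agent-program (agent percept)
  (declare (ignore agent))             ; the path follows from the location alone
  (destructuring-bind ((x y) dirty-p) percept
    (cond (dirty-p 'suck)
          ;; even rows sweep right, odd rows sweep left, then step down
          ((and (evenp y) (< x (1- +width+))) 'right)
          ((and (oddp y)  (> x 0))            'left)
          ((< y (1- +height+))                'down)
          ;; bottom corner reached: every square has been visited
          (t 'idle))))
```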