Lab 2: Comparing Agent Programs (WARNING: work in progress)

In this lab session we will finish lab 1 and write a new kind of agent program that maintains a state.

  1. Getting Started:
    Start by getting the file agents2.lisp. This is largely the same file as we had for the last lab session, except for some fixes to accommodate stateful agents and code to run experiments.
    1. In order to run the experiments later, you can reuse both functions random-agent-program and random-agent-measure from lab 1 (that means you can copy them into the new file agents2.lisp =));
    2. You only have to change the random-agent-program declaration in order to accommodate a small change in the way the function is called during experiments.
      ;; OLD declaration
      (defun random-agent-program (percept)
      ;; NEW declaration
      (defun random-agent-program (agent percept)
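      For comparison, here is a minimal sketch of how the updated program might look. Only the parameter list matters for the experiments; the body is whatever you wrote in lab 1, and DIRTYP below is a hypothetical helper standing in for however your percept encodes dirt.
      ;; Minimal sketch of the new two-argument form; DIRTYP is a
      ;; hypothetical stand-in for your own dirt test from lab 1.
      (defun random-agent-program (agent percept)
        (declare (ignore agent))                      ; stateless agent: the struct is unused
        (if (dirtyp percept)
            'suck                                     ; clean the current square
            (nth (random 4) '(right left up down))))  ; otherwise move at random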
  2. Creating a Stateful Agent Program:
    Fill in the function RANDOM-AGENT-WITH-STATE-PROGRAM, found in agents2.lisp. This should be the program of an agent that:
    • Uses the STATE field of the agent struct to remember which squares it has already visited (and possibly cleaned). Remember that you can use this field in this way:
      ;; HINT: getting the value of the state field in the agent structure
      (agent-state agent)
      ;; HINT: setting a new value for the state field
      (setf (agent-state agent) <new value>)
    • The agent should walk randomly, but it should avoid revisiting squares it has already visited (and therefore already cleaned).
    • The agent's objective is to clean the whole environment.
    • The agent should return the 'suck action when visiting a dirty square, and otherwise one of the following actions: 'right, 'left, 'up or 'down.
    • If the agent is sure everything is clean, it should return the action 'idle (and keep returning it forever).
    • Be careful not to make the agent too strict: e.g. if it is absolutely impossible for the agent to travel through already-visited (clean) squares, can it get into trouble and become unable to meet its objective? If so, find a solution.
    • NOTE: in order to keep track of squares already seen, you can use two helper functions (see agents2.lisp for more details): SQUARE-ID and GET-NEIGHBOURS. As an example, consider a 3×3 environment (w × h, width and height = 3). A rough skeleton of one possible shape for this program is sketched below.
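    The following skeleton is only a sketch, not the intended solution: it assumes that SQUARE-ID can be applied to the percept to get an identifier for the current square and that GET-NEIGHBOURS maps such an identifier to the ids of adjacent squares (check agents2.lisp for their actual arguments), and it again uses the hypothetical DIRTYP helper for the dirt test.
      ;; Rough skeleton only -- the argument conventions of SQUARE-ID,
      ;; GET-NEIGHBOURS and the hypothetical DIRTYP are assumptions.
      (defun random-agent-with-state-program (agent percept)
        (let ((here (square-id percept)))        ; id of the current square (assumed call)
          (pushnew here (agent-state agent))     ; remember it as visited
          (cond ((dirtyp percept) 'suck)         ; always clean a dirty square first
                ;; TODO: prefer a move towards an unvisited neighbour, comparing
                ;; (get-neighbours here) against (agent-state agent); return 'idle
                ;; once you are sure every square is clean, and fall back to a
                ;; random move so the agent can never get stuck.
                (t (nth (random 4) '(right left up down))))))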
  3. Use the functions at the bottom of agents2.lisp to run experiments and evaluate your agent. To try your agent on all possible 3×3 environments with two dirty squares, enter this in the REPL:
    (run-experiment (generate-environments 3 3 2)
                    (lambda () (make-agent :program #'random-agent-with-state-program))
                    #'perf-measure
                    1000)

    This will return the number of experiments performed, and the average performance value.

  4. Try different agents (e.g. a stateless random walk) and different performance measures. Does the stateful agent behave better than the others with any particular performance measure? You can add print commands to the experiment functions to get more information about the agents' behaviour if you like; a small example of such a trace printout is sketched below.
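    One quick way to get such extra information is to wrap an existing agent program and print what it sees and does; the sketch below only relies on the standard FORMAT function, and TRACED-AGENT-PROGRAM is just an illustrative name. Pass it as :program to make-agent in place of the original program.
    ;; Example of instrumenting an agent program with a trace printout;
    ;; the same FORMAT idea works inside the experiment functions too.
    (defun traced-agent-program (agent percept)
      (let ((action (random-agent-with-state-program agent percept)))
        (format t "~&percept: ~a  state: ~a  action: ~a~%"
                percept (agent-state agent) action)
        action))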
  5. BONUS: If you finish the stateful agent and the experiments, try creating an agent that moves through the environment in some orderly pattern. Make sure it visits all squares and cleans the dirty ones. Is the orderly agent more efficient than the random one? A possible starting point is sketched below.
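    As a possible starting point (not a full solution), the sketch below stores a precomputed snake route for a 3×3 environment in the STATE field and pops one action per step. It assumes the agent starts in the top-left corner, that 'down moves it one row away from that corner, that the STATE field starts out as NIL, and it again uses the hypothetical DIRTYP helper; adapt all of this to the real environment representation in agents2.lisp.
    ;; Orderly (snake-pattern) agent for a fixed 3x3 environment.  The route
    ;; and starting corner are assumptions, not something agents2.lisp defines.
    (defun ordered-agent-program (agent percept)
      (when (null (agent-state agent))
        ;; remaining route, ending in a permanent IDLE
        (setf (agent-state agent)
              (list 'right 'right 'down 'left 'left 'down 'right 'right 'idle)))
      (cond ((dirtyp percept) 'suck)              ; clean before moving on
            ((rest (agent-state agent))           ; route not yet exhausted
             (pop (agent-state agent)))
            (t 'idle)))                           ; keep the final IDLE forever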