Lab 2 materials: Comparing Agent Programs

In lab 2 we will finish lab 1 and write a new kind of agent program: one that maintains internal state.

  1. Start by getting the file agents2.lisp. This is largely the same file as the one from the last lab session, except for some fixes to accommodate stateful agents and new code for running experiments.
  2. First finish the steps in lab 1, including
    • writing an agent that walks about randomly
    • writing a performance evaluation function that, at the end of the run, gives 20 points for each clean square and deducts one point for each move made. You can get the number of moves made from the environment like this:
      (env-moves-so-far env)
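As a starting point, the performance measure described above might be sketched as follows. Only ENV-MOVES-SO-FAR is given in the text; the ENV structure and its GRID slot below are stand-ins for illustration, so adapt the sketch to the environment structure actually defined in agents2.lisp rather than redefining it:

```lisp
;; Illustrative stand-in for the environment structure; agents2.lisp
;; defines the real one.
(defstruct env
  (moves-so-far 0)   ; number of moves made so far
  (grid nil))        ; assumed: a list of squares, each :clean or :dirty

(defun perf-measure (env)
  ;; 20 points for each clean square, minus one point per move made.
  (- (* 20 (count :clean (env-grid env)))
     (env-moves-so-far env)))
```

For example, an environment with two clean squares after five moves would score 20 × 2 − 5 = 35.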
  3. After you have finished lab 1, fill in the function RANDOM-WALK-WITH-STATE, found in agents2.lisp. This should be the program of an agent that:
    • Uses the STATE field of the agent struct to remember what squares are already clean. You can use this field like this:
      (agent-state agent)  ;; returns the current state
      (setf (agent-state agent) <new value>)  ;; updates the state to a new value
    • The agent should walk randomly, but it should avoid revisiting squares that are already clean.
    • The agent's objective should be to clean the whole environment.
    • If the agent is sure that everything is clean, it should return the action :idle.
    • Be careful not to be too strict: if you make it absolutely impossible for the agent to travel through clean squares, can it get into a situation where it cannot meet its objective? If so, find a solution.
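One possible shape for RANDOM-WALK-WITH-STATE is sketched below. The percept format (LOCATION DIRTY-P), the action names, the NEXT-SQUARE helper, and the AGENT structure here are all assumptions made so the sketch is self-contained; adapt them to the actual conventions in agents2.lisp. The sketch records clean squares in the state and prefers unvisited neighbours, but falls back to a random move when every neighbour is already clean, so the agent can still travel through clean territory:

```lisp
;; Minimal stand-in; agents2.lisp defines the real AGENT structure.
(defstruct agent
  (program nil)
  (state nil))   ; here: list of squares known to be clean

(defun next-square (location move)
  ;; Assumed geometry: LOCATION is a (COL . ROW) cons cell.
  (destructuring-bind (col . row) location
    (ecase move
      (:up    (cons col (1- row)))
      (:down  (cons col (1+ row)))
      (:left  (cons (1- col) row))
      (:right (cons (1+ col) row)))))

(defun random-walk-with-state (agent percept)
  ;; Assumed percept format: (LOCATION DIRTY-P).
  (destructuring-bind (location dirty-p) percept
    (cond (dirty-p :suck)
          (t
           ;; Remember that the current square is now clean.
           (pushnew location (agent-state agent) :test #'equal)
           (let* ((all-moves '(:up :down :left :right))
                  ;; Prefer moves to squares not known to be clean.
                  (fresh (remove-if
                          (lambda (m)
                            (member (next-square location m)
                                    (agent-state agent)
                                    :test #'equal))
                          all-moves)))
             (if fresh
                 (nth (random (length fresh)) fresh)
                 ;; Every neighbour is already clean: move anyway rather
                 ;; than getting stuck (see the warning above).
                 (nth (random (length all-moves)) all-moves)))))))
```

Note that this sketch ignores the grid boundaries and never returns :idle, because it does not know the total number of squares; handling both is part of the exercise.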
  4. Use the functions at the bottom of agents2.lisp to run experiments and evaluate your agent. To try your agent on all possible 3×3 environments with two dirty squares, enter this in the REPL:
    (run-experiment (generate-environments 3 3 2)
    	        (lambda () (make-agent :program #'random-walk-with-state))
                    #'perf-measure
                    1000)

    This will return the number of experiments performed, and the average performance value.

  5. Try different agents (e.g. the stateless random walk) and different performance measures. Does the stateful agent do better than the others under any particular performance measure? If you like, you can add print statements to the experiment functions to get more information about the agents' behaviour.
  6. BONUS If you finish the stateful agent and the experiments, try creating an agent that moves through the environment in some orderly pattern. Make sure it visits all squares and cleans the dirty ones. Is the orderly agent more efficient than the random one?
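For the bonus, one simple orderly pattern is a "snake" sweep: left to right across one row, down one square, right to left across the next, and so on. A possible approach is to precompute the route and store the remaining moves in the agent's state. As before, the AGENT structure, percept format, and action names below are assumptions to be adapted to agents2.lisp:

```lisp
;; Illustrative stand-in; agents2.lisp defines the real AGENT structure.
(defstruct agent
  (program nil)
  (state nil))   ; here: the remaining moves of a precomputed route

(defun snake-route (width height)
  ;; Moves that sweep a WIDTH x HEIGHT grid row by row, reversing
  ;; direction on each row, starting from the top-left square.
  (loop for row below height
        append (loop repeat (1- width)
                     collect (if (evenp row) :right :left))
        when (< row (1- height))
          collect :down))

(defun orderly-program (agent percept)
  ;; Assumed percept format: (LOCATION DIRTY-P).
  (destructuring-bind (location dirty-p) percept
    (declare (ignore location))
    (cond (dirty-p :suck)                            ; clean before moving on
          ((agent-state agent) (pop (agent-state agent)))
          (t :idle))))                               ; route exhausted: done
```

An agent for a 3×3 grid would then be created with something like (make-agent :program #'orderly-program :state (snake-route 3 3)). Note that sucking does not consume a route move, since the agent stays on the same square.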
/var/www/ailab/WWW/wiki/data/pages/public/t-622-arti-09-1/lab_2_materials.txt · Last modified: 2009/01/26 14:06 by hannes