# Center for Analysis and Design of Intelligent Agents


# Programming Assignment 2 - Adversarial Search

Use the forum or email me if you have any questions or problems with the assignment. Start early, so you still have time to ask in case of problems!

## Problem Description

Implement an agent that is able to play Connect-4.

The game is played on a seven-column, six-row vertically suspended grid. The two players, “white” and “red”, take turns choosing one of the seven columns to drop a disc of their color. White moves first. The goal of the game is to reach a state where four of one's own discs form a line (vertically, horizontally, or diagonally). The game ends if one of the players has reached this goal or both players have depleted their supply of 21 discs; if neither player wins, the game is a draw. The scores are 100 points for a win, 50 points for a draw, and 0 points for a loss.

The legal moves for the agent are “(DROP x)” where x is a number from 1 to 7. It is illegal to drop a disc into a column that already contains 6 discs.
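One way to model this: a state is the 6×7 grid plus the player to move, and the successor of “(DROP x)” places a disc in the lowest empty cell of column x. A minimal sketch in Java (class and field names are illustrative, not part of the provided framework):

```java
// Minimal Connect-4 state model (illustrative names, not the framework's API).
// A state is the 6x7 grid plus whose turn it is; an action is a column index;
// the successor drops the mover's disc into the lowest empty cell of that column.
public class C4State {
    public static final int ROWS = 6, COLS = 7;
    public static final int EMPTY = 0, WHITE = 1, RED = 2;

    public int[][] board = new int[ROWS][COLS]; // board[0] is the bottom row
    public int toMove = WHITE;                  // white moves first

    /** A column is a legal move while it holds fewer than 6 discs. */
    public boolean isLegal(int col) {
        return board[ROWS - 1][col] == EMPTY;
    }

    /** Successor state: drop the current player's disc into 'col'. */
    public C4State drop(int col) {
        C4State next = new C4State();
        for (int r = 0; r < ROWS; r++)
            next.board[r] = board[r].clone(); // copy, don't mutate the parent
        for (int r = 0; r < ROWS; r++) {
            if (next.board[r][col] == EMPTY) {
                next.board[r][col] = toMove;  // lowest empty cell
                break;
            }
        }
        next.toMove = (toMove == WHITE) ? RED : WHITE;
        return next;
    }
}
```

Copying the board in `drop` keeps states immutable, which makes the search code simpler; for speed you may later prefer a single mutable board with do/undo.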

1. Develop a model of the environment. What constitutes a state of the environment? What is the successor state resulting from executing an action in a certain state? Which actions are legal under which conditions?
2. Implement a state evaluation function for the game. You can start with the following simple evaluation function for white (and use the negation if you are red): <number of white discs that are adjacent to another white disc> - <number of red discs that are adjacent to another red disc>
3. Implement iterative deepening alpha-beta search using this state evaluation function to evaluate the leaf nodes of the tree.
4. Keep track of and output the number of state expansions.
5. Improve the state evaluation function or implement a better one.
6. Test whether it is really better by pitting two agents (one with each evaluation function) against each other, or by pitting each evaluation function against a random agent. Make sure to run the experiments with the random agent several times to get statistically significant results. Don't forget to switch sides, because white has a slight advantage in the game.
7. Do all experiments with time constraints (play clock) of 1s, 5s and 10s.
8. Make your code fast! The more state expansions you get per second, the better the player.
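Steps 3 and 4 above can be sketched as follows. This is a generic iterative-deepening alpha-beta skeleton with an expansion counter; all names are illustrative, and a real agent would plug in the Connect-4 successor function and its evaluation function:

```java
import java.util.List;

// Sketch of iterative-deepening alpha-beta with an expansion counter
// (illustrative, not the framework's API). Depth-limited alpha-beta is
// re-run with increasing depth; leaf nodes are scored by evaluate().
public abstract class IDAlphaBeta<S> {
    protected long expansions = 0;

    protected abstract List<S> successors(S state); // empty list => terminal state
    protected abstract int evaluate(S state);       // from the maximizer's view

    /** Iterative deepening: repeat depth-limited search up to maxDepth. */
    public int search(S root, int maxDepth) {
        int value = evaluate(root);
        for (int depth = 1; depth <= maxDepth; depth++)
            value = alphaBeta(root, depth, Integer.MIN_VALUE, Integer.MAX_VALUE, true);
        return value;
    }

    private int alphaBeta(S state, int depth, int alpha, int beta, boolean max) {
        if (depth == 0) return evaluate(state);
        List<S> succ = successors(state);
        if (succ.isEmpty()) return evaluate(state); // terminal state
        expansions++; // count every expanded (non-leaf) node
        int best = max ? Integer.MIN_VALUE : Integer.MAX_VALUE;
        for (S child : succ) {
            int v = alphaBeta(child, depth - 1, alpha, beta, !max);
            if (max) { best = Math.max(best, v); alpha = Math.max(alpha, best); }
            else     { best = Math.min(best, v); beta  = Math.min(beta, best); }
            if (beta <= alpha) break; // prune the remaining siblings
        }
        return best;
    }

    public long getExpansions() { return expansions; }
}
```

Iterative deepening looks wasteful, but the cost of the shallower iterations is small compared to the deepest one, and it is what lets you stop at any time with a complete result from the last finished depth.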

## Material

The files in the first archive are similar to those in the first programming assignment. The archive contains code for implementing an agent in the src directory. The agent is actually a server process which listens on some port and waits for the real robot or a simulator to send a message. It will then reply with the next action the agent wants to execute.

The zip file also contains the description of the environment (connect4.gdl) and a simulator (gamecontroller-gui.jar). To test your agent:

1. Start the simulator (execute gamecontroller-gui.jar with either double-click or using the command “java -jar gamecontroller-gui.jar” on the command line).
2. Setup the simulator as shown in this picture:

3. You can use your player as both the first and the second role of the game, just not at the same time.
4. Run the “Main” class in the project. If you added your own agent class, make sure it is used in the main method of Main.java. You can also run “ant run” on the command line, if you have Ant installed. The output of the agent should say “NanoHTTPD is listening on port 4001”, which indicates that your agent is ready and waiting for messages to arrive on the specified port.
5. Now push the “Start” button in the simulator; your agent should receive some messages and reply with the actions it wants to execute. At the end, the output of the simulator tells you how many points both players got: in “Game over! results: 0 100”, the first number is for white and the second for red.
6. If the output of the simulator contains any line starting with “SEVERE”, something is wrong. The two most common problems are the network connection between the simulator and the agent (e.g., due to a firewall) and the agent sending illegal moves.

## Hints

For implementing your agent:

• Add a new class that implements the “Agent” interface. Look at RandomAgent.java to see how this is done.
• You have to implement the methods “init” and “nextAction”. “init” is called once at the start and should be used to initialize the agent; it tells you which role your agent is playing (white or red) and how much time the agent has for computing each move. “nextAction” gets the previous move as input and has to return the next action the agent is supposed to execute, within the given time limit. “nextAction” is called for every step of the game; if it is not your player's turn, return “NOOP”.
• You can make sure to be on time by regularly checking whether there is time left during the search process and stopping the search just before you run out of time.
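The last hint can be implemented by polling a deadline during the search and aborting via an exception, keeping the result of the last fully completed iterative-deepening iteration. A sketch, assuming `System.nanoTime`-based timing (class and method names are illustrative):

```java
// Sketch of deadline polling during iterative deepening (illustrative names).
// The search polls the clock at regular intervals and aborts with an
// exception; the caller falls back to the last fully completed depth.
public class TimedSearch {
    static class TimeUp extends RuntimeException {}

    private final long deadlineNanos;

    public TimedSearch(long budgetMillis) {
        // Reserve a small safety margin so the reply is still sent in time.
        deadlineNanos = System.nanoTime() + (budgetMillis - 50) * 1_000_000L;
    }

    private void checkTime() {
        if (System.nanoTime() >= deadlineNanos) throw new TimeUp();
    }

    /** Runs deepening iterations; returns the deepest fully completed depth. */
    public int deepestCompleted(int maxDepth) {
        int completed = 0;
        try {
            for (int d = 1; d <= maxDepth; d++) {
                depthLimited(d);
                completed = d; // only record fully finished iterations
            }
        } catch (TimeUp e) {
            // Out of time: keep the result of the last completed iteration.
        }
        return completed;
    }

    /** Stand-in for depth-limited alpha-beta: burns time, polling the clock. */
    private void depthLimited(int depth) {
        for (long i = 0; i < depth * 5_000_000L; i++) {
            if ((i & 0xFFFF) == 0) checkTime(); // poll periodically, not every step
        }
    }
}
```

Polling every few thousand node expansions rather than at every node keeps the overhead of the clock check negligible.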

## Handing In

Please hand in a zip file containing:

• a short description of your heuristic
• the results of the experiments and which conclusions you draw from them
• the source code and an executable jar file for your agent

To create the zip file, edit the “student” property in build.xml and call the “zip” target using Ant (execute “ant zip” in the directory containing build.xml).

The deadline is Tuesday, 28.02.2012. We will have a competition between your agents in the lab on Thursday, 01.03.2012. Extra points for the winner! You can watch the competition here.

As with the first programming assignment, you can still hand in after the deadline (by email), but your grade will be lowered by 0.5 for each day your submission is late. Also, you can't get the extra points for winning the competition.