Programming Assignment 2 - Adversarial Search

Use Piazza or email me if you have any questions or problems with the assignment. Start early, so that you still have time to ask if problems arise!

Problem Description

Implement an agent that is able to play Connect-4.

The game is played on a seven-column, six-row vertically suspended grid. The two players, “white” and “red”, take turns choosing one of the seven columns and dropping a disc of their color into it. White moves first. The goal of the game is to reach a state in which four of one's own discs form a line (vertically, horizontally, or diagonally). The game ends when one of the players has reached this goal or both players have depleted their supply of 21 discs; if neither player wins, the game is a draw. The scores are 100 points for a win, 50 points for a draw, and 0 points for a loss.

The legal moves for the agent are “(DROP x)”, where x is a number from 1 to 7. It is illegal to drop a disc into a column that already contains six discs.
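The rules above can be sketched as a minimal state model. This is an illustrative sketch only: the class and method names are assumptions, not part of the provided framework, and win detection is omitted.

```java
// Minimal Connect-4 state model (illustrative sketch, not the framework's API).
public class Connect4State {
    public static final int COLS = 7, ROWS = 6;
    public static final char EMPTY = '.', WHITE = 'w', RED = 'r';

    private final char[][] board = new char[COLS][ROWS]; // board[col][row], row 0 = bottom
    private char toMove = WHITE;                         // white moves first

    public Connect4State() {
        for (int c = 0; c < COLS; c++)
            for (int r = 0; r < ROWS; r++)
                board[c][r] = EMPTY;
    }

    /** A drop is legal iff the column (1..7) is not yet full (fewer than 6 discs). */
    public boolean isLegal(int column) {
        return board[column - 1][ROWS - 1] == EMPTY;
    }

    /** Successor state: the disc falls to the lowest empty row; the turn passes. */
    public void drop(int column) {
        char[] col = board[column - 1];
        for (int r = 0; r < ROWS; r++) {
            if (col[r] == EMPTY) { col[r] = toMove; break; }
        }
        toMove = (toMove == WHITE) ? RED : WHITE;
    }

    public char discAt(int column, int row) { return board[column - 1][row - 1]; }
    public char playerToMove() { return toMove; }
}
```

A real model would additionally detect four-in-a-row lines to recognize terminal states.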


The agents that were created in class participated in a tournament against each other. The results of the tournament are shown below.

Group       Player Names
ai15_c4_1   Ari Freyr Ásgeirsson, Ævar Ísak Ástþórsson
ai15_c4_2   Ásgeir Viðar Árnason, Fabio Alessandrelli
ai15_c4_3   Atli Sævar Guðmundsson, Ægir Már Jónsson, Stefanía Bergljót Stefánsdóttir, Arnar Freyr Bjarnason
ai15_c4_4   Bjarni Egill, Davíð Hafþór, Einar Karl, Ólafur Ingi Eiríksson
ai15_c4_5   Freyr Bergsteinsson, Kristinn Þröstur Sigurðarson
ai15_c4_6   Gabríel Arthúr Pétursson, Kristján Árni Gerhardsson, Ingibergur Sindri Stefnisson
ai15_c4_7   Geir Ingi Sigurðsson, Daníel Þór Gunnlaugsson
ai15_c4_8   Guðlaugur Garðar Eyþórsson
ai15_c4_9   Guðmundur Stefánsson
ai15_c4_10  Hinrik Már Hreinsson, Svanhvít Jónsdóttir, Andri Már Sigurðsson, Hafdís Erla Helgadóttir
ai15_c4_11  Ingólfur Halldórsson, Ragnar Adolf Árnason
ai15_c4_12  Kjartan Valur Kjartansson
ai15_c4_13  Murray Tannock
ai15_c4_14  Páll Arinbjarnar, Davíð Arnar Sverrisson
ai15_c4_15  Stefán Ólafsson
ai15_c4_16  Sveinn Henrik Kristinsson, Andri Fannar Freysson, Sigurjón Rúnar Vikarsson, Ólafur Helgi Jónsson


Tasks

  1. Develop a model of the environment. What constitutes a state of the environment? What is the successor state resulting from executing an action in a given state? Which actions are legal under which conditions?
  2. Implement a state evaluation function for the game. You can start with the following simple evaluation function for white (and use the negation if you are red): <number of white discs that are adjacent to another white disc> - <number of red discs that are adjacent to another red disc>
  3. Implement iterative deepening alpha-beta search using this state evaluation function to evaluate the leaf nodes of the tree.
  4. Keep track of and output the number of state expansions.
  5. Improve the state evaluation function or implement a better one.
  6. Test whether it is really better by pitting two agents (one with each evaluation function) against each other, or by pitting each evaluation function against a random agent. Make sure to run the experiments with the random agent several times to get significant results. Don't forget to switch sides, because white has an advantage in the game.
  7. Do all experiments with time constraints (play clock) of 1s, 5s and 10s.
  8. Make your code fast! The more state expansions you get per second, the better the player.
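The simple evaluation function from step 2 could be sketched as follows. The board representation (a 7×6 char array with 'w', 'r', and '.') and the class name are illustrative assumptions, not the framework's API.

```java
// Sketch of the simple evaluation function for white:
// <white discs adjacent to another white disc> - <red discs adjacent to another red disc>.
// Negate the result when playing red.
public class SimpleEval {
    /** board[col][row], 7x6; entries 'w', 'r', or '.' for empty. */
    public static int evaluate(char[][] board) {
        return countAdjacent(board, 'w') - countAdjacent(board, 'r');
    }

    /** Counts discs of `player` that have at least one same-colored neighbour. */
    private static int countAdjacent(char[][] board, char player) {
        int cols = board.length, rows = board[0].length, count = 0;
        for (int c = 0; c < cols; c++) {
            for (int r = 0; r < rows; r++) {
                if (board[c][r] != player) continue;
                // Check all 8 neighbours; count each disc at most once.
                neighbours:
                for (int dc = -1; dc <= 1; dc++) {
                    for (int dr = -1; dr <= 1; dr++) {
                        if (dc == 0 && dr == 0) continue;
                        int nc = c + dc, nr = r + dr;
                        if (nc >= 0 && nc < cols && nr >= 0 && nr < rows
                                && board[nc][nr] == player) {
                            count++;
                            break neighbours;
                        }
                    }
                }
            }
        }
        return count;
    }
}
```

For example, a board with two adjacent white discs and one isolated red disc evaluates to 2 for white.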


The files in the first archive are similar to those in the first programming assignment. The archive contains code for implementing an agent in the src directory. The agent is actually a server process which listens on some port and waits for the real robot or a simulator to send a message. It will then reply with the next action the agent wants to execute.

The zip file also contains the description of the environment (connect4.gdl) and a simulator (gamecontroller-gui.jar). To test your agent:

  • Start the simulator (double-click gamecontroller-gui.jar or run “java -jar gamecontroller-gui.jar” on the command line).
  • Set up the simulator as shown in this picture:

  • You can use your player as both the first and the second role of the game, just not at the same time. To let two instances of your agent play against each other, start your agent twice with different ports to listen on and use the respective ports in the simulator.
  • Run the “Main” class in the project. If you added your own agent class, make sure that it is used in the main method of the “Main” class. You can also execute “ant run” on the command line if you have Ant installed. The output of the agent should say “NanoHTTPD is listening on port 4001”, which indicates that your agent is ready and waiting for messages to arrive on the specified port.
  • Now push the “Start” button in the simulator; your agent should receive messages and reply with the actions it wants to execute. At the end, the output of the simulator tells you how many points both players got, e.g., “Game over! results: 0 100”: the first number is for white, the second for red.
  • If the output of the simulator contains any line starting with “SEVERE”, something is wrong. The two most common problems are the network connection (e.g., due to a firewall) between the simulator and the agent or the agent sending illegal moves.


For implementing your agent:

  • Add a new class that implements the “Agent” interface. Look at the existing agent class in the src directory to see how this is done.
  • You have to implement the methods “init” and “nextAction”. “init” is called once at the start and should be used to initialize the agent; it tells you which role your agent is playing (white or red) and how much time the agent has for computing each move. “nextAction” gets the previous move as input and has to return, within the given time limit, the next action the agent is supposed to execute. “nextAction” is called for every step of the game; if it is not your player's turn, return “NOOP”.
  • Make sure your agent is able to play both roles (white and red)!
  • You can make sure to be on time by regularly checking whether there is time left during the search process and stopping the search just before you run out of time, e.g., by throwing an exception that you catch where you call the search function the first time.
  • To specify the port your agent listens on, change the build.xml file as follows and use the command line “ant -Darg0=PORT run”, with PORT being the port number:
      <target name="run" depends="dist">
              <java jar="${dist}/${projectname}.jar" fork="true">
                      <arg value="${arg0}"/> <!-- add this line here! -->
                      <jvmarg value="-Xmx500m" />
              </java>
              <antcall target="clean" />
      </target>
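Iterative deepening with an exception-based time cutoff (steps 3 and 4 above, combined with the timing advice) could be structured as in the following sketch. Everything here is an illustrative assumption rather than the framework's API: the class names, the 7×6 char-array board, and the placeholder centre-weighting evaluation. A real agent would also detect four-in-a-row terminal states and use the evaluation function from the tasks above.

```java
// Sketch: iterative deepening alpha-beta (negamax form) with a time cutoff
// implemented by throwing an exception that is caught at the top level.
public class IterativeDeepening {
    static class TimeUp extends RuntimeException {}

    /** Step 4: number of state expansions in the last search. */
    public static long expansions = 0;
    private static long deadline = Long.MAX_VALUE;

    // Placeholder evaluation: centre columns are weighted higher.
    private static final int[] WEIGHT = {1, 2, 3, 4, 3, 2, 1};

    /** Returns the best column (1..7) found before the play clock runs out. */
    public static int bestMove(char[][] board, char player, long millis) {
        deadline = System.currentTimeMillis() + millis;
        expansions = 0;
        char[][] b = copy(board);            // search on a copy: a timeout unwinds past the undo calls
        int best = firstLegal(b);
        try {
            for (int depth = 1; depth <= 42; depth++)   // at most 42 discs fit on the board
                best = rootSearch(b, player, depth);
        } catch (TimeUp e) {
            // Out of time: keep the move from the last completed depth.
        }
        return best;
    }

    /** Alpha-beta to a fixed depth, without a time limit (useful for experiments). */
    public static int bestMoveAtDepth(char[][] board, char player, int depth) {
        deadline = Long.MAX_VALUE;
        return rootSearch(copy(board), player, depth);
    }

    private static int rootSearch(char[][] board, char player, int depth) {
        int best = firstLegal(board), alpha = Integer.MIN_VALUE + 1;
        for (int col = 1; col <= 7; col++) {
            if (board[col - 1][5] != '.') continue;     // column full
            int row = drop(board, col, player);
            int v = -negamax(board, other(player), depth - 1, Integer.MIN_VALUE + 1, -alpha);
            board[col - 1][row] = '.';                  // undo the move
            if (v > alpha) { alpha = v; best = col; }
        }
        return best;
    }

    private static int negamax(char[][] board, char player, int depth, int alpha, int beta) {
        if (System.currentTimeMillis() > deadline) throw new TimeUp();
        expansions++;
        if (depth == 0) return evaluate(board, player);
        boolean moved = false;
        for (int col = 1; col <= 7; col++) {
            if (board[col - 1][5] != '.') continue;
            moved = true;
            int row = drop(board, col, player);
            int v = -negamax(board, other(player), depth - 1, -beta, -alpha);
            board[col - 1][row] = '.';
            if (v >= beta) return v;                    // beta cutoff
            if (v > alpha) alpha = v;
        }
        return moved ? alpha : 0;                       // full board: treated as neutral here
    }

    // --- helpers (simplified: no four-in-a-row detection) ---
    private static int drop(char[][] board, int col, char player) {
        int r = 0;
        while (board[col - 1][r] != '.') r++;
        board[col - 1][r] = player;
        return r;
    }
    private static char other(char p) { return p == 'w' ? 'r' : 'w'; }
    private static int firstLegal(char[][] board) {
        for (int col = 1; col <= 7; col++)
            if (board[col - 1][5] == '.') return col;
        return 1;
    }
    private static char[][] copy(char[][] board) {
        char[][] b = new char[7][];
        for (int c = 0; c < 7; c++) b[c] = board[c].clone();
        return b;
    }
    private static int evaluate(char[][] board, char player) {
        int v = 0;
        for (int c = 0; c < 7; c++)
            for (int r = 0; r < 6; r++) {
                if (board[c][r] == 'w') v += WEIGHT[c];
                else if (board[c][r] == 'r') v -= WEIGHT[c];
            }
        return player == 'w' ? v : -v;
    }
}
```

Searching on a copy of the state keeps the caller's board intact even when the timeout exception unwinds the recursion before the undo calls run; an alternative is to re-derive the state from the move history in “nextAction”.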

Handing In

Please hand in a zip file containing:

  • the source code and an executable jar file for your agent
  • a short description of your heuristic
  • the results of the experiments and which conclusions you draw from them

To create the zip file, edit the “student” property in build.xml and call the “zip” target using Ant (execute “ant zip” in the directory containing build.xml). Make sure that all files that should go into the zip file are in the directory containing build.xml or below.

The deadline is 24.02.2014. We will have a tournament between your agents afterwards. Extra points for the top players!

/var/www/ailab/WWW/wiki/data/pages/public/t-622-arti-15-1/prog2.txt · Last modified: 2015/03/11 12:11 by stephan