Programming Assignment 2 - Adversarial Search

Use Piazza if you have any questions or problems with the assignment. Start early, so you still have time to ask in case of problems!

Problem Description

Implement an agent that is able to play the game of Breakthrough. This game is a simplified version of chess.

The game is played on a grid of width W (<latex>$2 \leq W \leq 10$</latex>) and height H (<latex>$4 \leq H \leq 10$</latex>). The initial state is set up such that both players have two rows of pawns of their respective color: white's pawns are on rows 1 and 2, while black's pawns are on rows H-1 and H. The two players, “white” and “black”, take turns moving one of their pawns; white moves first. Pawns move as in chess, that is, they can move one square straight forward (up for white, down for black) onto an empty square, or they can capture an opponent's piece by moving one square diagonally forward. Unlike in chess, pawns on the second row cannot move two squares at once. The goal of the game is to advance any one pawn to the opposite side of the board (i.e., to promote a pawn). The game ends if one of the players has reached this goal or if the player whose turn it is has no legal move. The latter can happen if that player has no pieces left or if all of their pawns are in positions where they cannot move. The game ends in a draw if neither player wins. The scores are 100 points for a win, 50 points for a draw, and 0 points for a loss.

The legal moves for the agent have the form “(move x1 y1 x2 y2)”, meaning that the pawn at (x1,y1) is moved to (x2,y2). x1 and x2 are integers between 1 and W; y1 and y2 are integers between 1 and H.
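To make this concrete, here is a minimal Java sketch of a possible state representation, legal-move generation, and the move string format described above. The Board class and all of its members are hypothetical illustrations following the rules exactly as stated; they are not part of the provided framework, so adapt them as needed.

Board.java
      import java.util.ArrayList;
      import java.util.List;

      // Hypothetical sketch of a Breakthrough model; not part of the provided framework.
      public class Board {
          public static final int EMPTY = 0, WHITE = 1, BLACK = 2;

          public final int width, height;    // W and H (2 <= W <= 10, 4 <= H <= 10)
          public final int[][] cell;         // cell[x-1][y-1]; move coordinates are 1-based
          public boolean whiteToMove = true; // white moves first

          public Board(int width, int height) {
              this.width = width;
              this.height = height;
              this.cell = new int[width][height];
              for (int x = 0; x < width; x++) {
                  cell[x][0] = WHITE;          // row 1
                  cell[x][1] = WHITE;          // row 2
                  cell[x][height - 2] = BLACK; // row H-1
                  cell[x][height - 1] = BLACK; // row H
              }
          }

          // Moves are encoded as {x1, y1, x2, y2}. Per the rules above, a pawn may move
          // one square straight forward onto an empty square, or one square diagonally
          // forward to capture an opponent's pawn.
          public List<int[]> legalMoves() {
              int me  = whiteToMove ? WHITE : BLACK;
              int opp = whiteToMove ? BLACK : WHITE;
              int dy  = whiteToMove ? 1 : -1;  // white moves up, black moves down
              List<int[]> moves = new ArrayList<>();
              for (int x = 1; x <= width; x++) {
                  for (int y = 1; y <= height; y++) {
                      if (cell[x - 1][y - 1] != me) continue;
                      int ny = y + dy;
                      if (ny < 1 || ny > height) continue;
                      if (cell[x - 1][ny - 1] == EMPTY)
                          moves.add(new int[]{x, y, x, ny});     // straight forward
                      if (x > 1 && cell[x - 2][ny - 1] == opp)
                          moves.add(new int[]{x, y, x - 1, ny}); // capture diagonally left
                      if (x < width && cell[x][ny - 1] == opp)
                          moves.add(new int[]{x, y, x + 1, ny}); // capture diagonally right
                  }
              }
              return moves;
          }

          // Successor state: move the pawn and give the turn to the opponent.
          public void apply(int[] m) {
              cell[m[2] - 1][m[3] - 1] = cell[m[0] - 1][m[1] - 1];
              cell[m[0] - 1][m[1] - 1] = EMPTY;
              whiteToMove = !whiteToMove;
          }

          // Copy used by a search to generate successors without destroying this state.
          public Board copy() {
              Board c = new Board(width, height);
              for (int x = 0; x < width; x++) c.cell[x] = cell[x].clone();
              c.whiteToMove = whiteToMove;
              return c;
          }

          // The move string expected by the simulator, e.g. "(move 2 2 2 3)".
          public static String toMoveString(int[] m) {
              return "(move " + m[0] + " " + m[1] + " " + m[2] + " " + m[3] + ")";
          }
      }

Detecting terminal states (a pawn reaching the opposite side, a player with no legal move, and the draw case) is deliberately left out here; that is part of the model you will develop in the Tasks section below.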

Tournament

The agents that were created in class participated in a tournament against each other. Sixteen agents competed in the first round. In that round, every agent played against every other agent on the 3×5 and 5×5 boards, once as white and once as black. The top 7 of the first round (there was a distinct gap between 7th and 8th place) went on to compete in a final round on the 8×8 board.

The results of the first round can be seen here. The results of the final round can be seen here.

Player Name (Average Score): Group Members

  • ai16_bt_10 (82.76) FINALIST - 79.2: Fanney Sigurðardóttir, Guðni Fannar Kristjánsson, Hrafn Orri Hrafnkelsson, Kristinn Þorri Þrastarson
  • ai16_bt_2 (76.72) FINALIST - 75.0: Guðmundur Harðarson, Andri Ívarsson, Björn Ingi Baldvinsson
  • ai16_bt_4 (74.14) FINALIST - 62.5: Andri Már Þórhallsson, Ásgeir Þór Másson, Karl Ingi Karlsson
  • ai16_bt_15 (70.69) FINALIST - 58.3: Sindri Már Kaldal Sigurjónsson, Eysteinn Gunnlaugsson, Magnús Sigurðarson
  • ai16_bt_5 (62.93) FINALIST - 54.2: Brynjar Ólafsson, Tryggvi Þór Guðmundsson, Hörður Már Hafsteinsson, Ari Þórðarson
  • ai16_bt_9 (64.66) FINALIST - 12.5: Eva Segarra Raro, Jakub Mackovic, Roser Sanchez Todo
  • ai16_bt_3 (66.38) FINALIST - 8.3: Kári Eiríksson, Magnús Vilhelm Björnsson, Andri Már Ómarsson, Sverrir Magnússon
  • ai16_bt_8 48.28: Eiður Sveinn Gunnarsson, Orri Ólafsson, Jóhannes Páll Magnússon
  • ai16_bt_1 44.83: Ingimar Örn Oddsson, Alexander Björnsson, Johan Ejstrud
  • ai16_bt_14 41.38: Kristján Hreinn Bergsson
  • ai16_bt_13 40.52: Egill Anton Hlöðversson, Jón Gísli Björgvinsson
  • ai16_bt_7 26.72: Atli Freyr Einarsson, Guðjón Geir Jónsson, Gunnar Karl Pálmason, Steinar Ágúst Steinarsson
  • ai16_bt_11 25.86: Grímur Kristinsson, Natan Örn Ólafsson, Sölvi Hjaltason, Kristján Harðarson
  • ai16_bt_12 24.14: Guðrún Inga Baldursdóttir, Björn Ingemar Elfström, Davíð Freyr Jónsson
  • ai16_bt_6 20.69: Quang Van Nguyen, Einar Hallberg Ragnarsson, Ásgeir Frímannsson, Janus Þór Kristjánsson
  • ai16_bt_16 10.0: Tinna Frímann Jökulsdóttir, Dagur Arinbjörn Daníelsson

Tasks

  1. Develop a model of the environment. What constitutes a state of the environment? What is the successor state resulting from executing an action in a certain state? Which actions are legal under which conditions? (Note that the board size is flexible and your agent should be able to handle games with different board sizes within the given restrictions on width and height.)
  2. Implement a state evaluation function for the game. You can start with the following simple evaluation function for white (and use the negation for black): 50 - <distance of most advanced white pawn to row H> + <distance of most advanced black pawn to row 1>
  3. Implement iterative deepening alpha-beta search and use this state evaluation function to evaluate the leaf nodes of the tree (a hedged sketch of this, together with the simple evaluation function above, follows after this list).
  4. Keep track of and output the number of state expansions, the current depth limit of your iterative deepening loop, and the runtime of the search, both for each iteration of iterative deepening and in total.
  5. Improve the state evaluation function or implement a better one.
  6. Test whether it is really better by pitting two agents (one with each evaluation function) against each other or by pitting each evaluation function against a random agent. If you run the experiments with the random agent, you need to repeat the experiment a decent number of times to get significant results. Don't forget to switch sides, because white has an advantage in the game.
  7. Run the experiments with different board sizes and time constraints (play clock) of 1s and 10s.
  8. Make your code fast! The more state expansions you get per second, the better the player. Ideally, you should be able to solve the small boards (3×5, 5×5) in 10s.
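As a starting point for tasks 2 to 4, here is a hedged sketch in Java that builds on the hypothetical Board class from the problem description. The class and method names are illustrative only and not part of the provided framework.

Search.java
      // Hypothetical sketch of tasks 2 to 4, building on the Board sketch above;
      // not part of the provided framework.
      public class Search {
          public long expansions = 0;   // number of state expansions (task 4)

          // The simple evaluation function from task 2, from white's point of view
          // (negate it when playing black).
          public int evaluate(Board b) {
              int whiteDist = b.height, blackDist = b.height;
              for (int x = 1; x <= b.width; x++)
                  for (int y = 1; y <= b.height; y++) {
                      if (b.cell[x - 1][y - 1] == Board.WHITE)
                          whiteDist = Math.min(whiteDist, b.height - y); // distance to row H
                      if (b.cell[x - 1][y - 1] == Board.BLACK)
                          blackDist = Math.min(blackDist, y - 1);        // distance to row 1
                  }
              return 50 - whiteDist + blackDist;
          }

          // Depth-limited alpha-beta search from white's point of view; leaf nodes are
          // scored with the evaluation function above. Terminal states (promotion, no
          // legal move) should be detected by your model and scored 100/50/0 instead.
          public int alphaBeta(Board b, int depth, int alpha, int beta) {
              java.util.List<int[]> moves = b.legalMoves();
              if (depth == 0 || moves.isEmpty()) return evaluate(b);
              expansions++;                                  // this node gets expanded
              if (b.whiteToMove) {                           // maximizing player
                  int best = Integer.MIN_VALUE;
                  for (int[] m : moves) {
                      Board child = b.copy();
                      child.apply(m);
                      best = Math.max(best, alphaBeta(child, depth - 1, alpha, beta));
                      alpha = Math.max(alpha, best);
                      if (alpha >= beta) break;              // beta cutoff
                  }
                  return best;
              } else {                                       // minimizing player
                  int best = Integer.MAX_VALUE;
                  for (int[] m : moves) {
                      Board child = b.copy();
                      child.apply(m);
                      best = Math.min(best, alphaBeta(child, depth - 1, alpha, beta));
                      beta = Math.min(beta, best);
                      if (alpha >= beta) break;              // alpha cutoff
                  }
                  return best;
              }
          }

          // Iterative deepening (task 3): repeat the depth-limited search with an
          // increasing depth limit and report expansions and runtime per iteration.
          public int[] iterativeDeepening(Board root, int maxDepth) {
              int[] bestMove = null;
              long totalStart = System.currentTimeMillis();
              for (int depth = 1; depth <= maxDepth; depth++) {
                  long iterStart = System.currentTimeMillis();
                  long before = expansions;
                  int[] best = null;
                  int bestValue = 0;
                  for (int[] m : root.legalMoves()) {
                      Board child = root.copy();
                      child.apply(m);
                      int v = alphaBeta(child, depth - 1, Integer.MIN_VALUE, Integer.MAX_VALUE);
                      boolean better = root.whiteToMove ? v > bestValue : v < bestValue;
                      if (best == null || better) { best = m; bestValue = v; }
                  }
                  bestMove = best;
                  System.out.println("depth " + depth + ": "
                      + (expansions - before) + " expansions, "
                      + (System.currentTimeMillis() - iterStart) + " ms (total: "
                      + expansions + " expansions, "
                      + (System.currentTimeMillis() - totalStart) + " ms)");
              }
              return bestMove;
          }
      }

With a play clock, the natural place to stop is between (or inside) these deepening iterations; see the time-management hint and the agent skeleton in the Hints section below. An improved evaluation function (task 5) might, for example, also take the number of remaining pawns on each side into account.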

Material

The files in the archive are similar to those in the first programming assignment. The archive contains code for implementing an agent in the src directory. The agent is actually a server process that listens on some port and waits for a game simulator or a game-playing robot to send a message. It then replies with the next action the agent wants to execute.

The zip file also contains the description of the environment for different board sizes (breakthrough_XxY.gdl) and two simulators (gamecontroller-gui.jar and kiosk.jar). To test your agent:

  • Start the simulator (execute gamecontroller-gui.jar either by double-clicking it or by running “java -jar gamecontroller-gui.jar” on the command line)
  • Set up the simulator as shown in this picture:
  • You can use your player for either the first or the second role of the game, just not both at the same time. To let two instances of your agent play against each other, start your agent twice, listening on different ports, and use the respective ports in the simulator.
  • Run the “Main” class in the project. If you added your own agent class, make sure that it is used in the main method of Main.java. You can also execute “ant run” on the command line, if you have Ant installed. The output of the agent should say “NanoHTTPD is listening on port 4001”, which indicates that your agent is ready and waiting for messages to arrive on the specified port.
  • Now push the “Start” button in the simulator and your agent should get some messages and reply with the actions it wants to execute. At the end, the output of the simulator tells you how many points both players got, e.g., “Game over! results: 0 100”; the first number is for white and the second for black.
  • If the output of the simulator contains any line starting with “SEVERE”, something is wrong. The two most common problems are a broken network connection (e.g., due to a firewall) between the simulator and the agent, or the agent sending illegal moves.
  • You can also use the kiosk to play against your agent. The kiosk does have visualization of the game, but it does not allow you to let two instances of the agent play against each other.

Hints

For implementing your agent:

  • Add a new class that implements the “Agent” interface. Look at RandomAgent.java to see how this is done.
  • You have to implement the methods “init” and “nextAction”. “init” is called once at the start and should be used to initialize the agent; it tells you which role your agent is playing (white or black) and how much time the agent has for computing each move. “nextAction” gets the previous move as input and has to return the next action the agent is supposed to execute within the given time limit. “nextAction” is called for every step of the game; if it is not your player's turn, return “NOOP”. (An illustrative skeleton follows after the build.xml snippet below.)
  • Make sure your agent is able to play both roles (white and black)!
  • You can make sure to be on time by regularly checking, during the search, whether there is time left and stopping the search just before you run out of time, e.g., by throwing an exception that you catch at the point where you first call the search function (see the skeleton below).
  • Your agent should be able to play the small boards perfectly (at least the 3×5, probably also the 5×5) with a play clock of around 10 seconds. You will need a decent heuristic to play the bigger boards well.
  • To specify the port your agent is running on, change the build.xml file as follows and use the command line “ant -Dport=PORT run” with PORT being the port number:
build.xml
      ...
      <target name="run" depends="dist">
              <java jar="${dist}/${projectname}.jar" fork="true">
                      <arg value="${port}"/> <!-- add this line here! -->
                      <jvmarg value="-Xmx500m" />
              </java>        
              <antcall target="clean" />
      </target>
      ...
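Putting the hints above together, here is a hedged sketch of an agent skeleton with the time-management approach just described. The actual method signatures must match the Agent interface that comes with the project (see RandomAgent.java); the parameter names, the way the board size is obtained, and the Board and search helpers from the earlier sketches are assumptions, not the given API.

MyAgent.java
      // Illustrative skeleton only: the real method signatures must match the Agent
      // interface in the provided src folder (see RandomAgent.java). Parameter names,
      // how the board size is obtained, and the Board helper are assumptions.
      public class MyAgent /* implements Agent */ {

          static class TimeoutException extends RuntimeException {}

          private String role;     // "white" or "black", given to init
          private int playclock;   // seconds available per move, given to init
          private long deadline;   // absolute time by which nextAction must have returned
          private Board board;     // the model sketched in the problem description

          public void init(String role, int playclock, int width, int height) {
              this.role = role;
              this.playclock = playclock;
              this.board = new Board(width, height);
          }

          public String nextAction(int[] lastMove) {
              if (lastMove != null) board.apply(lastMove);   // keep the model in sync
              boolean myTurn = board.whiteToMove == role.equals("white");
              if (!myTurn) return "NOOP";                    // not our move

              // Leave a safety margin so the answer reaches the simulator in time.
              deadline = System.currentTimeMillis() + playclock * 1000L - 500;
              int[] best = null;
              try {
                  for (int depth = 1; ; depth++) {           // iterative deepening
                      best = searchToDepth(board, depth);    // throws when time runs out
                  }
              } catch (TimeoutException e) {
                  // Out of time: keep the move from the last completed iteration.
              }
              if (best == null) best = board.legalMoves().get(0);  // emergency fallback
              return Board.toMoveString(best);
          }

          // Call this regularly during the search, e.g., once per node expansion.
          private void checkTime() {
              if (System.currentTimeMillis() >= deadline) throw new TimeoutException();
          }

          // Depth-limited search as sketched in the Tasks section; every recursive
          // call should start with checkTime() so the search stops almost immediately
          // once the deadline is reached.
          private int[] searchToDepth(Board b, int depth) {
              checkTime();
              // ... alpha-beta over b.legalMoves() up to the given depth ...
              return b.legalMoves().get(0);                  // placeholder
          }
      }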

Handing In

Please hand in a zip file containing:

  • the complete src folder containing the source code for your agent, but also the other classes that came with the project
  • a PDF with a short description of your heuristic, the results of your experiments, and the conclusions you draw from them

The deadline is 29.02.2016. We will have a tournament between your agents afterwards. Extra points for the top players!
