Aim: This assignment is meant to give you a better insight into complexity dimensions and properties of task-environments.
Summary: In this first exercise you are asked to evaluate a given deep reinforcement learner (an actor-critic learner, to be specific; for further information see Konda & Tsitsiklis 2000) in different task-environments, coded in Python. The task-environment (or just task, really) is the well-known cart-pole task: balancing a pole on a moving platform in 1-D (left and right movements).
Install Python 3 on your computer (https://www.python.org/downloads/).
Download the attached zip file, extract it to some location (e.g. …/assignment_1/), and cd into the folder.
Install the dependencies listed in the included requirements.txt file:
$ pip install -r requirements.txt
Run the code:
$ python main.py
In the file “env.py” you can find the parent class of the cart-pole environment. You should inherit from this class when you write your own environment in part 3 and implement all of its abstract methods; a minimal sketch follows.
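The sketch below shows what such a subclass could look like. It is only an illustration: the method names reset, step, and get_observation, the class name MyTaskEnv, and the toy state update are assumptions, not the actual abstract methods declared in env.py. Use whatever the provided base class actually defines.

from env import Env  # the provided parent class

class MyTaskEnv(Env):  # hypothetical custom environment
    def __init__(self):
        super().__init__()
        self.state = [0.0, 0.0]  # placeholder internal state

    def reset(self):
        # Re-initialize the internal state at the start of an epoch.
        self.state = [0.0, 0.0]
        return self.get_observation()

    def step(self, action):
        # Advance the environment dynamics by one time step (toy update).
        self.state[0] += action
        return self.get_observation()

    def get_observation(self):
        # Return only what the agent is allowed to observe.
        return list(self.state)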
In the file “ac_learner.py” you can find the source code of an actor-critic learner.
In the file “cart_pole_env.py” you can find the source code of the task-environment. To evaluate the learner under the different environment settings, the following methods are of importance:
apply_observation_noise:
Apply noise only to the observations received by the agent, NOT to the internal environment variables.
apply_action_noise:
Apply noise to the force applied to the cart-pole.
apply_environment_noise:
Apply noise to the dynamics of the environment, but not to the observations of the agent.
adjust_task:
Change the task in a chosen manner.
apply_discretization:
Discretize the data passed to the agent.
In each of these methods you can implement a different way to adjust the task or the information passed to or from the agent; see the sketch below. A helper class for noise is included in “env.py”, which you can use.
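As an illustration, here is one possible way to fill in two of these hooks, assuming the observations are numeric sequences. The noise scale 0.05 and grid size 0.1 are arbitrary assumptions, and numpy is used directly where the helper noise class from “env.py” could serve instead.

import numpy as np

# Sketch only: these bodies stand in for the empty methods in cart_pole_env.py.

def apply_observation_noise(self, observation):
    # Add zero-mean Gaussian noise to what the agent sees; the
    # internal environment variables remain untouched.
    return np.asarray(observation) + np.random.normal(0.0, 0.05, size=np.shape(observation))

def apply_discretization(self, observation):
    # Snap each observed value to a grid with step size 0.1, reducing
    # the resolution of the information passed to the agent.
    return np.round(np.asarray(observation) / 0.1) * 0.1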
After the agent has run for the defined maximum number of epochs, a plot of iterations per epoch is created. Use this plot to evaluate the learning performance of the learner.
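For reference, a plot of this kind could also be produced manually from recorded data. The variable iterations_per_epoch and the example values below are assumptions for illustration, not taken from the provided code.

import matplotlib.pyplot as plt

iterations_per_epoch = [12, 15, 22, 35, 60, 110, 200]  # example data

# Plot how long the agent keeps the pole balanced in each epoch.
plt.plot(range(1, len(iterations_per_epoch) + 1), iterations_per_epoch)
plt.xlabel("Epoch")
plt.ylabel("Iterations per epoch")
plt.title("Learning performance of the actor-critic learner")
plt.show()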
HINT: You are allowed to change any part of the learner or the environment; just make sure to document all changes and explain how and why they influence the learning performance of the agent.
References (besides many more):
Thórisson, K.R., Bieger, J., Schiffel, S., Garrett, D.: Towards Flexible Task-Environments for Comprehensive Evaluation of Artificial Intelligent Systems and Automatic Learners. In: International Conference on Artificial General Intelligence. pp. 187–196. Springer (2015)
Russell, S.J., Norvig, P.: Artificial Intelligence: A Modern Approach. Pearson Education Limited, Malaysia (2016)
Eberding, L.M., Sheikhlar, A., Thórisson, K.R.: SAGE: Task-Environment Platform for Autonomy and Generality Evaluation. In: International Conference on Artificial General Intelligence. Springer (2020)
Konda, V.R., Tsitsiklis, J.N.: Actor-Critic Algorithms. In: Advances in Neural Information Processing Systems. pp. 1008–1014 (2000)