Using Baselines


OpenAI sure did a great job with its release of Baselines. But there are some things that I think could be done to make it easier to use:

  1. Only the DQN has its own README explaining the first steps to see the algorithm in action.

  2. It would be nice to have a simple way to run each of the algorithms on the CartPole problem. Since it is such an easy environment, it's a nice place to run quick tests and compare algorithms at first.

  3. Can these algorithms be run on the GPU? Since the models for Atari are CNNs, it would make sense to use the GPU, wouldn't it?
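For point 2, a quick CartPole run with the deepq code can be sketched roughly like the repo's own CartPole experiment script. This is a sketch, not a guaranteed recipe: `deepq.models.mlp` and the `deepq.learn` argument names are from the API as I remember it and may differ in your version of Baselines.

```python
def train_cartpole():
    """Sketch: train DQN on CartPole with Baselines' deepq module.
    Argument names follow the old deepq.learn API and may need adjusting
    for the Baselines version you have installed."""
    import gym
    from baselines import deepq

    env = gym.make("CartPole-v0")
    model = deepq.models.mlp([64])  # small MLP Q-network, one 64-unit hidden layer
    act = deepq.learn(
        env,
        q_func=model,
        lr=1e-3,
        max_timesteps=100000,
        buffer_size=50000,
        exploration_fraction=0.1,
        exploration_final_eps=0.02,
        print_freq=10,
    )
    # Save the learned policy so it can be reloaded later.
    act.save("cartpole_model.pkl")
```

Calling `train_cartpole()` should train to a reasonable CartPole policy in a few minutes on CPU, which is what makes it handy for quick sanity checks before moving to Atari.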

These are things that I think would help me kick-start my research. I would be happy to contribute myself, as long as I could get a few pushes in the right direction.


The algorithms are implemented in TensorFlow, and you can use either the CPU or the GPU installation of TensorFlow. If you install tensorflow-gpu, it will automatically use the GPU for all TF operations. So yes, the algorithms can run on the GPU.
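A quick way to confirm which devices TensorFlow actually sees is to list them. This uses the TF 1.x `device_lib` helper (the API current at the time of these Baselines releases); with tensorflow-gpu and a working CUDA setup installed, a `/gpu:0` (or `/device:GPU:0`) entry should appear alongside the CPU.

```python
def tf_devices():
    """Return the names of the devices TensorFlow can see.
    Uses the TF 1.x device_lib helper; with tensorflow-gpu installed
    and CUDA working, the list should include a GPU device."""
    from tensorflow.python.client import device_lib
    return [d.name for d in device_lib.list_local_devices()]
```

If only a CPU device shows up despite having tensorflow-gpu installed, the problem is the CUDA/driver setup rather than Baselines itself.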

However, in my experience there was little difference between the GPU and CPU run-time of DQN training on Atari Pong. So my question is: is there a bottleneck in the implementation that is preventing a significant improvement in GPU run-time?


I think for the GPU implementation to be worthwhile you need to have many environments training in parallel at once. I know the A2C algorithm has this setup; I am not sure about the deepq code.
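The idea above can be illustrated without any RL library: instead of feeding the network one observation at a time (which leaves a GPU mostly idle), you step N environments in lockstep and stack their observations into one `(n_envs, obs_dim)` batch, so the policy does a single large forward pass. `ToyEnv` here is a hypothetical stand-in for a gym environment, just to keep the sketch self-contained; in Baselines the A2C code does this with its vectorized-environment wrappers.

```python
import numpy as np

class ToyEnv:
    """Hypothetical stand-in for a gym environment (illustration only)."""
    def __init__(self, seed):
        self.rng = np.random.RandomState(seed)
    def reset(self):
        return self.rng.rand(4)  # 4-dim observation, like CartPole
    def step(self, action):
        obs = self.rng.rand(4)
        reward = float(action)           # dummy reward
        done = self.rng.rand() < 0.05    # episode ends ~5% of the time
        return obs, reward, done, {}

def step_batch(envs, actions):
    """Step every environment once and stack the results, so the policy
    network sees one (n_envs, obs_dim) batch instead of n_envs tiny inputs."""
    results = [env.step(a) for env, a in zip(envs, actions)]
    obs, rewards, dones, infos = zip(*results)
    return np.stack(obs), np.array(rewards), np.array(dones), infos

envs = [ToyEnv(seed=i) for i in range(16)]
obs = np.stack([env.reset() for env in envs])  # shape (16, 4)
actions = np.zeros(16, dtype=int)
obs, rewards, dones, infos = step_batch(envs, actions)
print(obs.shape)  # (16, 4)
```

With a batch like this, the GPU amortizes kernel-launch and data-transfer overhead over many environments, which is exactly what single-environment DQN training on Pong fails to do.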