How to capture video feed from a universe game


I’m interested in capturing the raw pixels for any given game running over VNC in Universe. Is there any way to do this (i.e. get a stream of screen captures) apart from recording the screen myself with some external tool?
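For what it’s worth, the observations returned by `env.step` in Universe already contain the raw screen pixels (the `'vision'` entry of each observation dict), so one option is to dump each frame yourself instead of recording the screen. Here is a minimal sketch of the frame-dumping part only; the `save_frames` helper and the `.npy` format are my own choices, and a plain array stands in for a real Universe observation:

```python
import os
import numpy as np

def save_frames(observations, out_dir):
    """Save each raw pixel frame as a .npy file; return the paths written.

    `observations` is assumed to be an iterable of HxWx3 uint8 arrays,
    e.g. the 'vision' entry pulled out of each Universe observation dict.
    """
    os.makedirs(out_dir, exist_ok=True)
    paths = []
    for i, frame in enumerate(observations):
        path = os.path.join(out_dir, "frame_{:06d}.npy".format(i))
        np.save(path, np.asarray(frame, dtype=np.uint8))
        paths.append(path)
    return paths
```

In a real loop you would pull `observation_n[0]['vision']` out of each `env.step(...)` result and pass those arrays in, rather than precomputed frames.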


Copied from the Gym documentation:

Recording and uploading results

Gym makes it simple to record your algorithm’s performance on an environment, as well as to take videos of your algorithm’s learning. Just wrap your environment with a Monitor Wrapper as follows:

import gym
from gym import wrappers

env = gym.make('CartPole-v0')
env = wrappers.Monitor(env, '/tmp/cartpole-experiment-1')
for i_episode in range(20):
    observation = env.reset()
    for t in range(100):
        action = env.action_space.sample()
        observation, reward, done, info = env.step(action)
        if done:
            print("Episode finished after {} timesteps".format(t + 1))
            break

This will log your algorithm’s performance to the provided directory. The Monitor is fairly sophisticated, and supports multiple instances of an environment writing to a single directory.

You can then upload your results to OpenAI Gym:

import gym
gym.upload('/tmp/cartpole-experiment-1', api_key='YOUR_API_KEY')

The output should look like the following:

[2016-04-22 23:16:03,123] Uploading 20 episodes of training data
[2016-04-22 23:16:04,194] Uploading videos of 2 training episodes (6306 bytes)
[2016-04-22 23:16:04,437] Creating evaluation object on the server with learning curve and training video
[2016-04-22 23:16:04,677]
You successfully uploaded your agent evaluation to
OpenAI Gym! You can find it at:

How could I save VNC traffic from client to remote to record human performance?

It helps a lot! I would like to record the images and actions while a human is playing Flash games.

There is also a Monitor module in Universe. With the help of Monitor, I can collect mp4 files while playing games! :slight_smile: The key arrows are also displayed in those mp4 files. What I additionally need is to collect those key presses.

So, if I want to collect the key presses, do I need to detect the key arrows shown in the mp4 files? That would be much harder for me.
Or are there any other methods or documents that would help collect the (key) actions while a human plays the games?
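Lacking an official mechanism, one workaround is to wrap the environment yourself and append every action (with a timestamp) to a file as it passes through `step`, alongside the Monitor’s mp4 output. A minimal, library-free sketch; the `ActionLogger` name and the JSON-lines format are my own choices, not a Universe API:

```python
import json
import time

class ActionLogger:
    """Wraps any env-like object and appends each action to a JSON-lines file."""

    def __init__(self, env, log_path):
        self.env = env
        self.log_path = log_path

    def reset(self):
        return self.env.reset()

    def step(self, action):
        # Record the action with a wall-clock timestamp before forwarding it.
        with open(self.log_path, "a") as f:
            f.write(json.dumps({"t": time.time(), "action": action}) + "\n")
        return self.env.step(action)
```

The resulting log can later be lined up with the mp4 frames by timestamp, which avoids having to detect the on-screen key arrows at all.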


I never tried looking in the Gym docs for recording and was only looking at the Universe docs.
Thanks a lot for this.


Can you explain how you are doing this?

I’m also interested in doing the same: capturing video of a human playing any of the games, together with the corresponding input given by the human player at that time.


Well, I haven’t been able to do it yet. I am still looking for a way to record human performance. It seems you can access human performance data in the beta version, as mentioned in the blog. I have applied for access to the beta, but I haven’t heard anything yet; I think they are still working on it. If you find a way to record the performance, please teach me about it. :wink:


What I’m interested in doing is exactly the same as you: creating a human-performance dataset annotated with the controls as well.

I’ll definitely ping back if I figure out any way to do this.


@liber145 There is an easy way to record the browser without much slowdown.
I have created a YouTube stream: I use the YouTube Live feature, either through Chrome plugins or Hangouts, to record the agent doing its own thing in the browser.

You can extract the keys, since they show up separately at the bottom of the screen.


@yad, thanks a lot for sharing your ideas. Extracting keys from videos is not easy for me; maybe I will try some OCR methods later if there is no better way. Also, do you have any idea how to record the rewards while playing games?
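For the rewards at least, no OCR should be needed: the reward is returned directly by `env.step`, so you can accumulate or log it in the stepping loop itself. A hedged sketch, assuming a Gym-style `step()` signature; the `run_and_log_rewards` helper is my own name:

```python
def run_and_log_rewards(env, actions):
    """Step through `actions`, returning the per-step rewards and their sum.

    Assumes a Gym-style env whose step() returns (obs, reward, done, info).
    """
    env.reset()
    rewards = []
    for action in actions:
        _obs, reward, done, _info = env.step(action)
        rewards.append(reward)
        if done:
            break
    return rewards, sum(rewards)
```

The same per-step hook could also write each reward to the action log, so images, actions, and rewards all share one timeline.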


Do you know if there is a function in Gym that can capture the image of the demo window?
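In classic Gym, `env.render(mode='rgb_array')` returns the rendered window as an HxWx3 pixel array instead of opening a display, so you can grab the demo window’s frames programmatically. A sketch of the collection loop against a stand-in env (with a real env you would use e.g. `gym.make('CartPole-v0')`; the `grab_frames` helper is my own):

```python
def grab_frames(env, actions):
    """Collect the rendered frame before and after each step.

    Assumes a Gym-style env supporting render(mode='rgb_array'),
    which returns the current window contents as a pixel array.
    """
    env.reset()
    frames = [env.render(mode="rgb_array")]
    for action in actions:
        _obs, _reward, done, _info = env.step(action)
        frames.append(env.render(mode="rgb_array"))
        if done:
            break
    return frames
```

The returned frames can then be written out with any image library or stitched into a video.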


Hi, I did not see any details on this in the documentation you referred to.

Also, how do you record environments when the baselines Gym models create multiple stacked environments, as opposed to a single Atari environment?
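One common approach with vectorized envs is to wrap each sub-environment with the Monitor inside its factory function (thunk) before it is handed to the vectorized constructor, giving each sub-env its own output directory. A hedged, library-free sketch of that wiring; the helper name and directory layout are my own, and `monitor_wrapper` stands in for something behaving like `gym.wrappers.Monitor`:

```python
import os

def make_monitored_env_fns(env_fns, base_dir, monitor_wrapper):
    """Wrap each env factory so its env records to its own subdirectory.

    `env_fns` is a list of zero-argument callables that build envs, as
    baselines-style vectorized envs expect; `monitor_wrapper(env, directory)`
    is assumed to behave like gym.wrappers.Monitor.
    """
    def wrap(fn, i):
        def thunk():
            directory = os.path.join(base_dir, "env_{}".format(i))
            return monitor_wrapper(fn(), directory)
        return thunk

    return [wrap(fn, i) for i, fn in enumerate(env_fns)]
```

You would then pass the returned thunks to the vectorized env constructor in place of the bare ones, so each worker writes its own stats and videos.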

I really just want to be able to view the results for now. Still having difficulty.