I'm new here. I'm trying to write a 9x9 Go agent using Monte Carlo Tree Search (MCTS), which means I need to simulate games of Go. I have a few specific, fairly basic questions about the environment.
When I run the example code from the documentation (https://gym.openai.com/docs) with the environment changed to env = gym.make('Go9x9-v0'), I find that the sampled moves do not appear to be random. Does this mean action = env.action_space.sample() is deterministic?
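To illustrate what I mean by "deterministic", here is the kind of check I have in mind, sketched with a stand-in action space (FakeActionSpace is my own stub for illustration, not gym's actual API): a seeded sampler produces a reproducible but varying sequence, whereas a truly deterministic sample() would return the same move every call.

```python
import random

class FakeActionSpace:
    """Stand-in for env.action_space on a 9x9 board (81 points + pass + resign).
    This is an illustrative stub, not the real gym Discrete space."""
    def __init__(self, n=83, seed=None):
        self.n = n
        self._rng = random.Random(seed)

    def sample(self):
        # Draw a pseudo-random action index in [0, n).
        return self._rng.randrange(self.n)

space = FakeActionSpace(seed=0)
draws = [space.sample() for _ in range(10)]

# A deterministic sample() would yield one repeated value;
# a seeded RNG yields a reproducible sequence of different values.
print(len(set(draws)) > 1)  # prints True
```

Re-creating the space with the same seed reproduces the exact same sequence, which is the behaviour I would expect from a seeded (but not deterministic) sampler.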
I forked the code from https://gym.openai.com/evaluations/eval_4hNanao8SIGtvddOSYwU9w and specified video_callable=always_true, env_ids=['Go9x9-v0'] in EnvRunner in random_agent.py. When I run it, it does not generate any video (mp4), only stats.json, manifest.json, meta.json, and video0000x.json files, unlike other environments, where I do get mp4 files. How can I generate videos?
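For context, this is roughly how I am wiring up the recording (sketched from memory as a config fragment; gym's monitoring API has moved between env.monitor.start and gym.wrappers.Monitor across versions, so treat the exact names and the output directory as approximate):

```python
import gym
from gym import wrappers

env = gym.make('Go9x9-v0')
# Record every episode; by default the Monitor only records a subset.
env = wrappers.Monitor(env, '/tmp/go9x9-run',
                       video_callable=lambda episode_id: True)

observation = env.reset()
done = False
while not done:
    action = env.action_space.sample()
    # Old-style gym step: (observation, reward, done, info)
    observation, reward, done, info = env.step(action)
env.close()
```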
Where can I find documentation for Go9x9 (pachi_py)? To simulate the game, the only calls I have found are b = env.state.board.clone() and b.play_inplace(coord, color). I'd like to access other information as well, such as the number of captures, the komi, the handicap, and whether the game has terminated.
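For reference, the clone-then-mutate pattern I am relying on for MCTS rollouts looks like the sketch below. StubBoard is a stand-in I wrote to show the intended semantics (why clone() matters before a rollout); it is not the actual pachi_py board class.

```python
import copy

class StubBoard:
    """Illustrative stand-in for pachi_py's board: clone() + play_inplace().
    Only tracks a move history; the real board enforces Go rules."""
    def __init__(self, size=9):
        self.size = size
        self.moves = []  # list of (coord, color) pairs

    def clone(self):
        # Independent copy, so rollouts cannot corrupt the real game state.
        return copy.deepcopy(self)

    def play_inplace(self, coord, color):
        self.moves.append((coord, color))

root = StubBoard()
root.play_inplace((2, 2), 'black')

# MCTS simulation: clone the node's board, then mutate only the clone.
rollout = root.clone()
rollout.play_inplace((6, 6), 'white')

print(len(root.moves), len(rollout.moves))  # prints: 1 2  (root untouched)
```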
Does anyone have any pointers?