Hi everyone! What do you think about using OpenAI to create a bot? Is it possible? What difficulties would I face?
Hi. When you say “bot” can you be more specific? Are you talking about “chatbots” or in-game AI players or …? Just want to better understand what you mean.
I mean an in-game AI player. I already have some experience developing a pixel-analysis bot, but it only uses simple if-based rules.
Discusses in-game bots. This might be a useful start for what you are looking for.
I hope this helps.
I was also wondering if it is possible to create a bot for an MMORPG for a specific task (player killing). The environment is fully 3D. Any ideas?
I have already finished the image-recognition part (EmguCV, C#). I get a lot of useful information by analysing the minimap: player X,Y coordinates, angle of view, red dots (monsters), white squares (players), etc. I have also nearly finished the skill-cooldown analysis, inventory state, HP/MP, equipment state and much more! But I have no idea how to use all this information to start training the agent… Nowadays VS2017 supports Python app development and I successfully ran the Cart-Pole example, but it is my only success in a field of failures.
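The minimap analysis described above (red dots as monsters, white squares as players) is done with EmguCV in C#; as a rough illustration of the same idea in Python, a simple color-threshold pass can extract dot coordinates from an RGB screen capture. The threshold values here are illustrative guesses, not the game's actual palette:

```python
import numpy as np

def find_red_dots(minimap, r_min=200, gb_max=80):
    """Return (row, col) positions of red-ish pixels on an RGB minimap array.

    r_min/gb_max are illustrative thresholds; tune them for the real game.
    """
    r, g, b = minimap[..., 0], minimap[..., 1], minimap[..., 2]
    mask = (r >= r_min) & (g <= gb_max) & (b <= gb_max)
    return [tuple(int(v) for v in p) for p in np.argwhere(mask)]

# Tiny synthetic minimap: one red "monster" pixel at row 2, column 3.
minimap = np.zeros((5, 5, 3), dtype=np.uint8)
minimap[2, 3] = (255, 0, 0)
print(find_red_dots(minimap))  # -> [(2, 3)]
```

The same thresholding pattern (with different color ranges) would pick out the white player squares; the resulting coordinate lists are exactly the kind of features that go into the state vector discussed below.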
Step 1: Determine your state, actions, and rewards. For example:
State vector: [Player X, Player Y, Angle of View, Monster1 X, Monster1 Y, Monster2 X, Monster2 Y, …]
Actions: [move up, move down, move right, move left, shoot arrow, use healing, …]
Rewards: [change in score]
Step 2: Decide on a reinforcement learning algorithm (e.g. TRPO, DQN, etc.).
Step 3: Use the chosen algorithm, e.g. DQN, to train the agent: https://github.com/openai/baselines/tree/master/baselines/deepq
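To make the steps above concrete, here is a minimal sketch of the state/action/reward loop. It replaces the DQN's neural network with a linear Q-function and uses a made-up reward (+1 whenever "shoot" is chosen) purely to show the plumbing; the state values, reward, and learning constants are all illustrative:

```python
import random
import numpy as np

random.seed(0)

N_STATE = 7  # e.g. [player_x, player_y, angle, m1_x, m1_y, m2_x, m2_y]
ACTIONS = ["up", "down", "left", "right", "shoot", "heal"]
W = np.zeros((len(ACTIONS), N_STATE))  # linear stand-in for the DQN network

def act(state, eps=0.1):
    """Epsilon-greedy action selection."""
    if random.random() < eps:
        return random.randrange(len(ACTIONS))     # explore
    return int(np.argmax(W @ state))              # exploit

def td_update(s, a, reward, s_next, lr=0.01, gamma=0.99):
    """One temporal-difference update of Q(s, a)."""
    target = reward + gamma * np.max(W @ s_next)
    W[a] += lr * (target - (W @ s)[a]) * s

# Toy loop: this fake "game" pays +1 whenever the agent shoots.
SHOOT = ACTIONS.index("shoot")
s = np.ones(N_STATE) / np.sqrt(N_STATE)           # dummy fixed state
for _ in range(3000):
    a = act(s, eps=0.2)
    td_update(s, a, reward=1.0 if a == SHOOT else 0.0, s_next=s)

print(ACTIONS[int(np.argmax(W @ s))])  # -> shoot
```

In a real setup the state would come from the game capture each frame, the reward from the change in score, and the linear Q-function would be replaced by the neural network that baselines' deepq trains.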
I hope this helps.
ad_xyz, great thanks for your advice! What do you think about teaching the agent to evaluate the environment independently?
State vector: image matrix (captured from the screen, original or filtered).
Action: output the environment state vector: [Player X, Player Y, Angle of View, Monster1 X, Monster1 Y, Monster2 X, Monster2 Y, …]
Rewards: [based on how closely the agent's estimated state matches the real state (from EmguCV)]
Eugen. I’m not quite sure I understand what you are trying to do with this bot. I think that the “independent evaluation” already occurs within the DQN algorithm (the neural network in particular). The bot/agent learns to take the action (for example, move bot right) based on evaluating the current state (image matrix) which will lead to the greatest expected reward. The state is determined by EmguCV and is the only state which is evaluated by the bot. During game play, the input to the bot will be the EmguCV processed image and the output will be the bot action (move left, move right, etc.). At least, this is how I understood your original bot goal.
Hope this makes sense and is helpful.
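The play-time loop described above (EmguCV-processed image in, action out) can be sketched as follows; both helper functions are hypothetical stubs standing in for the real capture pipeline and the trained network, and the numbers are arbitrary:

```python
import numpy as np

ACTIONS = ["move_left", "move_right", "shoot", "heal"]

def get_state():
    """Stub for the EmguCV pipeline: in the real bot this would return
    the processed screen-capture feature vector for the current frame."""
    return np.array([0.5, 0.2, 0.9, 0.1])

def q_values(state):
    """Stub for the trained DQN forward pass (identity weights here,
    purely for illustration)."""
    W = np.eye(len(ACTIONS))
    return W @ state

def step():
    state = get_state()                              # observe
    return ACTIONS[int(np.argmax(q_values(state)))]  # act greedily

print(step())  # with these stub numbers -> shoot
```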
If you have a continuous action space, I would recommend using A3C. DQN only works for discrete action spaces.
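The distinction matters because DQN acts by taking an argmax over a finite list of Q-values, which has no analogue when actions are real-valued; actor-critic methods like A3C instead have the policy output distribution parameters and sample the action. A tiny illustration (all numbers made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete action space (DQN-friendly): one Q-value per action,
# act by taking the argmax over the finite set.
q = np.array([0.1, 0.7, 0.3])
discrete_action = int(np.argmax(q))  # -> 1

# Continuous action space (A3C/actor-critic territory): the policy head
# outputs distribution parameters (here a Gaussian mean and std for, say,
# an aim angle) and the action is a sampled real number -- nothing to argmax.
mu, sigma = 0.8, 0.1                 # illustrative policy outputs
continuous_action = rng.normal(mu, sigma)
```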
maxmax1992, you got it right! I had already thought about using several agents, each for a specific task. I was thinking of a main DQN agent whose actions, instead of game actions, choose from a set of sub-agents (choose pvpAgent for open-world PvP, huntAgent to continue grinding, or interfaceAgent to use the inventory, stores, auction, etc.). Your idea of using A3C looks reasonable and convincing. I will definitely consider this approach when developing my bot. Many thanks!
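The hierarchical idea above (a top-level agent whose actions are the sub-agents) can be sketched like this; the sub-agent names follow the post, but every function here is a hypothetical stub, with a hard-coded lookup standing in for the trained meta-policy:

```python
# Each sub-agent maps a game state to a low-level action (stubs).
def pvp_agent(state):       return "attack_player"
def hunt_agent(state):      return "attack_monster"
def interface_agent(state): return "open_inventory"

SUB_AGENTS = {"pvp": pvp_agent, "hunt": hunt_agent, "interface": interface_agent}

def meta_policy(state):
    """Stand-in for the top-level DQN: its 'actions' are sub-agent names.
    A trained network would replace this lookup table."""
    return {"enemy_player_near": "pvp",
            "monsters_near": "hunt"}.get(state.get("situation"), "interface")

def step(state):
    choice = meta_policy(state)           # meta-agent picks a sub-agent
    return choice, SUB_AGENTS[choice](state)  # sub-agent picks the action

print(step({"situation": "monsters_near"}))  # -> ('hunt', 'attack_monster')
```

One design consequence: the meta-agent's action space stays small and discrete (three choices here) even though each sub-agent can have a rich action space of its own, which is exactly what makes the top level DQN-friendly.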