How to create a new gym environment in OpenAI?
Problem background:
I have an assignment to make an AI Agent that will learn to play a video game using ML. I want to create a new environment using OpenAI Gym because I don't want to use an existing environment. How can I create a new, custom Environment?
Also, is there any other way I can start developing an AI agent to play a specific video game without the help of OpenAI Gym?
Solution:
See my banana-gym for an extremely small environment.
Create new environments
See the main page of the repository:
https://github.com/openai/gym/blob/master/docs/creating_environments.md
The steps are:
- Create a new repository with a PIP-package structure
It should look like this:
gym-foo/
  README.md
  setup.py
  gym_foo/
    __init__.py
    envs/
      __init__.py
      foo_env.py
      foo_extrahard_env.py
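As a rough sketch (the package name, version, and dependency list below are just plausible values, not prescribed by the answer), setup.py can be as small as:

    from setuptools import setup

    setup(
        name='gym_foo',
        version='0.0.1',
        install_requires=['gym'],  # add any other dependencies the environment needs
    )

Installing the package in editable mode (pip install -e . from the gym-foo directory) then makes import gym_foo work from anywhere.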
For the contents of the remaining files, follow the link above. One thing that is not spelled out there is what some of the functions in foo_env.py should look like; studying existing environments and gym.openai.com/docs/ helps. Here is an example:
import gym


class FooEnv(gym.Env):
    metadata = {'render.modes': ['human']}

    def __init__(self):
        pass

    def _step(self, action):
        """
        Parameters
        ----------
        action :

        Returns
        -------
        ob, reward, episode_over, info : tuple
            ob (object) :
                an environment-specific object representing your observation of
                the environment.
            reward (float) :
                amount of reward achieved by the previous action. The scale
                varies between environments, but the goal is always to increase
                your total reward.
            episode_over (bool) :
                whether it's time to reset the environment again. Most (but not
                all) tasks are divided up into well-defined episodes, and done
                being True indicates the episode has terminated. (For example,
                perhaps the pole tipped too far, or you lost your last life.)
            info (dict) :
                diagnostic information useful for debugging. It can sometimes
                be useful for learning (for example, it might contain the raw
                probabilities behind the environment's last state change).
                However, official evaluations of your agent are not allowed to
                use this for learning.
        """
        self._take_action(action)
        # self.env, hfo_py, FOOBAR and ABC are placeholders for your own game backend
        self.status = self.env.step()
        reward = self._get_reward()
        ob = self.env.getState()
        episode_over = self.status != hfo_py.IN_GAME
        return ob, reward, episode_over, {}

    def _reset(self):
        pass

    def _render(self, mode='human', close=False):
        pass

    def _take_action(self, action):
        pass

    def _get_reward(self):
        """ Reward is given for XY. """
        if self.status == FOOBAR:
            return 1
        elif self.status == ABC:
            return self.somestate ** 2
        else:
            return 0
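Before gym.make can find the new environment, it also has to be registered. A minimal sketch of the two __init__.py files, assuming the ID 'MyEnv-v0' and the class names FooEnv and FooExtraHardEnv (the extra-hard class name is an assumption for illustration):

    # gym_foo/__init__.py
    from gym.envs.registration import register

    register(
        id='MyEnv-v0',
        entry_point='gym_foo.envs:FooEnv',
    )

    # gym_foo/envs/__init__.py
    from gym_foo.envs.foo_env import FooEnv
    from gym_foo.envs.foo_extrahard_env import FooExtraHardEnv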
Use your environment
import gym
import gym_foo
env = gym.make('MyEnv-v0')
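Continuing from the snippet above, you can then drive the environment with the usual reset/step loop; a minimal sketch, assuming the environment defines an action_space as standard Gym environments do:

    ob = env.reset()
    for _ in range(1000):
        action = env.action_space.sample()  # random agent as a stand-in for your learner
        ob, reward, episode_over, info = env.step(action)
        if episode_over:
            ob = env.reset()
    env.close()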
Examples
- GitHub - openai/gym-soccer
- GitHub - openai/gym-wikinav: Wikipedia navigation environment for OpenAI Gym
- GitHub - alibaba/gym-starcraft: StarCraft environment for OpenAI Gym, based on Facebook's TorchCraft. (In progress)
- GitHub - endgameinc/gym-malware
- GitHub - hackthemarket/gym-trading: Environment for reinforcement-learning algorithmic trading models
- GitHub - tambetm/gym-minecraft: Minecraft environment for OpenAI Gym, based on Microsoft's Malmo.
- GitHub - ppaquette/gym-doom: Gym - Doom environments based on VizDoom.
- GitHub - ppaquette/gym-super-mario: Gym - 32 levels of original Super Mario Bros
- GitHub - MattChanTK/gym-maze: A basic 2D maze environment where an agent starts from the top left corner and tries to find its way to the bottom left corner.

