How to use a custom OpenAI Gym environment with OpenAI stable-baselines RL algorithms?

Problem description

I have been trying to use a custom OpenAI Gym environment for a fixed-wing UAV from https://github.com/evindeb/fixed-wing-gym, testing it with the OpenAI stable-baselines algorithms, but I have been running into issues for a few days now. My starting point is the CartPole multiprocessing example, "Multiprocessing: Unleashing the Power of Vectorized Environments", from https://stable-baselines.readthedocs.io/en/master/guide/examples.html#multiprocessing-unleashing-the-power-of-vectorized-environments. Since I need to pass arguments to the environment and I am trying to use multiprocessing, I believe this example is what I need.

I have modified the baseline example as follows:

import gym
import numpy as np

from stable_baselines.common.policies import MlpPolicy
from stable_baselines.common.vec_env import SubprocVecEnv
from stable_baselines.common import set_global_seeds
from stable_baselines import ACKTR, PPO2
from gym_fixed_wing.fixed_wing import FixedWingAircraft


def make_env(env_id, rank, seed=0):
    """
    Utility function for multiprocessed env.

    :param env_id: (str) the environment ID
    :param num_env: (int) the number of environments you wish to have in subprocesses
    :param seed: (int) the inital seed for RNG
    :param rank: (int) index of the subprocess
    """

    def _init():
        env = FixedWingAircraft("fixed_wing_config.json")
        #env = gym.make(env_id)
        env.seed(seed + rank)
        return env

    set_global_seeds(seed)
    return _init

if __name__ == '__main__':
    env_id = "fixed_wing"
    #env_id = "CartPole-v1"
    num_cpu = 4  # Number of processes to use
    # Create the vectorized environment
    env = SubprocVecEnv([lambda: FixedWingAircraft for i in range(num_cpu)])
    #env = SubprocVecEnv([make_env(env_id, i) for i in range(num_cpu)])

    model = PPO2(MlpPolicy, env, verbose=1)
    model.learn(total_timesteps=25000)

    obs = env.reset()
    for _ in range(1000):
        action, _states = model.predict(obs)
        obs, rewards, dones, info = env.step(action)
        env.render()

The error I keep getting is the following:

Traceback (most recent call last):
  File "/home/bonie/PycharmProjects/deepRL_fixedwing/fixed-wing-gym/gym_fixed_wing/ACKTR_fixedwing.py", line 38, in <module>
    model = PPO2(MlpPolicy, env, verbose=1)
  File "/home/bonie/PycharmProjects/deepRL_fixedwing/stable-baselines/stable_baselines/ppo2/ppo2.py", line 104, in __init__
    self.setup_model()
  File "/home/bonie/PycharmProjects/deepRL_fixedwing/stable-baselines/stable_baselines/ppo2/ppo2.py", line 134, in setup_model
    n_batch_step, reuse=False, **self.policy_kwargs)
  File "/home/bonie/PycharmProjects/deepRL_fixedwing/stable-baselines/stable_baselines/common/policies.py", line 660, in __init__
    feature_extraction="mlp", **_kwargs)
  File "/home/bonie/PycharmProjects/deepRL_fixedwing/stable-baselines/stable_baselines/common/policies.py", line 540, in __init__
    scale=(feature_extraction == "cnn"))
  File "/home/bonie/PycharmProjects/deepRL_fixedwing/stable-baselines/stable_baselines/common/policies.py", line 221, in __init__
    scale=scale)
  File "/home/bonie/PycharmProjects/deepRL_fixedwing/stable-baselines/stable_baselines/common/policies.py", line 117, in __init__
    self._obs_ph, self._processed_obs = observation_input(ob_space, n_batch, scale=scale)
  File "/home/bonie/PycharmProjects/deepRL_fixedwing/stable-baselines/stable_baselines/common/input.py", line 51, in observation_input
    type(ob_space).__name__))
NotImplementedError: Error: the model does not support input space of type NoneType

I am not sure what to actually pass as env_id to the def make_env(env_id, rank, seed=0) function. I also think the VecEnv is not set up correctly for the parallel processes.

I am coding in Python v3.6 with the PyCharm IDE on Ubuntu 18.04.

At this point, any suggestions would help!

Thank you.

Tags: python, reinforcement-learning, agent, openai-gym, virtual-environment

Solution


You created a custom environment, but you did not register it with the OpenAI gym interface. That is what env_id refers to: gym can instantiate any registered environment by its registered name.
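As a minimal sketch of what registration looks like (the ID, entry-point path, and constructor keyword below are assumptions based on the imports in your script, not taken from the repository):

# gym_fixed_wing/__init__.py -- hypothetical registration sketch
from gym.envs.registration import register

# Register the environment under an ID that gym.make() can resolve.
# gym requires the "Name-vN" format for environment IDs.
register(
    id="FixedWingAircraft-v0",
    entry_point="gym_fixed_wing.fixed_wing:FixedWingAircraft",
    # kwargs are forwarded to the constructor; the parameter name is assumed
    kwargs={"config_path": "fixed_wing_config.json"},
)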

So basically, what you need to do is follow the setup instructions here: create the appropriate __init__.py and setup.py scripts, and follow the same file structure.
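The layout would roughly look like this (the directory names are illustrative, not copied from the repository), with a minimal setup.py so pip can find the package:

fixed-wing-gym/
├── setup.py
└── gym_fixed_wing/
    ├── __init__.py        # contains the register(...) call sketched above
    └── fixed_wing.py      # defines FixedWingAircraft

# setup.py -- minimal sketch
from setuptools import setup, find_packages

setup(
    name="gym_fixed_wing",
    version="0.0.1",
    packages=find_packages(),
    install_requires=["gym"],
)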

Finally, install your package locally with pip install -e . from your environment's directory.
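After that, your multiprocessing script can build each subprocess environment through gym.make() instead of passing the class object itself. A sketch, assuming the "FixedWingAircraft-v0" ID registered above:

def make_env(env_id, rank, seed=0):
    def _init():
        # gym.make() can now resolve the ID because the package registered it
        env = gym.make(env_id)
        env.seed(seed + rank)
        return env
    set_global_seeds(seed)
    return _init

if __name__ == '__main__':
    env_id = "FixedWingAircraft-v0"  # assumed registered ID from the sketch above
    num_cpu = 4
    # Call make_env once per worker. Note that your original
    # lambda returned the FixedWingAircraft class itself, never an instance,
    # which is likely why the policy saw an observation space of NoneType.
    env = SubprocVecEnv([make_env(env_id, i) for i in range(num_cpu)])
    model = PPO2(MlpPolicy, env, verbose=1)
    model.learn(total_timesteps=25000)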

