How do I get my np.split to result in an equal division?

Problem description

I am practicing to improve my game-development skills by working through a reinforcement learning tutorial, but I ran into a problem getting np.split to execute correctly (i.e., it does not result in an equal division). The code below shows the rewards definition section, while the environment's parameters are defined in another py file. The following line then causes the problem shown in the traceback info section.

rewards_per_thousand_episodes = np.split(np.array(rewards_all_episodes), num_episodes/1000)

I have tried to find a way to solve the problem, but in vain. Any suggestions for a solution would be highly appreciated... ;o)
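For context, a minimal sketch (not from the original post) of the rule np.split enforces, next to the more forgiving np.array_split:

import numpy as np

a = np.arange(10)
print(np.split(a, 5))        # OK: 10 elements into 5 equal chunks of 2
# np.split(a, 3)             # raises ValueError: array split does not
                             #   result in an equal division (10 % 3 != 0)
print(np.array_split(a, 3))  # allowed: unequal chunks of sizes 4, 3, 3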

Environment definition:

import numpy as np
import gym
import random
import time
from IPython.display import clear_output

env = gym.make("FrozenLake-v1")
env.reset()

# Construct Q-table, and initialize all the Q-values to zero for each state-action pair

action_space_size = env.action_space.n
state_space_size = env.observation_space.n

q_table = np.zeros((state_space_size, action_space_size))

print("\nq_table")
print(q_table)

# Initializing Q-Learning Parameters

num_episodes = 10000 # total number of episodes the agent is to play during training
max_steps_per_episode = 100 # maximum number of steps that agent is allowed to take within a single episode

learning_rate = 0.1
discount_rate = 0.99

exploration_rate = 1
max_exploration_rate = 1 # upper bound on how large the exploration rate can be
min_exploration_rate = 0.01 # lower bound on how small the exploration rate can be

# rate at which the exploration_rate will decay
# decay rate changed to 0.001 due to inconsistencies in results with a larger rate - https://youtu.be/HGeI30uATws?t=57
exploration_decay_rate = 0.001
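
As a quick sanity check on these parameters, the exponential decay schedule used in the training loop below behaves as follows (decayed_rate is a hypothetical helper written for illustration, not part of the tutorial):

import numpy as np

# exploration_rate after a given episode, using the same schedule
# as the training loop: min + (max - min) * exp(-decay * episode)
def decayed_rate(episode, min_rate=0.01, max_rate=1.0, decay=0.001):
    return min_rate + (max_rate - min_rate) * np.exp(-decay * episode)

print(decayed_rate(0))      # 1.0   (fully exploratory at the start)
print(decayed_rate(1000))   # ~0.374
print(decayed_rate(10000))  # ~0.010 (near the minimum after training)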

Rewards definition:

import numpy as np
import random
from .... import \
    num_episodes, env, max_steps_per_episode, q_table, learning_rate, exploration_rate, discount_rate, \
    min_exploration_rate, max_exploration_rate, exploration_decay_rate

rewards_all_episodes = []

# Q-learning algorithm
for episode in range(num_episodes):
    state = env.reset()

    done = False
    rewards_current_episode = 0

    for step in range(max_steps_per_episode):

        # Exploration/exploitation trade-off
        exploration_rate_threshold = random.uniform(0,1)
        if exploration_rate_threshold > exploration_rate:
            action = np.argmax(q_table[state,:])
        else:
            action = env.action_space.sample()

        new_state, reward, done, info = env.step(action)

        #Update Q-table for Q(s,a)
        q_table[state, action] = \
            q_table[state, action] * (1 - learning_rate) + learning_rate * (reward + discount_rate *
                                                                            np.max(q_table[new_state, :]))

        state = new_state
        rewards_current_episode += reward

        if done:
            break

        # Exploration rate decay - exploration rate update - https://youtu.be/HGeI30uATws?t=298
        exploration_rate = min_exploration_rate + (max_exploration_rate - min_exploration_rate) * np.exp(
            -exploration_decay_rate * episode)

        rewards_all_episodes.append(rewards_current_episode)

rewards_per_thousand_episodes = np.split(np.array(rewards_all_episodes), num_episodes/1000)
count = 1000
print("\n********Average reward per thousand episodes********")
for r in rewards_per_thousand_episodes:
    print(count, ": ", str(sum(r/1000)))
    count += 1000

print("\n********Updated Q-table********")
print(q_table)

Traceback info:

Traceback (most recent call last):
  File "C:\Users\jcst\PycharmProjects\...\_test_Q_learning_and_Gym_run.py", line 48, in <module>
    rewards_per_thousand_episodes = np.split(np.array(rewards_all_episodes), num_episodes/1000)
  File "<__array_function__ internals>", line 5, in split
  File "C:\Users\jcst\PycharmProjects\Python_3_9_test\venv\lib\site-packages\numpy\lib\shape_base.py", line 872, in split
    raise ValueError(
ValueError: array split does not result in an equal division

Tags: python-3.x, numpy, split, reinforcement-learning, openai-gym

Solution

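np.split raises this ValueError whenever the array's length is not evenly divisible by the number of sections, so the real question is why len(rewards_all_episodes) is not a multiple of num_episodes/1000. The cause is indentation in the rewards code: the exploration-rate decay and rewards_all_episodes.append(rewards_current_episode) sit inside the inner step loop, so a running total is appended once per step rather than once per episode (and, because they come after the break, they are skipped entirely on the terminating step). The list therefore does not contain 10000 entries, and splitting it into 10 equal parts fails.

A sketch of the corrected loop structure, assuming the same variables as in the question (only the placement of the last two statements changes):

for episode in range(num_episodes):
    state = env.reset()
    done = False
    rewards_current_episode = 0

    for step in range(max_steps_per_episode):
        # Exploration/exploitation trade-off
        exploration_rate_threshold = random.uniform(0, 1)
        if exploration_rate_threshold > exploration_rate:
            action = np.argmax(q_table[state, :])
        else:
            action = env.action_space.sample()

        new_state, reward, done, info = env.step(action)

        # Update Q-table for Q(s,a)
        q_table[state, action] = \
            q_table[state, action] * (1 - learning_rate) + \
            learning_rate * (reward + discount_rate * np.max(q_table[new_state, :]))

        state = new_state
        rewards_current_episode += reward

        if done:
            break

    # Per-episode bookkeeping: outside the step loop
    exploration_rate = min_exploration_rate + \
        (max_exploration_rate - min_exploration_rate) * np.exp(-exploration_decay_rate * episode)
    rewards_all_episodes.append(rewards_current_episode)

With exactly one append per episode, len(rewards_all_episodes) == num_episodes == 10000, which divides cleanly into 10 sections, and np.split succeeds. Passing an integer section count (num_episodes // 1000) is slightly cleaner than the float num_episodes/1000, and if unequal chunks were ever acceptable, np.array_split would not raise at all; here, though, moving the append is the actual fix.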
