Pytorch Double DQN not working properly

Problem description

I am trying to build a double DQN network for cartpole-v0, but the network does not seem to work as expected and plateaus at a reward of around 8-9. What exactly am I doing wrong?

Each step of the learning phase:

def make_step(model, target_model, optimizer, criterion, observation, action, reward, next_observation):
    inp_obv = torch.Tensor(observation)
    q = model(inp_obv)
    q_argmax = torch.argmax(q.data)
    q = q[action]

    inp_next_obv = torch.Tensor(next_observation)
    q_next = target_model(inp_next_obv)
    q_a_next = q_next[q_argmax]

    #LHS of the double DQN equation
    obv_reward = q

    #RHS of the double DQN equation
    target_reward = torch.Tensor([reward]) + GAMMA*q_a_next.detach()

    #Backprop
    loss = criterion(obv_reward, target_reward) #MSELoss
    loss.backward()

The code that wraps make_step:

optimizer.zero_grad() #RMSprop on net
if e%2 == 0:
    target_net.load_state_dict(net.state_dict())
for i in range(len(data)):
    observation, action, reward, next_observation = data[i]
    make_step(net, target_net, optimizer, criterion, observation, action, reward, next_observation)

GAMMA *= GAMMA
optimizer.step()

What exactly am I doing wrong? Thank you.

Tags: python, pytorch, reinforcement-learning

Solution


Updating the target network less frequently solves the problem. Copying the online network's weights into the target network every 2 iterations means the bootstrap target keeps chasing the online network and never stabilizes; syncing it only every 100 iterations gives the online network a fixed target to regress towards.

optimizer.zero_grad() #RMSprop on net
if e % 100 == 0: #copy the online weights into the target network less often (every 100 instead of every 2)
    target_net.load_state_dict(net.state_dict())
for i in range(len(data)):
    observation, action, reward, next_observation = data[i]
    make_step(net, target_net, optimizer, criterion, observation, action, reward, next_observation)

GAMMA *= GAMMA
optimizer.step()
