Dropping an entire input layer

Problem description

Suppose I have two inputs (each with many features) that I want to feed into a Dropout layer. On each iteration I want to drop out one entire input, together with all of its associated features, and keep the other input whole.

After concatenating the inputs, I think I need to use the noise_shape argument of Dropout, but the shape of the concatenated layer doesn't really let me do that. For two inputs of shape (15,), the concatenated shape is (None, 30) rather than (None, 15, 2), so one of the axes is lost and I can't drop out along it.

Any suggestions on what I could do? Thanks.

from keras.layers import Input, concatenate, Dense, Dropout

x = Input((15,))  # 15 features for the 1st input
y = Input((15,))  # 15 features for the 2nd input
xy = concatenate([x, y])
print(xy._keras_shape)
# (None, 30)

layer = Dropout(rate=0.5, noise_shape=[xy.shape[0], 1])(xy)
...

Tags: python, keras, neural-network

Solution


Edit:

It looks like I misunderstood your question, so here is an updated answer based on what you asked for.

To achieve what you want, x and y effectively become time steps. According to the Keras documentation, if your input has shape (batch_size, timesteps, features), you should use noise_shape=(batch_size, 1, features); here that is noise_shape=(batch_size, 1, 2), so the same dropout mask is broadcast over all 15 time steps and each of the two inputs is dropped or kept as a whole.

x = Input((15,1))  # 15 features for the 1st input
y = Input((15,1))  # 15 features for the 2nd input
xy = concatenate([x, y])

dropout_layer = Dropout(rate=0.5, noise_shape=[None, 1, 2])(xy)
...
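
As a side note, this noise shape is the same one that Keras's built-in SpatialDropout1D layer uses internally (it drops entire channels of a (batch, timesteps, channels) tensor), so the sketch below should behave the same way for this case; treat the equivalence as an assumption and verify it with the check that follows:

from keras.layers import Input, concatenate, SpatialDropout1D

x = Input((15, 1))  # 15 features for the 1st input
y = Input((15, 1))  # 15 features for the 2nd input
xy = concatenate([x, y])  # shape (None, 15, 2)

# SpatialDropout1D drops whole channels, i.e. all 15 values of x or of y together
dropout_layer = SpatialDropout1D(rate=0.5)(xy)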

To check that you get the correct behavior, you can inspect the intermediate xy layer and the dropout_layer with the following code (reference link):

### Define your model ###

from keras.layers import Input, concatenate, Dropout
from keras.models import Model
from keras import backend as K

# Learning phase must be set to 1 for dropout to work
K.set_learning_phase(1)

x = Input((15,1))  # 15 features for the 1st input
y = Input((15,1))  # 15 features for the 2nd input
xy = concatenate([x, y])

dropout_layer = Dropout(rate=0.5, noise_shape=[None, 1, 2])(xy)

model = Model(inputs=[x, y], outputs=dropout_layer)

# specify inputs and output of the model

x_inp = model.input[0]
y_inp = model.input[1]
outp = [layer.output for layer in model.layers[2:]]  # concatenate and dropout outputs
functor = K.function([x_inp, y_inp], outp)

### Get some random inputs ###

import numpy as np

input_1 = np.random.random((1,15,1))
input_2 = np.random.random((1,15,1))

layer_outs = functor([input_1,input_2])
print('Intermediate xy layer:\n\n',layer_outs[0])
print('Dropout layer:\n\n', layer_outs[1])

You should see that either all of x or all of y is dropped at random (with a 50% chance each), as you asked:

Intermediate xy layer:

 [[[0.32093528 0.70682645]
  [0.46162075 0.74063486]
  [0.522718   0.22318116]
  [0.7897043  0.7849486 ]
  [0.49387926 0.13929296]
  [0.5754296  0.6273373 ]
  [0.17157765 0.92996144]
  [0.36210892 0.02305864]
  [0.52637625 0.88259524]
  [0.3184462  0.00197006]
  [0.67196816 0.40147918]
  [0.24782693 0.5766827 ]
  [0.25653633 0.00514544]
  [0.8130438  0.2764429 ]
  [0.25275478 0.44348967]]]

Dropout layer:

 [[[0.         1.4136529 ]
  [0.         1.4812697 ]
  [0.         0.44636232]
  [0.         1.5698972 ]
  [0.         0.2785859 ]
  [0.         1.2546746 ]
  [0.         1.8599229 ]
  [0.         0.04611728]
  [0.         1.7651905 ]
  [0.         0.00394012]
  [0.         0.80295837]
  [0.         1.1533654 ]
  [0.         0.01029088]
  [0.         0.5528858 ]
  [0.         0.88697934]]]

If you are wondering why all of the surviving elements are multiplied by 2, take a look at how TensorFlow implements dropout: during training it uses inverted dropout, scaling the values that are kept by 1 / (1 - rate), which is 2 for rate=0.5, so that the expected value of each element stays unchanged.
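
Here is a minimal NumPy sketch of that inverted-dropout scaling, assuming a (15, 2) array like the xy tensor above (the variable names are illustrative, not taken from the Keras or TensorFlow source):

import numpy as np

rate = 0.5
xy = np.random.random((15, 2))

# One keep/drop decision per column, broadcast over all 15 rows,
# mirroring noise_shape=(1, 2) relative to the (15, 2) input.
mask = (np.random.random((1, 2)) >= rate).astype(xy.dtype)

# Inverted dropout: kept values are scaled by 1 / (1 - rate),
# i.e. by 2 for rate=0.5, so each element's expected value is unchanged.
dropped = xy * mask / (1.0 - rate)
print(dropped)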

Hope this helps.

