Stateful RNN has the wrong tensor shape in a Keras functional model

Problem description

I defined a Keras functional model containing a stateful LSTM block, as follows:

import numpy as np
from tensorflow.python import keras


data = np.ones((1,2,3))

input_shape = data.shape  # batch size, step size, input size
output_units = 2

### input block ###
inputs = keras.layers.Input(batch_shape=input_shape, name="inputs")


### model block with stateful RNN ###
dummy_inputs_1 = keras.layers.Input(batch_shape=input_shape, name="dummy_inputs_1")
recurrent_1 = keras.layers.LSTM(units=input_shape[-1], batch_input_shape=input_shape,
                                return_sequences=True, stateful=True,
                                name="recurrent_1")(dummy_inputs_1)

dense_1 = keras.layers.Dense(output_units, batch_input_shape=(
    input_shape[0], input_shape[-1], input_shape[1]),
                             name="dense_1")
output_1 = keras.layers.TimeDistributed(dense_1, input_shape=input_shape, name="output_1")(recurrent_1)

model_1 = keras.models.Model(inputs=[dummy_inputs_1], outputs=[output_1], name="model_1")
model_1.compile(loss='mean_squared_error',
                optimizer='Nadam',
                metrics=['accuracy'])

model_1.predict(data) #works

### add model block to model ###
model_block = model_1(inputs)
model = keras.models.Model(inputs=[inputs], outputs=[model_block], name="model")
model.compile(loss='mean_squared_error',
              optimizer='Nadam',
              metrics=['accuracy'])

model_1.predict(data) #works

model.predict(data)  #fails

As written, the first predict() call (on the inner model block containing the stateful LSTM layer) works fine, but the second call fails with the following error:

 Traceback (most recent call last):
  File ".../functional_stateful.py", line 38, in <module>
    model_1.predict(data)
  File ".../local/lib/python2.7/site-packages/tensorflow/python/keras/engine/training.py", line 1478, in predict
    self, x, batch_size=batch_size, verbose=verbose, steps=steps)
  File ".../local/lib/python2.7/site-packages/tensorflow/python/keras/engine/training_arrays.py", line 363, in predict_loop
    batch_outs = f(ins_batch)
  File ".../local/lib/python2.7/site-packages/tensorflow/python/keras/backend.py", line 2897, in __call__
    fetched = self._callable_fn(*array_vals)
  File ".../local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1454, in __call__
    self._session._session, self._handle, args, status, None)
  File ".../local/lib/python2.7/site-packages/tensorflow/python/framework/errors_impl.py", line 519, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'inputs' with dtype float and shape [1,2,3]
     [[Node: inputs = Placeholder[dtype=DT_FLOAT, shape=[1,2,3], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

With stateful=True commented out of the LSTM definition, the whole thing runs fine. Does anyone know what is going on?

Edit: Apparently just calling the stateful model block on another layer is enough to make a later predict() on that block fail (i.e., this code fails with the same error):

import numpy as np
from tensorflow.python import keras

data = np.ones((1,2,3))

input_shape = data.shape  # batch size, step size, input size
output_units = 2

### input block ###
inputs = keras.layers.Input(batch_shape=input_shape, name="inputs")


### sample model block with stateful RNN ###
dummy_inputs_1 = keras.layers.Input(batch_shape=input_shape, name="dummy_inputs_1")
recurrent_1 = keras.layers.LSTM(units=input_shape[-1], batch_input_shape=input_shape,
                                return_sequences=True, stateful=True,
                                name="recurrent_1")(dummy_inputs_1)

model_1 = keras.models.Model(inputs=[dummy_inputs_1], outputs=[recurrent_1], name="model_1")
model_1.compile(loss='mean_squared_error',
                optimizer='Nadam',
                metrics=['accuracy'])

# ### add model block to model ###
model_block = model_1(inputs)

model_1.predict(data) #fails 

Edit 2: But apparently adding a call to predict() on the stateful block before calling it on another tensor lets you keep using it afterwards (i.e., the following runs fine):

import numpy as np
from tensorflow.python import keras

data = np.ones((1,2,3))

input_shape = data.shape  # batch size, step size, input size
output_units = 2

### input block ###
inputs = keras.layers.Input(batch_shape=input_shape, name="inputs")


### sample model block with stateful RNN ###
dummy_inputs_1 = keras.layers.Input(batch_shape=input_shape, name="dummy_inputs_1")
recurrent_1 = keras.layers.LSTM(units=input_shape[-1], batch_input_shape=input_shape,
                                return_sequences=True, stateful=True,
                                name="recurrent_1")(dummy_inputs_1)

model_1 = keras.models.Model(inputs=[dummy_inputs_1], outputs=[recurrent_1], name="model_1")
model_1.compile(loss='mean_squared_error',
                optimizer='Nadam',
                metrics=['accuracy'])

model_1.predict(data) #works

# ### add model block to model ###
model_block = model_1(inputs)

model_1.predict(data) #works

Tags: python, tensorflow, keras

Solution


I suspect that stateful=True RNNs are not compatible with having more than one input.
(In your code you have both dummy_inputs_1 and inputs; Keras calls this "multiple inbound nodes" in many of its messages. Effectively, you have two parallel branches there: one built on the original dummy_inputs_1 and another on the new inputs.)

Why is that? A stateful=True layer is meant to receive "one sequence" (or a batch of many "parallel" sequences) that has been split into several groups of time steps.

When it receives batch 2, it interprets it as the continuation of batch 1 along the sequence's time dimension.
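
As an illustration of that contract, here is a minimal sketch with a stateful block like model_1 from the question (used on its own, before it is called on a second input); the two-step chunking and the chunk_1/chunk_2 names are just for the example:

# Illustration only: model_1 is the stateful block defined in the question,
# so every input chunk has batch_shape (1, 2, 3).
chunk_1 = np.ones((1, 2, 3))   # time steps 0-1 of one long sequence
chunk_2 = np.ones((1, 2, 3))   # time steps 2-3, read as the continuation of chunk_1

model_1.predict(chunk_1)       # the state after the last step stays inside the layer
model_1.predict(chunk_2)       # ...and is used as the initial state for this chunk

model_1.reset_states()         # clear the state before starting an unrelated sequence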

When you have two input tensors, how should the RNN decide which one continues what? You lose the consistency of "one continuous sequence". The layer has only "one states tensor", and it cannot keep track of parallel tensors with it.
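
You can inspect that single state directly; a minimal sketch, assuming model_1 from the question has already been built:

# A built stateful LSTM holds exactly one set of state variables
# (hidden state h and cell state c), each shaped (batch_size, units).
lstm = model_1.get_layer("recurrent_1")
print([state.shape for state in lstm.states])  # expected: two states of shape (1, 3)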

So if you are going to use a stateful RNN with more than one input, I recommend creating copies of the layer. If you want the copies to share the same weights, that will probably require a custom layer that takes a common set of weight tensors.
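
A minimal sketch of that suggestion, with one independent stateful copy per input branch (the inputs_a/inputs_b and recurrent_a/recurrent_b names are illustrative, and the copies here do not share weights):

# Hypothetical sketch: one stateful LSTM per input branch, so each branch
# keeps its own state tensor. The two layers do NOT share weights.
inputs_a = keras.layers.Input(batch_shape=input_shape, name="inputs_a")
inputs_b = keras.layers.Input(batch_shape=input_shape, name="inputs_b")

recurrent_a = keras.layers.LSTM(units=input_shape[-1], return_sequences=True,
                                stateful=True, name="recurrent_a")(inputs_a)
recurrent_b = keras.layers.LSTM(units=input_shape[-1], return_sequences=True,
                                stateful=True, name="recurrent_b")(inputs_b)

two_branch_model = keras.models.Model(inputs=[inputs_a, inputs_b],
                                      outputs=[recurrent_a, recurrent_b])
two_branch_model.predict([data, data])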

Now, if you only intend to use this block once, you should probably use model_1.input and model_1.output instead of feeding it another input tensor.
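
For example, a minimal sketch of that last suggestion, reusing the tensors model_1 already owns instead of calling it on a second Input (this assumes the first code block from the question has been run up to model_1.compile):

# Build the outer model directly on model_1's own input/output tensors, so the
# stateful LSTM keeps a single inbound node instead of getting a second one.
model = keras.models.Model(inputs=model_1.input, outputs=model_1.output, name="model")
model.compile(loss='mean_squared_error',
              optimizer='Nadam',
              metrics=['accuracy'])
model.predict(data)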

