python-3.x - TypeError: 'NoneType' object is not callable (TensorFlow)
Problem Description
I am currently using tf2.0. To prepare my dataset, I used the following code:
train = tf.data.Dataset.from_tensor_slices(([train_X], [train_y])).batch(BATCH_SIZE).repeat()
val = tf.data.Dataset.from_tensor_slices(([val_X], [val_y])).batch(BATCH_SIZE).repeat()
Now, if we look at their shapes:
<RepeatDataset shapes: ((None, 42315, 20), (None, 42315)), types: (tf.float64, tf.float64)>
<RepeatDataset shapes: ((None, 2228, 20), (None, 2228)), types: (tf.float64, tf.float64)>
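As a side note, the shape pattern above can be reproduced with small dummy arrays (the sizes below are placeholders, not the question's real data). Wrapping the arrays in a list, as in the snippet above, turns the entire array into a single dataset element, which is why the full sample count appears inside each element's shape:

```python
import numpy as np
import tensorflow as tf

BATCH_SIZE = 4

# dummy stand-ins for the question's arrays (sizes here are made up)
train_X = np.zeros((100, 20))
train_y = np.zeros((100,))

# wrapping in a list makes the whole array one dataset element, so the
# sample axis (100) lands inside each element rather than being sliced
train = tf.data.Dataset.from_tensor_slices(
    ([train_X], [train_y])).batch(BATCH_SIZE).repeat()

print(train.element_spec)
# shapes: ((None, 100, 20), (None, 100)) -- same pattern as above
```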
I believe this is correct. Now, if I run these through the model shown below, it seems to train and work just fine:
simple_lstm_model = tf.keras.models.Sequential([
    tf.keras.layers.LSTM(8),
    tf.keras.layers.Dense(1)
])
simple_lstm_model.compile(optimizer='adam', loss='mae')
history = simple_lstm_model.fit(train, epochs=EPOCHS,
                                steps_per_epoch=EVALUATION_INTERVAL,
                                validation_data=val, validation_steps=50)
However, when I make my model slightly more complex and try to compile it, it gives me the error that is the title of this question. Details of the error are at the very bottom of this question. The more complex model is shown below:
comp_lstm = tf.keras.models.Sequential([
    tf.keras.layers.LSTM(64),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1)
])
comp_lstm.compile(optimizer='adam', loss='mae')
history = comp_lstm.fit(train,
                        epochs=EPOCHS,
                        steps_per_epoch=EVALUATION_INTERVAL,
                        validation_data=val, validation_steps=50)
In fact, I wanted to try a bidirectional LSTM, but it seems that stacking multiple LSTMs by itself causes the problem described below.
Error
TypeError Traceback (most recent call last)
<ipython-input-21-8a86aab8a730> in <module>
2 EPOCHS = 20
3
----> 4 history = comp_lstm.fit(train,
5 epochs=EPOCHS,
6 steps_per_epoch=EVALUATION_INTERVAL,
~/python_envs/p2/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs)
64 def _method_wrapper(self, *args, **kwargs):
65 if not self._in_multi_worker_mode(): # pylint: disable=protected-access
---> 66 return method(self, *args, **kwargs)
67
68 # Running inside `run_distribute_coordinator` already.
~/python_envs/p2/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
846 batch_size=batch_size):
847 callbacks.on_train_batch_begin(step)
--> 848 tmp_logs = train_function(iterator)
849 # Catch OutOfRangeError for Datasets of unknown size.
850 # This blocks until the batch has finished executing.
~/python_envs/p2/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
578 xla_context.Exit()
579 else:
--> 580 result = self._call(*args, **kwds)
581
582 if tracing_count == self._get_tracing_count():
~/python_envs/p2/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
609 # In this case we have created variables on the first call, so we run the
610 # defunned version which is guaranteed to never create variables.
--> 611 return self._stateless_fn(*args, **kwds) # pylint: disable=not-callable
612 elif self._stateful_fn is not None:
613 # Release the lock early so that multiple threads can perform the call
TypeError: 'NoneType' object is not callable
Solution
The problem is that when you stack multiple LSTMs, we should use the argument return_sequences=True in the LSTM layers. This is because with return_sequences=False (the default behavior), the LSTM returns only the output of the last time step. But when we stack LSTMs, we need the output to be the complete sequence, not just the last time step.
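A quick way to see the difference in output shapes (a minimal sketch with made-up layer and input sizes):

```python
import tensorflow as tf

x = tf.zeros((2, 5, 3))  # (batch, time steps, features)

last_only = tf.keras.layers.LSTM(4)                         # return_sequences=False (default)
full_seq = tf.keras.layers.LSTM(4, return_sequences=True)   # returns every time step

print(last_only(x).shape)  # (2, 4): only the last time step
print(full_seq(x).shape)   # (2, 5, 4): the complete sequence, which the next LSTM can consume
```

A stacked LSTM expects 3-D input `(batch, time, features)`, which is why every LSTM except the last needs return_sequences=True.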
Changing your model to
comp_lstm = tf.keras.models.Sequential([
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1)
])
should fix the error.
This way, you can also use bidirectional LSTMs.
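For example, a stacked bidirectional version might look like this (a sketch reusing the question's layer sizes; the same return_sequences rule applies to the wrapped inner layers):

```python
import tensorflow as tf

bi_lstm = tf.keras.models.Sequential([
    tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(64, return_sequences=True)),  # feeds the full sequence forward
    tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(64)),                         # last LSTM: default return_sequences=False
    tf.keras.layers.Dense(1)
])
bi_lstm.compile(optimizer='adam', loss='mae')
```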
If you run into any other errors, let me know and I'd be happy to help.
Hope this helps. Happy learning!