python - Why does Keras model.fit() use the whole dataset as a single batch and run out of memory?
Problem Description
I am building a very simple Keras model on top of tensorflow. When I launch it, it fails with an OOM exception because it tries to allocate a tensor whose size is proportional to the entire dataset. What is going on here?
Relevant shapes:
- Dataset shape: [60000, 28, 28, 1]
- Batch_size (automatic): 10
- steps_per_epoch: 6000
- Error message: OOM when allocating tensor with shape[60000,256,28,28] and type float
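(For scale: a single float32 tensor of shape [60000, 256, 28, 28] takes 60000 × 256 × 28 × 28 × 4 bytes ≈ 48 GB, so one pass over the whole dataset at once cannot possibly fit on a GPU.)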
Note: I am not using a Sequential model because I will need non-sequential layers later.
TensorFlow: 1.12.0; Keras: 2.1.6-tf
Minimal working example:
from tensorflow.keras import layers
import tensorflow as tf
import tensorflow.keras as keras
import numpy as np
def build_mnist_model(input_img):
    conv1 = layers.Conv2D(256, (3, 3), activation='relu', padding='same')(input_img)
    conv2 = layers.Conv2D(1, (3, 3), activation='sigmoid', padding='same')(conv1)
    return conv2
(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()
x_train = np.expand_dims(x_train.astype('float32') / 255., -1)
x_test = np.expand_dims(x_test.astype('float32') / 255., -1)
print(x_train.shape)
print(x_test.shape)
input_img = keras.Input(shape = (28, 28, 1))
autoencoder = keras.Model(input_img, build_mnist_model(input_img))
autoencoder.compile(loss='mean_squared_error', optimizer = tf.train.AdamOptimizer(0.001))
autoencoder.fit(x_train, x_train,
                epochs=50,
                steps_per_epoch=int(int(x_train.shape[0])/10),
                shuffle=True,
                verbose=1,
                validation_data=(x_test, x_test)
                )
Here is the exception:
---------------------------------------------------------------------------
ResourceExhaustedError Traceback (most recent call last)
<ipython-input-40-be75898e307a> in <module>
24 shuffle=True,
25 verbose=1,
---> 26 validation_data=(x_test, x_test)
27 )
~/tf112/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, max_queue_size, workers, use_multiprocessing, **kwargs)
1637 initial_epoch=initial_epoch,
1638 steps_per_epoch=steps_per_epoch,
-> 1639 validation_steps=validation_steps)
1640
1641 def evaluate(self,
~/tf112/lib/python3.6/site-packages/tensorflow/python/keras/engine/training_arrays.py in fit_loop(model, inputs, targets, sample_weights, batch_size, epochs, verbose, callbacks, val_inputs, val_targets, val_sample_weights, shuffle, initial_epoch, steps_per_epoch, validation_steps)
152 callbacks.on_batch_begin(step_index, batch_logs)
153 try:
--> 154 outs = f(ins)
155 except errors.OutOfRangeError:
156 logging.warning('Your dataset iterator ran out of data; '
~/tf112/lib/python3.6/site-packages/tensorflow/python/keras/backend.py in __call__(self, inputs)
2984
2985 fetched = self._callable_fn(*array_vals,
-> 2986 run_metadata=self.run_metadata)
2987 self._call_fetch_callbacks(fetched[-len(self._fetches):])
2988 return fetched[:len(self.outputs)]
~/tf112/lib/python3.6/site-packages/tensorflow/python/client/session.py in __call__(self, *args, **kwargs)
1437 ret = tf_session.TF_SessionRunCallable(
1438 self._session._session, self._handle, args, status,
-> 1439 run_metadata_ptr)
1440 if run_metadata:
1441 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
~/tf112/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py in __exit__(self, type_arg, value_arg, traceback_arg)
526 None, None,
527 compat.as_text(c_api.TF_Message(self.status.status)),
--> 528 c_api.TF_GetCode(self.status.status))
529 # Delete the underlying status object from memory otherwise it stays alive
530 # as there is a reference to status from this from the traceback due to
ResourceExhaustedError: OOM when allocating tensor with shape[60000,256,28,28] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node conv2d_95/Conv2D}} = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](training_15/TFOptimizer/gradients/conv2d_95/Conv2D_grad/Conv2DBackpropFilter-0-TransposeNHWCToNCHW-LayoutOptimizer, conv2d_95/Conv2D/ReadVariableOp)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[{{node loss_24/mul/_1261}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_255_loss_24/mul", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
When I define the model with keras.Sequential() instead, the problem goes away.
Solution
To train in batches you should use the fit_generator method. For that you first need to build a data generator, for example an ImageDataGenerator followed by flow_from_directory (or, for in-memory arrays like MNIST here, flow). That way Keras feeds the data to the model batch by batch. Tune the batch size so that each batch fits in your GPU's memory; batch sizes around 32-64 are typical, and larger batches are generally preferable as long as they fit. The reason fit() currently tries to push all 60000 images through the network at once is that in tf.keras 1.12, passing steps_per_epoch together with plain NumPy arrays makes each step feed the entire array as one batch; supplying batch_size instead of steps_per_epoch, or feeding from a generator, restores normal batching.
Keras documentation: https://keras.io/preprocessing/image/
A usage example can be found here: https://www.kaggle.com/vbookshelf/skin-lesion-analyzer-tensorflow-js-web-app
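Below is a minimal sketch of the generator approach for this particular autoencoder, assuming the x_train, x_test arrays and the autoencoder model from the question; the batch size of 32, and using flow() rather than flow_from_directory (since the data is already in memory), are illustrative choices, not the only options:

import math
from tensorflow.keras.preprocessing.image import ImageDataGenerator

batch_size = 32  # tune this so one batch fits in GPU memory

# flow() wraps the in-memory arrays in an iterator of (input, target)
# batches; for an autoencoder the target is the input itself.
datagen = ImageDataGenerator()
train_gen = datagen.flow(x_train, x_train, batch_size=batch_size, shuffle=True)

autoencoder.fit_generator(train_gen,
                          steps_per_epoch=math.ceil(len(x_train) / batch_size),
                          epochs=50,
                          validation_data=(x_test, x_test),
                          verbose=1)

With this setup each training step only materializes activations of shape [32, 256, 28, 28] instead of [60000, 256, 28, 28], which is what keeps the memory usage manageable.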