Keras: cannot use ImageDataGenerator as validation data in model.fit

Problem description

I know that Sequential models can use a dataset iterator as validation data, as described in the documentation at https://keras.io/models/sequential/:

validation_data: Data on which to evaluate the loss and any model metrics at the end of each epoch. The model will not be trained on this data. validation_data will override validation_split. validation_data could be:
- a tuple (x_val, y_val) of Numpy arrays or tensors
- a tuple (x_val, y_val, val_sample_weights) of Numpy arrays
- a dataset or a dataset iterator
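For comparison, the first documented form is a plain tuple of arrays. A minimal hedged sketch, reusing the model, x_train, y_train, x_test and y_test defined in the code further down, would be:

# Sketch only: validation_data passed as an (x_val, y_val) tuple of Numpy
# arrays, the form that model.fit's validation_data length check expects
model.fit(x_train, y_train,
          batch_size=32,
          epochs=1,
          validation_data=(x_test, y_test))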

However, feeding a dataset iterator built from the full data to this argument raises an error in the library file keras/engine/training.py (lines 1158 to 1170), because the size of validation_data is checked there and this type of generator yields a whole list of batches. What am I missing?
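To illustrate what that size check sees, here is a minimal sketch assuming only the CIFAR-10 training arrays: the object returned by flow() has a length equal to its number of batches rather than 2 or 3, which matches the 1563 items reported in the traceback further down.

import math

from keras.datasets import cifar10
from keras.preprocessing.image import ImageDataGenerator

(x_train, y_train), _ = cifar10.load_data()
val_iter = ImageDataGenerator().flow(x_train, y_train, batch_size=32)

# len() of the flow() iterator is its batch count, not 2 or 3
print(len(val_iter))                 # 1563
print(math.ceil(len(x_train) / 32))  # 1563 == ceil(50000 / 32)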

Minimalistic code taken from https://keras.io/preprocessing/image/ and https://keras.io/examples/cifar10_cnn/, skipping over the model definition and focusing on the model.fit-type lines:

import keras
from keras.datasets import cifar10
from keras.utils import np_utils
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D


def def_model(input_shape, n_classes):
    """Stolen from [here](https://keras.io/examples/cifar10_cnn/)."""
    model = Sequential()
    model.add(Conv2D(32, (3, 3), padding='same', input_shape=input_shape[1:]))
    model.add(Activation('relu'))
    model.add(Conv2D(32, (3, 3)))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.25))

    model.add(Conv2D(64, (3, 3), padding='same'))
    model.add(Activation('relu'))
    model.add(Conv2D(64, (3, 3)))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.25))

    model.add(Flatten())
    model.add(Dense(512))
    model.add(Activation('relu'))
    model.add(Dropout(0.5))
    model.add(Dense(n_classes))
    model.add(Activation('softmax'))

    # initiate RMSprop optimizer
    opt = keras.optimizers.RMSprop(learning_rate=0.0001, decay=1e-6)

    # Let's train the model using RMSprop
    model.compile(loss='categorical_crossentropy',
                  optimizer=opt,
                  metrics=['accuracy'])

    return model


def main():
    num_classes = 10
    epochs = 1

    (x_train, y_train), (x_test, y_test) = cifar10.load_data()
    y_train = np_utils.to_categorical(y_train, num_classes)
    y_test = np_utils.to_categorical(y_test, num_classes)

    datagen = ImageDataGenerator(
        featurewise_center=True,
        featurewise_std_normalization=True,
        rotation_range=20,
        width_shift_range=0.2,
        height_shift_range=0.2,
        horizontal_flip=True)

    datagen_test = ImageDataGenerator(
        featurewise_center=True,
        featurewise_std_normalization=True)

    # compute quantities required for featurewise normalization
    # (std, mean, and principal components if ZCA whitening is applied)
    datagen.fit(x_train)
    datagen_test.fit(x_train)

    model = def_model(x_train.shape, num_classes)

    # fits the model on batches with real-time data augmentation:
    model.fit_generator(datagen.flow(x_train, y_train, batch_size=32),
                        steps_per_epoch=1, epochs=epochs)  # len(x_train) / 32

    # here's a more "manual" example
    for e in range(epochs):
        print('Epoch', e)
        batches = 0
        for x_batch, y_batch in datagen.flow(x_train, y_train, batch_size=32):
            model.fit(x_batch, y_batch, validation_data=datagen_test.flow(
                x_train, y_train, batch_size=32))
            batches += 1
            if batches >= len(x_train) / 32:
                # we need to break the loop by hand because
                # the generator loops indefinitely
                break


if __name__ == "__main__":
    main()
My library versions:

$ pip freeze | grep Keras
Keras==2.3.1
Keras-Applications==1.0.8
Keras-Preprocessing==1.1.0

Full output from running the code:

Using TensorFlow backend.
2019-11-02 09:17:11.946893: I tensorflow/core/platform/cpu_feature_guard.cc:145] This TensorFlow binary is optimized with Intel(R) MKL-DNN to use the following CPU instructions in performance critical operations:  SSE4.1 SSE4.2 AVX AVX2 FMA
To enable them in non-MKL-DNN operations, rebuild TensorFlow with the appropriate compiler flags.
2019-11-02 09:17:12.111959: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3000000000 Hz
2019-11-02 09:17:12.112625: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55a874c0e510 executing computations on platform Host. Devices:
2019-11-02 09:17:12.112676: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): Host, Default Version
2019-11-02 09:17:12.116662: I tensorflow/core/common_runtime/process_util.cc:115] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
Epoch 1/1
1/1 [==============================] - 2s 2s/step - loss: 2.3134 - accuracy: 0.0312
Epoch 0
Traceback (most recent call last):
  File "tmp.py", line 90, in <module>
    main()
  File "tmp.py", line 81, in main
    x_train, y_train, batch_size=32))
  File "/home/robin/anaconda3/lib/python3.7/site-packages/keras/engine/training.py", line 1170, in fit
    len(validation_data))
ValueError: When passing validation_data, it must contain 2 (x_val, y_val) or 3 (x_val, y_val, val_sample_weights) items, however it contains 1563 items

Tags: python, keras

Solution


My code was a bit broken because I was using the fit method twice (either fit_generator or fit, see below). It works when I put the validation_data argument into fit_generator instead of fit; a sketch of that corrected call is shown after the snippet below. However, the Keras documentation states that fit can be used with a dataset iterator, which must then be something different from a generator.

    model.fit_generator(datagen.flow(x_train, y_train, batch_size=32),
                        steps_per_epoch=1, epochs=epochs)  # len(x_train) / 32

    # here's a more "manual" example
    for e in range(epochs):
        print('Epoch', e)
        batches = 0
        for x_batch, y_batch in datagen.flow(x_train, y_train, batch_size=32):
            model.fit(x_batch, y_batch, validation_data=datagen_test.flow(
                x_train, y_train, batch_size=32))
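
As a hedged sketch of the fix described above, assuming the same model, datagen, datagen_test, x_train, y_train and epochs objects from the question, moving validation_data into fit_generator looks like this:

# Sketch only: validation_data handed to fit_generator, which accepts a
# generator/iterator; validation_steps is given explicitly, though it can
# usually be inferred from the flow() iterator's length
model.fit_generator(
    datagen.flow(x_train, y_train, batch_size=32),
    steps_per_epoch=len(x_train) // 32,
    epochs=epochs,
    validation_data=datagen_test.flow(x_train, y_train, batch_size=32),
    validation_steps=len(x_train) // 32)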

Thanks to @natthaphon-hongcharoen for the hint.

