Tensorflow - ValueError in model.fit - How to Fix

Problem Description

I am trying to train a deep neural network on the MNIST dataset.

BATCH_SIZE = 100
train_data = train_data.batch(BATCH_SIZE)
validation_data = validation_data.batch(num_validation_samples)
test_data = scaled_test_data.batch(num_test_samples)

validation_inputs, validation_targets = next(iter(validation_data))

input_size = 784
output_size = 10
hidden_layer_size = 50

model = tf.keras.Sequential([
                    tf.keras.layers.Flatten(input_shape=(28,28,1)),
                    tf.keras.layers.Dense(hidden_layer_size, activation='relu'),
                    tf.keras.layers.Dense(hidden_layer_size, activation='relu'),
                    tf.keras.layers.Dense(output_size, activation='softmax')                        
                ])

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

NUM_EPOCHS = 5
model.fit(train_data, epochs=NUM_EPOCHS, validation_data=(validation_inputs,validation_targets))

model.fit throws the following error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-58-c083185dafc6> in <module>
      1 NUM_EPOCHS = 5
----> 2 model.fit(train_data, epochs=NUM_EPOCHS, validation_data=(validation_inputs,validation_targets))

~/anaconda3/envs/py3-TF2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
    726         max_queue_size=max_queue_size,
    727         workers=workers,
--> 728         use_multiprocessing=use_multiprocessing)
    729 
    730   def evaluate(self,

~/anaconda3/envs/py3-TF2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, **kwargs)
    222           validation_data=validation_data,
    223           validation_steps=validation_steps,
--> 224           distribution_strategy=strategy)
    225 
    226       total_samples = _get_total_number_of_samples(training_data_adapter)

~/anaconda3/envs/py3-TF2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py in _process_training_inputs(model, x, y, batch_size, epochs, sample_weights, class_weights, steps_per_epoch, validation_split, validation_data, validation_steps, shuffle, distribution_strategy, max_queue_size, workers, use_multiprocessing)
    562                                     class_weights=class_weights,
    563                                     steps=validation_steps,
--> 564                                     distribution_strategy=distribution_strategy)
    565     elif validation_steps:
    566       raise ValueError('`validation_steps` should not be specified if '

~/anaconda3/envs/py3-TF2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py in _process_inputs(model, x, y, batch_size, epochs, sample_weights, class_weights, shuffle, steps, distribution_strategy, max_queue_size, workers, use_multiprocessing)
    604       max_queue_size=max_queue_size,
    605       workers=workers,
--> 606       use_multiprocessing=use_multiprocessing)
    607   # As a fallback for the data type that does not work with
    608   # _standardize_user_data, use the _prepare_model_with_inputs.

~/anaconda3/envs/py3-TF2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/data_adapter.py in __init__(self, x, y, sample_weights, batch_size, epochs, steps, shuffle, **kwargs)
    252     if not batch_size:
    253       raise ValueError(
--> 254           "`batch_size` or `steps` is required for `Tensor` or `NumPy`"
    255           " input data.")
    256 

ValueError: `batch_size` or `steps` is required for `Tensor` or `NumPy` input data.

The training and validation data come from the MNIST dataset. Part of the data is used as training data and part as test data.

What am I doing wrong here?

Update: Following Dominques' suggestion, I changed model.fit to

model.fit(train_data, batch_size=128, epochs=NUM_EPOCHS, validation_data=(validation_inputs,validation_targets))

But now I get the following error:

ValueError: The `batch_size` argument must not be specified for the given input type. Received input: <BatchDataset shapes: ((None, 28, 28, 1), (None,)), types: (tf.float32, tf.int64)>, batch_size: 128

Tags: python-3.x, tensorflow, machine-learning, neural-network, mnist

Solution


The tf documentation gives you more clues as to why you are getting this error:

https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit

validation_data: Data on which to evaluate the loss and any model metrics at the end of each epoch. The model will not be trained on this data. validation_data will override validation_split. validation_data could be:
    •   tuple (x_val, y_val) of Numpy arrays or tensors
    •   tuple (x_val, y_val, val_sample_weights) of Numpy arrays
    •   dataset 

For the first two cases, a batch_size must be provided. For the last case, validation_steps could be provided.
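To make the distinction concrete, here is a minimal sketch of the two relevant forms, using small random arrays in place of the real MNIST tensors (the shapes match the question; `x_val`, `y_val`, and `val_ds` are illustrative names, not from the original post):

```python
import numpy as np
import tensorflow as tf

# Stand-in validation data with MNIST-like shapes.
x_val = np.random.rand(100, 28, 28, 1).astype("float32")
y_val = np.random.randint(0, 10, size=(100,))

# Form 1: a tuple (x_val, y_val) of NumPy arrays. This only works when the
# training input is also arrays, because fit() then accepts a batch_size.
# model.fit(x_train, y_train, batch_size=32, validation_data=(x_val, y_val))

# Form 3: a tf.data.Dataset. This is the natural choice here, since the
# training input is itself a Dataset -- batching is done on the dataset,
# not via the batch_size argument of fit().
val_ds = tf.data.Dataset.from_tensor_slices((x_val, y_val)).batch(100)
```

Mixing the forms (a Dataset for training, a tuple of tensors for validation) is what triggers the error in the question: Keras demands a batch_size for the tensor tuple, but forbids batch_size when the training input is a Dataset.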

Since you have already batched the validation dataset, consider using it directly and specifying the validation steps, as shown below.

BATCH_SIZE = 100
train_data = train_data.batch(BATCH_SIZE)
validation_data = validation_data.batch(BATCH_SIZE)
...
model.fit(train_data, epochs=NUM_EPOCHS, validation_data=validation_data, validation_steps=1)
