Tensorflow/Keras: model accuracy stays at 0.5 during training when the input size differs from the first official tutorial

Problem description

I am a beginner with deep learning and keras/tensorflow. I followed the first tutorial on tensorflow.org, the basic classification of Fashion MNIST.

In this case the input data is 60000 images of 28x28 pixels, and the model is this:

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation=tf.nn.relu),
    keras.layers.Dense(10, activation=tf.nn.softmax)
])

Compiled with:

model.compile(optimizer=tf.train.AdamOptimizer(),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
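For context, the data loading, training and evaluation in that tutorial look roughly like this (a minimal sketch; the epoch count here is an assumption, the exact settings are in the tutorial):

# Fashion MNIST is bundled with Keras: 60000 train / 10000 test images of 28x28
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

# Scale pixel values to [0, 1]
train_images = train_images / 255.0
test_images = test_images / 255.0

# Train and evaluate (5 epochs assumed)
model.fit(train_images, train_labels, epochs=5)
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)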

At the end of training the model has this accuracy:

10000/10000 [==============================] - 0s 21us/step
Test accuracy: 0.8769

That's fine. Now I'm trying to replicate this model with another set of data. The new input is a dataset downloaded from Kaggle.

The dataset contains images of dogs and cats of various sizes, so I wrote a simple script that loads the images, resizes them to 28x28 pixels and converts them to numpy arrays.

This is the code that does that:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
from tensorflow.keras.models import load_model
from PIL import Image

import os

# Helper libraries
import numpy as np

# base path of the dataset
base_path = './dataset/'
training_path = base_path + "training_set/"
test_path = base_path + "test_set/"

# target size of the images
size = 28, 28

# containers for the extracted images and labels
train_images = []
train_labels = []
test_images = []
test_labels = []

classes = ['dogs', 'cats']

# Walk the subfolders in the path and convert the images to numpy arrays
def from_files_to_nparray(path):
    images = []
    labels = []
    for subfolder in os.listdir(path):
        if subfolder == '.DS_Store':
            continue

        for image_name in os.listdir(path + subfolder):
            if not image_name.endswith('.jpg'):
                continue

            img = Image.open(path + subfolder + "/" + image_name).convert("L").resize(size) # convert to grayscale and resize
            npimage = np.asarray(img)

            images.append(npimage)
            labels.append(classes.index(subfolder))

            img.close()

    # convert to numpy arrays
    images = np.asarray(images)
    labels = np.asarray(labels)

    # Normalize to [0, 1]
    images = images / 255.0 
    return (images, labels)

(train_images, train_labels) = from_files_to_nparray(training_path)
(test_images, test_labels) = from_files_to_nparray(test_path)

In the end I have these shapes:

Train images shape   :  (8000, 28, 28)
Train labels shape   :  (8000,)
Test images shape    :  (2000, 28, 28)
Test labels shape    :  (2000,)

After training the same model (but with the last Dense layer resized to 2 neurons), I got this result, which seems fine:

Train images shape   :  (8000, 28, 28)
Train labels shape   :  (8000,)
Test images shape    :  (2000, 28, 28)
Test labels shape    :  (2000,)


_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
flatten (Flatten)            (None, 784)               0         
_________________________________________________________________
dense (Dense)                (None, 128)               100480    
_________________________________________________________________
dense_1 (Dense)              (None, 2)                 258       
=================================================================
Total params: 100,738
Trainable params: 100,738
Non-trainable params: 0
_________________________________________________________________
None

Epoch 1/5
2018-07-27 15:25:51.283117: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
8000/8000 [==============================] - 1s 66us/step - loss: 0.6924 - acc: 0.5466
Epoch 2/5
8000/8000 [==============================] - 0s 39us/step - loss: 0.6679 - acc: 0.5822
Epoch 3/5
8000/8000 [==============================] - 0s 41us/step - loss: 0.6593 - acc: 0.6048
Epoch 4/5
8000/8000 [==============================] - 0s 39us/step - loss: 0.6545 - acc: 0.6134
Epoch 5/5
8000/8000 [==============================] - 0s 39us/step - loss: 0.6559 - acc: 0.6039
2000/2000 [==============================] - 0s 33us/step

Test accuracy:  0.592
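The modified model isn't shown in the question, but from the summary above it is presumably something like this (a minimal sketch, assuming only the size of the output layer changed and the same compile settings as in the tutorial):

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation=tf.nn.relu),
    keras.layers.Dense(2, activation=tf.nn.softmax)   # 2 classes: dogs vs cats
])

model.compile(optimizer=tf.train.AdamOptimizer(),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(train_images, train_labels, epochs=5)
test_loss, test_acc = model.evaluate(test_images, test_labels)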

Now, the problem is that if I try to change the input size from 28x28 to, say, 128x128, the result is this:

Train images shape   :  (8000, 128, 128)
Train labels shape   :  (8000,)
Test images shape    :  (2000, 128, 128)
Test labels shape    :  (2000,)


_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
flatten (Flatten)            (None, 16384)             0         
_________________________________________________________________
dense (Dense)                (None, 128)               2097280   
_________________________________________________________________
dense_1 (Dense)              (None, 2)                 258       
=================================================================
Total params: 2,097,538
Trainable params: 2,097,538
Non-trainable params: 0
_________________________________________________________________
None

Epoch 1/5
2018-07-27 15:27:41.966860: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
8000/8000 [==============================] - 4s 483us/step - loss: 8.0341 - acc: 0.4993
Epoch 2/5
8000/8000 [==============================] - 3s 362us/step - loss: 8.0590 - acc: 0.5000
Epoch 3/5
8000/8000 [==============================] - 3s 351us/step - loss: 8.0590 - acc: 0.5000
Epoch 4/5
8000/8000 [==============================] - 3s 342us/step - loss: 8.0590 - acc: 0.5000
Epoch 5/5
8000/8000 [==============================] - 3s 342us/step - loss: 8.0590 - acc: 0.5000
2000/2000 [==============================] - 0s 217us/step

Test accuracy:  0.5

Why? Even when adding new Dense layers or increasing the number of neurons, the result is the same.

What is the connection between the input size and the model layers? Thanks!

Tags: python, tensorflow, machine-learning, keras, deep-learning

Solution


The problem is that in the second example you have many more parameters to train. In the first example you only have about 100k parameters, and you train them with 8k images.

In the second example you have about 2,000k parameters, and you try to train them with the same number of images. This does not work well, because there is a relationship between the number of free parameters and the number of samples. There is no exact formula to compute this relationship, but a common rule of thumb is that you should have more samples than trainable parameters.
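You can check this against the two summaries above: a Dense layer with n inputs and m units has n*m weights plus m biases, so the size of the flattened input directly drives the parameter count. A quick sanity check of the arithmetic:

def dense_params(n_inputs, n_units):
    # weights (n_inputs * n_units) plus one bias per unit
    return n_inputs * n_units + n_units

# 28x28 input: Flatten gives 784 features
print(dense_params(28 * 28, 128) + dense_params(128, 2))    # 100738

# 128x128 input: Flatten gives 16384 features
print(dense_params(128 * 128, 128) + dense_params(128, 2))  # 2097538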

What you can try is training for more epochs to see how it behaves, but in general you need more data to build more complex models.
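For the "more epochs" experiment, a minimal sketch (the epoch count and the held-out fraction here are arbitrary choices, not values from the question):

# hold out 10% of the training data to watch for overfitting while training longer
history = model.fit(train_images, train_labels, epochs=50, validation_split=0.1)
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)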

