tensorflow.keras model accuracy, loss, and validation metrics remain static over 30 epochs of training

Problem description

I wrote a CNN that takes MFCC spectrograms as input and is meant to classify the images into five different classes. I trained the model for 30 epochs, and after the first epoch the metrics never changed. Could this be a class-imbalance problem? If so, how would I weight the model toward the dataset, if that is possible? Below are the data generator code, the model definition, and the output. The original model had two additional layers, but I started tweaking things while trying to fix the issue.

Data generator definition:

import numpy as np
import tensorflow as tf

path = 'path_to_dataset'
CLASS_NAMES = ['belly_pain', 'burping', 'discomfort', 'hungry', 'tired']
CLASS_NAMES = np.array(CLASS_NAMES)
BATCH_SIZE = 32
IMG_HEIGHT = 150
IMG_WIDTH = 150
# 457 is the number of images total
STEPS_PER_EPOCH = np.ceil(457 / BATCH_SIZE)

img_generator = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1./255,
    validation_split=0.2,
    horizontal_flip=True,
    rotation_range=45,
    width_shift_range=.15,
    height_shift_range=.15)

train_data_gen = img_generator.flow_from_directory(
    directory=path,
    batch_size=BATCH_SIZE,
    shuffle=True,
    target_size=(IMG_HEIGHT, IMG_WIDTH),
    classes=list(CLASS_NAMES),
    subset='training',
    class_mode='categorical')

validation_data_gen = img_generator.flow_from_directory(
    directory=path,
    batch_size=BATCH_SIZE,
    shuffle=True,
    target_size=(IMG_HEIGHT, IMG_WIDTH),
    classes=list(CLASS_NAMES),
    subset='validation',
    class_mode='categorical')

Model definition:

from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

EPOCHS = 30

model = Sequential([
    Conv2D(128, 3, activation='relu', 
           input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
    MaxPooling2D(),
    Flatten(),
    Dense(512, activation='sigmoid'),
    Dense(1)
])

opt = tf.keras.optimizers.Adamax(lr=0.001)
model.compile(optimizer=opt,
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])

First five epochs:

Epoch 1/30
368/368 [==============================] - 371s 1s/step - loss: 0.6713 - accuracy: 0.8000 - val_loss: 0.5004 - val_accuracy: 0.8000
Epoch 2/30
368/368 [==============================] - 235s 640ms/step - loss: 0.5004 - accuracy: 0.8000 - val_loss: 0.5004 - val_accuracy: 0.8000
Epoch 3/30
368/368 [==============================] - 233s 633ms/step - loss: 0.5004 - accuracy: 0.8000 - val_loss: 0.5004 - val_accuracy: 0.8000
Epoch 4/30
368/368 [==============================] - 236s 641ms/step - loss: 0.5004 - accuracy: 0.8000 - val_loss: 0.5004 - val_accuracy: 0.8000
Epoch 5/30
368/368 [==============================] - 234s 636ms/step - loss: 0.5004 - accuracy: 0.8000 - val_loss: 0.5004 - val_accuracy: 0.8000

Last five epochs:

Epoch 25/30
368/368 [==============================] - 231s 628ms/step - loss: 0.5004 - accuracy: 0.8000 - val_loss: 0.5004 - val_accuracy: 0.8000
Epoch 26/30
368/368 [==============================] - 227s 617ms/step - loss: 0.5004 - accuracy: 0.8000 - val_loss: 0.5004 - val_accuracy: 0.8000
Epoch 27/30
368/368 [==============================] - 228s 620ms/step - loss: 0.5004 - accuracy: 0.8000 - val_loss: 0.5004 - val_accuracy: 0.8000
Epoch 28/30
368/368 [==============================] - 234s 636ms/step - loss: 0.5004 - accuracy: 0.8000 - val_loss: 0.5004 - val_accuracy: 0.8000
Epoch 29/30
368/368 [==============================] - 235s 638ms/step - loss: 0.5004 - accuracy: 0.8000 - val_loss: 0.5004 - val_accuracy: 0.8000
Epoch 30/30
368/368 [==============================] - 234s 636ms/step - loss: 0.5004 - accuracy: 0.8000 - val_loss: 0.5004 - val_accuracy: 0.8000

Tags: python, tensorflow, machine-learning, keras, deep-learning

Solution


You are attempting a classification task with five classes (CLASS_NAMES has five entries), but your final layer contains only one neuron.

It should be a Dense layer with five neurons and a softmax activation:

Dense(5, activation="softmax")

You also need to change the loss function to a categorical loss to match, e.g. categorical_crossentropy, and drop from_logits=True, since softmax already outputs probabilities.
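Putting the two changes together, the corrected head and compile step would look like the sketch below. The input size is reduced to 32×32 purely to keep the example small, and the per-class counts used for class_weight (one way to address the imbalance raised in the question) are hypothetical; in practice they would come from np.bincount(train_data_gen.classes).

```python
# Sketch of the corrected model: 5 output units with softmax and a
# categorical loss, matching class_mode='categorical' in the generators.
# Input reduced to 32x32 purely to keep the example small.
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

NUM_CLASSES = 5  # len(CLASS_NAMES)

model = Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    Conv2D(128, 3, activation='relu'),
    MaxPooling2D(),
    Flatten(),
    Dense(512, activation='sigmoid'),
    Dense(NUM_CLASSES, activation='softmax'),  # one probability per class
])

model.compile(optimizer=tf.keras.optimizers.Adamax(learning_rate=0.001),
              loss='categorical_crossentropy',  # softmax outputs are probabilities
              metrics=['accuracy'])

# To counter class imbalance, per-class weights can be passed to fit().
# The counts below are hypothetical; in practice use
# np.bincount(train_data_gen.classes) on the real dataset.
counts = np.array([16, 8, 27, 382, 24])
class_weight = {i: counts.sum() / (len(counts) * c)
                for i, c in enumerate(counts)}
# model.fit(train_data_gen, validation_data=validation_data_gen,
#           epochs=EPOCHS, class_weight=class_weight)
```

Weights inversely proportional to class frequency make errors on rare classes cost more during training, which is the usual first remedy for an imbalanced dataset before resorting to resampling.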

