Training accuracy improves but validation accuracy stays at 0.5, and the model predicts nearly the same class for every validation sample

Problem description

I am implementing a ResNet50 + LSTM with attention model in Keras (TensorFlow backend) on a dataset of time-lapse IVF embryo images.

The dataset consists of roughly 220 samples, and I use an 85%–15% train/validation split (203 for training, 27 for validation).

My model can reach a training accuracy of 0.80+, but the validation accuracy stays stuck at or around 0.5, and the validation loss is almost twice the training loss.

Is this just an overfitting problem?

If not, how can I debug and improve the performance on the validation set?

# Things I've tried:

I tried adding regularization (L1, 0.01), Dropout layers (0.5), and reducing the number of neurons (1024 → 512 → 256), but none of it worked.
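Roughly, those attempts looked like the sketch below (the layer wiring is simplified, and build_head is just an illustrative helper name):

from keras import regularizers
from keras.layers import Dense, Dropout

# Sketch of the attempts described above: L1 (0.01) penalties,
# Dropout(0.5), and shrinking neuron counts (1024 -> 512 -> 256).
def build_head(features):
    h = Dense(1024, activation='relu',
              kernel_regularizer=regularizers.l1(0.01))(features)
    h = Dropout(0.5)(h)
    h = Dense(512, activation='relu',
              kernel_regularizer=regularizers.l1(0.01))(h)
    h = Dropout(0.5)(h)
    h = Dense(256, activation='relu',
              kernel_regularizer=regularizers.l1(0.01))(h)
    h = Dropout(0.5)(h)
    return h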

I also standardized my data by subtracting the mean and dividing by the standard deviation.
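Concretely, the standardization step looks like this (a sketch; X_train and X_val are assumed NumPy arrays shaped like the model input below, i.e. (samples, 40, H, W, channels)):

import numpy as np

# Statistics come from the training set only, then are applied to both
# splits, so nothing leaks from the validation data into preprocessing.
mean = X_train.mean(axis=(0, 1, 2, 3), keepdims=True)  # per-channel mean
std = X_train.std(axis=(0, 1, 2, 3), keepdims=True)    # per-channel std

X_train = (X_train - mean) / (std + 1e-7)
X_val = (X_val - mean) / (std + 1e-7)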

I am using the Adam optimizer with a learning rate of 1e-5 and no weight decay. The images are shuffled before training.
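The corresponding compile step is sketched below; the categorical cross-entropy loss is an assumption based on the one-hot labels and two-class softmax output shown further down:

from keras.optimizers import Adam

# Adam at 1e-5 with no weight decay, as described above.
model.compile(optimizer=Adam(lr=1e-5),
              loss='categorical_crossentropy',
              metrics=['accuracy'])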

# Below are my training log and model code

# Training log:

Epoch 1/40

150/150 [==============================] - 28s 189ms/step - loss: 2.1318 - acc: 0.5267 - val_loss: 4.8806 - val_acc: 0.5556



Epoch 00001: val_loss improved from inf to 4.88055, saving model to result/resnetmodel.hdf5

Epoch 2/40

150/150 [==============================] - 14s 94ms/step - loss: 1.9957 - acc: 0.5867 - val_loss: 4.8210 - val_acc: 0.5000



Epoch 00002: val_loss improved from 4.88055 to 4.82100, saving model to result/resnetmodel.hdf5

Epoch 3/40

150/150 [==============================] - 14s 94ms/step - loss: 1.8062 - acc: 0.6200 - val_loss: 4.9689 - val_acc: 0.5000



Epoch 00003: val_loss did not improve from 4.82100

Epoch 4/40

150/150 [==============================] - 14s 91ms/step - loss: 1.7516 - acc: 0.6267 - val_loss: 5.0284 - val_acc: 0.5000



Epoch 00004: val_loss did not improve from 4.82100

Epoch 5/40

150/150 [==============================] - 14s 94ms/step - loss: 1.6508 - acc: 0.7000 - val_loss: 4.9873 - val_acc: 0.4444



Epoch 00005: val_loss did not improve from 4.82100

Epoch 6/40

150/150 [==============================] - 14s 92ms/step - loss: 1.5003 - acc: 0.7733 - val_loss: 4.9800 - val_acc: 0.4444



Epoch 00006: val_loss did not improve from 4.82100

Epoch 7/40

150/150 [==============================] - 14s 96ms/step - loss: 1.4614 - acc: 0.7667 - val_loss: 4.9435 - val_acc: 0.5000



Epoch 00007: val_loss did not improve from 4.82100

Epoch 8/40

150/150 [==============================] - 14s 90ms/step - loss: 1.5480 - acc: 0.6800 - val_loss: 4.9345 - val_acc: 0.5000



Epoch 00008: val_loss did not improve from 4.82100

Epoch 9/40

150/150 [==============================] - 14s 93ms/step - loss: 1.4334 - acc: 0.7667 - val_loss: 5.0452 - val_acc: 0.5000



Epoch 00009: val_loss did not improve from 4.82100

Epoch 10/40

150/150 [==============================] - 14s 94ms/step - loss: 1.4344 - acc: 0.7667 - val_loss: 5.1768 - val_acc: 0.4444



Epoch 00010: val_loss did not improve from 4.82100

Epoch 11/40

150/150 [==============================] - 15s 98ms/step - loss: 1.3369 - acc: 0.8533 - val_loss: 5.1331 - val_acc: 0.4444



Epoch 00011: val_loss did not improve from 4.82100

Epoch 12/40

150/150 [==============================] - 14s 93ms/step - loss: 1.2834 - acc: 0.8133 - val_loss: 5.1265 - val_acc: 0.4444



Epoch 00012: val_loss did not improve from 4.82100

Epoch 13/40

150/150 [==============================] - 14s 91ms/step - loss: 1.3007 - acc: 0.8200 - val_loss: 5.1941 - val_acc: 0.4444



Epoch 00013: val_loss did not improve from 4.82100

Epoch 14/40

150/150 [==============================] - 14s 94ms/step - loss: 1.2358 - acc: 0.8533 - val_loss: 5.3716 - val_acc: 0.4444



Epoch 00014: val_loss did not improve from 4.82100

Epoch 15/40

150/150 [==============================] - 14s 92ms/step - loss: 1.2823 - acc: 0.8000 - val_loss: 5.3877 - val_acc: 0.4444

Epoch 00015: val_loss did not improve from 4.82100

Epoch 00015: early stopping

Evaluation: ---- loading data

----prediction_on_eval-----

fact = [1. 0.], predicted = [0.03809702 0.96190304]

fact = [1. 0.], predicted = [0.9803326 0.0196674]

fact = [1. 0.], predicted = [9.9986279e-01 1.3717638e-04]

fact = [1. 0.], predicted = [0.98158103 0.01841903]

fact = [1. 0.], predicted = [0.99492776 0.00507224]

fact = [1. 0.], predicted = [0.70435154 0.29564843]

fact = [1. 0.], predicted = [4.1277369e-04 9.9958724e-01]

fact = [1. 0.], predicted = [0.9818978 0.01810225]

fact = [1. 0.], predicted = [0.91195923 0.08804072]

fact = [0. 1.], predicted = [0.986312 0.013688]

fact = [0. 1.], predicted = [0.9985434 0.00145668]

fact = [0. 1.], predicted = [0.80424094 0.195759]

fact = [0. 1.], predicted = [0.9214819 0.07851809]

fact = [0. 1.], predicted = [0.03754392 0.96245605]

fact = [0. 1.], predicted = [9.9976009e-01 2.3989924e-04]

fact = [0. 1.], predicted = [0.98681134 0.01318868]

fact = [0. 1.], predicted = [0.9984666 0.0015334]

fact = [0. 1.], predicted = [0.7229417 0.27705824]
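Tallying those rows shows what the title describes; a quick sketch (class-0 probabilities rounded from the printout above):

import numpy as np

# Class-0 probabilities from the 18 evaluation rows above, rounded.
p0 = np.array([0.038, 0.980, 0.9999, 0.982, 0.995, 0.704, 0.0004, 0.982,
               0.912, 0.986, 0.999, 0.804, 0.921, 0.038, 0.9998, 0.987,
               0.998, 0.723])
true = np.array([0] * 9 + [1] * 9)      # argmax of the one-hot "fact" rows

pred = (p0 < 0.5).astype(int)           # predicted class per sample
print((pred == 0).sum(), "of 18 predicted as class 0")  # -> 15
print("accuracy:", (pred == true).mean())               # -> 0.444...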

# Here is my model:


import keras.backend as K
from keras.models import Model
from keras.layers import (Input, Dense, Dropout, Flatten, Activation,
                          RepeatVector, Permute, Lambda, BatchNormalization,
                          TimeDistributed, Bidirectional, LSTM, merge)
from keras.applications import InceptionV3, ResNet50

# Input: sequences of 40 frames per sample.
x = Input(shape=(40, config.img_shape, config.img_shape, config.img_channel))

# Frozen CNN backbone used as a per-frame feature extractor.
if config.base_model == "inception_v3":
    cnn = InceptionV3(weights=None, include_top=False, pooling="avg")
elif config.base_model == "ResNet50":
    cnn = ResNet50(weights=None, include_top=False, pooling="avg")

cnn.load_weights(config.pretrained_path)
for layer in cnn.layers:
    layer.trainable = False

# Run the CNN on every frame, then model temporal structure with a BiLSTM.
extracted_features = TimeDistributed(cnn)(x)
activations = Bidirectional(LSTM(config.num_units_lstm, return_sequences=True,
                                 recurrent_activation='relu',
                                 recurrent_initializer='glorot_uniform',
                                 name='Bidirectional_LSTM'))(extracted_features)
activations = Dropout(0.5)(activations)

# Soft attention over time steps: score each frame, softmax the scores,
# then take the attention-weighted sum of the LSTM activations.
attention = TimeDistributed(Dense(1, activation='tanh'), name='context_vector')(activations)
attention = Flatten()(attention)
attention = Activation('softmax', name='context_weights')(attention)
attention = RepeatVector(config.num_units_lstm * 2)(attention)
attention = Permute([2, 1])(attention)

sent_representation = merge.multiply([activations, attention])
sent_representation = Lambda(lambda xin: K.sum(xin, axis=1))(sent_representation)
sent_representation = BatchNormalization()(sent_representation)

prediction = Dense(config.num_classes, activation='softmax')(sent_representation)
model = Model(inputs=x, outputs=prediction)

Tags: python, tensorflow, keras, deep-learning, classification

Solution


The dataset consists of roughly 220 samples, and I use an 85%–15% train/validation split (203 for training, 27 for validation).

Is this just an overfitting problem?

That sounds very likely, yes. 220 samples is a very, very small dataset for a network this deep; it is really unlikely to learn to generalize well from so little data.

If not, how can I debug and improve the performance on the validation set?

In an ideal world, get another 100,000 or so samples and add them to your dataset!

Accepting that this is probably impractical, you could try one of, or a combination of, the following strategies:

  • Artificially increase the size of your dataset with image augmentation (see the first sketch below)
  • Rather than trying to train a deep network from scratch, look into tensorflow_hub so that you only need to train the last layer of (and/or fine-tune) a pretrained network (link); see the second sketch below
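For the augmentation route, a minimal sketch using Keras' built-in ImageDataGenerator; the transform parameters are illustrative only, and since your samples are 40-frame sequences you would need to replay the same transform on every frame of a sequence:

from keras.preprocessing.image import ImageDataGenerator

# Illustrative settings; tune per dataset. Augmentation is applied on
# the fly, so each epoch sees slightly different images.
datagen = ImageDataGenerator(rotation_range=20,
                             width_shift_range=0.1,
                             height_shift_range=0.1,
                             horizontal_flip=True,
                             fill_mode='nearest')

# Typical single-image usage (X_img / y are hypothetical 4-D arrays):
# model.fit_generator(datagen.flow(X_img, y, batch_size=16), epochs=40)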
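For the pretrained-network route, here is a sketch using keras.applications rather than tensorflow_hub (same idea, and it matches the imports already used in the model above): load ImageNet weights, freeze the convolutional base, and train only a small head.

from keras.applications import ResNet50
from keras.layers import Dense, Input
from keras.models import Model

# Frozen ImageNet backbone: only the tiny classification head is
# trained, which is far fewer parameters to fit from ~200 samples.
base = ResNet50(weights='imagenet', include_top=False, pooling='avg')
for layer in base.layers:
    layer.trainable = False

inp = Input(shape=(224, 224, 3))        # assumed single-frame input size
out = Dense(2, activation='softmax')(base(inp))
model = Model(inputs=inp, outputs=out)
# Optionally unfreeze the last few layers of `base` later to fine-tune.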
