Why does my model always get exactly 0.5 AUC?

Problem description

I am currently working on a project where I need to predict eye disease from a set of images. I am using the built-in Keras applications. I got decent results with VGG16 and VGG19, but with the Xception architecture the AUC stays around 0.5 on every epoch.

I have tried different optimizers and learning rates, but nothing helps. I solved the same problem with VGG19 by switching from the RMSProp optimizer to Adam, but I cannot get it to work for Xception.

def buildModel():
    from keras import applications
    from keras.models import Model
    from keras.layers import Dense, Flatten
    from keras.optimizers import Adam

    # Xception base with ImageNet weights, used as a feature extractor
    # (input_sizes is a mapping of input shapes defined elsewhere in my code)
    base_model = applications.xception.Xception(
        include_top=False,
        weights='imagenet',
        input_tensor=None,
        input_shape=input_sizes["xception"],
        pooling=None,
        classes=2)

    # New classification head on top of the base model
    x = base_model.output
    x = Flatten()(x)
    predictions = Dense(2, activation='softmax')(x)

    model = Model(inputs=base_model.input, outputs=predictions)

    # Freeze the entire feature extractor so only the new head is trained
    for layer in base_model.layers:
        layer.trainable = False

    model.compile(optimizer=Adam(lr=0.01), loss='binary_crossentropy', metrics=['accuracy'])

    return model


import numpy as np
import keras
from sklearn.metrics import confusion_matrix, roc_auc_score


class Histories(keras.callbacks.Callback):

    def __init__(self, val_data):
        super(Histories, self).__init__()
        # Cache the entire validation set so metrics can be computed after each epoch
        self.x_batch = []
        self.y_batch = []
        for i in range(len(val_data)):
            x, y = val_data.__getitem__(i)
            self.x_batch.extend(x)
            self.y_batch.extend(y.astype(int))
        self.aucs = []
        self.specificity = []
        self.sensitivity = []
        self.losses = []
        return

    def on_train_begin(self, logs={}):
        initFile("results/xception_results_adam_3.txt")
        return

    def on_train_end(self, logs={}):
        return

    def on_epoch_begin(self, epoch, logs={}):
        return

    def on_epoch_end(self, epoch, logs={}):
        self.losses.append(logs.get('loss'))
        # Predict on the cached validation set and reduce to hard class labels
        y_pred = self.model.predict(np.asarray(self.x_batch))
        con_mat = confusion_matrix(np.asarray(self.y_batch).argmax(axis=-1), y_pred.argmax(axis=-1))
        tn, fp, fn, tp = con_mat.ravel()
        # Sensitivity, specificity and AUC from the predicted labels
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        auc_score = roc_auc_score(np.asarray(self.y_batch).argmax(axis=-1), y_pred.argmax(axis=-1))
        print("Specificity: %f Sensitivity: %f AUC: %f" % (spec, sens, auc_score))
        print(con_mat)
        self.sensitivity.append(sens)
        self.specificity.append(spec)
        self.aucs.append(auc_score)
        # writeToFile appends this epoch's metrics to the results file (helper defined elsewhere)
        writeToFile("results/xception_results_adam_3.txt", epoch, auc_score, spec, sens, self.losses[epoch])
        return


# What follows is the code from the Jupyter Notebook that I actually use for training and evaluation
#%% Initialize data
trainDirectory = 'RetinaMasks/train'
valDirectory = 'RetinaMasks/val'
testDirectory = 'RetinaMasks/test'

train_datagen = ImageDataGenerator(rescale=1. / 255)
test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    trainDirectory,
    target_size=(299, 299),
    batch_size=16,
    class_mode='categorical')

validation_generator = test_datagen.flow_from_directory(
    valDirectory,
    target_size=(299, 299),
    batch_size=16,
    class_mode='categorical')

test_generator = test_datagen.flow_from_directory(
    testDirectory,
    target_size=(299, 299),
    batch_size=16,
    class_mode='categorical')

#%% Create model
model = buildModel()

#%% Initialize metrics
from keras.callbacks import EarlyStopping
from MetricsCallback import Histories
import keras

metrics = Histories(validation_generator)
es = EarlyStopping(monitor='val_loss', 
                   min_delta=0, 
                   patience=20,
                   verbose=0, 
                   mode='auto', 
                   baseline=None, 
                   restore_best_weights=False)

mcp = keras.callbacks.ModelCheckpoint("saved_models/xception.adam.lr0.1_{epoch:02d}.hdf5", 
                                      monitor='val_loss', 
                                      verbose=0, 
                                      save_best_only=False, 
                                      save_weights_only=False, 
                                      mode='auto',
                                      period=1)

#%% Train model
from StaticDataAugmenter import superDirectorySize

history = model.fit_generator(
    train_generator,
    steps_per_epoch=superDirectorySize(trainDirectory) // 16,
    epochs=100,
    validation_data=validation_generator,
    validation_steps=superDirectorySize(valDirectory) // 16,
    callbacks=[metrics, es, mcp],
    workers=8,
    shuffle=False
)

Honestly, I have no idea what is causing this behavior or how to prevent it. Thanks in advance, and I apologize for the long code snippet :)

Tags: python, keras, scikit-learn, auc

Solution


Your learning rate is too large. Try lowering it.

I have run into a similar situation; this happens with transfer learning. In a binary classification setting, an AUC of 0.5 sustained over multiple epochs means that your convolutional neural network is not learning anything.

Use learning rates of 0.0001, 0.00001, and 0.000001.
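
As an illustration, here is a minimal sketch of what trying those smaller learning rates could look like, reusing buildModel() and the training setup from the question; the loop and the choice of which rate to keep are assumptions to be settled empirically:

from keras.optimizers import Adam

# Rebuild and recompile with each of the smaller learning rates,
# then compare the validation AUC between the runs.
for lr in [1e-4, 1e-5, 1e-6]:
    model = buildModel()
    model.compile(optimizer=Adam(lr=lr),
                  loss='binary_crossentropy',
                  metrics=['accuracy'])
    # ... train with model.fit_generator(...) exactly as in the question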

At the same time, you should try unfreezing some layers (making them trainable), because your entire feature extractor is frozen; in fact, that may be another reason the network is unable to learn anything.
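
If lowering the learning rate alone is not enough, a minimal sketch of partially unfreezing the base model might look like the following; the number of layers left trainable (20 here) and the 1e-4 fine-tuning rate are illustrative assumptions, not values from the question:

from keras import applications
from keras.models import Model
from keras.layers import Dense, Flatten
from keras.optimizers import Adam

# Same architecture as in the question: Xception base plus a small softmax head
base_model = applications.xception.Xception(include_top=False,
                                            weights='imagenet',
                                            input_shape=(299, 299, 3))
x = Flatten()(base_model.output)
predictions = Dense(2, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=predictions)

# Freeze most of the feature extractor, but leave the top layers trainable
# so they can adapt to the retina images (20 is an arbitrary cut-off).
for layer in base_model.layers[:-20]:
    layer.trainable = False
for layer in base_model.layers[-20:]:
    layer.trainable = True

# Fine-tune with a low learning rate to avoid destroying the pretrained weights
model.compile(optimizer=Adam(lr=1e-4),
              loss='binary_crossentropy',
              metrics=['accuracy'])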

I am fairly confident that your problem will be solved once you lower the learning rate :)

