Early stopping when validation loss satisfies certain criteria

Problem description

I am training a neural network model in Keras. I want to monitor the validation loss and stop training once it meets a specific condition.

I know I can use the EarlyStopping callback to stop training when there has been no improvement for a given number of epochs (the patience argument).
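
For reference, this is roughly what the built-in behaviour looks like (a minimal sketch; the monitor and patience values are just placeholders):

from keras.callbacks import EarlyStopping

# Built-in behaviour: stop once val_loss has not improved for 5 consecutive epochs
early_stop = EarlyStopping(monitor='val_loss', patience=5, verbose=1)
# ... then pass callbacks=[early_stop] to model.fit(...)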

I want something slightly different. I would like to stop training when val_loss goes above a value x after n epochs.

To make things clear, let's say x is 0.5 and n is 50. I only want to stop training the model if the epoch number is greater than 50 and val_loss is above 0.5.

How can I do this in Keras?

Tags: python, machine-learning, keras, deep-learning

Solution


You can define your own callback by subclassing Keras's EarlyStopping callback and overriding its logic:

import warnings

from keras.callbacks import EarlyStopping  # use as base class

class MyCallBack(EarlyStopping):
    def __init__(self, threshold, min_epochs, **kwargs):
        super(MyCallBack, self).__init__(**kwargs)
        self.threshold = threshold # threshold for validation loss
        self.min_epochs = min_epochs # min number of epochs to run

    def on_epoch_end(self, epoch, logs=None):
        current = logs.get(self.monitor)
        if current is None:
            warnings.warn(
                'Early stopping conditioned on metric `%s` '
                'which is not available. Available metrics are: %s' %
                (self.monitor, ','.join(list(logs.keys()))), RuntimeWarning
            )
            return

        # implement your own logic here
        if epoch >= self.min_epochs and current >= self.threshold:
            self.stopped_epoch = epoch
            self.model.stop_training = True

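Note on the design: subclassing EarlyStopping rather than the plain Callback base class means the constructor already handles monitor (which defaults to 'val_loss') and verbose for you, and with verbose=1 the inherited on_train_end will print the epoch at which training was stopped.
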
A small example to show how it works:

from keras.layers import Input, Dense
from keras.models import Model
import numpy as np

# Generate some random data
features = np.random.rand(100, 5)
labels = np.random.rand(100, 1)

validation_feat = np.random.rand(100, 5)
validation_labels = np.random.rand(100, 1)

# Define a simple model
input_layer = Input((5, ))
dense_layer = Dense(10)(input_layer)
output_layer = Dense(1)(dense_layer)
model = Model(inputs=input_layer, outputs=output_layer)
model.compile(loss='mse', optimizer='sgd')

# Fit with custom callback
callbacks = [MyCallBack(threshold=0.001, min_epochs=10, verbose=1)] 
model.fit(features, labels, validation_data=(validation_feat, validation_labels), callbacks=callbacks, epochs=100)   
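
With this random data the validation MSE will almost certainly stay above 0.001, so the run should stop shortly after the 10-epoch minimum has passed, which is exactly the behaviour being tested. Since no monitor argument is passed, MyCallBack watches 'val_loss' by default.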
