How to return loss histories from a function and plot them as subplots with Keras?

Problem Description

I would like to know how to return hist, which holds the training history, from the following function after training two models (an RNN and an LSTM), and then plot their loss curves as subplots:

def train_model(model_type):
    '''
    This code is parallelised and runs on each process
    It trains a model with different layer sizes (hyperparameters)
    It saves the model and returns the score (error)
    '''
    import time

    import numpy as np
    import pandas as pd
    import multiprocessing
    import matplotlib.pyplot as plt

    from keras.layers import LSTM, SimpleRNN, Dense, Activation
    from keras.models import Sequential
    from keras.callbacks import EarlyStopping, ReduceLROnPlateau
    from keras.layers.normalization import BatchNormalization
    from sklearn.metrics import mean_squared_error  # used below to compute the train/test MSE

    print(f'Training a model: {model_type}')

    callbacks = [
        EarlyStopping(patience=10, verbose=1),
        ReduceLROnPlateau(factor=0.1, patience=3, min_lr=0.00001, verbose=1),
    ]

    model = Sequential()

    if model_type == 'rnn':
        model.add(SimpleRNN(units=1440, input_shape=(trainX.shape[1], trainX.shape[2])))
    elif model_type == 'lstm':
        model.add(LSTM(units=1440, input_shape=(trainX.shape[1], trainX.shape[2])))

    model.add(Dense(480))
    model.add(BatchNormalization())
    model.add(Activation('tanh'))
    model.compile(loss='mean_squared_error', optimizer='adam')
    model.fit(
        trainX,
        trainY,
        epochs=50,
        batch_size=20,
        validation_data=(testX, testY),
        verbose=1,
        callbacks=callbacks,
    )

    # predict
    Y_Train_pred = model.predict(trainX)
    Y_Test_pred = model.predict(testX)

    train_MSE = mean_squared_error(trainY, Y_Train_pred)
    test_MSE = mean_squared_error(testY, Y_Test_pred)

    # you can also return values eg. the eval score
    return {'type': model_type, 'train_MSE': train_MSE, 'test_MSE': test_MSE}
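Note that model.fit already returns a Keras History callback, whose .history attribute is a plain dict mapping metric names to per-epoch values. A minimal sketch of capturing it inside train_model (same names as above; trainX/trainY/testX/testY are assumed to be defined globally, as in the question):

    hist = model.fit(
        trainX,
        trainY,
        epochs=50,
        batch_size=20,
        validation_data=(testX, testY),
        verbose=1,
        callbacks=callbacks,
    )
    # hist.history is a plain dict, e.g. {'loss': [...], 'val_loss': [...]}
    print(hist.history.keys())

Because pool.map has to pickle whatever the worker returns, returning hist.history (a dict of lists) is generally safer than returning the History object itself, which keeps a reference to the model and may not pickle cleanly.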

I tried the following code:

def train_model(model_type):

    ...
    hist = model.fit(...)

    # Return values, e.g. the eval score or the training history
    return {..., 'hist': hist}

num_workers = 2
model_types = ['rnn', 'lstm']
# guard in the main module to avoid creating subprocesses recursively.
if __name__ == "__main__":
    pool = multiprocessing.Pool(num_workers, init_worker)
    scores = pool.map(train_model, model_types)
    for s in scores:
        # plot losses for RNN + LSTM
        f, ax = plt.subplots(figsize=(20, 15))
        plt.subplot(1, 2, 1)
        ax = plt.plot(s['hist'].history['loss'], label='Train loss')
        # ax = plt.plot(hist_RNN.history['loss'], label='Train loss')

        plt.subplot(1, 2, 2)
        # ax = plt.plot(hist_LSTM.history['loss'], label='Train loss')
        ax = plt.plot(s['hist'].history['loss'], label='Train loss')

        plt.subplots_adjust(top=0.80, bottom=0.38, left=0.12, right=0.90, hspace=0.37, wspace=0.28)
        plt.savefig('_All_Losses_history_.png')
        plt.show()

    print(scores)

Normally I would assign separate names, as in plt.plot(hist_RNN...) and plt.plot(hist_LSTM...) in the commented-out lines, so that I could call/pass them independently. But since the RNN and LSTM model designs are identical, I did not do that, to avoid duplicating code. Now I am looking for an elegant way to return these histories and finally plot each of them in the right subplot position. Any help would be appreciated.
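One way to do this (a sketch only, assuming train_model is changed to return hist.history as described above, and that num_workers, model_types and init_worker are defined as in the snippet) is to let the subplot position simply follow the order of the returned results:

    # inside train_model(...):
    #     hist = model.fit(...)
    #     return {'type': model_type, 'train_MSE': train_MSE,
    #             'test_MSE': test_MSE, 'hist': hist.history}

    if __name__ == "__main__":
        pool = multiprocessing.Pool(num_workers, init_worker)
        scores = pool.map(train_model, model_types)

        # one row of subplots, one column per trained model
        f, axes = plt.subplots(1, 2, figsize=(20, 15))
        for ax, s in zip(axes, scores):
            ax.plot(s['hist']['loss'], label='Train loss')
            ax.plot(s['hist']['val_loss'], label='Validation loss')
            ax.set_title(s['type'].upper())
            ax.legend()
        plt.subplots_adjust(top=0.80, bottom=0.38, left=0.12, right=0.90, hspace=0.37, wspace=0.28)
        plt.savefig('_All_Losses_history_.png')
        plt.show()
        print(scores)

Here the subplot index follows the order of model_types, so the RNN losses land in the left panel and the LSTM losses in the right one, without needing separate hist_RNN / hist_LSTM variables.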

Tags: python, matplotlib, keras, subplot

Solution


print(history.history.keys())
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])

You can assign these, just like history.history['loss'], to other variables and work with them.
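For example (a sketch; with the compile call shown in the question only 'loss' and 'val_loss' are recorded, while 'acc'/'val_acc' would additionally require compiling with metrics=['acc']):

    hist = model.fit(trainX, trainY, epochs=50, batch_size=20,
                     validation_data=(testX, testY), callbacks=callbacks)
    print(hist.history.keys())             # e.g. dict_keys(['loss', 'val_loss'])
    train_loss = hist.history['loss']      # one value per epoch
    val_loss = hist.history['val_loss']
    best_epoch = int(np.argmin(val_loss))  # epoch with the lowest validation loss

These are ordinary Python lists, so they can be returned from a function, stored, or passed straight to plt.plot.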
