Callback causes ValueError

Problem Description

This code ran fine for the past few months, but after I made some change it broke, and I haven't been able to get it back to a working state.

import tensorflow as tf
from tensorflow.keras.layers import Masking, Bidirectional, LSTM, Dense


def bi_LSTM_model(X_train, y_train, X_test, y_test, num_classes, loss,
                  batch_size=68, units=128, learning_rate=0.005,
                  epochs=20, dropout=0.2, recurrent_dropout=0.2):

    class myCallback(tf.keras.callbacks.Callback):
        def on_epoch_end(self, epoch, logs={}):
            # Stop training once training accuracy exceeds 90%.
            if logs.get('acc') > 0.90:
                print("\nReached 90% accuracy so cancelling training!")
                self.model.stop_training = True

    callbacks = myCallback()

    model = tf.keras.models.Sequential()
    model.add(Masking(mask_value=0.0, input_shape=(X_train.shape[1], X_train.shape[2])))
    model.add(Bidirectional(LSTM(units, dropout=dropout, recurrent_dropout=recurrent_dropout)))
    model.add(Dense(num_classes, activation='softmax'))

    model.compile(loss=loss,
                  optimizer=adamopt,  # adamopt is an Adam optimizer defined outside this function
                  metrics=['accuracy'])

    history = model.fit(X_train, y_train,
                        batch_size=batch_size,
                        epochs=epochs,
                        validation_data=(X_test, y_test),
                        verbose=1,
                        callbacks=[callbacks])

    score, acc = model.evaluate(X_test, y_test,
                                batch_size=batch_size)

    yhat = model.predict(X_test)

    return history, yhat


def duo_bi_LSTM_model(X_train, y_train, X_test, y_test, num_classes, loss,
                      batch_size=68, units=128, learning_rate=0.005,
                      epochs=20, dropout=0.2, recurrent_dropout=0.2):

    class myCallback(tf.keras.callbacks.Callback):
        def on_epoch_end(self, epoch, logs={}):
            # Stop training once training accuracy exceeds 90%.
            if logs.get('acc') > 0.90:
                print("\nReached 90% accuracy so cancelling training!")
                self.model.stop_training = True

    callbacks = myCallback()

    model = tf.keras.models.Sequential()
    model.add(Masking(mask_value=0.0, input_shape=(X_train.shape[1], X_train.shape[2])))
    model.add(Bidirectional(
        LSTM(units, dropout=dropout, recurrent_dropout=recurrent_dropout, return_sequences=True)))
    model.add(Bidirectional(LSTM(units, dropout=dropout, recurrent_dropout=recurrent_dropout)))
    model.add(Dense(num_classes, activation='softmax'))

    model.compile(loss=loss,
                  optimizer=adamopt,  # the same module-level adamopt is reused here
                  metrics=['accuracy'])

    history = model.fit(X_train, y_train,
                        batch_size=batch_size,
                        epochs=epochs,
                        validation_data=(X_test, y_test),
                        verbose=1,
                        callbacks=[callbacks])

    score, acc = model.evaluate(X_test, y_test,
                                batch_size=batch_size)

    yhat = model.predict(X_test)

    return history, yhat

Basically, I have defined two models, and the error appears whenever the second one runs.

By the way, I use tf.keras.backend.clear_session() between the two models.

ValueError: Tensor("Adam/bidirectional/forward_lstm/kernel/m:0", shape=(), dtype=resource) must be from the same graph as Tensor("bidirectional/forward_lstm/kernel:0", shape=(), dtype=resource).

The only modification I made to the code was to pull the callback class out of the two functions and place it before them, to reduce redundancy (something like the sketch below).
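A sketch of what that refactor presumably looks like (the factored-out version isn't shown in the question, so the module-level placement is an assumption):

import tensorflow as tf

# Assumed refactor: the callback class defined once at module level,
# before both model functions, instead of once inside each.
class myCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        if logs.get('acc') > 0.90:
            print("\nReached 90% accuracy so cancelling training!")
            self.model.stop_training = True

# Each model function then just instantiates it:
#   callbacks = myCallback()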

Tags: python, tensorflow, machine-learning

Solution

The problem is not in the callback. The error occurs because you pass the same optimizer to two different models, which is not possible because they are two different computation graphs.

Try defining the optimizer inside the function that defines the model, before calling model.compile().
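
A minimal sketch of that fix, assuming TF 2.x where tf.keras.optimizers.Adam accepts a learning_rate argument; the build_and_compile helper and its input_shape default are illustrative names, not from the original code:

import tensorflow as tf

def build_and_compile(num_classes, loss, units=128, learning_rate=0.005,
                      input_shape=(10, 4)):
    model = tf.keras.models.Sequential([
        tf.keras.layers.Masking(mask_value=0.0, input_shape=input_shape),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(units)),
        tf.keras.layers.Dense(num_classes, activation='softmax'),
    ])
    # The optimizer is created here, inside the function, so every call
    # gets a fresh Adam instance that belongs to this model's graph.
    adamopt = tf.keras.optimizers.Adam(learning_rate=learning_rate)
    model.compile(loss=loss, optimizer=adamopt, metrics=['accuracy'])
    return model

# Each model now gets its own optimizer, so compiling two models in a row
# no longer raises the cross-graph ValueError.
model_1 = build_and_compile(num_classes=3, loss='categorical_crossentropy')
model_2 = build_and_compile(num_classes=3, loss='categorical_crossentropy')

Because a fresh optimizer is created on every call, compiling the second model no longer references variables from the first model's graph.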

