python - ValueError caused by a callback
Problem description
This code ran fine for the past few months, but after I made some changes it broke somehow, and I cannot get it back to a working state.
def bi_LSTM_model(X_train, y_train, X_test, y_test, num_classes, loss,
                  batch_size=68, units=128, learning_rate=0.005,
                  epochs=20, dropout=0.2, recurrent_dropout=0.2):

    class myCallback(tf.keras.callbacks.Callback):
        def on_epoch_end(self, epoch, logs={}):
            if (logs.get('acc') > 0.90):
                print("\nReached 90% accuracy so cancelling training!")
                self.model.stop_training = True

    callbacks = myCallback()

    model = tf.keras.models.Sequential()
    model.add(Masking(mask_value=0.0, input_shape=(X_train.shape[1], X_train.shape[2])))
    model.add(Bidirectional(LSTM(units, dropout=dropout, recurrent_dropout=recurrent_dropout)))
    model.add(Dense(num_classes, activation='softmax'))

    model.compile(loss=loss,
                  optimizer=adamopt,
                  metrics=['accuracy'])

    history = model.fit(X_train, y_train,
                        batch_size=batch_size,
                        epochs=epochs,
                        validation_data=(X_test, y_test),
                        verbose=1,
                        callbacks=[callbacks])

    score, acc = model.evaluate(X_test, y_test,
                                batch_size=batch_size)

    yhat = model.predict(X_test)

    return history, yhat
def duo_bi_LSTM_model(X_train, y_train, X_test, y_test, num_classes, loss,
                      batch_size=68, units=128, learning_rate=0.005,
                      epochs=20, dropout=0.2, recurrent_dropout=0.2):

    class myCallback(tf.keras.callbacks.Callback):
        def on_epoch_end(self, epoch, logs={}):
            if (logs.get('acc') > 0.90):
                print("\nReached 90% accuracy so cancelling training!")
                self.model.stop_training = True

    callbacks = myCallback()

    model = tf.keras.models.Sequential()
    model.add(Masking(mask_value=0.0, input_shape=(X_train.shape[1], X_train.shape[2])))
    model.add(Bidirectional(
        LSTM(units, dropout=dropout, recurrent_dropout=recurrent_dropout, return_sequences=True)))
    model.add(Bidirectional(LSTM(units, dropout=dropout, recurrent_dropout=recurrent_dropout)))
    model.add(Dense(num_classes, activation='softmax'))

    model.compile(loss=loss,
                  optimizer=adamopt,
                  metrics=['accuracy'])

    history = model.fit(X_train, y_train,
                        batch_size=batch_size,
                        epochs=epochs,
                        validation_data=(X_test, y_test),
                        verbose=1,
                        callbacks=[callbacks])

    score, acc = model.evaluate(X_test, y_test,
                                batch_size=batch_size)

    yhat = model.predict(X_test)

    return history, yhat
Basically, I have defined two models, and the error appears whenever the second model runs. By the way, I call tf.keras.backend.clear_session() between the models.
ValueError: Tensor("Adam/bidirectional/forward_lstm/kernel/m:0", shape=(), dtype=resource) must be from the same graph as Tensor("bidirectional/forward_lstm/kernel:0", shape=(), dtype=resource).
The only modification I made to the code was trying to move the callback class out of the two functions and define it once before them, to reduce code duplication.
Solution
The problem is not the callback. The error occurs because you are passing the same optimizer instance to two different models, which is not possible since they live in two different computation graphs: the optimizer's slot variables (e.g. "Adam/bidirectional/forward_lstm/kernel/m") were created in the first model's graph and cannot be reused with the second model's weights.
Try defining the optimizer inside the function that builds the model, right before calling model.compile().
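As a minimal sketch of that fix (the layer stack is simplified here; `build_and_compile` is a hypothetical helper, not from the question): a fresh `tf.keras.optimizers.Adam` is created inside the function on every call, so each model gets an optimizer whose variables belong to its own graph.

```python
import tensorflow as tf

def build_and_compile(num_classes, loss, learning_rate=0.005):
    # Simplified stand-in for the Masking/Bidirectional-LSTM stack above.
    model = tf.keras.models.Sequential([
        tf.keras.layers.Dense(num_classes, activation='softmax', input_shape=(10,)),
    ])
    # Create the optimizer *inside* the function: each model gets its own
    # Adam instance, so its slot variables are created alongside this
    # model's weights instead of being tied to a previous model's graph.
    adamopt = tf.keras.optimizers.Adam(learning_rate=learning_rate)
    model.compile(loss=loss, optimizer=adamopt, metrics=['accuracy'])
    return model

model_a = build_and_compile(3, 'categorical_crossentropy')
model_b = build_and_compile(3, 'categorical_crossentropy')  # no shared optimizer
```

Applied to the question's code, this means moving the `adamopt = ...` line from wherever it is defined once (outside both functions) into each of `bi_LSTM_model` and `duo_bi_LSTM_model`, just before `model.compile()`.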