RNN gives error ValueError: An operation has `None` for gradient

Problem description

My deep RNN model was working fine about a month ago. I then left it aside while a different project took over. Now that I have come back and tried to train it again, I get the following error:

Traceback (most recent call last):
  File "/home/matiss/.local/share/JetBrains/Toolbox/apps/PyCharm-P/ch-0/201.7223.92/plugins/python/helpers/pydev/_pydevd_bundle/pydevd_exec2.py", line 3, in Exec
    exec(exp, global_vars, local_vars)
  File "", line 1, in
  File "/home/matiss/Documents/python_work/PycharmProjects/NectCleave/functions.py", line 358, in weighted_model
  File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 1213, in fit
    self._make_train_function()
  File "/usr/local/lib/python3.8/dist-packages/keras/engine/training.py", line 314, in _make_train_function
    training_updates = self.optimizer.get_updates(
  File "/usr/local/lib/python3.8/dist-packages/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/keras/backend/tensorflow_backend.py", line 75, in symbolic_fn_wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/keras/optimizers.py", line 504, in get_updates
    grads = self.get_gradients(loss, params)
  File "/usr/local/lib/python3.8/dist-packages/keras/optimizers.py", line 93, in get_gradients
    raise ValueError('An operation has `None` for gradient. '
ValueError: An operation has `None` for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.
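The message itself points at non-differentiable ops (K.argmax, K.round, K.eval). Purely as an illustration of what it usually means (this is not my code), a custom loss that wraps the prediction in K.round would raise exactly this error during training:

from keras import backend as K

def rounded_bce(y_true, y_pred):
    # K.round has no defined gradient, so any model trained with this loss
    # fails with "An operation has `None` for gradient."
    return K.binary_crossentropy(y_true, K.round(y_pred))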

My model architecture:

def make_model(metrics='', output_bias=None, timesteps=None, features=None):
    # Imports used inside this function (standalone Keras with the TF backend)
    from keras import regularizers
    from keras import backend as K
    from keras.models import Sequential
    from keras.layers import Bidirectional, LSTM, Dropout, Dense
    from keras.initializers import Constant
    from keras.optimizers import Adam
    from keras.losses import BinaryCrossentropy

    if output_bias is not None:
        output_bias = Constant(output_bias)
    K.clear_session()
    model = Sequential()
    # First LSTM layer
    model.add(
        Bidirectional(LSTM(units=50, return_sequences=True, recurrent_dropout=0.1), input_shape=(timesteps, features)))
    model.add(Dropout(0.5))

    # Second LSTM layer
    model.add(Bidirectional(LSTM(units=50, return_sequences=True)))
    model.add(Dropout(0.5))

    # Third LSTM layer
    model.add(Bidirectional(LSTM(units=50, return_sequences=True)))
    model.add(Dropout(0.5))

    # Fourth LSTM layer
    model.add(Bidirectional(LSTM(units=50, return_sequences=False)))
    model.add(Dropout(0.5))

    # First Dense Layer
    model.add(Dense(units=128, kernel_initializer='he_normal', activation='relu'))
    model.add(Dropout(0.5))

    # Adding the output layer
    if output_bias is None:
        model.add(Dense(units=1, activation='sigmoid', kernel_regularizer=regularizers.l2(0.001)))
    else:
        model.add(Dense(units=1, activation='sigmoid',
                        bias_initializer=output_bias, kernel_regularizer=regularizers.l2(0.001)))
    # https://keras.io/api/losses/
    model.compile(optimizer=Adam(lr=1e-3), loss=BinaryCrossentropy(), metrics=metrics)

    return model
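
For reference, the function is called roughly like this; the shapes, metric and fit settings below are illustrative assumptions, not values from my project:

import numpy as np

timesteps, features = 30, 20                         # assumed sequence length / feature count
X_train = np.random.rand(256, timesteps, features)   # (samples, timesteps, features)
y_train = np.random.randint(0, 2, size=(256, 1))     # binary labels

model = make_model(metrics=['accuracy'], output_bias=None,
                   timesteps=timesteps, features=features)
model.fit(X_train, y_train, epochs=2, batch_size=32)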

Please help. Why is this happening?

Tags: tensorflow, keras, recurrent-neural-network

Solution


OK, after half a day of googling and checking I could not find a solution. I then decided to set up a fresh Python virtual environment, installed all the required packages, and boom: it works again. I have no idea what the problem was or how it happened, but it works now.
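
If you want to double-check a rebuilt environment before retraining, a minimal sanity check (not part of my original fix) is to print the versions that actually got installed:

import tensorflow as tf
import keras

# Confirm which versions the fresh virtual environment resolved to and that
# Keras is running on the TensorFlow backend.
print("TensorFlow:", tf.__version__)
print("Keras:", keras.__version__)
print("Backend:", keras.backend.backend())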

Hope this saves some time for anyone else running into the same problem.

