Getting Keras / TensorFlow to output a OneHotCategorical, but an operation has no gradient

Problem Description

I have an input x of indicator variables and an output y where each row is a random one-hot vector that depends on the values of x (a data sample is shown below).

I want to train a model that essentially learns a probabilistic relationship between x and y in the form of a weight for each column. The model must "choose" one and only one indicator to output. My current approach is to sample a categorical random variable and produce a one-hot vector as the prediction.

The problem is that I get the error ValueError: An operation has `None` for gradient when I try to train my Keras model.

I find this error strange because I have previously trained mixture networks with Keras and TensorFlow that use tf.contrib.distributions.Categorical, and I did not run into any gradient-related problems.

Code

Experiment

import tensorflow as tf
import tensorflow.contrib.distributions as tfd
import numpy as np
from keras import backend as K
from keras.layers import Layer
from keras.models import Sequential
from keras.utils import to_categorical


def make_xy_prob(rng, size=10000):
    rng = np.random.RandomState(rng) if isinstance(rng, int) else rng
    cols = 3
    weights = np.array([[1, 2, 3]])

    # generate data and drop zeros for now
    x = rng.choice(2, (size, cols))
    is_zeros = x.sum(axis=1) == 0
    x = x[~is_zeros]

    # use weights to create probabilities for determining y
    weighted_x = x * weights
    prob_x = weighted_x / weighted_x.sum(axis=1, keepdims=True)
    y = np.row_stack([to_categorical(rng.choice(cols, p=p), cols) for p in prob_x])

    # add zeros back and shuffle
    zeros = np.zeros(((size - len(x), cols)))
    x = np.row_stack([x, zeros])
    y = np.row_stack([y, zeros])
    shuffle_idx = rng.permutation(size)
    x = x[shuffle_idx]
    y = y[shuffle_idx]
    return x, y


class OneHotGate(Layer):
    def build(self, input_shape):
        self.kernel = self.add_weight(name='kernel', shape=(1, input_shape[1]), initializer='ones')
        # mark the layer as built (standard Keras custom-layer convention)
        super(OneHotGate, self).build(input_shape)

    def call(self, x):
        zero_cond = x < 1
        x_shape = tf.shape(x)

        # weight indicators so that more probability is assigned to more likely columns
        weighted_x = x * self.kernel

        # fill zeros with -inf so that zero probability is assigned to that column
        ninf_fill = tf.fill(x_shape, -np.inf)
        masked_x = tf.where(zero_cond, ninf_fill, weighted_x)
        onehot_gate = tf.squeeze(tfd.OneHotCategorical(logits=masked_x, dtype=x.dtype).sample(1))

        # fill gate with zeros where input was originally zero
        zeros_fill = tf.fill(x_shape, 0.0)
        masked_gate = tf.where(zero_cond, zeros_fill, onehot_gate)
        return masked_gate


def experiment(epochs=10):
    K.clear_session()
    rng = np.random.RandomState(2)

    X, y = make_xy_prob(rng)
    input_shape = (X.shape[1], )

    model = Sequential()
    gate_layer = OneHotGate(input_shape=input_shape)
    model.add(gate_layer)
    model.compile('adam', 'categorical_crossentropy')
    model.fit(X, y, 64, epochs, verbose=1)

Data sample

>>> x 
array([[1., 1., 1.],
       [0., 1., 0.],
       [1., 0., 1.],
       ...,
       [1., 1., 1.],
       [1., 1., 1.],
       [1., 1., 0.]])

>>> y
array([[0., 0., 1.],
       [0., 1., 0.],
       [1., 0., 0.],
       ...,
       [0., 0., 1.],
       [1., 0., 0.],
       [1., 0., 0.]])

Error

ValueError: An operation has `None` for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.

Tags: python, tensorflow, machine-learning, keras, deep-learning

Solution


The problem lies in the fact that OneHotCategorical performs discontinuous sampling, which is what causes the gradient computation to fail. To replace this discontinuous sampling with a continuous (relaxed) version, one can try RelaxedOneHotCategorical (which is based on the interesting Gumbel-Softmax technique).
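For concreteness, below is a minimal sketch of how the layer's call() might be adapted to use the relaxed distribution. The temperature value (0.5), the explicit squeeze axis, and keeping the -np.inf masking with the relaxed distribution are illustrative assumptions, not part of the original answer.

    def call(self, x):
        zero_cond = x < 1
        x_shape = tf.shape(x)

        # weight indicators so that more probability is assigned to more likely columns
        weighted_x = x * self.kernel

        # fill zero-indicator columns with -inf so they receive (near-)zero probability
        ninf_fill = tf.fill(x_shape, -np.inf)
        masked_x = tf.where(zero_cond, ninf_fill, weighted_x)

        # RelaxedOneHotCategorical draws a continuous (Gumbel-Softmax) approximation
        # of a one-hot sample, so gradients can flow through the sampling step
        # back to self.kernel
        relaxed = tfd.RelaxedOneHotCategorical(temperature=0.5, logits=masked_x)
        soft_gate = tf.squeeze(relaxed.sample(1), axis=0)

        # re-zero the columns whose input indicator was zero
        zeros_fill = tf.fill(x_shape, 0.0)
        return tf.where(zero_cond, zeros_fill, soft_gate)

Note that the relaxed sample is a soft probability vector rather than an exact one-hot vector: lower temperatures push it closer to one-hot at the cost of noisier gradients, and a hard selection (e.g. an argmax) would typically only be applied at inference time.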

