Custom metric to compute the Jensen-Shannon Divergence of a distribution at a 50% true positive rate in TensorFlow 2

Problem description

I am new to TensorFlow and to coding in general. I am trying to use a custom metric: the Jensen-Shannon Divergence of the probability distribution of one of the input variables, evaluated at a 50% true positive rate (recall). I am struggling to get it to work. I also use a custom loss function, which I did manage to get working (but for simplicity I have kept a standard loss in the code below).
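For reference, the snippets below assume TensorFlow 2.x and imports along these lines:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.regularizers import l2
from tensorflow.keras.callbacks import History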

def custom_loss(x,lambda_):
  def loss(y_true, y_pred):
    # x and lambda_ feed my actual custom loss; a standard
    # categorical cross-entropy is kept here for simplicity
    los_1 = tf.keras.backend.categorical_crossentropy(y_true, y_pred)
    return los_1
  return loss

def custom_metric(x):
  thresh = [0.1,0.15,0.2,0.25,0.3,0.35,0.4,0.45,0.5,0.55,0.6,0.65,0.7,0.75,0.8,0.85,0.9,0.95,1]
  def jsd(y_true, y_pred):
    thresh_ = tf.convert_to_tensor(thresh)

    tpr = tf.metrics.Recall(thresholds=thresh)
    e=tpr(y_true,y_pred)
    tr50 = tf.math.reduce_max(e) # minimum threshold which gives a true positive rate > 50%

    m = x[:,0] #input variable for jsd calculation

    m_pred = m[y_pred[:,0]<tr50] #predicted input variable
    m_actual = m[testY[:,0]==0] #actual input variable
  
    a = tf.histogram_fixed_width(m_actual,[-1,1] ,nbins=50)
    sum_a = tf.math.reduce_sum(a)
    prob_a = a/sum_a #create a probability distribution for m_actual


    b = tf.histogram_fixed_width(m_pred,[-1,1] ,nbins=50)
    sum_b = tf.math.reduce_sum(b)
    prob_b = b/sum_b #create a probability distribution for m_pred

    m = (prob_a + prob_b)/2

    js = (tf.keras.losses.KLDivergence(prob_a,m) + tf.keras.losses.KLDivergence(prob_b,m))/2


    return js
  return jsd
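
For reference, the quantity jsd is meant to return is the Jensen-Shannon divergence

    JSD(P, Q) = 1/2 * KL(P || M) + 1/2 * KL(Q || M),  with M = (P + Q)/2,

computed here between the histogrammed distributions prob_a (the actual input variable) and prob_b (the predicted one).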

After defining the custom metric, I build the model with the functional API (for simplicity I keep the regularizers at zero; I have also disabled eager execution, because I am running on a GPU runtime in Google Colab and eager execution did not let my actual custom loss work properly).

tf.compat.v1.disable_eager_execution()

initializer = keras.initializers.Orthogonal()
l2_layer1 = 0.0
l2_layer2 = 0.0
l2_layer3 = 0.0
l2_layer4 = 0.0


def neural_network():
        # create model
        i = Input(shape=(n_cols,))
       

        x1 = Dense(32, activation='relu',kernel_regularizer=l2(l2_layer1),kernel_initializer=initializer)(i)
       

        x2 = Dense(32, activation='relu',kernel_regularizer=l2(l2_layer2),kernel_initializer=initializer)(x1)
      

        x3 = Dense(32, activation='relu',kernel_regularizer=l2(l2_layer3),kernel_initializer=initializer)(x2)      

        x4 = Dense(32, activation='relu',kernel_regularizer=l2(l2_layer3),kernel_initializer=initializer)(x3)
      

        o = Dense(2, activation='softmax',kernel_regularizer=l2(l2_layer4),kernel_initializer=initializer)(x4)

        model = Model(i,o)

        opt = tf.keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999)
        model.compile(loss=custom_loss(i,10), optimizer=opt,metrics=['accuracy',custom_metric(i)])
        return model

model = neural_network()
history = History()
# fit the model
#history = model.fit(trainX, trainY, epochs=5, verbose=1,batch_size=2048,shuffle = True)
history = model.fit(trainX, trainY, validation_data=(valX, valY), epochs=50, verbose=0 ,batch_size=2048,shuffle = True)

When I run the lines above, I get this error:

ValueError: in user code:

    <ipython-input-79-2185a89bc166>:39 jsd  *
        js = (tf.keras.losses.KLDivergence(prob_a,m) + tf.keras.losses.KLDivergence(prob_b,m))/2
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/losses.py:1091 __init__  **
        kl_divergence, name=name, reduction=reduction)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/losses.py:235 __init__
        super(LossFunctionWrapper, self).__init__(reduction=reduction, name=name)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/losses.py:99 __init__
        losses_utils.ReductionV2.validate(reduction)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/losses/loss_reduction.py:68 validate
        raise ValueError('Invalid Reduction Key %s.' % key)

    ValueError: Invalid Reduction Key Tensor("metrics_2/jsd/truediv:0", shape=(50,), dtype=float64).

Tags: python, tensorflow, keras

Solution
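
The traceback points at the cause: tf.keras.losses.KLDivergence is a loss class, not a function. Its constructor takes (reduction, name), so the call KLDivergence(prob_a, m) hands the tensor prob_a to the reduction argument, which fails validation with "Invalid Reduction Key". The fix is to instantiate the class once and then call the instance on the two distributions. A minimal self-contained sketch (the helper js_divergence and the sample tensors p and q are illustrative, not part of the original code):

import tensorflow as tf

def js_divergence(prob_a, prob_b):
    # JSD(P, Q) = 1/2 * KL(P || M) + 1/2 * KL(Q || M), with M = (P + Q)/2
    prob_a = tf.cast(prob_a, tf.float32)
    prob_b = tf.cast(prob_b, tf.float32)
    mix = (prob_a + prob_b) / 2
    kl = tf.keras.losses.KLDivergence()  # instantiate the loss class first...
    return (kl(prob_a, mix) + kl(prob_b, mix)) / 2  # ...then call it on tensors

p = tf.constant([0.1, 0.4, 0.5])
q = tf.constant([0.3, 0.3, 0.4])
print(js_divergence(p, q))

Applied to the metric above, the offending line becomes

    kl = tf.keras.losses.KLDivergence()
    js = (kl(prob_a, m) + kl(prob_b, m)) / 2

or, equivalently, the functional form tf.keras.losses.kl_divergence(prob_a, m) can be used without constructing the class at all.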

