Increase sigmoid prediction output values?

Problem description

I created a Conv1D model for text classification.

When using softmax / sigmoid on the final dense layer, it produces these results:

softmax => [0.98502016 0.0149798 ]
sigmoid => [0.03902826 0.00037046]

I want the first index of the sigmoid result to be at least greater than 0.8. In short, I want each class to have an independent score in a multi-class setup. How can I achieve this?

Model summary:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding (Embedding)        (None, 128, 100)          600       
_________________________________________________________________
conv1d (Conv1D)              (None, 126, 128)          38528     
_________________________________________________________________
max_pooling1d (MaxPooling1D) (None, 63, 128)           0         
_________________________________________________________________
conv1d_1 (Conv1D)            (None, 61, 128)           49280     
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 30, 128)           0         
_________________________________________________________________
conv1d_2 (Conv1D)            (None, 28, 128)           49280     
_________________________________________________________________
max_pooling1d_2 (MaxPooling1 (None, 14, 128)           0         
_________________________________________________________________
flatten (Flatten)            (None, 1792)              0         
_________________________________________________________________
dense (Dense)                (None, 2)                 3586      
=================================================================
Total params: 141,274
Trainable params: 141,274
Non-trainable params: 0
_________________________________________________________________
model.add(keras.layers.Dense(num_class, activation='sigmoid'))
model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop', metrics=['acc'])

Tags: python, tensorflow, keras

Solution


I agree with @blue-phoenox's comment that you shouldn't use sigmoid with cross-entropy, because the class probabilities don't sum to 1. But if you have a reason to use sigmoid, you can normalize your output by the sum of the vector elements so that it does sum to 1:

output = output/tf.reshape(tf.reduce_sum(output, 1), (-1, 1))

You will get:

import tensorflow as tf  # TF 1.x API (uses tf.Session)

output = tf.Variable([[0.03902826, 0.00037046]])
output = output/tf.reshape(tf.reduce_sum(output, 1), (-1, 1))
summedup = tf.reduce_sum(output, axis=1)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(output.eval()) # [[0.9905971  0.00940284]] - new output
    print(summedup.eval()) # [1.] - sums up to 1
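If you just want to verify the arithmetic, the normalization is simply dividing each row by its row sum. A minimal NumPy sketch (not part of the original answer):

```python
import numpy as np

# Row-wise normalization: divide each row by its sum
output = np.array([[0.03902826, 0.00037046]])
normalized = output / output.sum(axis=1, keepdims=True)
print(normalized)              # ≈ [[0.9905971 0.0094028]]
print(normalized.sum(axis=1))  # [1.]
```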

To implement it in keras, you can create a subclass of tf.keras.layers.Layer like this:

from tensorflow.keras import layers

class NormLayer(layers.Layer):
    def __init__(self):
        super(NormLayer, self).__init__()

    def call(self, inputs):
        return inputs / tf.reshape(tf.reduce_sum(inputs, 1), (-1, 1))

And then use it in your Sequential() model:

import numpy as np
import tensorflow as tf

# using dummy data to illustrate
x_train = np.array([[-1.551, -1.469], [1.022, 1.664]], dtype=np.float32)
y_train = np.array([[0, 1], [1, 0]], dtype=np.int32)

model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(units=2, activation=tf.nn.sigmoid, input_shape=(2, )))
model.add(NormLayer())

model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

model.fit(x=x_train,
          y=y_train,
          epochs=2,
          batch_size=2)
# ...
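If you'd rather not subclass, the same normalization can also be dropped in with a keras Lambda layer. A sketch assuming TF 2.x (the layer choice is mine, not from the original answer):

```python
import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(2, activation='sigmoid'),
    # Normalize each row so the outputs sum to 1
    tf.keras.layers.Lambda(lambda x: x / tf.reduce_sum(x, axis=1, keepdims=True)),
])
model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])
```

Note that Lambda layers are convenient for quick experiments, but a Layer subclass like NormLayer serializes more reliably when saving and reloading the model.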
