tf.nn.softmax_cross_entropy_with_logits_v2 returns zero for an MLP

Problem description

I built an MLP with 3 hidden layers for a binary classification problem, but I'm running into a problem with my cost function. I'm currently running on a small subset of the data, whose shapes are (the large number of features comes from one-hot encoding):

x_train shape: (150, 1929)
y_train shape: (150, 1)
x_test shape: (51, 1929)
y_test shape: (51, 1)

The TensorFlow graph is:

# Parameters
learning_rate = 0.01
training_epochs = 500
iter_num = 500
batch_size = 200
display_step = training_epochs/10


# Network Parameters
n_hidden_1 = 1000 # 1st layer number of features
n_hidden_2 = 100 # 2nd layer number of features
n_hidden_3 = 8 # 3rd layer number of features
n_input = num_features # Number of input feature
n_classes = 1 # Number of classes to predict


# tf Graph input
x = tf.placeholder(tf.float32, [None, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])

# Create model
def multilayer_perceptron(x, weights, biases):
    # Hidden layer with sigmoid activation function
    layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
    layer_1 = tf.nn.sigmoid(layer_1)
    # Hidden layer with sigmoid activation function
    layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
    layer_2 = tf.nn.sigmoid(layer_2)    
    #Hidden layer with sigmoid activation
    layer_3 = tf.add(tf.matmul(layer_2, weights['h3']), biases['b3'])
    layer_3 = tf.nn.sigmoid(layer_3)
    # Output layer with softmax activation
    out_layer = tf.matmul(layer_3, weights['out']) + biases['out']
    out_layer = tf.nn.softmax(out_layer)
    return out_layer

# Store layers weight & bias
weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'h3': tf.Variable(tf.random_normal([n_hidden_2, n_hidden_3])),
    'out': tf.Variable(tf.random_normal([n_hidden_3, n_classes]))
}

biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'b3': tf.Variable(tf.random_normal([n_hidden_3])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}


# Construct model
pred = multilayer_perceptron(x, weights, biases)

# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=pred, labels=y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost)

correct = tf.cast(tf.equal(pred, y), dtype=tf.float32)
accuracy = tf.reduce_mean(tf.cast(correct, "float"))

init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)

I then run the graph with this code:

# Training loop
loss_vec = []
test_loss = []
train_acc = []
test_acc = []
predic = []

for epoch in range(iter_num):
    rand_index = np.random.choice(len(train_X), size=batch_size)
    rand_x = train_X[rand_index]
    rand_y = train_y[rand_index]

    temp_loss = sess.run(cost, feed_dict={x: rand_x,y: rand_y})    
    test_temp_loss = sess.run(cost, feed_dict={x: test_X, y: test_y})
    temp_train_acc = sess.run(accuracy, feed_dict={x: train_X, y: train_y})
    temp_test_acc = sess.run(accuracy, feed_dict={x: test_X, y: test_y})

    temp_prediction = sess.run(pred, feed_dict={x: test_X, y: test_y})
    predic.append(temp_prediction)

    loss_vec.append(np.sqrt(temp_loss))
    test_loss.append(np.sqrt(test_temp_loss))
    train_acc.append(temp_train_acc)
    test_acc.append(temp_test_acc)
    # output

    if (epoch + 1) % (iter_num/10) == 0:
        print('epoch: {:4d} loss: {:5f} train_acc: {:5f} test_acc: {:5f}'.format(epoch + 1, temp_loss,
                                                                          temp_train_acc, temp_test_acc))  

However, when I run it, the train and test accuracies never change, and the loss stays at zero for all epochs.

Output:

epoch:   50 loss: 0.000000 train_acc: 0.300000 test_acc: 0.235294
epoch:  100 loss: 0.000000 train_acc: 0.300000 test_acc: 0.235294
epoch:  150 loss: 0.000000 train_acc: 0.300000 test_acc: 0.235294
....

I can't figure out why my loss is zero. My targets and predictions appear to have the same shape, and they are definitely not equal.

Tags: python, tensorflow

Solution


tf.nn.softmax_cross_entropy_with_logits_v2 already computes the softmax for you; you need to pass the unbounded logits to your cross-entropy function, not the probability distribution that softmax returns. Try this:

def multilayer_perceptron(x, weights, biases):
    # ... 
    logits = tf.matmul(layer_3, weights['out']) + biases['out']
    out_layer = tf.nn.softmax(logits)
    return logits, out_layer

Then use logits to compute the cross entropy and out_layer for inference:

logits, pred = multilayer_perceptron(x, weights, biases)
cost = tf.reduce_mean(
  tf.nn.softmax_cross_entropy_with_logits_v2(logits=logits, labels=y))

The optimizer is the op that computes the gradients and applies them to the variables; it is essentially what makes your network "learn". You have declared it, but I don't see you running it anywhere in your loop. This should do it:

_, temp_loss = sess.run([optimizer, cost], feed_dict={x: rand_x,y: rand_y})
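
For completeness, here is a minimal sketch of the training loop with the optimizer step wired in (same variable names as the question's loop; everything else can stay as it was):

for epoch in range(iter_num):
    rand_index = np.random.choice(len(train_X), size=batch_size)
    rand_x = train_X[rand_index]
    rand_y = train_y[rand_index]

    # Running the optimizer op is what actually updates the weights;
    # fetching cost in the same call reuses the forward pass.
    _, temp_loss = sess.run([optimizer, cost], feed_dict={x: rand_x, y: rand_y})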

You have a binary classification problem, and I'd guess your labels are 0 or 1. You also have a single output neuron, over which softmax will always return 1.0. My suggestion is to use 2 output neurons, so that softmax computes a probability distribution over the 2 classes. Your inference is then the argmax of that distribution:

# argmax yields int64 indices of shape [batch]; squeeze/cast y to match
correct = tf.cast(tf.equal(tf.argmax(pred, 1),
                           tf.cast(tf.squeeze(y, axis=1), tf.int64)), tf.float32)
accuracy = tf.reduce_mean(correct)
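
This also explains why the reported loss was exactly zero: softmax over a single logit is always 1.0, so with one output neuron the cross entropy against any 0/1 label is -y * log(1.0) = 0. A quick standalone NumPy check (illustrative values only):

import numpy as np

logit = np.array([3.2])                       # any single value works
prob = np.exp(logit) / np.sum(np.exp(logit))  # softmax of one element -> [1.0]

for y_val in (0.0, 1.0):
    print(-y_val * np.log(prob))              # prints zero for both labels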

In that case, you need tf.nn.sparse_softmax_cross_entropy_with_logits() to compute the cross entropy:

# sparse labels must be integer class indices of shape [batch]
cost = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(
        logits=logits, labels=tf.cast(tf.squeeze(y, axis=1), tf.int32)))
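
Putting it together, a minimal end-to-end sketch under these assumptions (n_classes set to 2 before the weight/bias dicts are built, the question's float y placeholder kept; label_idx is a helper name I'm introducing for illustration):

n_classes = 2  # two output neurons instead of one

logits, pred = multilayer_perceptron(x, weights, biases)

# Convert the float [batch, 1] labels into integer class indices of shape [batch].
label_idx = tf.cast(tf.squeeze(y, axis=1), tf.int32)

cost = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits,
                                                   labels=label_idx))
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

# Inference: the predicted class is the argmax of the 2-class distribution.
predicted_class = tf.argmax(pred, axis=1, output_type=tf.int32)
correct = tf.cast(tf.equal(predicted_class, label_idx), tf.float32)
accuracy = tf.reduce_mean(correct)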

I'd further suggest having a look at this post for more details on what I'm describing here.

