ConvNN strange accuracy score and accuracy plot

Problem description

Hi (to start off, I should say I'm new to neural networks)! I have been writing a simple cat-vs-dog classifier in Python, using TensorFlow. The network is convolutional. I have trained it several times, but the accuracy score I get is low (50%) and the accuracy plot looks strange. [Loss plot] [Accuracy plot]

This is the neural network:

def create_net():
    weights, biases = init_weights_biases()

    l1 = conv2d(x, weights['wc1'], biases['bc1'])
    l1 = maxpool2d(l1)

    l2 = conv2d(l1, weights['wc2'], biases['bc2'])
    l2 = maxpool2d(l2)

    l3 = conv2d(l2, weights['wc3'], biases['bc3'])
    l3 = maxpool2d(l3)

    l4 = tf.reshape(l3, shape=[-1, weights['wfc'].get_shape().as_list()[0]])
    l4 = tf.add(tf.matmul(l4, weights['wfc']), biases['bfc'])
    l4 = tf.nn.softmax(l4)
    l4 = tf.nn.dropout(l4, .5)

    out = tf.add(tf.matmul(l4, weights['bout']), biases['bout'])
    return out

pred = create_net()
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=pred, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=.001).minimize(cost)
correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
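
The snippet above references x, y, init_weights_biases, conv2d and maxpool2d, which are not shown in the question. A minimal sketch of what they might look like follows; only the 64x64x1 input shape comes from the training code below, while the filter sizes and channel counts are assumptions:

import tensorflow as tf

# Placeholders for 64x64 grayscale images and one-hot cat/dog labels (shapes assumed)
x = tf.placeholder(tf.float32, [None, 64, 64, 1])
y = tf.placeholder(tf.float32, [None, 2])

def init_weights_biases():
    # Filter sizes and channel counts are guesses; the first dimension of 'wfc'
    # must equal the flattened size after three 2x2 max-pools (64 -> 8, so 8*8*128).
    weights = {
        'wc1': tf.Variable(tf.random_normal([3, 3, 1, 32])),
        'wc2': tf.Variable(tf.random_normal([3, 3, 32, 64])),
        'wc3': tf.Variable(tf.random_normal([3, 3, 64, 128])),
        'wfc': tf.Variable(tf.random_normal([8 * 8 * 128, 1024])),
        'bout': tf.Variable(tf.random_normal([1024, 2])),
    }
    biases = {
        'bc1': tf.Variable(tf.zeros([32])),
        'bc2': tf.Variable(tf.zeros([64])),
        'bc3': tf.Variable(tf.zeros([128])),
        'bfc': tf.Variable(tf.zeros([1024])),
        'bout': tf.Variable(tf.zeros([2])),
    }
    return weights, biases

def conv2d(inp, w, b, strides=1):
    # Convolution followed by bias add and ReLU
    out = tf.nn.conv2d(inp, w, strides=[1, strides, strides, 1], padding='SAME')
    return tf.nn.relu(tf.nn.bias_add(out, b))

def maxpool2d(inp, k=2):
    # 2x2 max-pooling
    return tf.nn.max_pool(inp, ksize=[1, k, k, 1], strides=[1, k, k, 1], padding='SAME')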

This is how I train the network:

for i in range(epochs):
    if previous_batch >= len(X_train):
        previous_batch = 0

    current_batch = previous_batch + batch

    X_train_i = X_train[previous_batch:current_batch]
    X_train_i = np.array(X_train_i).reshape(batch, 64, 64, 1)

    y_train_i = y_train[previous_batch:current_batch]
    y_train_i = np.array(y_train_i)

    sess.run(optimizer, feed_dict={
        x: X_train_i,
        y: y_train_i
    })

    previous_batch += batch

Tags: python, tensorflow, deep-learning, conv-neural-network

Solution


I think you are not going through the complete dataset in each epoch. In every iteration you only use a single batch of data, with what I assume is a fairly small batch size such as 100, 128, or 256. That is probably why you get a low accuracy score.

For example, consider the output of the following training loop (the same as yours) on some random data:

import numpy as np

epochs = 5
previous_batch = 0
X_train = np.random.rand(1000, 5)
batch = 128
y_train = np.random.rand(1000, 2)
for i in range(epochs):
    if previous_batch >= len(X_train):
        previous_batch = 0

    current_batch = previous_batch + batch
    print(previous_batch, current_batch)

    X_train_i = X_train[previous_batch:current_batch]
    y_train_i = y_train[previous_batch:current_batch]   
    print(i, X_train_i.shape, y_train_i.shape)

    previous_batch += batch

Output:

0 128
0 (128, 5) (128, 2) # epoch 0
128 256
1 (128, 5) (128, 2) # epoch 1
256 384
2 (128, 5) (128, 2) # epoch 2
384 512
3 (128, 5) (128, 2) # epoch 3
512 640
4 (128, 5) (128, 2) # epoch 4

Here, each iteration only uses 128 samples out of the whole dataset. In other words, we never go through the entire dataset (X_train, y_train) while training, which can lead to poor training results.

To go through the entire dataset in every epoch, do something like this:

for i in range(epochs):
    previous_batch = 0
    for j in range(len(X_train)//batch):
        current_batch = previous_batch + batch

        X_train_i = X_train[previous_batch:current_batch]
        X_train_i = np.array(X_train_i).reshape(batch, 64, 64, 1)

        y_train_i = y_train[previous_batch:current_batch]
        y_train_i = np.array(y_train_i)

        sess.run(optimizer, feed_dict={
            x: X_train_i,
            y: y_train_i
        })

        previous_batch += batch
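
Note that the inner loop above uses integer division, so up to batch - 1 trailing samples are skipped each epoch, and the data is always visited in the same order. A common variant, sketched here assuming X_train and y_train can be indexed as NumPy arrays, shuffles the indices every epoch and feeds all samples:

import numpy as np

for i in range(epochs):
    # reshuffle once per epoch so the batches differ between epochs
    order = np.random.permutation(len(X_train))

    for start in range(0, len(X_train), batch):
        idx = order[start:start + batch]  # the last batch may be smaller than `batch`

        # reshape(-1, ...) handles a smaller final batch, unlike reshape(batch, ...)
        X_train_i = np.asarray(X_train)[idx].reshape(-1, 64, 64, 1)
        y_train_i = np.asarray(y_train)[idx]

        sess.run(optimizer, feed_dict={
            x: X_train_i,
            y: y_train_i
        })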
