Can someone help me write some Python code for a neural network on the MNIST dataset?

Problem description

For a school project I have been analyzing the code below, but I would like to add one feature to it: once the neural network has finished training, I want to give it an image of a handwritten digit from MNIST (say an 8) so that it can try to recognize which digit it is. Since I am completely new to coding and machine learning, although I really enjoy it and want to learn more, I could not work out by myself what such code should look like. Can anyone help me?

The code is written in Python:

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Load the MNIST dataset with one-hot encoded labels
mnist = input_data.read_data_sets("MNIST_data", one_hot=True)

# Hyperparameters
learning_rate = 0.0001
batch_size = 100
update_step = 10

# Network architecture: three hidden layers of 500 nodes each, 10 output classes
layer_1_nodes = 500
layer_2_nodes = 500
layer_3_nodes = 500
output_nodes = 10

# Placeholders for the flattened 28x28 input images and the one-hot target labels
network_input = tf.placeholder(tf.float32, [None, 784])
target_output = tf.placeholder(tf.float32, [None, output_nodes])

# Weights and biases for each layer, initialized from a normal distribution
layer_1 = tf.Variable(tf.random_normal([784, layer_1_nodes]))
layer_1_bias = tf.Variable(tf.random_normal([layer_1_nodes]))
layer_2 = tf.Variable(tf.random_normal([layer_1_nodes, layer_2_nodes]))
layer_2_bias = tf.Variable(tf.random_normal([layer_2_nodes]))
layer_3 = tf.Variable(tf.random_normal([layer_2_nodes, layer_3_nodes]))
layer_3_bias = tf.Variable(tf.random_normal([layer_3_nodes]))
out_layer = tf.Variable(tf.random_normal([layer_3_nodes, output_nodes]))
out_layer_bias = tf.Variable(tf.random_normal([output_nodes]))

# Forward pass: three ReLU hidden layers, then raw logits and a softmax output
l1_output = tf.nn.relu(tf.matmul(network_input, layer_1) + layer_1_bias)
l2_output = tf.nn.relu(tf.matmul(l1_output, layer_2) + layer_2_bias)
l3_output = tf.nn.relu(tf.matmul(l2_output, layer_3) + layer_3_bias)
ntwk_output_1 = tf.matmul(l3_output, out_layer) + out_layer_bias
ntwk_output_2 = tf.nn.softmax(ntwk_output_1)

# Cross-entropy loss, gradient-descent training step, and accuracy metric
cf = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=ntwk_output_1,
                                            labels=target_output))
ts = tf.train.GradientDescentOptimizer(learning_rate).minimize(cf)
cp = tf.equal(tf.argmax(ntwk_output_2, 1), tf.argmax(target_output, 1))
acc = tf.reduce_mean(tf.cast(cp, tf.float32))


with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    num_epochs = 10
    for epoch in range(num_epochs):
        total_cost = 0
        for _ in range(int(mnist.train.num_examples / batch_size)):
            batch_x, batch_y = mnist.train.next_batch(batch_size)
            t, c = sess.run([ts, cf],
                            feed_dict={network_input: batch_x, target_output: batch_y})
            total_cost += c
        print('Epoch', epoch, 'completed out of', num_epochs, 'loss:', total_cost)
    print('Accuracy:', acc.eval({network_input: mnist.test.images, target_output: mnist.test.labels}))

Tags: python, machine-learning, neural-network, mnist

Solution


with tf.Session() as sess:
    # yourImageNdArray must be a NumPy array of shape [1, 784], i.e. one
    # flattened 28x28 image, matching the network_input placeholder
    number_prediction = tf.argmax(ntwk_output_2, 1)
    number_prediction = sess.run(number_prediction,
                                 feed_dict={network_input: yourImageNdArray})
    print("your prediction : ", number_prediction)

What you need to know:

  • ntwk_output_2 is the output of the neural network; it gives you 10 probabilities, and you take the largest one with tf.argmax (tf.argmax does not return the maximum value, but its position)

  • sess.run takes care of running your TensorFlow graph and evaluating the tensor(s) given in its first argument

  • you also need to feed the image you want to predict to your network through feed_dict (see the sketch after this list)
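
Putting the pieces together, here is a minimal sketch of how the prediction could be run right after the training loop from the question, inside the same session so the trained weights are still in memory (the test-image index 0 is an arbitrary choice for illustration):

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    # ... run the training loop from the question here ...

    # Take one test image and reshape it to the [1, 784] shape network_input expects
    test_image = mnist.test.images[0].reshape(1, 784)
    true_digit = mnist.test.labels[0].argmax()  # one-hot label -> digit

    # ntwk_output_2 gives the 10 softmax probabilities; tf.argmax picks the
    # position (i.e. the digit) with the highest probability
    predicted_digit = sess.run(tf.argmax(ntwk_output_2, 1),
                               feed_dict={network_input: test_image})

    print('Network prediction:', predicted_digit[0], '- true digit:', true_digit)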

Hope this helps!

