Why does the performance of a neural network differ between TensorFlow and Keras?

Problem description

I'm new to machine learning (ML), and I'm implementing basic algorithms to learn the fundamental syntax of ML frameworks. Right now I'm working with the MNIST database of handwritten digits.

I implemented a neural network with a single hidden layer (that is: 784 inputs in the input layer, 512 nodes in the hidden layer, and 10 outputs in the output layer) using the TensorFlow framework, with no data preprocessing, a batch size of 128, 10 epochs, and the Adam optimizer. The algorithm reaches an accuracy of about 0.95 on the training set.

Afterwards I tried to implement exactly the same architecture in Keras. However, the accuracy (on the training set) is only about 0.3. I've looked at many different implementations on the internet, but I still can't find where the problem is. I'm sure it's something silly (as always) :-/

I would expect the same architecture in Keras to give the same results as the TensorFlow implementation, right?

My Keras implementation is:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from keras.layers import Input, Dense
from keras.models import Model
from keras.utils.np_utils import to_categorical

df_train = pd.read_csv('datasets/MNIST_train.csv', delimiter=',', header=0)
Y_train, X_train = np.split(df_train.values, [1], axis=1)

m, n_x = X_train.shape
n_y = len(np.unique(Y_train))
n_layer1 = 512
batch_size = 128
num_epochs = 10

Y_train = to_categorical(Y_train)

X_input = Input(shape=(n_x,), name='input')
X = Dense(n_layer1, activation='relu', name='hidden')(X_input)
X = Dense(n_y, activation='softmax', name='output')(X)

model = Model(inputs=X_input, outputs=X, name='Neural Network')

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

model.fit(X_train, Y_train, epochs=num_epochs, batch_size=batch_size)

My TensorFlow implementation is:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf

def one_hot(a, num_classes):
    return np.eye(num_classes)[a.reshape(-1)]

def get_minibatches(batch_size, m, X, Y):
    output_batches = []

    for index in range(0, m, batch_size):
        index_end = index + batch_size
        batch = [X[index:index_end], Y[index:index_end]]
        output_batches.append(batch)

    return output_batches

def dense_layer(input, channels_in, channels_out, activation=None):
    initializer = tf.contrib.layers.xavier_initializer()
    w = tf.Variable(initializer([channels_in, channels_out]), name="w")
    b = tf.Variable(tf.zeros([1, channels_out]), name="b")

    if activation == 'relu':
        a = tf.nn.relu(tf.matmul(input, w) + b)
        return a
    else:
        z = tf.matmul(input, w) + b
        return z

df_train = pd.read_csv('datasets/MNIST_train.csv', delimiter=',', header=0)
Y_train, X_train = np.split(df_train.values, [1], axis=1)

m, n_x = X_train.shape
n_y = len(np.unique(Y_train))
n_layer1 = 512
batch_size = 128
num_epochs = 10

Y_train = one_hot(Y_train, n_y)

X = tf.placeholder(tf.float32, [None, n_x], name="X")
Y = tf.placeholder(tf.float32, [None, n_y], name="Y")

hidden = dense_layer(X, n_x, n_layer1, 'relu')
output = dense_layer(hidden, n_layer1, n_y)

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=output, labels=Y))

optimizer = tf.train.AdamOptimizer().minimize(loss)

predict = tf.argmax(output, 1)
correct_prediction = tf.equal(predict, tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

minibatches = get_minibatches(batch_size, m, X_train, Y_train)

with tf.Session() as sess:        
    sess.run(tf.global_variables_initializer())

    current_cost = sess.run(loss, feed_dict={X: X_train, Y: Y_train})
    train_accuracy = sess.run(accuracy, feed_dict={X: X_train, Y: Y_train})
    print('Epoch: {:<4} - Loss: {:<8.3} Train Accuracy: {:<5.3} '.format(0, current_cost, train_accuracy))

    for epoch in range(num_epochs):            
        for minibatch in minibatches:
            minibatch_X, minibatch_Y = minibatch

            sess.run(optimizer, feed_dict={ X: minibatch_X, Y: minibatch_Y })

        current_cost = sess.run(loss, feed_dict={X: X_train, Y: Y_train})
        train_accuracy = sess.run(accuracy, feed_dict={X: X_train, Y: Y_train})
        print('Epoch: {:<4} - Loss: {:<8.3} Train Accuracy: {:<5.3} '.format(epoch + 1, current_cost, train_accuracy))

Can you help me and suggest what I'm doing wrong? Thanks, Peter

Tags: python, tensorflow, machine-learning, neural-network, keras

Solution


I figured it out. At least partially. I standardized the input ((x - x_mean) / x_std), and the Keras implementation started returning results similar to the TensorFlow implementation...
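
For reference, here is a minimal sketch of that standardization applied to the Keras script above, inserted just before the call to model.fit. It assumes the same X_train loaded from MNIST_train.csv; computing global pixel statistics (rather than per-pixel statistics) and adding a small epsilon are my own illustrative choices, to avoid dividing by zero for pixels that are constant across the dataset:

# Standardize the raw pixel values to zero mean and unit variance before training.
# X_mean and X_std are scalars computed over all pixels of the training set; the
# epsilon guards against a zero standard deviation (illustrative choice).
X_mean = X_train.mean()
X_std = X_train.std()
X_train = (X_train - X_mean) / (X_std + 1e-8)

model.fit(X_train, Y_train, epochs=num_epochs, batch_size=batch_size)

Simply scaling the pixels into [0, 1] with X_train = X_train / 255.0 is another common preprocessing step for MNIST and should have a similar effect.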

