tf.distribute.MirroredStrategy with sessions, not with Keras?

Problem description

I want to deploy a deep model on multiple GPUs. I searched on Google and found tf.distribute.MirroredStrategy(): TensorFlow introduced MirroredStrategy to train a model on multiple GPUs in parallel. The official tutorial, "Distributed training with Keras", is one reference, and I am also following a blog post, "Training neural networks on multiple GPUs using TensorFlow", but that one is in Keras too. All the help material for this API assumes Keras, and I do not want to use Keras in my program. For the use case, here are the relevant lines of code:

import numpy as np
from sklearn.metrics import classification_report

# Train the CNN (TF1-style session loop; sess, train_step, accuracy, etc.
# are built elsewhere in the graph)
print("Training CNN... ")

max_accuracy = 0.0

for i in range(num_training_iterations):
    # Sample a random mini-batch from the training set
    idx_train = np.random.randint(0, train_size, batch_size)
    xt = np.reshape(data_train[idx_train], [batch_size, segment_size * num_input_channels])
    yt = np.reshape(labels_train[idx_train], [batch_size, n_classes])
    ft = np.reshape(features[idx_train], [batch_size, num_features])

    sess.run(train_step, feed_dict={x: xt, y_: yt, h_feat: ft, keep_prob: dropout_rate})

    # Every eval_iter iterations, evaluate on the test set (note that on the
    # very first iteration the model has only seen a single mini-batch)
    if i % eval_iter == 0:
        # keep_prob = 1: no dropout at test time
        train_accuracy, train_entropy, y_pred = sess.run(
            [accuracy, cross_entropy, y_conv],
            feed_dict={x: data_test, y_: labels_test, h_feat: features_test, keep_prob: 1})

        print("step %d, entropy %g" % (i, train_entropy))
        print("step %d, max accuracy %g, accuracy %g" % (i, max_accuracy, train_accuracy))
        print(classification_report(labels_test_unary, np.argmax(y_pred, axis=1), digits=4))

        if max_accuracy < train_accuracy:
            max_accuracy = train_accuracy

Given the code above, how can I use tf.distribute.MirroredStrategy() without the Keras library?

Tags: python, tensorflow, keras, deep-learning, distributed-system

Solution


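The sess.run()/feed_dict loop above is TF1 graph code, and tf.distribute.MirroredStrategy was never meant to wrap an existing session loop directly (in TF 1.x it was mainly exposed through the Estimator API). The supported way to use it without Keras is a TF 2.x custom training loop: create your variables inside strategy.scope(), distribute the input pipeline with strategy.experimental_distribute_dataset(), run one training step per GPU with strategy.run(), and combine the per-replica losses with strategy.reduce(). What follows is a minimal sketch under those assumptions, not a drop-in rewrite of your model: the linear forward() function stands in for your CNN, the hand-rolled SGD update (relying on the variables being created with aggregation=tf.VariableAggregation.SUM) replaces your train_step, and learning_rate, num_epochs, and GLOBAL_BATCH_SIZE are illustrative placeholders rather than code from the original post.

import numpy as np
import tensorflow as tf  # assumes TF 2.x

strategy = tf.distribute.MirroredStrategy()
print("Number of replicas:", strategy.num_replicas_in_sync)

# Placeholder hyperparameters (illustrative, not from the original post)
learning_rate = 0.01
num_epochs = 10
GLOBAL_BATCH_SIZE = batch_size * strategy.num_replicas_in_sync

# Variables must be created inside strategy.scope() so that each GPU holds
# a mirrored copy. aggregation=SUM makes manual assign_sub calls in replica
# context sum the update deltas across replicas.
with strategy.scope():
    W = tf.Variable(tf.random.normal([num_features, n_classes], stddev=0.1),
                    aggregation=tf.VariableAggregation.SUM)
    b = tf.Variable(tf.zeros([n_classes]),
                    aggregation=tf.VariableAggregation.SUM)

def forward(ft):
    # Stand-in for the CNN: a linear classifier over the hand-crafted
    # features. Replace with your own tf.nn ops.
    return tf.matmul(ft, W) + b

def replica_step(ft, yt):
    # Runs once per GPU, on that GPU's shard of the global batch.
    with tf.GradientTape() as tape:
        logits = forward(ft)
        per_example_loss = tf.nn.softmax_cross_entropy_with_logits(
            labels=yt, logits=logits)
        # Scale by the GLOBAL batch size so that summing the per-replica
        # gradients reproduces the gradient of the global mean loss.
        loss = tf.nn.compute_average_loss(
            per_example_loss, global_batch_size=GLOBAL_BATCH_SIZE)
    grads = tape.gradient(loss, [W, b])
    # Hand-rolled SGD, so no Keras optimizer is involved.
    for var, grad in zip([W, b], grads):
        var.assign_sub(learning_rate * grad)
    return loss

@tf.function
def distributed_step(ft, yt):
    per_replica_losses = strategy.run(replica_step, args=(ft, yt))
    return strategy.reduce(tf.distribute.ReduceOp.SUM,
                           per_replica_losses, axis=None)

# features and labels_train come from the question's preprocessing
dataset = (tf.data.Dataset
           .from_tensor_slices((features.astype(np.float32),
                                labels_train.astype(np.float32)))
           .shuffle(train_size)
           .batch(GLOBAL_BATCH_SIZE))
dist_dataset = strategy.experimental_distribute_dataset(dataset)

for epoch in range(num_epochs):
    for ft, yt in dist_dataset:
        loss = distributed_step(ft, yt)
    print("epoch %d, loss %g" % (epoch, float(loss)))

Because the per-example loss is divided by the global batch size and the variable updates are summed across replicas, one call to distributed_step is equivalent to a single SGD step on the full global batch. Your evaluation code (accuracy, cross-entropy, classification_report) can stay much the same: run the forward pass either through strategy.run(), or, since evaluation is cheap relative to training, on a single device outside the distributed loop.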