Avoiding the feed_dict mechanism in a static TensorFlow graph

Problem description

I am trying to implement a model that generates/reconstructs samples (a variational autoencoder). During testing, I would like to have the model generate new samples by feeding it a latent variable, but that requires changing the input to a part of the computational graph.

I could do this "on the fly" with feed_dict, since I cannot modify the static graph directly, but I want to avoid the overhead of shuttling data between the GPU and system RAM.

As it stands, I feed the data through an Iterator.

import numpy as np
import tensorflow as tf


def make_mnist_dataset(batch_size, shuffle=True, include_labels=True):
    """Loads the MNIST data set and returns the relevant
    iterator along with its initialization operations.
    """

    # load the data
    train, test = tf.keras.datasets.mnist.load_data()

    # binarize and reshape the data sets
    temp_train = train[0]
    temp_train = (temp_train > 0.5).astype(np.float32).reshape(temp_train.shape[0], 784)
    train = (temp_train, train[1])

    temp_test = test[0]
    temp_test = (temp_test > 0.5).astype(np.float32).reshape(temp_test.shape[0], 784)
    test = (temp_test, test[1])

    # prepare Dataset objects
    if include_labels:
        train_set = tf.data.Dataset.from_tensor_slices(train).repeat().batch(batch_size)
        test_set = tf.data.Dataset.from_tensor_slices(test).repeat(1).batch(batch_size)
    else:
        train_set = tf.data.Dataset.from_tensor_slices(train[0]).repeat().batch(batch_size)
        test_set = tf.data.Dataset.from_tensor_slices(test[0]).repeat(1).batch(batch_size)

    if shuffle:
        train_set = train_set.shuffle(buffer_size=int(0.5 * train[0].shape[0]),
                                      seed=123)

    # make the re-initializable iterator shared by both datasets
    iter = tf.data.Iterator.from_structure(train_set.output_types,
                                           train_set.output_shapes)
    data = iter.get_next()

    # create initialization ops
    train_init = iter.make_initializer(train_set)
    test_init = iter.make_initializer(test_set)

    return train_init, test_init, data

And here is the snippet where the iterated data is fed into the graph:

train_init, test_init, next_batch = make_mnist_dataset(batch_size, include_labels=True)
ops = build_graph(next_batch[0], next_batch[1], learning_rate, is_training, 
                  latent_dim, tau, batch_size, inf_layers, gen_layers)
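
For reference, this re-initializable iterator already lets the graph switch between the two MNIST splits without feed_dict: running the matching initializer re-points the shared iterator at the other dataset. A minimal sketch of that switching in a session follows; the ops["train_op"] and ops["reconstruction"] keys are assumed names, since build_graph is not shown.

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    # Point the shared iterator at the training set and take some steps.
    sess.run(train_init)
    for _ in range(1000):
        sess.run(ops["train_op"])            # assumed key returned by build_graph

    # Re-point the same iterator at the test set; no feed_dict is involved.
    sess.run(test_init)
    try:
        while True:
            sess.run(ops["reconstruction"])  # assumed key returned by build_graph
    except tf.errors.OutOfRangeError:
        pass                                 # test set (repeat(1)) is exhausted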

Is there a way to "switch" from the Iterator object to a different input source during testing, without using feed_dict?

Tags: python-3.x, tensorflow, machine-learning, deep-learning, tensorflow-datasets

Solution
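
One possible approach in TF 1.x, sketched rather than taken from the original answer: wrap the decoder input in tf.placeholder_with_default. During training the decoder reads the encoder output straight from the graph (and the image batches keep flowing through the Iterator), while at generation time only the small latent tensor is fed, so no image data crosses the feed_dict boundary. The names encoder_z and the stand-in decoder below are hypothetical and not the internals of the asker's build_graph.

import numpy as np
import tensorflow as tf

latent_dim = 32

# Stand-in for the encoder output inside the VAE graph (hypothetical name;
# in the real model this is the tensor produced by the inference network).
encoder_z = tf.random_normal([64, latent_dim], name="encoder_z")

# By default the decoder reads encoder_z; at test time a hand-picked latent
# code can be fed here, and only this small tensor goes through feed_dict.
z = tf.placeholder_with_default(encoder_z, shape=[None, latent_dim], name="z")

# Stand-in decoder: a single dense layer back to 784 binarized pixels.
decoded = tf.layers.dense(z, 784, activation=tf.nn.sigmoid, name="decoder")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    # Reconstruction-style call: z falls back to encoder_z, nothing is fed.
    recon = sess.run(decoded)

    # Generation call: feed a latent sample; the image pipeline stays on the GPU.
    samples = sess.run(decoded,
                       feed_dict={z: np.random.randn(10, latent_dim).astype(np.float32)})
    print(recon.shape, samples.shape)   # (64, 784) (10, 784)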

