How to make an input tensor trainable

Problem description

The code below tries to use the input image as the training variable during optimization. It starts from a Keras model and converts it to a frozen TensorFlow graph. That TensorFlow model takes a tensor as input, and the goal is to optimize a cost function using the input tensor itself as the trainable variable.

The error is:

NotImplementedError: ('Trying to update a Tensor ', )

The reason is that the input tensor is not a variable. The question is how to make the input image trainable, or how to convert the tensor into a tf.Variable. Any help is appreciated:

import tensorflow as tf
from keras.models import Sequential, load_model, Model
from keras import backend as K
from keras.layers.core import Dense, Dropout, Activation
import os
from tensorflow.python.framework import graph_util
from tensorflow.python.framework import graph_io

n_classes = 10

model = Sequential()
model.add(Dense(10, input_shape=(784,)))
model.add(Activation('relu'))                            
model.add(Dense(n_classes, name='logits'))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer='adam')

# Write the graph in binary .pb file
outdir = "model4_tf"
try:
    os.mkdir(outdir)
except OSError:
    pass


prefix = "simple_nn" 
name = 'output_graph.pb'
# Alias the outputs in the model - this sometimes makes them easier to access in TF
pred = []
pred_node_names = []
for i, o in enumerate(model.outputs):
    pred_node_names.append(prefix+'_'+str(i))
    pred.append(tf.identity(o, name=pred_node_names[i]))
print('Output nodes names are: ', pred_node_names)


sess = K.get_session()


constant_graph = graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), pred_node_names)
graph_io.write_graph(constant_graph, outdir, name, as_text=False)


tf.reset_default_graph()

def load_graph(model_name):
    #graph = tf.Graph()
    graph = tf.get_default_graph()
    graph_def = tf.GraphDef()
    with open(model_name, "rb") as f:
        graph_def.ParseFromString(f.read())
    with graph.as_default():
        tf.import_graph_def(graph_def)
    return graph

my_graph = load_graph(model_name=os.path.join(outdir, name))



input_op = my_graph.get_operation_by_name("import/dense_1_input")
output_op = my_graph.get_operation_by_name("import/simple_nn_0")
logit_op = my_graph.get_operation_by_name("import/logits/BiasAdd")


x_hat = input_op.outputs[0] # input tensor
labels = output_op.outputs[0] # label tensor
logits = logit_op.outputs[0] # logits tensor

learning_rate = tf.placeholder(tf.float32, ())

loss = tf.nn.softmax_cross_entropy_with_logits_v2(logits=logits, labels=[labels])
optim_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, var_list=[x_hat])

Tags: python-3.x, tensorflow, keras

Solution
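
The placeholder import/dense_1_input is baked into the frozen graph, so GradientDescentOptimizer has nothing it can legally update when it is passed in var_list. One common workaround is to create a tf.Variable with the input's shape and re-import the frozen graph with tf.import_graph_def(..., input_map=...), so the network is rebuilt on top of that variable; minimize(loss, var_list=[x_hat]) then updates a real variable. Below is a minimal sketch of this approach, assuming TensorFlow 1.x graph mode and the node names from the code above; the target class used here is only an illustrative assumption.

import os
import tensorflow as tf

outdir = "model4_tf"
name = 'output_graph.pb'

tf.reset_default_graph()

# Load the frozen graph definition produced above.
graph_def = tf.GraphDef()
with open(os.path.join(outdir, name), "rb") as f:
    graph_def.ParseFromString(f.read())

# Trainable variable that stands in for the input image (one 784-vector).
x_hat = tf.Variable(tf.zeros([1, 784]), name="x_hat")

# Re-import the graph, feeding x_hat wherever the original input placeholder was used.
tf.import_graph_def(graph_def, input_map={"dense_1_input:0": x_hat}, name="import")

graph = tf.get_default_graph()
logits = graph.get_tensor_by_name("import/logits/BiasAdd:0")

# Illustrative target: push the optimized input towards class 3 (an assumption).
target = tf.one_hot([3], n_classes := 10)

learning_rate = tf.placeholder(tf.float32, ())
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=target, logits=logits))

# x_hat is a tf.Variable now, so minimize() no longer raises NotImplementedError.
optim_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(
    loss, var_list=[x_hat])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        _, l = sess.run([optim_step, loss], feed_dict={learning_rate: 0.1})
    print("final loss:", l)
    print("optimized input shape:", sess.run(x_hat).shape)

Because the frozen graph's weights were converted to constants, x_hat is the only variable in the new graph, and gradients flow from the logits back through the imported layers into it.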

