Unable to locate the input tensor in tf.graph (saved_model)

Problem description

After training, I saved my model in the saved_model format (I want it in this format rather than .h5). When I load the model and print the graph, I cannot find the input tensor (there is only serving_default_input) that I need in order to make predictions.

The first time I built directly on keras.applications.VGG16 and added my own layers on top; I then also tried wrapping it with keras.Input(), but nothing changed.

This is how I define the model:

from tensorflow import keras
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model

IMG_SIZE = (512, 512)

# Use VGG16 as a frozen ImageNet feature extractor and add a small
# classification head on top of it.
model = keras.applications.VGG16(weights="imagenet",
                                 include_top=False,
                                 input_shape=(IMG_SIZE[0], IMG_SIZE[1], 3))
for layer in model.layers:
    layer.trainable = False

x = model.output
x = Dense(16, activation="relu")(x)
x = Flatten()(x)
predictions = Dense(1, activation="sigmoid")(x)
model = Model(inputs=model.input, outputs=predictions)
model.summary()  # in the first attempt:

Model: "model_3"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_3 (InputLayer)         [(None, 512, 512, 3)]     0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 512, 512, 64)      1792      
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 512, 512, 64)      36928     
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 256, 256, 64)      0         
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 256, 256, 128)     73856     
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 256, 256, 128)     147584    
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 128, 128, 128)     0         
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 128, 128, 256)     295168    
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 128, 128, 256)     590080    
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 128, 128, 256)     590080    
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 64, 64, 256)       0         
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 64, 64, 512)       1180160   
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 64, 64, 512)       2359808   
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 64, 64, 512)       2359808   
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 32, 32, 512)       0         
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 32, 32, 512)       2359808   
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 32, 32, 512)       2359808   
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 32, 32, 512)       2359808   
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 16, 16, 512)       0         
_________________________________________________________________
dense_5 (Dense)              (None, 16, 16, 16)        8208      
_________________________________________________________________
flatten_3 (Flatten)          (None, 4096)              0         
_________________________________________________________________
dense_6 (Dense)              (None, 1)                 4097      
=================================================================
Total params: 14,726,993
Trainable params: 12,305
Non-trainable params: 14,714,688
_________________________________________________________________
None

model.summary()  # in the second attempt:

Model: "model_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_5 (InputLayer)         [(None, 512, 512, 3)]     0         
_________________________________________________________________
vgg16 (Model)                (None, 16, 16, 512)       14714688  
_________________________________________________________________
dense_3 (Dense)              (None, 16, 16, 16)        8208      
_________________________________________________________________
flatten_2 (Flatten)          (None, 4096)              0         
_________________________________________________________________
dense_4 (Dense)              (None, 1)                 4097      
=================================================================
Total params: 14,726,993
Trainable params: 12,305
Non-trainable params: 14,714,688
_________________________________________________________________
None

That is how it looks from the Keras side. After converting to SavedModel:

tf.reset_default_graph()
graph = tf.Graph()
sess = tf.Session(graph=graph)
tf.saved_model.loader.load(sess, [tf.saved_model.SERVING], "SavedModel")
sess.graph.get_operations()
[<tf.Operation 'dense_3_1/kernel' type=VarHandleOp>,
 <tf.Operation 'dense_3_1/kernel/Read/ReadVariableOp' type=ReadVariableOp>,
<tf.Operation 'dense_3_1/bias' type=VarHandleOp>,
 <tf.Operation 'dense_3_1/bias/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'dense_4_1/kernel' type=VarHandleOp>,
 <tf.Operation 'dense_4_1/kernel/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'dense_4_1/bias' type=VarHandleOp>,
 <tf.Operation 'dense_4_1/bias/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block1_conv1/kernel' type=VarHandleOp>,
 <tf.Operation 'block1_conv1/kernel/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block1_conv1/bias' type=VarHandleOp>,
 <tf.Operation 'block1_conv1/bias/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block1_conv2/kernel' type=VarHandleOp>,
 <tf.Operation 'block1_conv2/kernel/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block1_conv2/bias' type=VarHandleOp>,
 <tf.Operation 'block1_conv2/bias/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block2_conv1/kernel' type=VarHandleOp>,
 <tf.Operation 'block2_conv1/kernel/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block2_conv1/bias' type=VarHandleOp>,
 <tf.Operation 'block2_conv1/bias/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block2_conv2/kernel' type=VarHandleOp>,
 <tf.Operation 'block2_conv2/kernel/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block2_conv2/bias' type=VarHandleOp>,
 <tf.Operation 'block2_conv2/bias/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block3_conv1/kernel' type=VarHandleOp>,
 <tf.Operation 'block3_conv1/kernel/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block3_conv1/bias' type=VarHandleOp>,
 <tf.Operation 'block3_conv1/bias/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block3_conv2/kernel' type=VarHandleOp>,
 <tf.Operation 'block3_conv2/kernel/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block3_conv2/bias' type=VarHandleOp>,
 <tf.Operation 'block3_conv2/bias/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block3_conv3/kernel' type=VarHandleOp>,
 <tf.Operation 'block3_conv3/kernel/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block3_conv3/bias' type=VarHandleOp>,
 <tf.Operation 'block3_conv3/bias/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block4_conv1/kernel' type=VarHandleOp>,
 <tf.Operation 'block4_conv1/kernel/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block4_conv1/bias' type=VarHandleOp>,
 <tf.Operation 'block4_conv1/bias/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block4_conv2/kernel' type=VarHandleOp>,
 <tf.Operation 'block4_conv2/kernel/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block4_conv2/bias' type=VarHandleOp>,
 <tf.Operation 'block4_conv2/bias/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block4_conv3/kernel' type=VarHandleOp>,
 <tf.Operation 'block4_conv3/kernel/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block4_conv3/bias' type=VarHandleOp>,
 <tf.Operation 'block4_conv3/bias/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block5_conv1/kernel' type=VarHandleOp>,
 <tf.Operation 'block5_conv1/kernel/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block5_conv1/bias' type=VarHandleOp>,
 <tf.Operation 'block5_conv1/bias/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block5_conv2/kernel' type=VarHandleOp>,
 <tf.Operation 'block5_conv2/kernel/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block5_conv2/bias' type=VarHandleOp>,
 <tf.Operation 'block5_conv2/bias/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block5_conv3/kernel' type=VarHandleOp>,
 <tf.Operation 'block5_conv3/kernel/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'block5_conv3/bias' type=VarHandleOp>,
 <tf.Operation 'block5_conv3/bias/Read/ReadVariableOp' type=ReadVariableOp>,
 <tf.Operation 'NoOp' type=NoOp>,
 <tf.Operation 'Const' type=Const>,
 <tf.Operation 'serving_default_input_5' type=Placeholder>,
 <tf.Operation 'StatefulPartitionedCall' type=StatefulPartitionedCall>,
 <tf.Operation 'saver_filename' type=Placeholder>,
 <tf.Operation 'StatefulPartitionedCall_1' type=StatefulPartitionedCall>,
 <tf.Operation 'StatefulPartitionedCall_2' type=StatefulPartitionedCall>]

So when I try to make a prediction:

in_t = sess.graph.get_tensor_by_name('serving_default_input_5:0')
out  = sess.graph.get_tensor_by_name('dense_4_1/bias/Read/ReadVariableOp:0')
...
pred = sess.run([out], feed_dict={ in_t: image}) # image has the right shape

How can I pass an image of shape (512, 512, 3) to my loaded saved_model?

Thanks in advance ^^

Tags: python, tensorflow, machine-learning, keras

Solution


If in_t = sess.graph.get_tensor_by_name('serving_default_input_5:0') does not work, you can try 'Placeholder:0' instead, since that is the name given to the Input placeholder by default.
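
Rather than guessing the placeholder name, you can also read the exact input and output tensor names from the SavedModel's signature. Below is a minimal sketch, assuming the export directory is called "SavedModel" as in the question; the MetaGraphDef returned by the TF1-style loader carries a signature_def map that ties the logical names to the actual tensor names in the graph:

import tensorflow as tf

graph = tf.Graph()
with tf.Session(graph=graph) as sess:
    # loader.load returns the MetaGraphDef of the loaded model; its
    # signature_def maps logical input/output names to real tensor names.
    meta_graph_def = tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], "SavedModel")

    sig = meta_graph_def.signature_def["serving_default"]
    print({k: v.name for k, v in sig.inputs.items()})   # e.g. {'input_5': 'serving_default_input_5:0'}
    print({k: v.name for k, v in sig.outputs.items()})  # e.g. {'dense_4': 'StatefulPartitionedCall:0'}

The names printed there are exactly what sess.graph.get_tensor_by_name expects, so you can feed the input entry and fetch the output entry without trial and error.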

Also, I'm not sure whether you are pre-processing the Input before passing it on for Prediction.

Please find the pre-processing code below:

import cv2

IMG_SIZE = 512

# Read the image, resize it to the size the model expects,
# scale pixel values to [0, 1] and add a batch dimension.
img_array = cv2.imread('Image.jpg')
new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))
new_array = new_array / 255

new_array = new_array.reshape(-1, 512, 512, 3)

The code below should work:

with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(
        sess,
        [tf.saved_model.tag_constants.SERVING],
        "Saved_Model"
    )

    # Fetch the output tensor from the loaded graph first. In the graph you
    # listed, the serving function is 'StatefulPartitionedCall', and its
    # first output is the model prediction.
    out = sess.graph.get_tensor_by_name('StatefulPartitionedCall:0')

    prediction = sess.run(
        [out],
        # If 'Placeholder:0' does not exist in your graph, feed the actual
        # input name instead (e.g. 'serving_default_input_5:0').
        feed_dict={'Placeholder:0': new_array})

    print(prediction)
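
As an alternative, if you are on TensorFlow 2.x you can avoid tensor names entirely and call the serving signature directly. This is only a sketch, assuming the export directory is "SavedModel" and the input is named input_5 as in your second summary; check infer.structured_input_signature for the real keyword name:

import numpy as np
import tensorflow as tf

loaded = tf.saved_model.load("SavedModel")
infer = loaded.signatures["serving_default"]
print(infer.structured_input_signature)  # shows the expected keyword argument name

# Stand-in for the pre-processed (1, 512, 512, 3) image batch built above.
image = np.random.rand(1, 512, 512, 3).astype(np.float32)
result = infer(input_5=tf.constant(image))
print(result)  # dict keyed by the output layer name, e.g. 'dense_4'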

If you still face any error, please share the complete error trace and we will be happy to help you.

Hope this helps. Happy learning!

