Loading a SavedModel in TF2 reports "signature_wrapper takes 0 positional arguments but 1 were given"

Problem description

I am trying to create a decoder that takes a tuple of five tensors as input. When I save it as .h5 it works fine, but when I save it as a SavedModel (no error is reported), load it, and run inference, it reports:

Traceback (most recent call last):
  File "D:/MA/Recources/monodepth2-torch/dsy.py", line 196, in <module>
    build_model(inputs)
  File "D:/MA/Recources/monodepth2-torch/dsy.py", line 185, in build_model
    outputs = decoder_pb(inputs)
  File "C:\Users\Dexxh\AppData\Roaming\Python\Python36\site-packages\tensorflow\python\eager\function.py", line 1655, in __call__
    return self._call_impl(args, kwargs)
  File "C:\Users\Dexxh\AppData\Roaming\Python\Python36\site-packages\tensorflow\python\eager\function.py", line 1673, in _call_impl
    return self._call_with_flat_signature(args, kwargs, cancellation_manager)
  File "C:\Users\Dexxh\AppData\Roaming\Python\Python36\site-packages\tensorflow\python\eager\function.py", line 1695, in _call_with_flat_signature
    len(args)))
TypeError: signature_wrapper(input_1, input_2, input_3, input_4, input_5) takes 0 positional arguments but 1 were given

My model is defined as follows. The details don't seem to be the problem, because it runs fine when I load it as a Keras model. I am using tensorflow 2.3.1, in case you need that.

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Conv2D


class DepthDecoder(tf.keras.Model):
    def __init__(self):
        super(DepthDecoder, self).__init__()
        self.num_ch_enc = [64, 64, 128, 256, 512]
        self.num_ch_dec = [16, 32, 64, 128, 256]
        self.scales = [0,1,2,3]  # range(4)
        self.num_output_channels = 1

        self.convs_0 = [None]*len(self.num_ch_dec)
        self.convs_1 = [None]*len(self.num_ch_dec)

        # todo: dispconv could produce multiple outputs
        self.dispconv_0 = self.make_conv(self.num_ch_dec[0], self.num_output_channels, activate_type=None,
                                         pad_mode='reflect', type='disp', index=0)

        for i in range(4, -1, -1):
            # upconv_0
            num_ch_in = self.num_ch_enc[-1] if i == 4 else self.num_ch_dec[i + 1]
            num_ch_out = self.num_ch_dec[i]
            self.convs_0[i] = self.make_conv(num_ch_in, num_ch_out, pad_mode='reflect', activate_type='elu',
                                             type='conv_0', index=i)

            # upconv_1
            num_ch_in = self.num_ch_dec[i]
            if i > 0:
                num_ch_in += self.num_ch_enc[i - 1]
            num_ch_out = self.num_ch_dec[i]
            self.convs_1[i] = self.make_conv(num_ch_in, num_ch_out, pad_mode='reflect', activate_type='elu',
                                             type='conv_1', index=i)



    def make_conv(self, input_channel, filter_num, activate_type=None, pad_mode='reflect',
                  type:str=None, index=-1, input_shape:tuple=None):
        name = None
        if type is not None and index != -1:
            name = ''.join([type, '_%d'%index])
        if pad_mode == 'reflect':
            padding = 'valid'
        else:
            padding = 'same'
        conv = Conv2D(filters=filter_num, kernel_size=3, activation=activate_type,
                      strides=1, padding=padding, use_bias=True, name=name)
        return conv


    def call(self, input_features, training=None, mask=None):
        ch_axis = 3
        x = input_features[-1]
        for i in range(4, -1, -1):
            x = tf.pad(x, [[0, 0], [1, 1], [1, 1], [0, 0]], mode='REFLECT')
            x = self.convs_0[i](x)
            x = [tf.keras.layers.UpSampling2D()(x)]
            if i > 0:
                x += [input_features[i - 1]]
            x = tf.concat(x, ch_axis)
            x = tf.pad(x, [[0, 0], [1, 1], [1, 1], [0, 0]], mode='REFLECT')
            x = self.convs_1[i](x)
            # outputs.append(tf.math.sigmoid(x))
        x = tf.pad(x, [[0, 0], [1, 1], [1, 1], [0, 0]], mode='REFLECT')
        x = self.dispconv_0(x)
        disp0 = tf.math.sigmoid(x)

        return disp0

Then save and load:

inputs = (tf.random.uniform(shape=(1,96, 320, 64)),
          tf.random.uniform(shape=(1,48, 160, 64)),
          tf.random.uniform(shape=(1,24, 80, 128)),
          tf.random.uniform(shape=(1,12, 40, 256)),
          tf.random.uniform(shape=(1,6, 20, 512)))
# Build the model, load weights, and save it as a SavedModel
decoder = DepthDecoder()
outputs = decoder.predict(inputs)
decoder = decoder_load_weights(decoder)  # custom loading of PyTorch weights, details below
tf.keras.models.save_model(decoder, "decoder_test")

# Inference
decoder_import = tf.saved_model.load("decoder_test")
decoder_pb = decoder_import.signatures['serving_default']
outputs = decoder_pb(inputs)
for k, v in outputs.items():
    print(v.shape)

# For completeness, here is the decoder_load_weights() function
def decoder_load_weights(decoder, weights_path=None):
    # Weights as a list of ndarrays, stored layer-wise. Since the model is fully
    # convolutional, the layout is [[#conv_0]*5, [#conv_1]*5, [dispconv]], nothing else.
    weights_grouped = np.load(weights_path, allow_pickle=True)
    ind = 0
    for l in decoder.layers:
        print(l.name)
        weights = l.get_weights()
        if len(weights) == 0:
            print("no weigths")
        else:
            print(weights[0].shape, "\t", weights[1].shape)
            print(weights_grouped[ind][0].shape, "\t", weights_grouped[ind][1].shape)
            new_weights = weights_grouped[ind]
            l.set_weights(new_weights)
            print("loading the %dnd conv layer..."% ind)
            ind += 1
    return decoder

The strange thing is that it says it takes 0 positional arguments, which would suggest no inputs are allowed at all. Can you offer any insight? Thanks!!

Finally, let me paste a snapshot of the internals of decoder_pb (called infer in the snapshot). You can see that decoder_pb does have Tensors named input_1, input_2, and so on, so the question is how to assign my inputs to them. I cannot assign my tensors to them directly, because the "name" of the tensors I pass in is not input_1, and an EagerTensor cannot be renamed.

I remember that in TF1.x there was a feed_dict passed into the session; I'm not sure whether that is related...

[snapshot of the decoder_pb internals]
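
For reference (since the snapshot is not reproduced here), the expected input names can also be printed from the loaded signature itself. A minimal sketch, relying on the structured_input_signature attribute of the TF2 ConcreteFunction:

decoder_import = tf.saved_model.load("decoder_test")
decoder_pb = decoder_import.signatures['serving_default']

# structured_input_signature is an (args, kwargs) pair; for a serving signature
# the kwargs dict maps each expected keyword argument to its TensorSpec.
print(decoder_pb.structured_input_signature)
# ((), {'input_1': TensorSpec(...), 'input_2': TensorSpec(...), ..., 'input_5': TensorSpec(...)})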

Tags: python, tensorflow, keras, tensorflow2.0

Solution


Solved!

It turns out that is exactly what the error message means... it takes no positional arguments, which means it only accepts keyword arguments. So the solution can be either

res = infer(input_1=features[0], input_2=features[1], ...)

or

# feed_dict = {'input_1' : features[0], ...}
res = infer(**feed_dict)
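
Spelled out for the five inputs of this decoder (the names input_1 through input_5 come straight from the error message above), one way to build the dict programmatically and read the result back is:

# Build the keyword-argument dict from the tuple of feature tensors.
# The names follow the input_1 ... input_5 pattern shown in the traceback.
feed_dict = {'input_%d' % (i + 1): t for i, t in enumerate(inputs)}
outputs = decoder_pb(**feed_dict)

# The signature returns a dict of named output tensors.
for name, tensor in outputs.items():
    print(name, tensor.shape)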

But this is not accepted:

disp_raw = decoder(features[0], features[1], features[2], features[3], features[4])

This is actually quite odd, because normally we don't have to spell out the keywords as long as we pass the arguments in the right order. Also, when there is only one input we don't need this at all, e.g. res = infer(one_tensor) works.

So I guess this is a bug? Anyway, I hope anyone else who runs into this problem can benefit from this answer :)
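
If you prefer to keep a positional calling convention, a small wrapper can recover it by zipping the signature's declared input names with the tensors. A sketch (the helper call_positionally is only an illustration, not part of any TF API):

def call_positionally(sig_fn, *tensors):
    # structured_input_signature is (args, kwargs); kwargs maps each input
    # name to its TensorSpec. Plain sorting is fine here because there are
    # only five inputs named input_1 ... input_5.
    names = sorted(sig_fn.structured_input_signature[1].keys())
    return sig_fn(**dict(zip(names, tensors)))

disp_raw = call_positionally(decoder_pb, *inputs)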
