tf.convert_to_tensor(pred_labels) - ValueError: Argument must be a dense tensor: got shape [2, 436, 1024, 2], but wanted [2]

Problem description

I am trying to convert the flow output of an existing optical flow network from a numpy array back into a tensor, so that I can run it through a differentiable interpolation network.

The PWC-Net code takes two input images of the same size and computes the flow correspondence between them. For a single image pair, I believe the flow is the per-pixel displacement in x and y, with dimensions [1, h, w, 2]. However, a batch can contain a different number of image pairs; calling the batch size b, the 4D volume becomes [b, h, w, 2].
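For orientation, here is a minimal numpy sketch of those shapes (the dummy arrays stand in for real PWC-Net output; 436 x 1024 is the Sintel frame size reported in the error below):

import numpy as np

h, w = 436, 1024
# One dummy flow field per image pair: an x and y displacement for every pixel
flow_pair_1 = np.zeros((h, w, 2), dtype=np.float32)
flow_pair_2 = np.zeros((h, w, 2), dtype=np.float32)
# Stacking b per-pair flow fields gives the 4D volume [b, h, w, 2]
flow_batch = np.stack([flow_pair_1, flow_pair_2])
print(flow_batch.shape)  # (2, 436, 1024, 2)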

https://github.com/philferriere/tfoptflow/blob/master/tfoptflow/pwcnet_predict_from_img_pairs.py

I am trying to convert it back to a tensor using:

pred_labels_tensor = tf.convert_to_tensor(pred_labels)

I have read these:

Convert Python sequence to NumPy array, filling missing values

How to feed lists of lists of different sizes into tf.data.Dataset

but I still do not understand what I need to do to make this work.

I have also looked into the code that this file runs, and it does use np.asarray.

These two links make me think the problem is related to it being a list of lists, or that it perhaps needs some zero padding. How can I work out what the problem actually is, and how can I fix it?
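For reference, here is a small sketch (the helper name is mine, not part of the repo) that can be dropped in after the prediction call below to see exactly what predict_from_img_pairs returns:

import numpy as np

def describe_predictions(pred_labels):
    # Print the container type and the shape of every element;
    # a list of equally shaped arrays converts cleanly, a ragged one does not.
    print(type(pred_labels))
    print([np.asarray(p).shape for p in pred_labels])

With the two image pairs below, this should print <class 'list'> followed by [(436, 1024, 2), (436, 1024, 2)].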

From this Python file: https://github.com/philferriere/tfoptflow/blob/master/tfoptflow/pwcnet_predict_from_img_pairs.py

To reproduce the problem, you can use the existing sample provided in the GitHub download, with this code in place of the for loop:

image_path1 = f'./samples/mpisintel_test_clean_ambush_1_frame_0001.png'
image_path2 = f'./samples/mpisintel_test_clean_ambush_1_frame_0002.png'
image1, image2 = imread(image_path1), imread(image_path2)
img_pairs.append((image1, image2))

image_path1 = f'./samples/mpisintel_test_clean_ambush_1_frame_0003.png'
image_path2 = f'./samples/mpisintel_test_clean_ambush_1_frame_0004.png'
image1, image2 = imread(image_path1), imread(image_path2)
img_pairs.append((image1, image2))

pred_labels = nn.predict_from_img_pairs(img_pairs, batch_size=1, verbose=False)

pred_labels_tensor = tf.convert_to_tensor(pred_labels)

I expected to get a tensor output; however, I get this error in the terminal in MS VS Code:

ValueError: Argument must be a dense tensor: [array([[[ 0.32990038, -0.11566047],
        [ 0.35661912, -0.09227534],
        [ 0.38333783, -0.06889021],
        ...,
        [-0.1237613 ,  0.07946336],
        [-0.1237613 ,  0.07946336],
        [-0.1237613 ,  0.07946336]],

       [[ 0.34405386, -0.09286585],
        [ 0.36766803, -0.07679807],
        [ 0.39128217, -0.06073029],
        ...,
        [-0.10938472,  0.08551764],
        [-0.10938472,  0.08551764],
        [-0.10938472,  0.08551764]],

       [[ 0.35820735, -0.07007124],
        [ 0.37871695, -0.0613208 ],
        [ 0.39922655, -0.05257037],
        ...,
        [-0.09500815,  0.09157193],
        [-0.09500815,  0.09157193],
        [-0.09500815,  0.09157193]],

       ...,

       [[ 0.9003515 ,  1.0893728 ],
        [ 0.93065804,  1.0662789 ],
        [ 0.96096456,  1.0431851 ],
        ...,
        [ 0.43580785,  0.02744378],
        [ 0.43580785,  0.02744378],
        [ 0.43580785,  0.02744378]],

       [[ 0.9003515 ,  1.0893728 ],
        [ 0.93065804,  1.0662789 ],
        [ 0.96096456,  1.0431851 ],
        ...,
        [ 0.43580785,  0.02744378],
        [ 0.43580785,  0.02744378],
        [ 0.43580785,  0.02744378]],

       [[ 0.9003515 ,  1.0893728 ],
        [ 0.93065804,  1.0662789 ],
        [ 0.96096456,  1.0431851 ],
        ...,
        [ 0.43580785,  0.02744378],
        [ 0.43580785,  0.02744378],
        [ 0.43580785,  0.02744378]]], dtype=float32), array([[[ 0.49922907,  0.08599953],
        [ 0.5034714 ,  0.1123561 ],
        [ 0.5077137 ,  0.13871266],
        ...,
        [-0.3719127 ,  0.1080336 ],
        [-0.3719127 ,  0.1080336 ],
        [-0.3719127 ,  0.1080336 ]],

       [[ 0.49763823,  0.11536887],
        [ 0.4972613 ,  0.13717887],
        [ 0.49688435,  0.15898886],
        ...,
        [-0.36932352,  0.11556612],
        [-0.36932352,  0.11556612],
        [-0.36932352,  0.11556612]],

       [[ 0.49604735,  0.14473821],
        [ 0.4910512 ,  0.16200164],
        [ 0.48605505,  0.17926508],
        ...,
        [-0.36673436,  0.12309864],
        [-0.36673436,  0.12309864],
        [-0.36673436,  0.12309864]],

       ...,

       [[ 0.46260613, -0.47470346],
        [ 0.46841043, -0.46383834],
        [ 0.47421476, -0.4529732 ],
        ...,
        [-0.29265293, -0.4021799 ],
        [-0.29265293, -0.4021799 ],
        [-0.29265293, -0.4021799 ]],

       [[ 0.46260613, -0.47470346],
        [ 0.46841043, -0.46383834],
        [ 0.47421476, -0.4529732 ],
        ...,
        [-0.29265293, -0.4021799 ],
        [-0.29265293, -0.4021799 ],
        [-0.29265293, -0.4021799 ]],

       [[ 0.46260613, -0.47470346],
        [ 0.46841043, -0.46383834],
        [ 0.47421476, -0.4529732 ],
        ...,
        [-0.29265293, -0.4021799 ],
        [-0.29265293, -0.4021799 ],
        [-0.29265293, -0.4021799 ]]], dtype=float32)] - [2, 436, 1024, 2], but wanted [2]

If I reduce the batch to a single image pair, I get this error instead:

ValueError: Argument must be a dense tensor: - got shape [1, 436, 1024, 2], but wanted [1].
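The pattern across the two errors is telling: the "wanted [N]" value tracks the number of image pairs, i.e. len(pred_labels), while the "got shape" is the fully stacked block. A small sketch with dummy arrays of the reported shapes:

import numpy as np

flows = [np.zeros((436, 1024, 2), dtype=np.float32) for _ in range(2)]
# The outer container is a plain Python list of length 2 ...
print(len(flows))               # 2 -> matches "wanted [2]"
# ... while numpy sees the dense, fully stacked shape:
print(np.asarray(flows).shape)  # (2, 436, 1024, 2) -> matches "got shape [2, 436, 1024, 2]"

This suggests TensorFlow 1.13 is treating the list as a 1-D sequence of opaque objects rather than as one dense 4-D volume.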

For a minimal reproducible example, you will need the following:

Python 3.7.3
TensorFlow 1.13.1 (the latest stable release)
The script below, downloaded and pasted over the existing pwcnet_predict_from_img_pairs.py
The pwcnet-lg-6-2-multisteps-chairsthingsmix model, downloaded from https://drive.google.com/drive/folders/1iRJ8SFF6fyoICShRVWuzr192m_DgzSYp

"""
pwcnet_predict_from_img_pairs.py
Run inference on a list of images pairs.
Written by Phil Ferriere
Licensed under the MIT License (see LICENSE for details)
"""
from __future__ import absolute_import, division, print_function

from voxel_flow_geo_layer_utils import bilinear_interp
from voxel_flow_geo_layer_utils import meshgrid

from copy import deepcopy
from skimage.io import imread
from model_pwcnet import ModelPWCNet, _DEFAULT_PWCNET_TEST_OPTIONS
#from visualize import display_img_pairs_w_flows
import visualize
import numpy as np
import tensorflow as tf

# TODO: Set device to use for inference
# Here, we're using a GPU (use '/device:CPU:0' to run inference on the CPU)
gpu_devices = ['/device:GPU:0']  
controller = '/device:GPU:0'

# TODO: Set the path to the trained model (make sure you've downloaded it first https://drive.google.com/drive/folders/1iRJ8SFF6fyoICShRVWuzr192m_DgzSYp)
ckpt_path = './models/pwcnet-lg-6-2-multisteps-chairsthingsmix/pwcnet.ckpt-595000'

# Build a list of image pairs to process (in this case it's just one image pair)
img_pairs = []
image_path1 = f'./samples/mpisintel_test_clean_ambush_1_frame_0001.png'
image_path2 = f'./samples/mpisintel_test_clean_ambush_1_frame_0002.png'
image1, image2 = imread(image_path1), imread(image_path2)
img_pairs.append((image1, image2))

# Configure the model for inference, starting with the default options
nn_opts = deepcopy(_DEFAULT_PWCNET_TEST_OPTIONS)
nn_opts['verbose'] = True
nn_opts['ckpt_path'] = ckpt_path
nn_opts['batch_size'] = 1
nn_opts['gpu_devices'] = gpu_devices
nn_opts['controller'] = controller

# We're running the PWC-Net-large model in quarter-resolution mode
# That is, with a 6 level pyramid, and upsampling of level 2 by 4 in each dimension as the final flow prediction
nn_opts['use_dense_cx'] = True
nn_opts['use_res_cx'] = True
nn_opts['pyr_lvls'] = 6
nn_opts['flow_pred_lvl'] = 2

# The size of the images in this dataset are not multiples of 64, while the model generates flows padded to multiples
# of 64. Hence, we need to crop the predicted flows to their original size
nn_opts['adapt_info'] = (1, 436, 1024, 2)

# Instantiate the model in inference mode and display the model configuration
nn = ModelPWCNet(mode='test', options=nn_opts)
nn.print_config()

# Generate the predictions and convert them back to a tensor
pred_labels = nn.predict_from_img_pairs(img_pairs, batch_size=1, verbose=False)
# pred_labels is a list with one [436, 1024, 2] flow array per image pair;
# here it has length 1 because there is only one image pair.
pred_labels_tensor = tf.convert_to_tensor(pred_labels)

Tags: python, tensorflow

Solution
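The fix below is a sketch of one common remedy for this error, not necessarily the answer originally posted here. predict_from_img_pairs returns a plain Python list of [h, w, 2] numpy arrays, and tf.convert_to_tensor in TF 1.13 fails on such a list, so stack the list into a single dense [b, h, w, 2] array first:

import numpy as np
import tensorflow as tf

# Stand-in for the list returned by nn.predict_from_img_pairs(...) above:
# a Python list of equally sized [h, w, 2] float32 flow arrays.
pred_labels = [np.zeros((436, 1024, 2), dtype=np.float32) for _ in range(2)]

# Stack the list into one dense [b, h, w, 2] array before converting
pred_labels_tensor = tf.convert_to_tensor(np.stack(pred_labels))
print(pred_labels_tensor.shape)  # (2, 436, 1024, 2)

# Alternatively, tf.stack converts each list element and stacks the tensors:
# pred_labels_tensor = tf.stack(pred_labels)

Note that this only works because every flow field in the batch has the same size; pairs of different sizes would indeed need padding, as in the questions linked above.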

