python - OpenCV point cloud with cv2.reprojectImageTo3D: getting depth from the n x 3 matrix
Problem Description
I created a disparity image using the SGBM algorithm, and it gives me a nice image:
import numpy as np
import cv2
#load unrectified images
unimgR = cv2.imread("R.jpg")
unimgL = cv2.imread("L.jpg")
#load calibration from calibration file
calibration = np.load(r"C:\Users\XXX\PycharmProjects\rectify\Test3_OpenCV_Rectified.npz", allow_pickle=False) # load variables from calibration file
imageSize = tuple(calibration["imageSize"])
leftMatrix = calibration["leftMatrix"]
leftDist = calibration["leftDist"]
leftMapX = calibration["leftMapX"]
leftMapY = calibration["leftMapY"]
leftROI = tuple(calibration["leftROI"])
rightMatrix = calibration["rightMatrix"]
rightDist = calibration["rightDist"]
rightMapX = calibration["rightMapX"]
rightMapY = calibration["rightMapY"]
rightROI = tuple(calibration["rightROI"])
disparityToDepthMap = calibration["disparityToDepthMap"]
# Rectify images (including monocular undistortion)
imgL = cv2.remap(unimgL, leftMapX, leftMapY, cv2.INTER_LINEAR)
imgR = cv2.remap(unimgR, rightMapX, rightMapY, cv2.INTER_LINEAR)
# SGBM Parameters
window_size = 15 # wsize default 3; 5; 7 for SGBM reduced size image; 15 for SGBM full size image (1300px and above); 5 Works nicely
left_matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=160,  # max disparity has to be divisible by 16, e.g. 160, 192, 256
    blockSize=5,
    P1=8 * 3 * window_size ** 2,
    P2=32 * 3 * window_size ** 2,
    disp12MaxDiff=1,
    uniquenessRatio=15,
    speckleWindowSize=0,
    speckleRange=2,
    preFilterCap=63,
    mode=cv2.STEREO_SGBM_MODE_SGBM_3WAY
)
right_matcher = cv2.ximgproc.createRightMatcher(left_matcher)
# FILTER Parameters
lmbda = 80000
sigma = 1.2
visual_multiplier = 1.0
# Weighted least squares filter to fill sparse (unpopulated) areas of the disparity map
# by aligning the images edges and propagating disparity values from high- to low-confidence regions
wls_filter = cv2.ximgproc.createDisparityWLSFilter(matcher_left=left_matcher)
wls_filter.setLambda(lmbda)
wls_filter.setSigmaColor(sigma)
# Get depth information/disparity map using SGBM
displ = left_matcher.compute(imgL, imgR) # .astype(np.float32)/16
dispr = right_matcher.compute(imgR, imgL) # .astype(np.float32)/16
displ = np.int16(displ)
dispr = np.int16(dispr)
filteredImg = wls_filter.filter(displ, imgL, None, dispr) # important to put "imgL" here!!!
filteredImg = cv2.normalize(src=filteredImg, dst=filteredImg, alpha=0, beta=255, norm_type=cv2.NORM_MINMAX)
filteredImg = np.uint8(filteredImg)
# Calculate 3D point cloud
pointCloud = cv2.reprojectImageTo3D(filteredImg,disparityToDepthMap) / 420 # needs to be divided by 420 to obtain metric values (80 without normalization)
print('...shape of the pointcloud:', pointCloud.shape)
print(pointCloud[1000][550])
cv2.imshow('Disparity Map', filteredImg)
cv2.waitKey()
cv2.destroyAllWindows()
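For reference, cv2.reprojectImageTo3D multiplies each pixel's homogeneous vector (x, y, disparity, 1) by the 4x4 Q matrix (disparityToDepthMap here) and divides by the resulting W. Note also that SGBM returns disparities as int16 fixed-point scaled by 16, so the true disparity is displ.astype(np.float32) / 16 (as the commented-out code above hints); reprojecting a normalized uint8 image instead is what makes an ad-hoc scale factor like 420 necessary. A minimal numpy sketch of the reprojection for a single pixel, with hypothetical calibration values (f, B, cx, cy are made up for illustration, and the sign convention of the last Q row depends on how stereoRectify was called):

```python
import numpy as np

def reproject_pixel(x, y, d, Q):
    """Reproject one pixel: homogeneous multiply by Q, then divide by W."""
    X, Y, Z, W = Q @ np.array([x, y, d, 1.0])
    return np.array([X, Y, Z]) / W

# Hypothetical rectified pair: focal length f = 700 px, baseline B = 0.1 m,
# principal point (cx, cy) = (640, 360). Simplified Q of the stereoRectify form:
f, B, cx, cy = 700.0, 0.1, 640.0, 360.0
Q = np.array([[1.0, 0.0, 0.0,   -cx],
              [0.0, 1.0, 0.0,   -cy],
              [0.0, 0.0, 0.0,     f],
              [0.0, 0.0, 1.0 / B, 0.0]])

p = reproject_pixel(800, 400, 14.0, Q)  # disparity of 14 px
# Z = f * B / d = 700 * 0.1 / 14 = 5.0 m
```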
Now I want to use pointCloud = cv2.reprojectImageTo3D to compute the z coordinate (stereo vision) for a given pixel in the image.
The point at X, Y = 1000, 550 is 10 m away, but it gives me [-0.09156016 0.09407288 0.32270285] (3 values instead of 1).
I don't know what to do :(
I think it is a 3D point, right? But how do I get the distance (the Z component) in a single unit from the X, Y coordinates?
Solution
Each entry of the point cloud is a 3D point (X, Y, Z). Computing the Euclidean norm of the 3-vector at your X, Y coordinates gives the straight-line distance to that point, which is the distance you want:
print("Distance pointcloud in m :", np.sqrt(np.sum(np.power(pointCloud[1000][550], 2))))
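To make the distinction concrete: the third component of the triple is the depth along the camera's optical axis, while the Euclidean norm above is the straight-line distance from the camera to the point; the two only coincide for points on the optical axis. A small sketch using the triple from the question:

```python
import numpy as np

point = np.array([-0.09156016, 0.09407288, 0.32270285])  # (X, Y, Z) from the question

z_depth = point[2]              # depth along the optical axis
euclid = np.linalg.norm(point)  # straight-line camera-to-point distance

print("Z depth  :", z_depth)
print("Distance :", euclid)
```

In the author's code this would be pointCloud[1000][550][2] for the depth and np.linalg.norm(pointCloud[1000][550]) for the distance.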