How to compute ego-motion using cv.recoverPose()?

Problem Description

Recently I have been using findEssentialMat and recoverPose to obtain the camera's rotation angles (pitch, yaw, roll) between two given frames. I noticed that recoverPose actually gives a counter-intuitive result, because it computes how the points move rather than the orientation of the camera; see here.
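To make that claim concrete, here is a toy sketch (the 20-degree angle is arbitrary, not taken from my data) of why the reported rotation is the inverse of the camera's own motion, assuming a pure rotation and a static scene:

import numpy as np
from scipy.spatial.transform import Rotation as R

# suppose the camera itself pitches up 20 degrees about its x axis
R_cam = R.from_euler('x', 20, degrees=True).as_matrix()

# with a static scene, points expressed in the new camera frame appear
# rotated by the inverse (= transpose) of the camera's own motion
R_points = R_cam.T

print(R.from_matrix(R_points).as_euler('zyx', degrees=True))
# prints approximately [ 0.  0. -20.]: the "point motion" is the camera motion, inverted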

According to the linked explanation, I can get the "correct" answer by simply inverting the R matrix, i.e. taking R^(-1) (which, for a rotation matrix, is just its transpose). However, when I test with two simple images, the resulting Euler angles are:

[ 1.19186314e+02 -3.94661600e-02 -1.79903575e+02]  # from inverted R
[ 1.19186327e+02 -6.49371021e-02 -1.79918523e+02]  # from original R

Both are wrong (whether I invert or not), because I only rotated the camera upward (a rotation about the x axis).
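One way to sanity-check results like these (a diagnostic sketch that assumes the R_est computed in the code further below): convert the matrix to axis-angle form, where a pure upward pitch should show a rotation axis close to [1, 0, 0]:

import numpy as np
from scipy.spatial.transform import Rotation as R

def describe_rotation(R_mat):
    # express a rotation matrix as a single angle about a single axis
    rotvec = R.from_matrix(R_mat).as_rotvec()
    norm = np.linalg.norm(rotvec)
    if norm < 1e-12:                      # identity: no meaningful axis
        print("no rotation")
        return
    print(f"{np.degrees(norm):.2f} degrees about axis {rotvec / norm}")

# for a pure upward pitch, the printed axis should be close to [1, 0, 0]
# usage (hypothetical): describe_rotation(R_est)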

The images I used are: first, second

Here is my complete code:

import cv2 as cv
import numpy as np
from scipy.spatial.transform import Rotation as R
import matplotlib.pyplot as plt

# invert the R that comes straight out of the Essential-matrix decomposition;
# for a rotation matrix, the inverse is simply its transpose
def invert_R(R_mat):
    R_mat_inv = R.from_matrix(R_mat).inv()
    return R_mat_inv.as_euler('zyx', degrees=True)


# testing code
img1 = cv.imread('img/5.jpg')  # self-defined
img2 = cv.imread('img/6.jpg')

# use ORB detector to do feature matching
orb = cv.ORB_create()
kp1, des1 = orb.detectAndCompute(img1,None)
kp2, des2 = orb.detectAndCompute(img2,None)

bf = cv.BFMatcher(cv.NORM_HAMMING, crossCheck=True)
matches = bf.match(des1, des2)
matches = sorted(matches, key=lambda x: x.distance)

# extract coordinates of matched keypoints
# (queryIdx indexes into kp1, trainIdx into kp2; keep sub-pixel precision)
kpts1 = np.array([kp1[m.queryIdx].pt for m in matches], dtype=np.float64)
kpts2 = np.array([kp2[m.trainIdx].pt for m in matches], dtype=np.float64)
img3 = cv.drawMatches(img1,kp1,img2,kp2,matches[:10],None,flags=cv.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)

# estimate the Essential matrix, then recover the Rotation matrix from it
E, mask = cv.findEssentialMat(kpts1, kpts2, prob=0.9999, threshold=0.1)
_, R_est, t_est, mask_pose = cv.recoverPose(E, kpts1, kpts2)


# compute the inverted and the original R matrix, both displayed as Euler angles
R1_ang = invert_R(R_est)
R2_ang = R.from_matrix(R_est).as_euler('zyx', degrees=True)
print(R1_ang)
print(R2_ang)
plt.imshow(cv.cvtColor(img3, cv.COLOR_BGR2RGB))  # convert BGR -> RGB for matplotlib
plt.show()
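One more thing I am unsure about (flagging it as an assumption, not a confirmed fix): the findEssentialMat call above passes no cameraMatrix, in which case OpenCV assumes a focal length of 1.0 and a principal point of (0, 0), which does not match raw pixel coordinates. A sketch of how the intrinsics could be supplied, with placeholder values standing in for a real calibration:

# hypothetical intrinsics -- these values are placeholders; replace them
# with the output of a real calibration (e.g. cv.calibrateCamera)
fx = fy = 700.0            # focal length in pixels (assumed)
cx, cy = 320.0, 240.0      # principal point (assumed image center)
K = np.array([[fx, 0, cx],
              [0, fy, cy],
              [0,  0,  1]], dtype=np.float64)

# without cameraMatrix, OpenCV falls back to focal=1.0 and pp=(0, 0)
E, mask = cv.findEssentialMat(kpts1, kpts2, cameraMatrix=K,
                              method=cv.RANSAC, prob=0.999, threshold=1.0)

# reuse the RANSAC inlier mask so recoverPose works on the same inlier set
_, R_est, t_est, mask_pose = cv.recoverPose(E, kpts1, kpts2,
                                            cameraMatrix=K, mask=mask)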

Tags: python, opencv, computer-vision

Solution

