Creating an adjacency matrix for image stitching

Problem description

I have computed homographies between pairs of images. How do I create an adjacency matrix that describes which images overlap with each other?

Here is my code. I call the match function to obtain the homography between two images:

a = s.left_list[0]
N = len(s.left_list)
adjacency_matrix = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1, N):
        for b in s.left_list[1:]:
            H = s.matcher_obj.match(a, b, 'left')
            print("Homography is : ", H)




def match(self, i1, i2, direction=None):
    imageSet1 = self.getSURFFeatures(i1)
    imageSet2 = self.getSURFFeatures(i2)
    print("Direction : ", direction)
    matches = self.flann.knnMatch(
        imageSet2['des'],
        imageSet1['des'],
        k=2
        )
    good = []
    for i, (m, n) in enumerate(matches):
        if m.distance < 0.9*n.distance:
            good.append((m.trainIdx, m.queryIdx))

    if len(good) > 4:
        pointsCurrent = imageSet2['kp']
        pointsPrevious = imageSet1['kp']

        matchedPointsCurrent = np.float32(
            [pointsCurrent[i].pt for (__, i) in good]
        )
        matchedPointsPrev = np.float32(
            [pointsPrevious[i].pt for (i, __) in good]
            )

        H, s = cv2.findHomography(matchedPointsCurrent, matchedPointsPrev, cv2.RANSAC, 4)
        return H

    return None

def getSURFFeatures(self, im):

    gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
    kp, des = self.surf.detectAndCompute(gray, None)
    return {'kp':kp, 'des':des}
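
(For reference, the code assumes that self.surf and self.flann were created during initialization. A minimal sketch of a typical setup is shown below; it assumes the opencv-contrib build of OpenCV, where SURF lives in the xfeatures2d module, and the class name Matcher is only a placeholder, not part of the original code.)

import cv2

class Matcher:
    def __init__(self):
        # SURF detector/descriptor (requires the contrib/non-free build of OpenCV)
        self.surf = cv2.xfeatures2d.SURF_create()

        # FLANN matcher with a KD-tree index, appropriate for float descriptors such as SURF's
        FLANN_INDEX_KDTREE = 1
        index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
        search_params = dict(checks=50)
        self.flann = cv2.FlannBasedMatcher(index_params, search_params)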

Tags: python, image, opencv, image-processing, image-stitching

Solution


In other words, you have a number of images and you want to determine which ones overlap so that you can feed them into an image stitcher.

One of the most common approaches is simply to iterate over every unique pair of images and compute the homography between the two. Once the homography is computed, you can calculate the proportion of matched keypoint pairs that are inliers to that homography. If this proportion is above some threshold, say 50%, you can conclude that the two images overlap substantially and treat them as a valid pair. For example, if 200 keypoint pairs were matched between two images and 130 of them are RANSAC inliers, the proportion is 0.65, which clears the 50% threshold, so that pair would be marked as overlapping.

Assuming your images are stored in some list lst, the pseudocode would look like this:

N = len(lst)
adjacency_matrix = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1, N):
        1. Calculate homography between lst[i] and lst[j]
        2. Compute the total number of inlier keypoint pairs from the homography
        3. Take (2) and divide by the total number of keypoint pair matches
        4. If (3) is above a threshold (50% or 0.5), then:
            adjacency_matrix[i, j] = 1
            adjacency_matrix[j, i] = 1

Using the code you just showed me, note that cv2.findHomography returns not only the matrix but also a mask that tells you which point pairs were used as inliers when building the matrix. You can simply sum this mask and divide by the total number of elements in it to get that proportion; this only changes the return statement of your code. Also, you have specified the reprojection threshold as 4 pixels, which is very large, so you may get poor stitching results; make it smaller when you run this.

def match(self, i1, i2, direction=None):
    imageSet1 = self.getSURFFeatures(i1)
    imageSet2 = self.getSURFFeatures(i2)
    print("Direction : ", direction)
    matches = self.flann.knnMatch(
        imageSet2['des'],
        imageSet1['des'],
        k=2
        )
    good = []
    for i, (m, n) in enumerate(matches):
        if m.distance < 0.9*n.distance:
            good.append((m.trainIdx, m.queryIdx))

    if len(good) > 4:
        pointsCurrent = imageSet2['kp']
        pointsPrevious = imageSet1['kp']

        matchedPointsCurrent = np.float32(
            [pointsCurrent[i].pt for (__, i) in good]
        )
        matchedPointsPrev = np.float32(
            [pointsPrevious[i].pt for (i, __) in good]
            )

        H, s = cv2.findHomography(matchedPointsCurrent, matchedPointsPrev, cv2.RANSAC, 4)
        return H, float(np.sum(s)) / len(s)  # change here: also return the proportion of inlier matches

    return None

Finally, here is the pseudocode above implemented with your particular setup:

lst = s.left_list
N = len(lst)
adjacency_matrix = np.zeros((N, N)) 
for i in range(N):
    for j in range(i + 1, N):
        # 1. Calculate homography between lst[i] and lst[j]
        out = s.matcher_obj.match(lst[i], lst[j], 'left')

        # 2. Compute the total number of inlier keypoint pairs from the homography - done in (1)
        # 3. Take (2) and divide by the total number of keypoint pair matches - done in (1)

        # 4. If (3) is above a threshold (50% or 0.5), then:
        #    adjacency_matrix[i, j] = 1
        #    adjacency_matrix[j, i] = 1
        if out is not None:
            H, inlier_ratio = out  # use a new name here so we don't overwrite the object s used above
            if inlier_ratio >= 0.5:
                adjacency_matrix[i, j] = 1
                adjacency_matrix[j, i] = 1
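
For a quick sanity check, the overlapping pairs can be read straight off the matrix. The snippet below is just an illustration and only assumes numpy (np) is already imported and the adjacency_matrix built above:

# List each overlapping pair (i, j) once by reading the upper triangle of the matrix
overlapping_pairs = [(int(i), int(j)) for i, j in np.argwhere(adjacency_matrix) if i < j]
print("Overlapping image pairs:", overlapping_pairs)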

Note that the match method returns None if there are not enough inliers to give confidence in a good homography, so we need to check that the output is not None before checking the proportion. Finally, I strongly recommend tuning the reprojection threshold and the similarity threshold (0.5 in the code above) until you get stitching results that are good enough for your purposes. They are currently hard-coded in your code, but consider making them adjustable.
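
As a rough sketch of that last suggestion (the helper name build_adjacency_matrix and the parameter overlap_thresh are my own, not part of the original code), the pair-checking loop could be wrapped so that the similarity threshold is an argument; the RANSAC reprojection threshold could likewise be turned into a parameter of match itself:

import numpy as np

def build_adjacency_matrix(s, images, overlap_thresh=0.5):
    """Mark a pair of images as overlapping when its inlier proportion reaches overlap_thresh."""
    N = len(images)
    adjacency_matrix = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1, N):
            out = s.matcher_obj.match(images[i], images[j], 'left')
            if out is not None:
                H, inlier_ratio = out
                if inlier_ratio >= overlap_thresh:
                    adjacency_matrix[i, j] = 1
                    adjacency_matrix[j, i] = 1
    return adjacency_matrix

# For example:
# adjacency_matrix = build_adjacency_matrix(s, s.left_list, overlap_thresh=0.6)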
