How to correctly compute descriptors for predefined keypoints?

Problem description

I want to develop a face-alignment program. There is a video from which faces are extracted and aligned. It works as follows: there is a result frame, built from the first frame of the video, and the face of every subsequent frame is aligned to it and re-recorded as the new result frame. The alignment is performed with a homography. So for every frame I need to find keypoints, match them between the current face and the result face, and compute the homography.
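
Schematically, the intended outer loop looks roughly like this (a minimal sketch only; extract_face and compute_homography are hypothetical placeholder names for the preprocessing and for the keypoint matching shown in the code further below):

    import cv2

    # the result frame starts as the face from the first video frame
    result_face = extract_face(frames[0])
    for frame in frames[1:]:
        current_face = extract_face(frame)
        # keypoints -> matching -> cv2.findHomography, as in the code below
        h = compute_homography(current_face, result_face)
        height, width = result_face.shape[:2]
        # warp the current face onto the result frame and re-record it as the new result
        result_face = cv2.warpPerspective(current_face, h, (width, height))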

Here is the problem. In my pipeline the keypoints of the current frame cannot be recomputed from scratch on every frame. Instead, the following algorithm is used:

  1. There are some predefined points given as a 2d numpy array. (In general they can be any points on the image, but for the sake of example let's assume they are some facial landmarks.)

  2. For the first frame I use the AKAZE feature detector to search for keypoints in the regions close to the initial points from item 1 (see the sketch right after this list).

  3. cv2.calcOpticalFlowPyrLK is used to track these keypoints, so on the next frame I do not detect them again but use the tracked keypoints from the previous frame instead.
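
For item 2, here is a minimal sketch of how the initial keypoints can be picked near the predefined points (initial_points is an assumed name for the 2d numpy array from item 1; oldFace_gray is the grayscale face of the first frame, and threshold / eps_initial are the same parameters that appear in the code below):

    import cv2
    import numpy as np

    akaze = cv2.AKAZE_create(threshold=threshold)
    candidates = akaze.detect(oldFace_gray, None)

    # keep only the detected keypoints that lie within eps_initial of one of the predefined points
    previous_keypoints = [
        kp for kp in candidates
        if np.min(np.linalg.norm(initial_points - np.array(kp.pt), axis=1)) < eps_initial
    ]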

So here is the code for this:

    import cv2
    import numpy as np

    # Parameters for Lucas-Kanade optical flow
    lk_params = dict(winSize=(15, 15),
                     maxLevel=2,
                     criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))

    # previous_keypoints are the keypoints from the previous frame: a list of cv2.KeyPoint objects
    # here I cast them to the input format for optical flow
    coord_keypoints = np.array(list(map(lambda point: [point.pt[0], point.pt[1]], previous_keypoints)), dtype = np.float32)
    p0 = coord_keypoints.copy().reshape((-1, 1, 2))


    # oldFace_gray and faceImg1 are the faces from previous and current frame respectively
    p1, st, err = cv2.calcOpticalFlowPyrLK(oldFace_gray, faceImg1, p0, None, **lk_params)
    indices = np.where(st==1)[0]
    good_new = p1[st==1]
    good_old = p0[st==1]
    
    
    # Here I cast the tracked points back to cv2.KeyPoint objects for description and matching
    keypoints1 = []
    for idx, point in zip(indices, good_new):
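        # note: the underscore-prefixed keyword arguments below (_size, _class_id, _response)
        # come from older OpenCV Python bindings; newer builds may expect size=, class_id= and response= instead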
        keypoint = cv2.KeyPoint(x=point[0], y=point[1],
                                _size=previous_keypoints[idx].size,
                                _class_id=previous_keypoints[idx].class_id,
                                _response=previous_keypoints[idx].response)
        keypoints1.append(keypoint)

     
    # here I compute descriptors for the keypoints defined above on the current face, and detect and describe keypoints on the result face
    akaze = cv2.AKAZE_create(threshold = threshold)
    keypoints1, descriptors1 = akaze.compute(faceImg1, keypoints1)
    keypoints2, descriptors2 = akaze.detectAndCompute(faceImg2, mask=None)


    # Then I want to filter the keypoints of the result face by their distance to the points on the current face and on the previous result face
    # For that, first define a helper function
    def landmarkCondition(point, landmarks, eps):
        for borderPoint in landmarks:
            if np.linalg.norm(np.array(point.pt) - np.array(borderPoint)) < eps:
                return True
        return False


    # Then apply the filters. landmarks_result is a 2d numpy array with the coordinates of the keypoints found on the previous result face.
    keypoints_descriptors2 = filter(lambda x: landmarkCondition(x[0], landmarks_result, eps_result), zip(keypoints2, descriptors2))
        
    keypoints_descriptors2 = list(filter(lambda x : landmarkCondition(x[0], good_new, eps_initial), keypoints_descriptors2))
        
    keypoints2, descriptors2 = [], []
    for keypoint, descriptor in keypoints_descriptors2:
        keypoints2.append(keypoint)
        descriptors2.append(descriptor)
    descriptors2 = np.array(descriptors2)
 
    # Match the found keypoints
    height, width, channels = coloredFace2.shape

    matcher = cv2.DescriptorMatcher_create(cv2.DESCRIPTOR_MATCHER_BRUTEFORCE_SL2)
    matches = matcher.match(descriptors1, descriptors2, None)

    # Sort matches by score
    matches.sort(key=lambda x: x.distance, reverse=False)
    numGoodMatches = int(len(matches) * GOOD_MATCH_PERCENT)

    matches = matches[:numGoodMatches]
    

    # I want to eliminate obviously bad matches. Since the two images are supposed to be similar, a line connecting two corresponding points (with the images placed side by side) should be almost horizontal and have a length approximately equal to the width of the image
    def correct(point1, point2 , width, eps=NOT_ZERO_DIVIDER):
        x1, y1 = point1
        x2, y2 = point2
        angle = abs((y2-y1) / (x2 - x1 + width + eps))
        length = x2 - x1 + width
        return angle < CRITICAL_ANGLE and (1 - RELATIVE_DEVIATION) * width < length < (1 + RELATIVE_DEVIATION) * width


    goodMatches = []
    for i, match in enumerate(matches):
        if correct(keypoints1[match.queryIdx].pt, keypoints2[match.trainIdx].pt, width):
            goodMatches.append(match)


    # Find homography
    points1 = np.zeros((len(goodMatches), 2), dtype=np.float32)
    points2 = np.zeros((len(goodMatches), 2), dtype=np.float32)

    
    for i, match in enumerate(goodMatches):
        points1[i, :] = keypoints1[match.queryIdx].pt
        points2[i, :] = keypoints2[match.trainIdx].pt
    
    h, mask = cv2.findHomography(points1, points2, method)
    height, width, channels = coloredFace2.shape
    result = cv2.warpPerspective(coloredFace1, h, (width, height))
    resultGray = cv2.cvtColor(result, cv2.COLOR_BGR2GRAY)


The result of this matching and alignment is poor. If, instead of tracking, I detect the keypoints of both images at every step, the result is very good. Am I making a mistake somewhere?

PS: I am not sure whether to post a minimal reproducible example, because the frames from the video go through a lot of preprocessing.

Tags: python, opencv, feature-detection, opticalflow
