Enhancing corner detection of a lamp using OpenCV

Problem description

I am trying to detect a bright lamp whose illumination can vary. I am using the following code to detect it.

import cv2
import imutils

# read the input, keep a colour copy for drawing, and convert to grayscale
img = cv2.imread("input_img.jpg")
rgb = img.copy()
img_grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
while True:

    th3 = cv2.adaptiveThreshold(img_grey, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, \
                                cv2.THRESH_BINARY, 11, 2)

    cv2.imshow("th3",th3)

    edged = cv2.Canny(th3, 50, 100)
    edged = cv2.dilate(edged, None, iterations=1)
    edged = cv2.erode(edged, None, iterations=1)

    cv2.imshow("edge", edged)

    cnts = cv2.findContours(edged.copy(), cv2.RETR_TREE,
                            cv2.CHAIN_APPROX_SIMPLE)
    cnts = imutils.grab_contours(cnts)
    areaArray = []

    for i, c in enumerate(cnts):
        area = cv2.contourArea(c)
        areaArray.append(area)
    sorteddata = sorted(zip(areaArray, cnts), key=lambda x: x[0], reverse=True)

    thirdlargestcontour = sorteddata[2][1]
    x, y, w, h = cv2.boundingRect(thirdlargestcontour)
    cv2.drawContours(rgb, [thirdlargestcontour], -1, (255, 0, 0), 2)

    cv2.rectangle(rgb, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("rgb", rgb)
    if cv2.waitKey(1) == 27:
        break

The above code works, but:

  1. It only gives me a rectangle that encloses the lamp. How can I get the four corner points of the lamp precisely?
  2. How can I improve the detection? At the moment I pick the third largest contour, but there is no guarantee it will always be the lamp, since the surroundings pose a challenge.

[image]

approxPolyDP works when the contour is complete, but when the contour is incomplete it does not return the correct coordinates. For example, in the image below, approxPolyDP returns the wrong coordinates.

[image]
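As an aside, when the contour comes back broken, one possible fallback for getting exactly four corner points is to fit a rotated rectangle with cv2.minAreaRect and read its corners off cv2.boxPoints. A minimal sketch, assuming the lamp face is roughly rectangular (the helper name is only illustrative):

import cv2
import numpy as np

def corners_from_partial_contour(cnt):
    # fit the minimum-area rotated rectangle around whatever contour points exist
    rect = cv2.minAreaRect(cnt)    # ((cx, cy), (w, h), angle)
    box = cv2.boxPoints(rect)      # the rectangle's four corners as floats
    return np.intp(box)            # integer pixel coordinates, shape (4, 2)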

Tags: python, opencv, corner-detection, image-enhancement

Solution


Here is one way to do that in Python/OpenCV.

  • Read the input and convert to grayscale
  • Use adaptive thresholding to get a thick outline of the lamp region
  • Find the contours
  • Filter the contours on area to remove extraneous regions and keep only the larger of the two (the inner and outer contours of the thresholded outline)
  • Get the perimeter of that contour
  • Fit the perimeter to a polygon, which should be a quadrilateral with the right choice of arguments
  • Draw the contour (red) and the polygon (blue) on a copy of the input as the result

Input:

[image]

import cv2
import numpy as np

# load image
img = cv2.imread("lamp.jpg")

# convert to gray
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# threshold image
thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 11, 10)
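# invert so the dark outline produced by the adaptive threshold becomes white;
# findContours treats white pixels as foreground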
thresh = 255 - thresh

# find contours (handle the different return signatures of OpenCV 3.x and 4.x)
cntrs = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cntrs = cntrs[0] if len(cntrs) == 2 else cntrs[1]

# Contour filtering -- drop small regions and keep the single largest contour
# (the larger of the two contours -- inner and outer -- produced by the thresholded outline)
area_thresh = 0
for c in cntrs:
    area = cv2.contourArea(c)
    if area > 200 and area > area_thresh:
        big_contour = c
        area_thresh = area

# draw big_contour on image in red and polygon in blue and print corners
results = img.copy()
cv2.drawContours(results,[big_contour],0,(0,0,255),1)
peri = cv2.arcLength(big_contour, True)
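# approximate the contour by a polygon; an epsilon of 4% of the perimeter
# is loose enough to reduce the outline to its four corners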
corners = cv2.approxPolyDP(big_contour, 0.04 * peri, True)
cv2.drawContours(results,[corners],0,(255,0,0),1)
print(len(corners))
print(corners)

# write result to disk
cv2.imwrite("lamp_thresh.jpg", thresh)
cv2.imwrite("lamp_corners.jpg", results)

cv2.imshow("THRESH", thresh)
cv2.imshow("RESULTS", results)
cv2.waitKey(0)
cv2.destroyAllWindows()


Threshold image:

[image]

Result image:

[image]

Corner coordinates:

[[[233 145]]
 [[219 346]]
 [[542 348]]
 [[508 153]]]
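If the four corner points are needed in a consistent order downstream (for example top-left, top-right, bottom-right, bottom-left for a perspective warp), one common way to sort them is by coordinate sums and differences. A minimal sketch, assuming corners is the 4-point array printed above:

import numpy as np

def order_corners(corners):
    # corners: array of shape (4, 1, 2) as returned by cv2.approxPolyDP
    pts = corners.reshape(4, 2).astype("float32")
    s = pts.sum(axis=1)               # x + y: smallest at top-left, largest at bottom-right
    d = np.diff(pts, axis=1)          # y - x: smallest at top-right, largest at bottom-left
    ordered = np.zeros((4, 2), dtype="float32")
    ordered[0] = pts[np.argmin(s)]    # top-left
    ordered[1] = pts[np.argmin(d)]    # top-right
    ordered[2] = pts[np.argmax(s)]    # bottom-right
    ordered[3] = pts[np.argmax(d)]    # bottom-left
    return ordered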


