Flood fill not performing well

Problem description

I applied OpenCV's floodFill function to extract the foreground from the background, but some of the objects in the image are not picked up by the algorithm, so I would like to know how to improve my detection and what modifications are needed.

# Imports required by this snippet (threshold_local comes from scikit-image):
import argparse
import cv2
import numpy as np
from skimage.filters import threshold_local

# Parse the path of the input image:
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True, help="path to the input image")
args = vars(ap.parse_args())

image = cv2.imread(args["image"])
image = cv2.resize(image, (800, 800))
h,w,chn = image.shape
ratio = image.shape[0] / 800.0
orig = image.copy()

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)
edged = cv2.Canny(gray, 75, 200)
# show the original image and the edge detected image
print("STEP 1: Edge Detection")
cv2.imshow("Image", image)
cv2.imshow("Edged", edged)

warped1 = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
T = threshold_local(warped1, 11, offset = 10, method = "gaussian")
warped1 = (warped1 > T).astype("uint8") * 255
print("STEP 3: Apply perspective transform")

seed = (10, 10)

foreground, birdEye = floodFillCustom(image, seed)
cv2.circle(birdEye, seed, 50, (0, 255, 0), -1)
cv2.imshow("originalImg", birdEye)

cv2.circle(birdEye, seed, 100, (0, 255, 0), -1)

cv2.imshow("foreground", foreground)
cv2.imshow("birdEye", birdEye)

gray = cv2.cvtColor(foreground, cv2.COLOR_BGR2GRAY)
cv2.imshow("gray", gray)
cv2.imwrite("gray.jpg", gray)

threshImg = cv2.threshold(gray, 1, 255, cv2.THRESH_BINARY)[1]
h_threshold,w_threshold = threshImg.shape
area = h_threshold*w_threshold

cv2.imshow("threshImg", threshImg)[![enter image description here][1]][1]

The floodFillCustom function is as follows:

def floodFillCustom(originalImage, seed):

    # Lift every pixel to at least 10 so the black fill value (0, 0, 0)
    # cannot collide with pixels that are already black:
    originalImage = np.maximum(originalImage, 10)
    foreground = originalImage.copy()

    # Flood the background from the seed with a per-channel tolerance of 10,
    # painting the filled region black:
    cv2.floodFill(foreground, None, seed, (0, 0, 0),
                  loDiff=(10, 10, 10), upDiff=(10, 10, 10))

    return [foreground, originalImage]
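
As an aside, here is a minimal sketch, not taken from the question, of the same flood fill performed through a separate mask via cv2.FLOODFILL_MASK_ONLY; it keeps the source image untouched and makes the extracted background explicit, which makes it easier to experiment with the seed point and with wider loDiff/upDiff tolerances (the function name and tolerance value are assumptions):

# Minimal sketch (not from the question): flood fill into a mask only.
def floodFillMaskOnly(image, seed, tol=10):
    h, w = image.shape[:2]
    # floodFill requires a mask two pixels larger than the image:
    mask = np.zeros((h + 2, w + 2), np.uint8)
    # Low bits: 4-connectivity; bits 8-15: value written into the mask (255);
    # FLOODFILL_MASK_ONLY: only the mask is filled, the image is not modified:
    flags = 4 | cv2.FLOODFILL_MASK_ONLY | (255 << 8)
    cv2.floodFill(image, mask, seed, (0, 0, 0),
                  loDiff=(tol, tol, tol), upDiff=(tol, tol, tol), flags=flags)
    # Pixels reachable from the seed (the background) are 255 in the mask;
    # crop the 1-pixel border and invert to get a foreground mask:
    return cv2.bitwise_not(mask[1:-1, 1:-1])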

Image: https://i.stack.imgur.com/69UUh.jpg

Tags: python, opencv, image-processing, computer-vision, flood-fill

Solution


A bit late, but here is another solution for segmenting the tools. It involves converting the image to the CMYK color space and extracting the K (Key) channel. This channel can then be thresholded to get a nice binary mask of the tools. The process is very straightforward:

  1. Convert the image to the CMYK color space
  2. Extract the K (Key) channel
  3. Threshold the image via Otsu's thresholding
  4. Apply some morphology (a closing) to clean up the mask
  5. (Optional) Get the bounding rectangles of all the tools

Let's see the code:

# Imports
import cv2
import numpy as np

# Read image
imagePath = "C://opencvImages//"
inputImage = cv2.imread(imagePath+"DAxhk.jpg")

# Create deep copy for results:
inputImageCopy = inputImage.copy()

# Convert to float and divide by 255:
imgFloat = inputImage.astype(np.float64) / 255.

# Calculate channel K:
kChannel = 1 - np.max(imgFloat, axis=2)

# Convert back to uint 8:
kChannel = (255*kChannel).astype(np.uint8)

The first step is to convert the BGR image to CMYK. There is no direct conversion for this in OpenCV, so I applied the conversion formula directly: with the channels normalized to [0, 1], the Key channel is K = 1 - max(R, G, B). We could get every CMYK component this way, but we are only interested in the K channel. The conversion is easy, but we need to be careful with the data types: the operation has to run on float arrays. After getting the K channel, we convert the image back to an unsigned 8-bit array. This is the resulting image:
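
In case the other channels are ever needed, here is a minimal sketch, not part of the original answer, that derives the full C, M, Y and K planes from the same normalized float image (numpy is already imported as np above; the function name and the epsilon guard against division by zero are assumptions):

# Minimal sketch (not in the original answer): full CMYK decomposition from the
# normalized BGR float image.
def bgrFloatToCmyk(imgFloat, eps=1e-6):
    # OpenCV stores the channels in B, G, R order:
    b, g, r = imgFloat[..., 0], imgFloat[..., 1], imgFloat[..., 2]
    k = 1.0 - np.max(imgFloat, axis=2)
    # Guard against division by zero on fully black pixels:
    denom = np.maximum(1.0 - k, eps)
    c = (1.0 - r - k) / denom
    m = (1.0 - g - k) / denom
    y = (1.0 - b - k) / denom
    return c, m, y, k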

Let's threshold this image using Otsu's thresholding method:

# Threshold via Otsu:
_, binaryImage = cv2.threshold(kChannel, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

This produces the following binary image:

Looks very nice! Additionally, we can clean the mask up a little with a morphological closing. Let's apply a rectangular structuring element of size 5 x 5 and use 2 iterations:

# Use a little bit of morphology to clean the mask:
# Set kernel (structuring element) size:
kernelSize = 5
# Set morph operation iterations:
opIterations = 2
# Get the structuring element:
morphKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernelSize, kernelSize))
# Perform closing:
binaryImage = cv2.morphologyEx(binaryImage, cv2.MORPH_CLOSE, morphKernel, None, None, opIterations, cv2.BORDER_REFLECT101)

This is the result:

Cool. What follows is optional. We can get the bounding rectangle of each tool by looking for the outer (external) contours:

# Find the contours on the binary image:
contours, hierarchy = cv2.findContours(binaryImage, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Look for the outer bounding boxes (no children):
for _, c in enumerate(contours):

    # Get the contours bounding rectangle:
    boundRect = cv2.boundingRect(c)

    # Get the dimensions of the bounding rectangle:
    rectX = boundRect[0]
    rectY = boundRect[1]
    rectWidth = boundRect[2]
    rectHeight = boundRect[3]

    # Set bounding rectangle:
    color = (0, 0, 255)
    cv2.rectangle( inputImageCopy, (int(rectX), int(rectY)),
                   (int(rectX + rectWidth), int(rectY + rectHeight)), color, 5 )

    cv2.imshow("Bounding Rectangles", inputImageCopy)
    cv2.waitKey(0)

Which produces the final image:
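
If tiny blobs survive the closing and produce spurious rectangles, a small area filter before drawing can help. A minimal sketch, not part of the original answer, reusing contours and inputImageCopy from the snippet above (minArea is an assumption to tune per image):

# Minimal sketch (not in the original answer): skip contours whose area falls
# below a threshold before drawing the bounding rectangles.
minArea = 500

for c in contours:
    # Ignore small blobs that survived the closing:
    if cv2.contourArea(c) < minArea:
        continue
    # Draw the bounding rectangle of the remaining contour:
    rectX, rectY, rectWidth, rectHeight = cv2.boundingRect(c)
    cv2.rectangle(inputImageCopy, (rectX, rectY),
                  (rectX + rectWidth, rectY + rectHeight), (0, 0, 255), 5)

cv2.imshow("Filtered Bounding Rectangles", inputImageCopy)
cv2.waitKey(0)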

