Recognizing Russian license plates with different character heights on one line with OpenALPR

Problem description

I have a problem with Russian license plates. When I try to classify the characters with the openalpr tool, I get the following:

Before

After

The OCR cuts off my digit segment shown above. I used the following parameters to generate a new .conf file for this country:

char_analysis_min_pct = 0.29
char_analysis_height_range = 0.20
char_analysis_height_step_size = 0.10
char_analysis_height_num_steps = 6

segmentation_min_speckle_height_percent = 0.3
segmentation_min_box_width_px = 6
segmentation_min_charheight_percent = 0.1;
segmentation_max_segment_width_percent_vs_average = 1.95;

plate_width_mm = 520
plate_height_mm = 112

multiline = 1

char_height_mm = 58
char_width_mm = 44

char_whitespace_top_mm = 18
char_whitespace_bot_mm = 18

template_max_width_px = 300
template_max_height_px = 64

; Higher sensitivity means less lines
plateline_sensitivity_vertical = 10
plateline_sensitivity_horizontal = 45

; Regions smaller than this will be disqualified
min_plate_size_width_px = 65
min_plate_size_height_px = 18

; Results with fewer or more characters will be discarded
postprocess_min_characters = 8
postprocess_max_characters = 9

;detector_file= eu.xml
ocr_language = lamh

;Override for postprocess letters/numbers regex.
postprocess_regex_letters = [A,B,C,E,H,K,M,O,P,T,X,Y]
postprocess_regex_numbers = [0-9]

; Whether the plate is always dark letters on light background, light letters on dark background, or both
; value can be either always, never, or auto
invert = auto

Does anyone know how to fix this?

I used the OCR files from this repository: https://github.com/KostyaKulakov/Russian_System_of_ANPR
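(For reference, a minimal sketch of how a custom country config like this is typically loaded through the OpenALPR Python bindings. The file locations are assumptions for a standard install: the generated file saved as runtime_data/config/ru.conf and the trained OCR data from the repository above placed under runtime_data/ocr/tessdata/.)

from openalpr import Alpr

# "ru" refers to the custom country config (assumed installed as
# runtime_data/config/ru.conf); adjust the two paths to your own install.
alpr = Alpr("ru", "/etc/openalpr/openalpr.conf", "/usr/share/openalpr/runtime_data")
if not alpr.is_loaded():
    raise RuntimeError("Error loading OpenALPR")

alpr.set_top_n(5)
results = alpr.recognize_file("plate.png")
for plate in results["results"]:
    for candidate in plate["candidates"]:
        print(candidate["plate"], candidate["confidence"])

alpr.unload()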

Original photo

Thank you.

Tags: opencv, ocr, openalpr

Solution


Maybe something like this?

Source:

import cv2
import numpy as np

image = cv2.imread('plate.png')
# cv2.imshow('original', image)

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# cv2.imshow('gray', gray)

# Binarize: pixels darker than 60 become white (the characters), everything else black
ret, thresh = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)
cv2.imshow('thresh', thresh)

# A kernel size of 1 leaves the image unchanged; use 3 or 5 if speckle noise needs removing
blur = cv2.medianBlur(thresh, 1)

# Dilate with a 10x20 kernel so the strokes of each character merge into one blob
# (a kernel this wide can also merge neighbouring characters, see the note further down)
kernel = np.ones((10, 20), np.uint8)
img_dilation = cv2.dilate(blur, kernel, iterations=1)
cv2.imshow('dilated', img_dilation)

# OpenCV 3 returns (image, contours, hierarchy), OpenCV 4 returns (contours, hierarchy);
# taking the second-to-last element works with both versions
ctrs = cv2.findContours(img_dilation.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[-2]

# sort contours
sorted_ctrs = sorted(ctrs, key=lambda ctr: cv2.boundingRect(ctr)[0])

for i, ctr in enumerate(sorted_ctrs):
    # Get bounding box
    x, y, w, h = cv2.boundingRect(ctr)

    # Getting ROI
    roi = image[y:y + h, x:x + w]

    # Keep only blobs roughly the size of a character
    if (h > 50 and w > 50) and h < 200:

        # show ROI
        # cv2.imshow('segment no:'+str(i),roi)
        cv2.rectangle(image, (x, y), (x + w, y + h), (255, 255, 255), 1)
        # cv2.waitKey(0)

        cv2.imwrite('{}.png'.format(i), roi)

cv2.imshow('marked areas', image)
cv2.waitKey(0)

This will save you all the ROIs you need...

Output 2

(The last crop can be segmented again.)
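If two characters land in one crop like that, one option is to repeat the same contour step on just that crop with a narrower dilation kernel so the neighbours stay apart. A rough sketch under that assumption ('4.png' is a hypothetical name for the merged crop):

import cv2
import numpy as np

# Hypothetical file name for the crop that still contains two characters
merged = cv2.imread('4.png')
gray = cv2.cvtColor(merged, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)

# Narrower kernel than before (5 px wide instead of 20) so neighbouring
# characters are not dilated into one blob
kernel = np.ones((10, 5), np.uint8)
dilated = cv2.dilate(thresh, kernel, iterations=1)

ctrs = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
for i, ctr in enumerate(sorted(ctrs, key=lambda c: cv2.boundingRect(c)[0])):
    x, y, w, h = cv2.boundingRect(ctr)
    if w > 10 and h > 20:
        cv2.imwrite('split_{}.png'.format(i), merged[y:y + h, x:x + w])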

...and then OCR each character individually. In my opinion, that might be easier than running OCR on the whole image.
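The per-character OCR step could look something like this with pytesseract (an assumption on my part — the question uses OpenALPR's own Tesseract data, but any single-character OCR call is similar):

import cv2
import pytesseract

# '0.png', '1.png', ... are the crops written by the loop above
# (names depend on contour order, so check which files were actually produced)
for name in ['0.png', '1.png', '2.png']:
    char_img = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
    # --psm 10: treat the image as a single character; the whitelist matches the
    # letters allowed on Russian plates (same set as postprocess_regex_letters)
    text = pytesseract.image_to_string(
        char_img,
        config='--psm 10 -c tessedit_char_whitelist=ABCEHKMOPTXY0123456789')
    print(name, text.strip())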

Everything gets easier if you also apply some thresholding to each image and add an, I don't know, 5-pixel border around it.
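For the thresholding and the border, something along these lines should do (cv2.copyMakeBorder with a constant white margin; the 5 px is just the arbitrary value mentioned above):

import cv2

char_img = cv2.imread('0.png', cv2.IMREAD_GRAYSCALE)
# Binarize so the glyph sits on a clean background
_, char_bw = cv2.threshold(char_img, 60, 255, cv2.THRESH_BINARY)
# Add a 5-pixel constant border on every side; OCR tends to behave better
# when the character does not touch the image edge
padded = cv2.copyMakeBorder(char_bw, 5, 5, 5, 5, cv2.BORDER_CONSTANT, value=255)
cv2.imwrite('0_padded.png', padded)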

