Spacy Entity Rule not working for CARDINAL (social security number)

Problem Description

I have added a new label for social security numbers using an entity ruler. I even set overwrite_ents=True, but it still isn't recognized.

I verified that the regex is correct. Not sure what else I need to do. I also tried before="ner" earlier, but the result was the same.

text = "My name is yuyyvb and I leave on 605 W Clinton Street. My social security 690-96-4032"
nlp = spacy.load("en_core_web_sm")
ruler = EntityRuler(nlp, overwrite_ents=True)
ruler.add_patterns([{"label": "SSN", "pattern": [{"TEXT": {"REGEX": r"\d{3}[^\w]\d{2}[^\w]\d{4}"}}]}])
nlp.add_pipe(ruler)
doc  = nlp(text)
for ent in doc.ents:
    print("{} {}".format(ent.text, ent.label_))

Tags: python-3.x, spacy, named-entity-recognition

Solution


In fact, the SSN you have is tokenized by spaCy into 5 chunks:

print([token.text for token in nlp("690-96-4032")])
# => ['690', '-', '96', '-', '4032']
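
To see why the original single-token pattern can never fire, here is a quick plain-re sketch of the per-token check (spaCy's REGEX operator runs an unanchored search against each token's text; this snippet is mine, not part of the original answer):

import re

# Re-creation of the per-token check: the EntityRuler tries the SSN regex
# against each token separately, and no single token contains the whole SSN.
ssn_re = re.compile(r"\d{3}[^\w]\d{2}[^\w]\d{4}")
for token in ["690", "-", "96", "-", "4032"]:
    print(token, bool(ssn_re.search(token)))
# => every token prints False, so the rule never matches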

So, either use a custom tokenizer where a - between digits is not split off as a separate token, or, more simply, create a pattern for the 5 consecutive tokens:

patterns = [{"label": "SSN", "pattern": [{"TEXT": {"REGEX": r"^\d{3}$"}}, {"TEXT": "-"}, {"TEXT": {"REGEX": r"^\d{2}$"}}, {"TEXT": "-"}, {"TEXT": {"REGEX": r"^\d{4}$"}} ]}]

Full spaCy demo:

import spacy
from spacy.pipeline import EntityRuler

nlp = spacy.load("en_core_web_sm")
ruler = EntityRuler(nlp, overwrite_ents=True)
patterns = [{"label": "SSN", "pattern": [{"TEXT": {"REGEX": r"^\d{3}$"}}, {"TEXT": "-"}, {"TEXT": {"REGEX": r"^\d{2}$"}}, {"TEXT": "-"}, {"TEXT": {"REGEX": r"^\d{4}$"}} ]}]
ruler.add_patterns(patterns)
nlp.add_pipe(ruler)

text = "My name is yuyyvb and I leave on 605 W Clinton Street. My social security 690-96-4032"
doc = nlp(text)
print([(ent.text, ent.label_) for ent in doc.ents])
# => [('605', 'CARDINAL'), ('690-96-4032', 'SSN')]

因此,{"TEXT": {"REGEX": r"^\d{3}$"}}匹配仅由三位数字组成的标记,{"TEXT": "-"}是一个-字符等。

Overriding the hyphenated-number tokenization in spaCy

If you are interested in how to achieve this by overriding the default tokenization, note the infixes: the r"(?<=[0-9])[+\-\*^](?=[0-9-])" regex is what makes spaCy split hyphen-separated numbers into separate tokens. To get substrings like 1-2-3 and 1-2 tokenized as single tokens, you would remove the - from that regex. Well, you cannot do exactly that; it is much trickier: you need to replace it with 2 regexes, r"(?<=[0-9])[+*^](?=[0-9-])" and r"(?<=[0-9])-(?=-)", because the - is also checked between a digit ((?<=[0-9])) and a hyphen (see the (?=[0-9-]) lookahead).
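
As a sanity check (my own sketch, assuming the spaCy 2.x English models this answer targets), you can confirm that the rule is present in the defaults before removing it:

import spacy

nlp = spacy.load("en_core_web_sm")
# The digit-hyphen-digit splitting rule ships with the default infixes;
# the inf.remove(...) call in the snippet below would raise ValueError otherwise.
print(r"(?<=[0-9])[+\-\*^](?=[0-9-])" in nlp.Defaults.infixes)  # True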

So, the whole thing will look like:

import spacy
from spacy.tokenizer import Tokenizer
from spacy.pipeline import EntityRuler
from spacy.util import compile_infix_regex

def custom_tokenizer(nlp):
    # Take out the existing rule and replace it with a custom one:
    inf = list(nlp.Defaults.infixes)
    inf.remove(r"(?<=[0-9])[+\-\*^](?=[0-9-])")
    inf = tuple(inf)
    infixes = inf + tuple([r"(?<=[0-9])[+*^](?=[0-9-])", r"(?<=[0-9])-(?=-)"]) 
    infix_re = compile_infix_regex(infixes)

    return Tokenizer(nlp.vocab, prefix_search=nlp.tokenizer.prefix_search,
                                suffix_search=nlp.tokenizer.suffix_search,
                                infix_finditer=infix_re.finditer,
                                token_match=nlp.tokenizer.token_match,
                                rules=nlp.Defaults.tokenizer_exceptions)

nlp = spacy.load("en_core_web_sm")
nlp.tokenizer = custom_tokenizer(nlp)
ruler = EntityRuler(nlp, overwrite_ents=True)
ruler.add_patterns([{"label": "SSN", "pattern": [{"TEXT": {"REGEX": r"^\d{3}\W\d{2}\W\d{4}$"}}]}])
nlp.add_pipe(ruler)

text = "My name is yuyyvb and I leave on 605 W Clinton Street. My social security 690-96-4032. Some 9---al"
doc = nlp(text)
print([t.text for t in doc])
# =>  ['My', 'name', 'is', 'yuyyvb', 'and', 'I', 'leave', 'on', '605', 'W', 'Clinton', 'Street', '.', 'My', 'social', 'security', '690-96-4032', '.', 'Some', '9', '-', '--al']
print([(ent.text, ent.label_) for ent in doc.ents])
# => [('605', 'CARDINAL'), ('690-96-4032', 'SSN'), ('9', 'CARDINAL')]

If you left out the r"(?<=[0-9])-(?=-)" regex, the ['9', '-', '--al'] tokens would collapse into a single '9---al' token.

Note that you need to use the ^\d{3}\W\d{2}\W\d{4}$ regex here: ^ and $ match the start and end of the token (otherwise, partially matching tokens would also be identified as SSNs), and [^\w] is equivalent to \W.
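
And a short sketch (mine) confirming the [^\w] / \W equivalence mentioned above:

import re

# Both character classes match any non-word character, e.g. the hyphens:
print(re.findall(r"[^\w]", "690-96-4032"))  # ['-', '-']
print(re.findall(r"\W", "690-96-4032"))     # ['-', '-']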

