python - spaCy - Adding an extension function to the pipeline causes a stack overflow
Problem description
I am trying to add a function based on matcher rules to my spaCy pipeline. However, adding it to the pipeline causes a stack overflow error. It is quite possibly user error. Any suggestions or ideas would be much appreciated.
Running the function without adding it to the pipeline works fine.
Code sample:
import spacy
from spacy.matcher import PhraseMatcher
from spacy.tokens import Span
nlp = spacy.load("en_core_web_sm")
def extend_matcher_entities(doc):
    matcher = PhraseMatcher(nlp.vocab, attr="SHAPE")
    matcher.add("TIME", None, nlp("0305Z"), nlp("1315z"), nlp("0830Z"), nlp("0422z"))
    new_ents = []
    for match_id, start, end in matcher(doc):
        new_ent = Span(doc, start, end, label=nlp.vocab.strings[match_id])
        new_ents.append(new_ent)
    doc.ents = new_ents
    return doc
# Add the component after the named entity recognizer
nlp.add_pipe(extend_matcher_entities, after='ner')
doc = nlp("At 0560z, I walked over to my car and got in to go to the grocery store.")
# extend_matcher_entities(doc)
print([(ent.text, ent.label_) for ent in doc.ents])
This example from the spaCy code samples works fine:
import spacy
from spacy.tokens import Span
nlp = spacy.load("en_core_web_sm")
def expand_person_entities(doc):
    new_ents = []
    for ent in doc.ents:
        if ent.label_ == "PERSON" and ent.start != 0:
            prev_token = doc[ent.start - 1]
            if prev_token.text in ("Dr", "Dr.", "Mr", "Mr.", "Ms", "Ms."):
                new_ent = Span(doc, ent.start - 1, ent.end, label=ent.label)
                print(new_ent)
                new_ents.append(new_ent)
        else:
            new_ents.append(ent)
    doc.ents = new_ents
    print(new_ents)
    return doc
# Add the component after the named entity recognizer
nlp.add_pipe(expand_person_entities, after='ner')
doc = nlp("Dr. Alex Smith chaired first board meeting of Acme Corp Inc.")
print([(ent.text, ent.label_) for ent in doc.ents])
What am I missing?
Solution
The offending line that causes the recursion is this one:
matcher.add("TIME", None, nlp("0305Z"), nlp("1315z"),nlp("0830Z"),nlp("0422z"))
Calling nlp(...) runs the entire pipeline, and that pipeline now contains extend_matcher_entities itself, so the component keeps calling itself until the stack overflows. Move those calls out of your function definition and it works:
import spacy
from spacy.matcher import PhraseMatcher
from spacy.tokens import Span
nlp = spacy.load("en_core_web_sm")
pattern = [nlp(t) for t in ("0305Z", "1315z", "0830Z", "0422z")]
def extend_matcher_entities(doc):
    matcher = PhraseMatcher(nlp.vocab, attr="SHAPE")
    matcher.add("TIME", None, *pattern)
    new_ents = []
    for match_id, start, end in matcher(doc):
        new_ent = Span(doc, start, end, label=nlp.vocab.strings[match_id])
        new_ents.append(new_ent)
    doc.ents = new_ents
    # doc.ents = list(doc.ents) + new_ents
    return doc
# Add the component after the named entity recognizer
nlp.add_pipe(extend_matcher_entities, after='ner')
doc = nlp("At 0560z, I walked over to my car and got in to go to the grocery store.")
# extend_matcher_entities(doc)
print([(ent.text, ent.label_) for ent in doc.ents])
[('0560z', 'TIME')]
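As a further aside (my own suggestion, not part of the original answer): the PhraseMatcher is rebuilt on every call of the component. A minimal sketch of building it once, outside the function, is below; it uses spacy.blank so it runs without a downloaded model (SHAPE is a lexical attribute, so no statistical model is needed) and the newer list-of-patterns signature of matcher.add:

```python
# Sketch: construct the matcher once and reuse it in the component.
import spacy
from spacy.matcher import PhraseMatcher
from spacy.tokens import Span

nlp = spacy.blank("en")
matcher = PhraseMatcher(nlp.vocab, attr="SHAPE")
matcher.add("TIME", [nlp(t) for t in ("0305Z", "1315z", "0830Z", "0422z")])

def extend_matcher_entities(doc):
    # The matcher is a module-level object; nothing is rebuilt per document.
    doc.ents = [Span(doc, start, end, label=nlp.vocab.strings[match_id])
                for match_id, start, end in matcher(doc)]
    return doc

doc = extend_matcher_entities(nlp("At 0560z I walked over to my car."))
print([(ent.text, ent.label_) for ent in doc.ents])
# -> [('0560z', 'TIME')]
```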
Also note that with doc.ents = new_ents you overwrite any entities that were extracted earlier in the pipeline.
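If you want to keep both, one option (a sketch, not from the original answer) is spacy.util.filter_spans, which removes overlapping spans, preferring the longest one. The example below uses a blank pipeline and hand-made spans so it runs without a model:

```python
# Sketch: merge matcher hits with the entities the NER already produced,
# instead of overwriting them.
import spacy
from spacy.tokens import Doc, Span
from spacy.util import filter_spans

nlp = spacy.blank("en")
doc = Doc(nlp.vocab, words=["At", "0560z", "I", "visited", "Acme", "Corp"])

ner_ents = [Span(doc, 4, 6, label="ORG")]       # pretend these came from the NER
matcher_ents = [Span(doc, 1, 2, label="TIME")]  # ...and these from the matcher

# filter_spans drops overlaps (longest span wins), so the combined list
# is always a valid, non-overlapping entity set.
doc.ents = filter_spans(ner_ents + matcher_ents)
print([(ent.text, ent.label_) for ent in doc.ents])
# -> [('0560z', 'TIME'), ('Acme Corp', 'ORG')]
```

In the component above, that would be doc.ents = filter_spans(list(doc.ents) + new_ents), matching the commented-out line in the fixed code.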