Python NLTK: incorrect sentence tokenization with custom abbreviations

Problem description

I'm using the nltk tokenize library to split English sentences. Many of the sentences contain abbreviations such as e.g. and eg., so I updated the tokenizer with these custom abbreviations. However, I came across a strange tokenization behaviour:

import nltk

nltk.download("punkt")
sentence_tokenizer = nltk.data.load("tokenizers/punkt/english.pickle")

# punkt stores abbreviations lowercased and without the final period,
# so 'e.g' covers "e.g." and 'eg' covers "eg."
extra_abbreviations = ['e.g', 'eg']
sentence_tokenizer._params.abbrev_types.update(extra_abbreviations)

line = 'Required experience with client frameworks (e.g. React, Vue.js) and testing (e.g. Karma, Tape)'

for s in sentence_tokenizer.tokenize(line):
    print(s)

# OUTPUT
# Required experience with client frameworks (e.g. React, Vue.js) and testing (e.g.
# Karma, Tape)

So as you can see, the tokenizer does not split on the first occurrence of the abbreviation (correct), but it does split on the second one (incorrect).

The strange thing is, if I change the word Karma to anything else, it works fine.

import nltk

nltk.download("punkt")
sentence_tokenizer = nltk.data.load("tokenizers/punkt/english.pickle")

extra_abbreviations = ['e.g', 'eg']
sentence_tokenizer._params.abbrev_types.update(extra_abbreviations)

line = 'Required experience with client frameworks (e.g. React, Vue.js) and testing (e.g. SomethingElse, Tape)'

for s in sentence_tokenizer.tokenize(line):
    print(s)

# OUTPUT
# Required experience with client frameworks (e.g. React, Vue.js) and testing (e.g. SomethingElse, Tape)

Any clue why this is happening?

Tags: python, nlp, nltk, tokenize

Solution


You can see why punkt makes the break choices it does by using the debug_decisions method:

>>> for d in sentence_tokenizer.debug_decisions(line):
...     print(nltk.tokenize.punkt.format_debug_decision(d))
... 
Text: '(e.g. React,' (at offset 47)
Sentence break? None (default decision)
Collocation? False
'e.g.':
    known abbreviation: True
    is initial: False
'react':
    known sentence starter: False
    orthographic heuristic suggests is a sentence starter? unknown
    orthographic contexts in training: {'MID-UC', 'MID-LC'}

Text: '(e.g. Karma,' (at offset 80)
Sentence break? True (abbreviation + orthographic heuristic)
Collocation? False
'e.g.':
    known abbreviation: True
    is initial: False
'karma':
    known sentence starter: False
    orthographic heuristic suggests is a sentence starter? True
    orthographic contexts in training: {'MID-LC'}

This tells us that in the corpus used for training, both 'react' and 'React' appear in the middle of sentences, so it does not break before 'React' in your line. However, only the lowercase form of 'karma' occurs, so it considers it a probable sentence starting point.
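
If you want to inspect those learned flags programmatically, they are stored as a bitmask on the same (private) parameters object. A minimal sketch, assuming the sentence_tokenizer from above:

from nltk.tokenize import punkt

# ortho_context maps each lowercased token type to a bitmask of the
# orthographic contexts it was seen in during training.
ctx = sentence_tokenizer._params.ortho_context['karma']
print(bool(ctx & punkt._ORTHO_MID_UC))  # capitalised mid-sentence? expect False
print(bool(ctx & punkt._ORTHO_MID_LC))  # lowercase mid-sentence? expect True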

Note that this is in line with the library's documentation:

However, Punkt is designed to learn parameters (a list of abbreviations, etc.) unsupervised from a corpus similar to the target domain. The pre-packaged models may therefore be unsuitable: use PunktSentenceTokenizer(text) to learn parameters from the given text.

PunktTrainer learns parameters such as a list of abbreviations (without supervision) from portions of text. Using a PunktTrainer directly allows for incremental training and modification of the hyper-parameters used to decide what is considered an abbreviation, etc.
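
In code, that first suggestion amounts to passing in-domain text straight to the constructor; a minimal sketch, where my_domain_text is a hypothetical string of text from your domain:

from nltk.tokenize.punkt import PunktSentenceTokenizer

# Learn abbreviations, sentence starters and orthographic contexts
# in an unsupervised fashion from in-domain text.
domain_tokenizer = PunktSentenceTokenizer(my_domain_text)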

So while a quick hack for this particular case is to further tweak the private _params, telling it that 'karma' may also appear mid-sentence:

>>> sentence_tokenizer._params.ortho_context['karma'] |= nltk.tokenize.punkt._ORTHO_MID_UC
>>> sentence_tokenizer.tokenize(line)
['Required experience with client frameworks (e.g. React, Vue.js) and testing (e.g. Karma, Tape)']

Instead, perhaps you should add extra training data from CVs that contain all these library names:

from nltk.tokenize.punkt import PunktSentenceTokenizer, PunktTrainer
trainer = PunktTrainer()
# tweak trainer params here if helpful
trainer.train(my_corpus_of_concatted_tech_cvs)
sentence_tokenizer = PunktSentenceTokenizer(trainer.get_params())
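
If your corpus arrives in pieces, PunktTrainer also supports incremental training, and the hand-curated abbreviations can still be merged into the learned parameters afterwards. A sketch under those assumptions (cv_text_part1 and cv_text_part2 are hypothetical strings of CV text):

from nltk.tokenize.punkt import PunktSentenceTokenizer, PunktTrainer

trainer = PunktTrainer()
# Hyper-parameter tweak: treat all word pairs straddling a period as
# candidate collocations, which can help with domain-specific names.
trainer.INCLUDE_ALL_COLLOCS = True

# finalize=False defers the final parameter calculation between chunks.
trainer.train(cv_text_part1, finalize=False)
trainer.train(cv_text_part2, finalize=False)
trainer.finalize_training()

params = trainer.get_params()
# The custom abbreviations can still be added on top of what was learned.
params.abbrev_types.update(['e.g', 'eg'])
sentence_tokenizer = PunktSentenceTokenizer(params)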
