How to treat certain words as delimiters in nltk Python?

Problem description

I am trying to tokenize the following text using the stopwords ('is', 'the', 'was') as delimiters.

The expected output is this:

['Walter', 
 'feeling anxious', 
 'He', 
 'diagnosed today', 
 'He probably', 
 'best person I know']

Here is the code I tried in order to produce the output above:

import nltk

stopwords = ['is', 'the', 'was']

sents = nltk.sent_tokenize("Walter was feeling anxious. He was diagnosed today. He probably is the best person I know.")

sents_rm_stopwords = []

# drop the stopword tokens from each sentence, then re-join the remaining words
for sent in sents:
    sents_rm_stopwords.append(' '.join(w for w in nltk.word_tokenize(sent) if w not in stopwords))

The output of my code is this:

['Walter feeling anxious .',
 'He diagnosed today .', 
 'He probably best person I know .']

How can I get the expected output?

Tags: python, nltk, tokenize

Solution


So this problem involves both the stopwords and the sentence delimiter. Assuming a sentence boundary is marked by the "." character, you can split on all of these delimiters at once with re.split():

import re

s = "Walter was feeling anxious. He was diagnosed today. He probably is the best person I know."

# split on the stopwords (padded with spaces) and on the ". " / "." sentence boundaries
result = re.split(r" was | is | the |\. |\.", s)

result
>>
['Walter',
 'feeling anxious',
 'He',
 'diagnosed today',
 'He probably',
 'the best person I know',
 '']

Because the pattern splits on both ". " (a period followed by a space) and a bare ".", the final period, which has nothing after it, produces an extra '' at the end of the result. Assuming this sentence structure is consistent, you can slice the result to get your expected output.

result[:-1]
>>
['Walter',
 'feeling anxious',
 'He',
 'diagnosed today',
 'He probably',
 'the best person I know']
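
As a variation on the above (a minimal sketch, not part of the original answer), the split pattern can be built from the stopwords list instead of being hard-coded; re.escape guards against stopwords that contain regex metacharacters, and filtering out empty pieces replaces the result[:-1] slice:

import re

stopwords = ['is', 'the', 'was']
s = "Walter was feeling anxious. He was diagnosed today. He probably is the best person I know."

# build " is | the | was " alternatives from the list, plus the ". " / "." sentence boundaries
pattern = "|".join(f" {re.escape(w)} " for w in stopwords) + r"|\. |\."

# split, then drop empty pieces and trim any leftover whitespace
result = [part.strip() for part in re.split(pattern, s) if part.strip()]

This yields the same six segments as result[:-1] above, without relying on the extra '' always appearing at the end.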
