Spacy PhraseMatcher - match not just one of the keywords, but all of the keywords in a string

Problem Description

I am trying to solve a task where I classify texts into buckets based on keywords. It is easy to do when I need to match a text against one or more keywords (i.e. at least one of the keywords should appear in the text), but I am struggling to understand how to do the matching when I need to make sure that several keywords are all present in the string.

Here is a small sample. Say my dfArticles is a pandas DataFrame with a column Text containing the text articles I want to match:

dfArticles['Text']
Out[2]: 
0       (Reuters) - Major Middle Eastern markets ended...
1       MIDEAST STOCKS-Oil price fall hurts major Gulf...
2       DUBAI, 21st September, 2020 (WAM) -- The Minis...
3       DUBAI, (UrduPoint / Pakistan Point News / WAM ...
4       Brent crude was down 99 cents or 2.1% at $42.2.

And say my DataFrame dfTopics contains the list of keywords I want to match and the bucket (topic) associated with each keyword:

dfTopics
Out[3]: 
            Topic              Keywords
0     Regulations                   law
1     Regulations            regulatory
2     Regulations            regulation
3     Regulations           legislation
4     Regulations                 rules
5          Talent            capability
6          Talent             workforce

When I only need to check whether a text matches at least one of these keywords, it is straightforward:

from spacy.matcher import PhraseMatcher

def prep_match_patterns(dfTopics):
    # one matcher, comparing keywords on their lowercased token text
    # (`nlp` is the global pipeline loaded further below)
    matcher = PhraseMatcher(nlp.vocab, attr="LOWER")
    # register all keywords of a topic under that topic's name as the match key
    for topic in dfTopics['Topic'].unique():
        keywords = dfTopics.loc[dfTopics['Topic'] == topic, 'Keywords'].to_list()
        patterns_topic = [nlp.make_doc(text) for text in keywords]
        matcher.add(topic, None, *patterns_topic)  # spaCy 2.x call signature
    return matcher
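A small side note, in case your spaCy version differs (an assumption, not stated in the question): matcher.add(topic, None, *patterns_topic) is the spaCy 2.x call signature. Under spaCy 3.x the patterns are passed as a single list, roughly:

matcher.add(topic, patterns_topic)
# with an optional callback: matcher.add(topic, patterns_topic, on_match=my_callback)
# (my_callback is just a placeholder name)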

Then I can easily check which bucket a text falls into:

import spacy
import pandas as pd
from tqdm import tqdm

nlp = spacy.load("en_core_web_lg")
nlp.disable_pipes(["parser"])
# extract the sentences from the documents
nlp.add_pipe(nlp.create_pipe('sentencizer'))

matcher = prep_match_patterns(dfTopics)


dfResults = pd.DataFrame([],columns=['ArticleID', 'Topic'])


articles = []
topics = []


for index, row in tqdm(dfArticles.iterrows(), total=len(dfArticles)):
    doc = nlp(row['Text'])
    matches = matcher(doc)
    if len(matches)<1:
        continue
    else:
        for match_id, start, end in matches:
            string_id = nlp.vocab.strings[match_id]  # Get string representation
            articles.append(row['ID'])
            topics.append(string_id)
    
dfResults['ArticleID'] = articles
dfResults['Topic'] = topics


dfResults.drop_duplicates(inplace=True)

But here is the tricky part: sometimes, to classify a text into a bucket, I need to make sure it matches several keywords at the same time.

Say I have a new topic called "Healthcare system context", and for a text to fall into this bucket it needs to contain all three substrings: "fragmentation", "approval process" and "drug". The order does not matter, but all three keywords need to be there. Is there a way to do this with PhraseMatcher?

Tags: python, spacy

Solution


I think you are overcomplicating this. You can achieve what you want with plain Python.

Suppose we have:

df_topics
         Topic     Keywords
0  Regulations          law
1  Regulations   regulatory
2  Regulations   regulation
3  Regulations  legislation
4  Regulations        rules
5       Talent   capability
6       Talent    workforce

Then you can organize your topic keywords into a dictionary:

topics = df_topics.groupby("Topic")["Keywords"].agg(lambda x: x.to_list()).to_dict()
topics
{'Regulations': ['law', 'regulatory', 'regulation', 'legislation', 'rules'],
 'Talent': ['capability', 'workforce']}

Finally, define a function that matches the keywords:

def textToTopic(text, topics):
    t = []
    for topic, keywords in topics.items():
        # a topic matches only if every one of its keywords appears in the text
        if all(kw in text.split() for kw in keywords):
            t.append(topic)
    return t

Demo:

textToTopic("law regulatory regulation rules legislation workforce", topics)
['Regulations']

textToTopic("law regulatory regulation rules legislation workforce capability", topics)
['Regulations', 'Talent']

You can apply this function to the texts in your df.
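For instance, a minimal sketch with pandas apply (assuming, as in the question, that dfArticles['Text'] holds the raw article strings; the Topics column name is just an illustration):

# store the list of matching topics for each article in a new column
dfArticles['Topics'] = dfArticles['Text'].apply(lambda txt: textToTopic(txt, topics))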


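As an addendum (my own illustration, not part of the original answer): the question specifically asked whether this can be done with PhraseMatcher, and PhraseMatcher also handles multi-word keywords such as "approval process", which the text.split() check above would not catch. Here is a minimal, self-contained sketch, assuming spaCy 2.x as in the question: register each keyword under its own match key and keep a topic only if every one of its keywords produced at least one match.

from collections import defaultdict

import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.blank("en")  # a bare tokenizer is enough for PhraseMatcher with attr="LOWER"

# topics that require *all* of their keywords to be present
# (the topic name and keywords are taken from the question)
all_of_topics = {
    "Healthcare system context": ["fragmentation", "approval process", "drug"],
}

matcher = PhraseMatcher(nlp.vocab, attr="LOWER")
for topic, keywords in all_of_topics.items():
    for kw in keywords:
        # one match key per (topic, keyword) pair so we can tell which keywords fired
        matcher.add(f"{topic}||{kw}", None, nlp.make_doc(kw))  # spaCy 2.x signature

def topics_with_all_keywords(text):
    doc = nlp(text)
    hits = defaultdict(set)
    for match_id, start, end in matcher(doc):
        topic, kw = nlp.vocab.strings[match_id].split("||")
        hits[topic].add(kw)
    # keep only the topics for which every required keyword matched at least once
    return [t for t, kws in all_of_topics.items() if set(kws) <= hits[t]]

topics_with_all_keywords("The drug approval process suffers from fragmentation.")
# expected: ['Healthcare system context']

The same per-keyword key idea could also be folded back into prep_match_patterns if you want "any of" and "all of" topics handled by a single matcher.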