
Problem description

I am working with tweet data that has been POS-tagged with the NLTK POS tagger. My tokens look like:

[['wasabi', 'NN'], 
['juice', 'NN']]

I also have the American National Corpus (ANC) frequency data: a list of words, POS tags, and their frequencies. I want to look up each token's word and POS tag and, if found, append the frequency from the ANC to the token.

Excellent suggestions from SO have helped, but I found that several tokens had no frequency attached (probably because the NLTK tagger is quite inaccurate, e.g. tagging "silent" as a noun rather than an adjective), and when I tried to attach the frequencies I kept getting a KeyError because NLTK had tagged "Jill" as NN rather than NNP.

In the end I decided that, if the word is found at all, I would just take its first frequency. The problem now is that I get all of the frequencies listed for that word. I only want the first one, so the output would be:

[['wasabi', 'NN', '5'], 
['juice', 'NN', '369']]
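Taking "the first frequency" when the exact (word, POS) pair is missing can be sketched as below. The `freqs` dict here is a hypothetical stand-in for the ANC data, keyed as `word -> {pos: count}`:

```python
# Hypothetical word -> {pos: count} mapping standing in for the ANC data.
freqs = {'juice': {'NN': '369', 'VB': '2'}}

word, pos = 'juice', 'NNP'  # suppose NLTK mis-tagged the word
if pos in freqs.get(word, {}):
    first = freqs[word][pos]
else:
    # Fall back to the first frequency listed for the word.
    # Dicts preserve insertion order in Python 3.7+.
    first = next(iter(freqs[word].values()))

print(first)  # -> '369'
```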

The code:

import csv

# Build a nested dict: freqs[word][pos] = frequency
freqs = {}
with open('ANC-all-count.txt', 'r', errors='ignore') as f:
    for word, pos, freq in csv.reader(f, delimiter='\t'):
        if word not in freqs:
            freqs[word] = {}
        freqs[word][pos] = freq

for i, (word, pos) in enumerate(tokens):
    if word not in freqs:
        tokens[i].append(0)
        continue
    if pos not in freqs[word]:
        # Word exists under a different tag; this appends ALL of its
        # frequencies, but I only want the first one.
        tokens[i].append(freqs[word].values())
        continue
    tokens[i].append(freqs[word][pos])

Tags: python, nltk, corpus

Solution


TL;DR

>>> from itertools import chain
>>> from collections import Counter

>>> from nltk.corpus import brown
>>> from nltk import pos_tag, word_tokenize

# Access the first hundred tokenized sentences from the brown corpus
# POS tag these sentences.
>>> tagged_sents = [pos_tag(tokenized_sent) for tokenized_sent in brown.sents()[:100]]

# Sanity check that the tagged_sents are what we want.
>>> list(chain(*tagged_sents))[:10]
[('The', 'DT'), ('Fulton', 'NNP'), ('County', 'NNP'), ('Grand', 'NNP'), ('Jury', 'NNP'), ('said', 'VBD'), ('Friday', 'NNP'), ('an', 'DT'), ('investigation', 'NN'), ('of', 'IN')]

# Use a collections.Counter to get the counts.
>>> freq = Counter(chain(*tagged_sents))

# Top 20 most common words.
>>> dict(freq.most_common(20))
{('the', 'DT'): 128, ('.', '.'): 89, (',', ','): 88, ('of', 'IN'): 67, ('to', 'TO'): 55, ('a', 'DT'): 50, ('and', 'CC'): 40, ('in', 'IN'): 39, ('``', '``'): 35, ("''", "''"): 34, ('The', 'DT'): 28, ('said', 'VBD'): 24, ('that', 'IN'): 24, ('for', 'IN'): 22, ('be', 'VB'): 21, ('was', 'VBD'): 18, ('jury', 'NN'): 17, ('Fulton', 'NNP'): 14, ('election', 'NN'): 14, ('will', 'MD'): 14}

# All the words from most to least common.
>>> dict(freq.most_common())


# To print out the word, pos and counts to file.
>>> with open('freq-counts', 'w') as fout:
...     for (word, pos), count in freq.most_common(20):
...         print('\t'.join([word, pos, str(count)]), file=fout)
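The `Counter` built above can then be used to annotate tokens in the question's `[word, tag]` format. The token list and counts below are a made-up example; in practice `freq` would come from your own tagged data or the ANC counts:

```python
from collections import Counter

# A Counter keyed by (word, pos), as built in the answer above
# (values here are made up for illustration).
freq = Counter({('wasabi', 'NN'): 5, ('juice', 'NN'): 369})

tokens = [['wasabi', 'NN'], ['juice', 'NN'], ['zzz', 'NN']]
for tok in tokens:
    word, pos = tok
    # A Counter returns 0 for pairs it has never seen.
    tok.append(freq[(word, pos)])

print(tokens)
# [['wasabi', 'NN', 5], ['juice', 'NN', 369], ['zzz', 'NN', 0]]
```

Because `Counter` defaults missing keys to 0, no KeyError can occur even when the tagger produced a (word, pos) pair that is absent from the frequency data.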
