Custom data structure for counting words paragraph by paragraph in Python 3.7

Problem description

I need to accomplish the following task:

A sample dataset looks like this:

<P ID=1>
I have always wanted to try like, multiple? Different rasteraunts. Not quite sure which kind, maybe burgers!
</P>

<P ID=2>
Nice! I love burgers. Cheeseburgers, too. Have you ever gone to a diner type restauraunt? I have always wanted to try every diner in the country.
</P>

<P ID=3>
I am not related to the rest of these paragraphs at all.
</P>

A "paragraph" is defined by the presence of <P ID=x> </P> tags.

What I need is to create a data structure that looks something like this (I imagine it's a dict):

{'i': X Y, 'have': X Y, etc}

Or, possibly, a pandas DataFrame like this:

| Word | Content Frequency | Document Frequency |
|   i  |         4         |          3         |
| have |         3         |          2         |
| etc  |         etc       |          etc       |

Currently, I can find the collection frequency without issue using the following code.

import re
from nltk.tokenize import RegexpTokenizer

# Requisite
def get_input(filepath):
    # Use a context manager so the file handle is closed
    with open(filepath, 'r') as f:
        return f.read()

# 1
def normalize_text(file):
    # Raw strings avoid the invalid \d escape-sequence warning
    file = re.sub(r'<P ID=(\d+)>', '', file)
    file = re.sub(r'</P>', '', file)
    tokenizer = RegexpTokenizer(r'\w+')
    all_words = tokenizer.tokenize(file)
    return [word.lower() for word in all_words]

# Requisite for 3
# Answer for 4
def get_collection_frequency(a):
    g = {}
    for i in a:
        if i in g:
            g[i] += 1
        else:
            g[i] = 1
    return g

myfile = get_input('example.txt')
words = normalize_text(myfile)

## ANSWERS
collection_frequency = get_collection_frequency(words)
print("Collection frequency: ", collection_frequency)

This returns:

Collection frequency:  {'i': 4, 'have': 3, 'always': 2, 'wanted': 2, 
'to': 4, 'try': 2, 'like': 1, 'multiple': 1, 'different': 1,
'rasteraunts': 1, 'not': 2, 'quite': 1, 'sure': 1, 'which': 1,
'kind': 1, 'maybe': 1, 'burgers': 2, 'nice': 1, 'love': 1,
'cheeseburgers': 1, 'too': 1, 'you': 1, 'ever': 1, 'gone': 1, 'a': 1,
'diner': 2, 'type': 1, 'restauraunt': 1, 'every': 1, 'in': 1, 'the': 2,
'country': 1, 'am': 1, 'related': 1, 'rest': 1, 'of': 1, 'these': 1, 
'paragraphs': 1, 'at': 1, 'all': 1}
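(As an aside, the standard library's collections.Counter builds the same frequency mapping in one line; a minimal sketch with a hypothetical token list:)

```python
from collections import Counter

# Equivalent to get_collection_frequency: count occurrences of each token
words = ['i', 'have', 'always', 'wanted', 'to', 'have', 'i', 'i']
collection_frequency = Counter(words)
print(collection_frequency['i'])  # 3
```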

However, I am currently stripping the "headers" of the paragraphs inside the normalize_text function with the following lines:

file = re.sub(r'<P ID=(\d+)>', '', file)
file = re.sub(r'</P>', '', file)

because I don't want P, ID, 1, 2, 3 counted in my dictionary, since those are just paragraph headers.

So how can I tie a word's occurrences to its instances within a paragraph, so that it produces the result I want? I'm not even sure of the logic for trying to create such a data structure.

Tags: python, python-3.x, string, data-structures

Solution


Try this:

import re
from nltk.tokenize import word_tokenize, RegexpTokenizer

def normalize_text(file):
    file = re.sub(r'<P ID=(\d+)>', '', file)
    file = re.sub(r'</P>', '', file)
    tokenizer = RegexpTokenizer(r'\w+')
    all_words = tokenizer.tokenize(file)
    return [word.lower() for word in all_words]

def find_words(filepath):
    with open(filepath, 'r') as f:
        file = f.read()
    word_list = normalize_text(file)
    # Split the raw text on the opening tag; each non-empty chunk then
    # starts with the paragraph ID (this assumes single-digit IDs).
    data = file.replace('</P>', '').split('<P ID=')
    result = {}
    for word in word_list:
        result[word] = {}
        for p in data:
            if p:
                # p[0] is the paragraph ID; p[2:] skips the 'ID>' prefix.
                # Note: str.count matches substrings, so 'i' also matches
                # inside words like 'diner'.
                result[word][f'paragraph_{p[0]}'] = p[2:].count(word)
    print(result)
    return result

find_words('./test.txt')

If you want to group by paragraph first, then by word occurrences:

def find_words(filepath):
    with open(filepath, 'r') as f:
        file = f.read()
    word_list = normalize_text(file)
    data = file.replace('</P>', '').split('<P ID=')
    result = {}
    for p in data:
        if p:
            result[f'paragraph_{p[0]}'] = {}
            for word in word_list:
                result[f'paragraph_{p[0]}'][word] = p[2:].count(word)

    print(result)
    return result

It's still a bit hard to read, though. If pretty-printing the object matters to you, you could try the pprint package.

To find the number of paragraphs in which a word occurs:

def find_paragraph_occurrences(filepath):
    with open(filepath, 'r') as f:
        file = f.read()
    word_list = normalize_text(file)
    # Lowercase the raw text so membership tests match the normalized words
    data = file.replace('</P>', '').lower().split('<P ID=')
    result = {}
    for word in word_list:
        result[word] = 0
        for p in data:
            # Note: substring membership, so 'i' also matches inside 'diner'
            if word in p:
                result[word] += 1

    print(result)
    return result
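To get the Word / Collection Frequency / Document Frequency table from the question as a pandas DataFrame, one possible sketch (my own variation, not part of the answer above: it extracts paragraph bodies with re.findall and counts whole tokens rather than substrings) is:

```python
import re
from collections import Counter

import pandas as pd

def frequency_table(text):
    # Extract each paragraph body between <P ID=x> and </P>
    paragraphs = re.findall(r'<P ID=\d+>(.*?)</P>', text, flags=re.DOTALL)
    # Tokenize each paragraph into lowercase words
    tokenized = [re.findall(r'\w+', p.lower()) for p in paragraphs]
    # Collection frequency: total occurrences across all paragraphs
    collection = Counter(w for tokens in tokenized for w in tokens)
    # Document frequency: number of paragraphs whose token set contains the word
    document = Counter(w for tokens in tokenized for w in set(tokens))
    return pd.DataFrame({
        'Word': list(collection),
        'Collection Frequency': [collection[w] for w in collection],
        'Document Frequency': [document[w] for w in collection],
    })
```

For the sample data above this yields, e.g., 'i' with a collection frequency of 4 (it appears four times in total) and a document frequency of 3 (it appears in all three paragraphs), matching the table in the question.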
