python - How to remove meaningless or incomplete words from a corpus?
Problem description
I am doing some NLP analysis on a body of text. I have cleaned the text, taking steps to remove non-alphanumeric characters, whitespace, duplicate words and stopwords, and I have also performed stemming and lemmatization:
from nltk.tokenize import word_tokenize
import nltk.corpus
import re
from nltk.stem.snowball import SnowballStemmer
from nltk.stem.wordnet import WordNetLemmatizer
import pandas as pd
data_df = pd.read_csv('path/to/file/data.csv')
stopwords = nltk.corpus.stopwords.words('english')
stemmer = SnowballStemmer('english')
lemmatizer = WordNetLemmatizer()
# Function to remove duplicate words from a sentence, preserving order
def unique_list(l):
    ulist = []
    for x in l:
        if x not in ulist:
            ulist.append(x)
    return ulist
for i in range(len(data_df)):
    # Convert to lower case, split into individual words using word_tokenize
    sentence = word_tokenize(data_df['O_Q1A'][i].lower())  # data_df['O_Q1A'][i].split(' ')
    # Remove stopwords
    filtered_sentence = [w for w in sentence if w not in stopwords]
    # Remove duplicate words from the sentence
    filtered_sentence = unique_list(filtered_sentence)
    # Replace punctuation and other non-word characters with spaces,
    # but don't remove the whitespace just yet
    junk_free_sentence = []
    for word in filtered_sentence:
        junk_free_sentence.append(re.sub(r"[^\w\s]", " ", word))
        # junk_free_sentence.append(re.sub(r"^[a-z]+$", " ", word))  # Take only alphabets
    # Stem the junk-free sentence
    stemmed_sentence = []
    for w in junk_free_sentence:
        stemmed_sentence.append(stemmer.stem(w))
    # Lemmatize the stemmed sentence
    lemmatized_sentence = []
    for w in stemmed_sentence:
        lemmatized_sentence.append(lemmatizer.lemmatize(w))
    # Write the cell back with .at (chained indexing assignment is unreliable in pandas)
    data_df.at[i, 'O_Q1A'] = ' '.join(lemmatized_sentence)
But when I display the top 10 words (by some criterion), I still get some junk, such as:
ask
much
thank
work
le
know
via
sdh
n
sy
t
n t
recommend
never
Of these top 10 words, only 5 are sensible (ask, thank, work, know and recommend). What else do I need to do to keep only the meaningful words?
Solution
The default NLTK stopword list is a minimal one, and it certainly will not contain words such as "ask" or "much", because they are not meaningless in general. Those words are merely irrelevant to you; to someone else they may matter. For your problem, you can always apply a custom stopword filter after using NLTK's. A simple example:
from nltk.corpus import stopwords

def removeStopWords(text):
    # Select the English stopwords
    cachedStopWords = set(stopwords.words("english"))
    # Add custom words
    cachedStopWords.update(('ask', 'much', 'thank', 'etc.'))
    # Remove stop words
    new_str = ' '.join([word for word in text.split() if word not in cachedStopWords])
    return new_str
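For instance, applied to a made-up lowercase answer string (a hypothetical input, shown only to illustrate the combined filter):

text = "thank you very much i would ask them to fix it"
print(removeStopWords(text))
# -> "would fix": the default list removes "you", "very", "i", "them",
#    "to" and "it"; the custom entries remove "thank", "much" and "ask"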
Alternatively, you can edit the NLTK stopword list itself, which is essentially a text file stored in the NLTK data directory.
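If you want to locate that file, here is a minimal sketch (assuming the stopwords corpus has already been downloaded into your nltk_data directory):

import nltk

# Print the on-disk location of the English stopword file;
# the exact path depends on where nltk_data is installed
print(nltk.data.find('corpora/stopwords/english'))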