NLTK TypeError: unhashable type: 'list'

Problem description

I am currently lemmatizing the words in a CSV file. Beforehand, I lowercase every word, strip all punctuation, and split the column.

I only use two of the CSV columns. Output of analyze.info():

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 4637 entries, 0 to 4636
Data columns (total 2 columns):
 #   Column          Non-Null Count  Dtype
 0   Comments        4637 non-null   object
 1   Classification  4637 non-null   object

import string
import pandas as pd
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

analyze = pd.read_csv('C:/Users/(..)/Talk London/ALL_dataset.csv', delimiter=';', low_memory=False, encoding='cp1252', usecols=['Comments', 'Classification'])

lower_case = analyze['Comments'].str.lower()

cleaned_text = lower_case.str.translate(str.maketrans('', '', string.punctuation))

tokenized_words = cleaned_text.str.split()

final_words = []
for word in tokenized_words:
    if word not in stopwords.words('english'):
        final_words.append(word)

wnl = WordNetLemmatizer()
lemma_words = []
lem = ' '.join([wnl.lemmatize(word) for word in tokenized_words])
lemma_words.append(lem)

When I run the code, it returns this error:

Traceback (most recent call last):
  File "C:/Users/suiso/PycharmProjects/SA_working/SA_Main.py", line 52, in <module>
    lem = ' '.join([wnl.lemmatize(word) for word in tokenized_words])
  File "C:/Users/suiso/PycharmProjects/SA_working/SA_Main.py", line 52, in <listcomp>
    lem = ' '.join([wnl.lemmatize(word) for word in tokenized_words])
  File "C:\Users\suiso\PycharmProjects\SA_working\venv\lib\site-packages\nltk\stem\wordnet.py", line 38, in lemmatize
    lemmas = wordnet._morphy(word, pos)
  File "C:\Users\suiso\PycharmProjects\SA_working\venv\lib\site-packages\nltk\corpus\reader\wordnet.py", line 1897, in _morphy
    if form in exceptions:
TypeError: unhashable type: 'list'
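
The traceback points at a dict membership test inside wordnet._morphy: dict lookups hash their key, and lists cannot be hashed, so passing a whole token list where a single word string is expected fails. A minimal illustration (the dict here is a made-up stand-in, not WordNet's actual data):

```python
# Lists are unhashable, so they cannot be looked up in a dict --
# which is what wordnet._morphy does internally with `word`.
exceptions = {"ran": ["run"]}        # stand-in for WordNet's exception dict
token_list = ["the", "cats", "ran"]  # one row of tokenized_words

try:
    token_list in exceptions         # hashing a list raises TypeError
except TypeError as err:
    print(err)                       # unhashable type: 'list'
```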

Tags: python, nltk, lemmatization

Solution


tokenized_words is a column of lists, not a column of strings, because you used the split method. So you need a double loop, like this:

lem = ' '.join([wnl.lemmatize(word) for word_list in tokenized_words for word in word_list])
