python - How to classify text based on common words
Problem description
This question is about classifying text based on the words that descriptions have in common. I am not sure whether this is the right way to solve it: iterate over the descriptions, compare them by the percentage or frequency of shared words, and give matching descriptions the same ID. Please see the example below...
# importing pandas as pd
import pandas as pd
# creating a dataframe
df = pd.DataFrame({'ID': ['12', '54', '88', '9'],
                   'Description': ['Staphylococcus aureus is a Gram-positive, round-shaped bacterium that is a member of the Firmicutes',
                                   'Streptococcus pneumoniae, or pneumococcus, is a Gram-positive, alpha-hemolytic or beta-hemolytic',
                                   'Dicyemida, also known as Rhombozoa, is a phylum of tiny parasites',
                                   'A television set or television receiver, more commonly called a television, TV, TV set, or telly']})
ID Description
12 Staphylococcus aureus is a Gram-positive, round-shaped bacterium that is a member of the Firmicutes
54 Streptococcus pneumoniae, or pneumococcus, is a Gram-positive, round-shaped bacterium that is a member beta-hemolytic
88 Dicyemida, also known as Rhombozoa, is a phylum of tiny parasites
9 A television set or television receiver, more commonly called a television, TV, TV set, or telly
For example, descriptions 12 and 54 share more than 75% of their words, so they would get the same ID. The output would look like this:
ID Description
12 Staphylococcus aureus is a Gram-positive, round-shaped bacterium that
is a member of the Firmicutes
12 Streptococcus pneumoniae, or pneumococcus, is a Gram-positive, round-
shaped bacterium that is a member beta-hemolytic
88 Dicyemida, also known as Rhombozoa, is a phylum of tiny parasites
9 A television set or television receiver, more commonly called a
television, TV, TV set, or telly
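The rule described above can be sketched directly with word-set overlap. This is a minimal illustration, not the asker's code: the helpers `word_set`, `overlap`, and `regroup` are hypothetical names, the tokenization is deliberately naive (whitespace split plus punctuation stripping), and the pairwise loop is O(n²), which is fine for small frames:

```python
import pandas as pd

def word_set(text):
    # lowercase word set; a fuller version might also remove stopwords
    return set(w.strip('.,').lower() for w in text.split())

def overlap(a, b):
    # fraction of the smaller set's words that also appear in the other set
    small, large = sorted((word_set(a), word_set(b)), key=len)
    return len(small & large) / len(small)

def regroup(df, threshold=0.75):
    # give each row the ID of the first earlier row it overlaps with
    ids = list(df['ID'])
    descs = list(df['Description'])
    for i in range(len(descs)):
        for j in range(i):
            if overlap(descs[i], descs[j]) > threshold:
                ids[i] = ids[j]
                break
    return df.assign(ID=ids)
```

Note that whether two rows actually merge depends heavily on the tokenization and the threshold, which is part of why the answer below suggests sentence embeddings instead of raw word counts.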
Here is what I tried. I used two separate dataframes, Risk1 and Risk2, and I did not iterate over the rows the way I need to:
import codecs
import re
import copy
import collections
import pandas as pd
import numpy as np
import nltk
from nltk.stem import PorterStemmer
from nltk.tokenize import WordPunctTokenizer
import matplotlib.pyplot as plt
%matplotlib inline
nltk.download('stopwords')
from nltk.corpus import stopwords
# creating dataframe 1
df1 = pd.DataFrame({'ID': ['12'],
                    'Description': ['Staphylococcus aureus is a Gram-positive, round-shaped bacterium that is a member of the Firmicutes']})
# creating dataframe 2
df2 = pd.DataFrame({'ID': ['54'],
                    'Description': ['Streptococcus pneumoniae, or pneumococcus, is a Gram-positive, alpha-hemolytic or beta-hemolytic']})
# the raw description strings that are compared below
Risk1 = df1['Description'][0]
Risk2 = df2['Description'][0]
esw = stopwords.words('english')
esw.append('would')
word_pattern = re.compile(r"^\w+$")
def get_text_counter(text):
    # tokenize, lowercase and stem each token, then drop non-word tokens and stopwords
    tokens = WordPunctTokenizer().tokenize(text)
    tokens = [PorterStemmer().stem(token.lower()) for token in tokens]
    tokens = [token for token in tokens if word_pattern.match(token) and token not in esw]
    return collections.Counter(tokens), len(tokens)
def make_df(counter, size):
    abs_freq = np.array([el[1] for el in counter])
    rel_freq = abs_freq / size
    index = [el[0] for el in counter]
    df = pd.DataFrame(data=np.array([abs_freq, rel_freq]).T, index=index,
                      columns=['Absolute Frequency', 'Relative Frequency'])
    df.index.name = 'Most_Common_Words'
    return df
Risk1_counter, Risk1_size = get_text_counter(Risk1)
make_df(Risk1_counter.most_common(500), Risk1_size)
Risk2_counter, Risk2_size = get_text_counter(Risk2)
make_df(Risk2_counter.most_common(500), Risk2_size)
all_counter = Risk1_counter + Risk2_counter
all_df = make_df(all_counter.most_common(1000), Risk1_size + Risk2_size)  # combined counter, not Risk2 alone
most_common_words = all_df.index.values
df_data = []
for word in most_common_words:
    Risk1_c = Risk1_counter.get(word, 0) / Risk1_size
    Risk2_c = Risk2_counter.get(word, 0) / Risk2_size
    d = abs(Risk1_c - Risk2_c)
    df_data.append([Risk1_c, Risk2_c, d])
dist_df = pd.DataFrame(data=df_data, index=most_common_words,
                       columns=['Risk1 Relative Freq', 'Risk2 Relative Freq', 'Relative Freq Difference'])
dist_df.index.name = 'Most Common Words'
dist_df.sort_values('Relative Freq Difference', ascending=False, inplace=True)
dist_df.head(500)
Solution
A better approach would probably be to use a sentence-similarity algorithm from NLP. A good starting point is Google's Universal Sentence Encoder, as shown in a Python notebook. If the pre-trained Google USE does not work well, there are other sentence embeddings (e.g., InferSent from Facebook). Another option is to use word2vec and average the vectors obtained for each word in a sentence.
You would want to compute the cosine similarity between the sentence embeddings and re-label categories whose similarity is above a certain threshold, such as 0.8. You will have to try different similarity thresholds to get the best matching performance.
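The cosine-similarity-plus-threshold step can be sketched as follows. The embedding vectors here are toy placeholders; in practice they would come from USE, InferSent, or averaged word2vec vectors, and `merge_ids` is a hypothetical helper name:

```python
import numpy as np

def cosine_similarity(u, v):
    # cosine of the angle between two embedding vectors
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# toy 4-dimensional "sentence embeddings" standing in for real model output
embeddings = {
    '12': np.array([0.9, 0.1, 0.0, 0.2]),
    '54': np.array([0.8, 0.2, 0.1, 0.3]),
    '9':  np.array([0.0, 0.9, 0.1, 0.0]),
}

def merge_ids(embeddings, threshold=0.8):
    # re-label an ID with an earlier one when similarity exceeds the threshold
    ids = list(embeddings)
    relabel = {}
    for i, a in enumerate(ids):
        relabel[a] = a
        for b in ids[:i]:
            if cosine_similarity(embeddings[a], embeddings[b]) > threshold:
                relabel[a] = relabel[b]
                break
    return relabel
```

With these toy vectors, '54' is close to '12' and inherits its ID, while '9' points in a different direction and keeps its own; tuning `threshold` trades precision against recall, as the answer notes.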