Twitter sentiment analysis with Naive Bayes classification only returns the "neutral" label

Problem description

I followed this tutorial, https://towardsdatascience.com/creating-the-twitter-sentiment-analysis-program-in-python-with-naive-bayes-classification-672e5589a7ed, to build a Twitter sentiment analyzer that uses the Naive Bayes classifier from the nltk library to classify tweets as positive, negative, or neutral, but the only labels it ever produces are "neutral" or "irrelevant". I've included my code below; I'm not very experienced with machine learning, so I'd appreciate any help.

I have tried classifying different sets of tweets, and even when I specify a search keyword like "happy" it still returns "neutral".

import nltk

def buildvocab(processedtrainingdata):
    all_words = []

    for (words, sentiment) in processedtrainingdata:
        all_words.extend(words)

    wordlist = nltk.FreqDist(all_words)
    word_features = list(wordlist.keys())

    return word_features

def extract_features(tweet):
    tweet_words = set(tweet)
    features = {}
    for word in word_features:
        # One boolean feature per vocabulary word:
        # True if the word occurs in this tweet, False otherwise.
        features['contains(%s)' % word] = (word in tweet_words)
    return features

# Building the feature vector

word_features = buildvocab(processedtrainingdata)
training_features = nltk.classify.apply_features(extract_features, processedtrainingdata)
# apply_features does the actual feature extraction (lazily)

# Train the Naive Bayes classifier on the extracted features;
# Nbayes is then used below to classify the test set.
Nbayes = nltk.NaiveBayesClassifier.train(training_features)

Nbayes_result_labels = [Nbayes.classify(extract_features(tweet[0])) for tweet in processedtestset]

# Get the majority vote across the predicted labels.
if Nbayes_result_labels.count('positive') > Nbayes_result_labels.count('negative'):
    print('Positive sentiment')
    print(str(100 * Nbayes_result_labels.count('positive') / len(Nbayes_result_labels)))
elif Nbayes_result_labels.count('negative') > Nbayes_result_labels.count('positive'):
    print('Negative sentiment')
    print(str(100 * Nbayes_result_labels.count('negative') / len(Nbayes_result_labels)))
else:
    print('Neutral')


# The output is always something like this:
print(Nbayes_result_labels)
['neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'irrelevant', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral', 'neutral']

Tags: python, nltk

Solution


Your dataset is highly imbalanced. You mentioned this yourself in one of the comments: you have 550 positive and 550 negative labeled tweets, but 4000 neutral ones, which is why the classifier almost always favors the majority class. If possible, you should have the same number of utterances for every class. You also need to look at evaluation metrics beyond accuracy; you will most likely find that your recall is poor. An ideal model performs well on all evaluation metrics. To avoid overfitting, some people also add a fourth "others" class, but you can skip that for now.
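For example, per-class precision and recall can be inspected with scikit-learn's classification_report. This is a minimal sketch, assuming processedtestset holds (words, sentiment) pairs with gold labels, like the training data:

from sklearn.metrics import classification_report

# Gold labels versus the classifier's predictions on the test set.
gold = [sentiment for (words, sentiment) in processedtestset]
predicted = [Nbayes.classify(extract_features(words))
             for (words, sentiment) in processedtestset]

# Accuracy alone can look acceptable when the model predicts 'neutral'
# everywhere; per-class recall exposes the imbalance problem.
print(classification_report(gold, predicted))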

To improve the model's performance you can either add more data by oversampling the minority classes (adding similar utterances where possible), undersample the majority class, or combine both. You can read more about oversampling and undersampling online.
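As a rough illustration, random oversampling takes only a few lines of plain Python. This is a sketch, assuming processedtrainingdata is a list of (words, sentiment) pairs:

import random

# Group the training tweets by label.
by_label = {}
for words, sentiment in processedtrainingdata:
    by_label.setdefault(sentiment, []).append((words, sentiment))

# Duplicate random minority-class examples until every class
# is as large as the biggest one, then shuffle.
target = max(len(examples) for examples in by_label.values())
balanced = []
for examples in by_label.values():
    balanced.extend(examples)
    balanced.extend(random.choices(examples, k=target - len(examples)))
random.shuffle(balanced)

# Retrain on 'balanced' instead of processedtrainingdata.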

In this new dataset, try to have utterances for all classes in a 1:1:1 ratio if possible. Finally, try other algorithms as well, with hyperparameters tuned via grid search, random search, or TPOT.
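For instance, scikit-learn's GridSearchCV can tune a Multinomial Naive Bayes pipeline. This is only a sketch, under the assumption that the token lists are re-joined into plain strings (nltk's classifier has no built-in grid-search hook):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

# Re-join the tokenized tweets into strings for the vectorizer.
texts = [' '.join(words) for (words, sentiment) in processedtrainingdata]
labels = [sentiment for (words, sentiment) in processedtrainingdata]

pipeline = Pipeline([('vec', CountVectorizer()), ('nb', MultinomialNB())])
param_grid = {'vec__ngram_range': [(1, 1), (1, 2)],
              'nb__alpha': [0.1, 0.5, 1.0]}

# Macro-averaged F1 treats all classes equally, which matters
# when the label distribution is skewed.
grid = GridSearchCV(pipeline, param_grid, scoring='f1_macro', cv=5)
grid.fit(texts, labels)
print(grid.best_params_, grid.best_score_)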

Edit: In your case the "irrelevant" class does matter, so you now have 4 classes; try to get the dataset into a 1:1:1:1 ratio across the classes.

