Keras IMDB Sentiment Analysis

Problem description

I'm new to ML and, following a tutorial I found, I'm trying to do sentiment analysis on the IMDB dataset with Keras. The code below runs and reaches about 90% accuracy on the test data. However, when I try to predict two simple sentences (one positive, one negative), it gives about 0.50 for the positive one and 0.73 for the negative one, where the positive should be about 0.71 and the negative below 0.1, which is the result shown in the tutorial.

Any idea what the problem is?

from keras.datasets import imdb
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense, Flatten
import numpy as np

top_words = 5000  # only keep the 5000 most frequent words
# the first tuple is the training data with sentiment labels,
# the second is the testing data with sentiment labels
# https://keras.io/datasets/
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=top_words)

# reverse lookup
word_to_id = imdb.get_word_index()
'''word_to_id = {k: (v + INDEX_FROM) for k, v in word_to_id.items()}'''
word_to_id["<PAD>"] = 0
word_to_id["<START>"] = 1
word_to_id["<UNK>"] = 2

# Truncate and pad the review sequences to a fixed length of 500 words
max_review_length = 500
x_train = sequence.pad_sequences(x_train, maxlen=max_review_length)
x_test = sequence.pad_sequences(x_test, maxlen=max_review_length)

# Build the model

# the embedding maps each word into an n-dimensional
# space, so "hi" becomes e.g. (0.2, 0.1, 0.5) in a 3-dimensional space;
# it is the first layer of the network
embedding_vector_length = 32  # dimensions

# https://keras.io/getting-started/sequential-model-guide/
model = Sequential()  # sequential is a linear stack of layers

# produces a 500 x 32 output (review length x embedding dimensions)
model.add(
    Embedding(
        top_words,  # how many words to consider based on count
        embedding_vector_length,  # dimensions
        input_length=max_review_length))  # input vector
model.add(LSTM(100))  # the parameter is the number of LSTM memory units
# If you want, you can replace the LSTM with a Flatten layer:
# model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))  # output 0<y<1 for every x
model.compile(
    loss='binary_crossentropy',
    optimizer='adam',
    metrics=['accuracy'])
print(model.summary())


# Train the model
model.fit(
    x_train,
    y_train,
    validation_data=(x_test, y_test),
    epochs=1)  # the tutorial uses epochs=3, batch_size=64

# Evaluate the model
scores = model.evaluate(x_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1] * 100))

# predict sentiment from reviews
bad = "this movie was terrible and bad"
good = "i really liked the movie and had fun"
for review in [good, bad]:
    tmp = []
    for word in review.split(" "):
        tmp.append(word_to_id[word])
    tmp_padded = sequence.pad_sequences([tmp], maxlen=max_review_length)
    print("%s. Sentiment: %s" % (
        review, model.predict(np.array([tmp_padded[0]]))))
# expected output, as shown in the tutorial:
# i really liked the movie and had fun. Sentiment: 0.715537
# this movie was terrible and bad. Sentiment: 0.0353295

Tags: python, keras

Solution


"Any ideas?" There may not be a problem per se. I have a few thoughts, in order of likely impact:

  1. If your two sentences are not representative of IMDB reviews, the model can be expected to predict poorly and erratically on them.

  2. Your model trained for only one epoch, so it may not have had enough opportunity to learn a robust mapping from reviews to sentiment (assuming such a mapping is possible given the data); see the sketch after this list.

  3. Neural networks have a random element, so the model you trained may not make exactly the same predictions as the one in the tutorial.

  4. With "about 90% accuracy" one expects (depending on the class distribution) roughly one in ten predictions to be wrong. A small number of instances (two, in your case) is generally not a good way to evaluate model performance; the sketch below also checks the error rate over the whole test set.
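A minimal sketch of points 2-4 in code (the epoch and batch-size values are the ones your own comment attributes to the tutorial; the seed and the 0.5 decision threshold are arbitrary choices of mine). Note that seeding NumPy alone does not make TensorFlow training fully deterministic, because the backend keeps its own separate seed:

# arbitrary seed; for repeatability this belongs before the model is
# built, and the TF backend has its own seed (np is imported above)
np.random.seed(7)

# point 2: train with the tutorial's settings instead of a single epoch
model.fit(
    x_train,
    y_train,
    validation_data=(x_test, y_test),
    epochs=3,
    batch_size=64)

# point 4: inspect the error rate over all test reviews, not two sentences
preds = (model.predict(x_test) > 0.5).astype("int32").reshape(-1)
print("%d of %d test reviews misclassified" % ((preds != y_test).sum(), len(y_test)))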

When I run your code, I get about 80% training accuracy and about 85% test accuracy, with "i really liked the movie and had fun. Sentiment: [[0.75149596]]" and "this movie was terrible and bad. Sentiment: [[0.93544275]]".
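One more observation about how the two sentences are encoded, separate from the points above: word_to_id[word] raises a KeyError for any word missing from the index, and indices at or above top_words were never seen by the Embedding layer during training. A small helper (encode_review is a name I made up) that falls back to the <UNK> entry you already define keeps hand-written reviews inside the vocabulary the model actually knows:

def encode_review(text):
    # words missing from the index, or cut off by top_words, map to <UNK>
    unk = word_to_id["<UNK>"]
    ids = [word_to_id.get(word, unk) for word in text.lower().split(" ")]
    ids = [i if i < top_words else unk for i in ids]
    return sequence.pad_sequences([ids], maxlen=max_review_length)

print(model.predict(encode_review("i really liked the movie and had fun")))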

