Good accuracy on validation and test, but bad predictions (Keras LSTM)

Problem description

I'm having a problem with an LSTM model in Keras.

I'm trying to predict whether a domain name is normal or fake.

My dataset looks like this:

domain,fake
google, 0
bezqcuoqzcjloc,1
...

The split is 50% normal domains and 50% fake domains.
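For context, data.get_data() is not shown in the question; a minimal sketch of what it presumably returns, given the CSV sample above and the indexing used later (the file name 'dataset.csv' and the 'dga' label string are assumptions; the 'benign' string matches the label check in the code below):

import csv

def get_data(path='dataset.csv'):
    """Read the CSV and return a list of (label, domain) tuples,
    matching the x[0]=label, x[1]=domain indexing used below."""
    rows = []
    with open(path) as f:
        reader = csv.DictReader(f)  # expects a 'domain,fake' header row
        for row in reader:
            label = 'benign' if row['fake'].strip() == '0' else 'dga'
            rows.append((label, row['domain'].strip()))
    return rows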

Here is my LSTM model:

from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dropout, Dense, Activation
from keras import optimizers

def build_model(max_features, maxlen):
    """Build LSTM model"""
    model = Sequential()
    model.add(Embedding(max_features, 128, input_length=maxlen))
    model.add(LSTM(64))
    model.add(Dropout(0.5))
    model.add(Dense(1))
    model.add(Activation('sigmoid'))
    sgd = optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
    model.compile(loss='binary_crossentropy', optimizer=sgd, metrics=['acc'])

    return model

Then I preprocess my text data to turn it into numbers:

"""Run train/test on logistic regression model"""
indata = data.get_data()

# Extract data and labels
X = [x[1] for x in indata]
labels = [x[0] for x in indata]

# Generate a dictionary of valid characters
valid_chars = {x:idx+1 for idx, x in enumerate(set(''.join(X)))}

max_features = len(valid_chars) + 1
maxlen = 100

# Convert characters to int and pad
X = [[valid_chars[y] for y in x] for x in X]
X = sequence.pad_sequences(X, maxlen=maxlen)

# Convert labels to 0-1
y = [0 if x == 'benign' else 1 for x in labels]
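To make the encoding concrete, a toy example of what this does to one domain (the integer values are illustrative; the real ones depend on the order in which set() yields the characters, which matters later):

from keras.preprocessing import sequence

toy_chars = {'g': 1, 'o': 2, 'l': 3, 'e': 4}         # one possible ordering
encoded = [[toy_chars[c] for c in 'google']]          # [[1, 2, 2, 1, 3, 4]]
padded = sequence.pad_sequences(encoded, maxlen=10)   # zero-padded on the left
print(padded)  # [[0 0 0 0 1 2 2 1 3 4]]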

Then I split the data into training, test, and validation sets:

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

print("Build model...")
model = build_model(max_features, maxlen)

print("Train...")
X_train, X_holdout, y_train, y_holdout = train_test_split(X_train, y_train, test_size=0.2)

Then I train the model on the training data, validating on the holdout data, and evaluate it on the test data:

history = model.fit(X_train, y_train, epochs=max_epoch, validation_data=(X_holdout, y_holdout), shuffle=False)

scores = model.evaluate(X_test, y_test, batch_size=batch_size)
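Since the model is compiled with metrics=['acc'], evaluate returns the loss followed by the accuracy, so the scores reported below presumably come from something like:

print('loss =', scores[0])
print('accuracy =', scores[1])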

At the end of my training/testing I get the following results.

While training: (training accuracy/loss plot from the original post omitted)

And these scores when evaluating on the test dataset:

loss = 0.060554939906234596
accuracy = 0.978109902033532

But when I make predictions on a sample of the dataset like this:

import pickle
from keras.models import load_model
from keras.preprocessing import sequence
from sklearn.model_selection import train_test_split

LSTM_model = load_model('LSTMmodel_64_sgd.h5')
data = pickle.load(open('traindata.pkl', 'rb'))

#### LSTM ####

"""Predict on a 10% sample of the training data"""

# Extract data and labels
X = [x[1] for x in data]
labels = [x[0] for x in data]

X1, _, labels1, _ = train_test_split(X, labels, test_size=0.9)

# Generate a dictionary of valid characters
valid_chars = {x:idx+1 for idx, x in enumerate(set(''.join(X1)))}

max_features = len(valid_chars) + 1
maxlen = 100

# Convert characters to int and pad
X1 = [[valid_chars[y] for y in x] for x in X1]
X1 = sequence.pad_sequences(X1, maxlen=maxlen)

# Convert labels to 0-1
y = [0 if x == 'benign' else 1 for x in labels1]

y_pred = LSTM_model.predict(X1)

I get very poor performance:

accuracy = 0.5934741842730341
confusion matrix = [[25201 14929]
                    [17589 22271]]
F1-score = 0.5780171295094731
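The question doesn't show how these metrics were computed; presumably by thresholding the sigmoid outputs, along these lines (a sketch; the 0.5 cutoff is an assumption):

from sklearn.metrics import accuracy_score, confusion_matrix, f1_score

# Turn sigmoid probabilities into 0/1 labels (assumed 0.5 cutoff)
y_pred_label = (y_pred > 0.5).astype(int).ravel()

print('accuracy =', accuracy_score(y, y_pred_label))
print('confusion matrix =', confusion_matrix(y, y_pred_label))
print('F1-score =', f1_score(y, y_pred_label))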

Can someone explain to me why? I tried 64 instead of 128 for the LSTM units, tried adam and rmsprop as optimizers, and increased the batch_size, but performance is still very low.

Tags: python, machine-learning, keras, lstm

Solution


OK, so I found the answer.

It's this line:

valid_chars = {x:idx+1 for idx, x in enumerate(set(''.join(X1)))}

In Python 3, set seems to produce a different result every time a new python3 console is opened. Python 3 randomizes string hashing per process (see PYTHONHASHSEED), so the iteration order of set(''.join(X1)) changes between runs, and the character-to-integer mapping built at prediction time no longer matches the one the model was trained with.

So running the code in Python 2 solved my problem!
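Not part of the original answer, but an alternative that also works on Python 3 is to make the mapping deterministic and reuse the training-time mapping at prediction time (a sketch; the pickle file name is chosen for illustration):

import pickle

# Make the mapping deterministic: sorted() fixes the order that
# set() leaves hash-randomized under Python 3.
valid_chars = {x: idx + 1 for idx, x in enumerate(sorted(set(''.join(X))))}

# Save the training-time mapping...
with open('valid_chars.pkl', 'wb') as f:
    pickle.dump(valid_chars, f)

# ...and at prediction time, load it instead of rebuilding it:
with open('valid_chars.pkl', 'rb') as f:
    valid_chars = pickle.load(f)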

