100% training and evaluation accuracy, also tried gradient clipping

Problem description

I keep getting 100% training and validation accuracy. Here is what it looks like:

Epoch 17/20
27738/27738 [==============================] - 228s 8ms/step - loss: 4.1600e-05 - accuracy: 1.0000 - val_loss: 4.6773e-05 - val_accuracy: 1.0000
Epoch 18/20
27738/27738 [==============================] - 229s 8ms/step - loss: 3.6246e-05 - accuracy: 1.0000 - val_loss: 4.0900e-05 - val_accuracy: 1.0000
Epoch 19/20
27738/27738 [==============================] - 221s 8ms/step - loss: 3.1839e-05 - accuracy: 1.0000 - val_loss: 3.6044e-05 - val_accuracy: 1.0000
Epoch 20/20
27738/27738 [==============================] - 7616s 275ms/step - loss: 2.8176e-05 - accuracy: 1.0000 - val_loss: 3.1987e-05 - val_accuracy: 1.0000

Here is the entire code for the process:

# Imports assumed from the tutorial setup (tf.keras); they were not shown in the original post
import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, LSTM, Dense, Embedding
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Pad the integer-encoded source and target sequences
encoder_input_sequences = pad_sequences(input_integer_seq, maxlen=max_input_len)
decoder_input_sequences = pad_sequences(output_input_integer_seq, maxlen=max_out_len, padding='post')

# Load pre-trained Hindi word vectors and build the embedding matrix
read_dictionary = np.load('/Users/Downloads/wordvectors-master/hinvec.npy', allow_pickle=True).item()
num_words = min(MAX_NUM_WORDS, len(word2idx_inputs) + 1)
embedding_matrix = np.zeros((num_words, EMBEDDING_SIZE))
for word, index in word2idx_inputs.items():
    embedding_vector = read_dictionary.get(word)
    if embedding_vector is not None:
        embedding_matrix[index] = embedding_vector
embedding_layer = Embedding(num_words, EMBEDDING_SIZE, weights=[embedding_matrix], input_length=max_input_len)

# Target tensor: one one-hot vector per decoder timestep
decoder_targets_one_hot = np.zeros((
        len(input_sentences),
        max_out_len,
        num_words_output
    ),
    dtype='float32'
)
decoder_targets_one_hot.shape

# Encoder: embed the source sequence and keep the final LSTM states
encoder_inputs_placeholder = Input(shape=(max_input_len,))
x = embedding_layer(encoder_inputs_placeholder)
encoder = LSTM(LSTM_NODES, return_state=True)
encoder_outputs, h, c = encoder(x)
encoder_states = [h, c]

# Decoder: embed the target sequence and initialize the LSTM with the encoder states
decoder_inputs_placeholder = Input(shape=(max_out_len,))
decoder_embedding = Embedding(num_words_output, LSTM_NODES)
decoder_inputs_x = decoder_embedding(decoder_inputs_placeholder)
decoder_lstm = LSTM(LSTM_NODES, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs_x, initial_state=encoder_states)

########################### from here I add the activation function and apply some parameters:
decoder_dense = Dense(num_words_output, activation='sigmoid')
decoder_outputs = decoder_dense(decoder_outputs)

opt = keras.optimizers.Adam(learning_rate=0.0001, clipvalue=1.0)
model = Model([encoder_inputs_placeholder, decoder_inputs_placeholder], decoder_outputs)
model.compile(
    optimizer=opt,
    loss='binary_crossentropy',
    metrics=['accuracy']
)
history = model.fit(
    [encoder_input_sequences, decoder_input_sequences],
    decoder_targets_one_hot,
    batch_size=BATCH_SIZE,
    epochs=EPOCHS,
    validation_split=0.1,
)
plt.plot(history.history['accuracy'])
plt.show()

Edit: I changed the following code:

decoder_targets_one_hot.shape
############################ Added this
decoder_output_sequences = pad_sequences(output_integer_seq, maxlen=max_out_len, padding='post')
for i, d in enumerate(decoder_output_sequences):
    for t, word in enumerate(d):
        # mark the true word at timestep t as the 1 in the one-hot vector
        decoder_targets_one_hot[i, t, word] = 1
#############################
encoder_inputs_placeholder = Input(shape=(max_input_len,))

I think this is the correct approach, but I still get 100% accuracy. Is this the right way to implement it? By the way, if you want context for the output above, here is the link to the tutorial I followed; the only difference is that my dataset is eng-hin instead of eng-fra: https://stackabuse.com/python-for-nlp-neural-machine-translation-with-seq2seq-in-keras/
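To check whether the filling loop actually produced one-hot vectors, each (sentence, timestep) slice of the target tensor should sum to exactly 1. A quick sanity check, as a sketch reusing the names above:

per_step_sums = decoder_targets_one_hot.sum(axis=-1)  # shape: (num_sentences, max_out_len)
print(np.unique(per_step_sums))                       # should print [1.] after the filling loop runs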

Tags: python, machine-learning, keras, deep-learning, neural-network

Solution


You initialize decoder_targets_one_hot as zero vectors but never set the index of the true class to 1, so the target vectors are not actually one-hot. The model is effectively learning the same target for every input: the zero vector.
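To see why this reports as 100% accuracy: with binary_crossentropy, Keras resolves the 'accuracy' metric to binary accuracy, which counts an output as correct when round(output) equals the target. Once the loss drives the sigmoid outputs below 0.5, every all-zero target is matched exactly. A minimal sketch of the arithmetic (the shapes here are made up for illustration):

import numpy as np

targets = np.zeros((4, 5))            # all-zero "one-hot" targets: 4 timesteps, 5-word vocabulary
predictions = np.full((4, 5), 0.01)   # sigmoid outputs after the loss pushes them toward 0

# Binary accuracy as Keras computes it: round the prediction, compare to the target
print(np.mean(np.round(predictions) == targets))  # 1.0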

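Note that even with the one-hot filling from the edit, sigmoid plus binary_crossentropy will still report near-100% accuracy: each timestep has a single 1 and num_words_output - 1 zeros, so an all-zero prediction already scores (V-1)/V across the vocabulary axis. Because exactly one word is correct per timestep, the linked tutorial uses softmax with categorical_crossentropy instead; a sketch of that change, reusing the question's variable names:

# Replaces the sigmoid Dense layer and the compile call from the question
decoder_dense = Dense(num_words_output, activation='softmax')  # one probability distribution per timestep
decoder_outputs = decoder_dense(decoder_outputs)

model = Model([encoder_inputs_placeholder, decoder_inputs_placeholder], decoder_outputs)
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=0.0001, clipvalue=1.0),
    loss='categorical_crossentropy',  # matches the one-hot targets
    metrics=['accuracy'],             # now resolves to categorical accuracy
)

With this setup, a timestep only counts as correct when the argmax of the predicted distribution equals the true word, so the metric can no longer be inflated by predicting zeros everywhere.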
