Memory error when training a keras model on 4600000 rows of data

Problem description

I am working on an LSTM-based encoder-decoder spelling-correction model, for which I have 4600000 rows of training data. The training file consists of two columns - the correct and the incorrect sentences. The model works fine when the data is as small as 200000 rows, but when I increase it, training never gets past 2 epochs. It sometimes fails with the error terminate called after throwing an instance of std::bad_alloc, and sometimes training simply stops without any error or warning. I tried using the following, but it did not help. Maybe I used it incorrectly.

keras.clear_session() 

I also tried reducing the values of latent_dim and batch_size to 128, 64, 32, 16, 8, 4, 1, but none of them worked with data this large. Also, since the data is so large, I replaced

steps_per_epoch = train_samples//batch_size

with

steps_per_epoch = 2000

I cleared the cache to free memory, but training still never completes. Can anyone suggest a way to train my model?

def generate_batch(X = X_train, y = y_train, batch_size = 128):
    # Generate a batch of data 
    while True:
        for j in range(0, len(X), batch_size):
            encoder_input_data = np.zeros((batch_size, max_length_src),dtype='float32')
            decoder_input_data = np.zeros((batch_size, max_length_tar),dtype='float32')
            decoder_target_data = np.zeros((batch_size, max_length_tar, num_decoder_tokens),dtype='float32')
            for i, (input_text, target_text) in enumerate(zip(X[j:j+batch_size], y[j:j+batch_size])):
                for t, word in enumerate(input_text.split()):
                    encoder_input_data[i, t] = input_token_index[word] # encoder input seq
                for t, word in enumerate(target_text.split()):
                    if t<len(target_text.split())-1:
                        decoder_input_data[i, t] = target_token_index[word] # decoder input seq
                    if t>0:
                        # decoder target sequence (one hot encoded)
                        # does not include the START_ token
                        # Offset by one timestep
                        decoder_target_data[i, t - 1, target_token_index[word]] = 1.
            yield([encoder_input_data, decoder_input_data], decoder_target_data)

latent_dim = 50

# Encoder
encoder_inputs = Input(shape=(None,))
enc_emb =  Embedding(num_encoder_tokens+1, latent_dim, mask_zero = True)(encoder_inputs)
encoder_lstm = LSTM(latent_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder_lstm(enc_emb)
# We discard `encoder_outputs` and only keep the states.
encoder_states = [state_h, state_c]

# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = Input(shape=(None,))
dec_emb_layer = Embedding(num_decoder_tokens, latent_dim, mask_zero = True)
dec_emb = dec_emb_layer(decoder_inputs)
# We set up our decoder to return full output sequences,
# and to return internal states as well. We don't use the
# return states in the training model, but we will use them in inference.
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(dec_emb,
                                     initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)

# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)

model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])

train_samples = len(X_train)
val_samples = len(X_test)
batch_size = 128
epochs = 50

from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
from keras.callbacks import ModelCheckpoint

keras_callbacks   = [
      EarlyStopping(monitor ="val_loss", mode ="min", patience = 5, restore_best_weights = True),
      ModelCheckpoint('checkpoints.hdf5', monitor='val_loss', verbose=1, save_best_only=True, mode='min', save_freq=1)
]

model.fit_generator(generator = generate_batch(X_train, y_train, batch_size = batch_size),
                    #steps_per_epoch = train_samples//batch_size,
                    steps_per_epoch = 2000,
                    epochs=epochs,
                    verbose=1,
                    validation_data = generate_batch(X_test, y_test, batch_size = batch_size),
                    validation_steps = val_samples//batch_size,
                    callbacks=keras_callbacks)

model.save_weights('weights.h5')

Tags: python, tensorflow, keras

Solution


The memory error happens because you use "large" global-scope variables such as input_token_index inside your generate_batch function. These variables get copied in memory multiple times while your data is being generated.
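For illustration only (this restructuring is my assumption, not part of the original answer): one way to make that dependency explicit is to pass the lookup tables and size constants into the generator as arguments instead of reading them from the global scope. The batching logic below is the question's own, only the signature changes.

import numpy as np

def generate_batch(X, y, input_token_index, target_token_index,
                   max_length_src, max_length_tar, num_decoder_tokens,
                   batch_size=128):
    # Same batching logic as in the question, but every lookup table and
    # size constant is received as an argument rather than read from globals.
    while True:
        for j in range(0, len(X), batch_size):
            enc_in = np.zeros((batch_size, max_length_src), dtype='float32')
            dec_in = np.zeros((batch_size, max_length_tar), dtype='float32')
            dec_tgt = np.zeros((batch_size, max_length_tar, num_decoder_tokens),
                               dtype='float32')
            for i, (src, tgt) in enumerate(zip(X[j:j + batch_size],
                                               y[j:j + batch_size])):
                for t, word in enumerate(src.split()):
                    enc_in[i, t] = input_token_index[word]
                for t, word in enumerate(tgt.split()):
                    if t < len(tgt.split()) - 1:
                        dec_in[i, t] = target_token_index[word]
                    if t > 0:
                        # target is offset by one timestep and one-hot encoded
                        dec_tgt[i, t - 1, target_token_index[word]] = 1.
            yield [enc_in, dec_in], dec_tgt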

However, rather than fixing this specific issue, I would recommend that you use native TF functionality for text tokenization, vectorization, and batching instead of writing your own implementation.

You can find more information about text tokenization and vectorization in the official TensorFlow tutorials. Specifically, you can take advantage of TensorFlow's TextVectorization layer, which combines tokenization and padding. Alternatively, you can use the more established Tokenizer together with the generic pad_sequences.
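A minimal sketch of the TextVectorization route, assuming the training sentences are available as a list of Python strings; the corpus, max_tokens and output_sequence_length values below are placeholders, not values from the question:

import tensorflow as tf

# Hypothetical toy corpus; in the question this would be the X_train / y_train sentences.
sentences = ["thiss is ann incorect sentence", "this is a correct sentence"]

# TextVectorization tokenizes on whitespace and pads/truncates every sequence
# to output_sequence_length in one step (TF >= 2.6; older 2.x versions expose it
# as tf.keras.layers.experimental.preprocessing.TextVectorization).
vectorizer = tf.keras.layers.TextVectorization(
    max_tokens=20000,            # vocabulary size cap - placeholder value
    output_mode='int',
    output_sequence_length=20)   # plays the role of max_length_src - placeholder value

vectorizer.adapt(sentences)      # build the vocabulary from the corpus
encoded = vectorizer(sentences)  # int tensor of shape (2, 20), zero-padded
print(encoded.numpy())

The Tokenizer route is similar, just in two steps: fit_on_texts builds the vocabulary, texts_to_sequences converts sentences to integer lists, and pad_sequences pads them to a fixed length.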

Creating batches is an extremely basic and generic task for SGD-style training, so don't reinvent the wheel: just use model.fit, which batches your data automatically, for free.
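A minimal sketch of that point, under the assumption that the data has already been vectorized into the placeholder arrays encoder_in, decoder_in and decoder_target, and reusing model, epochs and keras_callbacks from the question (tf.data.AUTOTUNE requires TF 2.4 or newer):

import tensorflow as tf

# encoder_in and decoder_in are the integer-encoded source and target sequences,
# decoder_target the one-hot targets - placeholder names for already-vectorized data.
train_ds = (tf.data.Dataset
            .from_tensor_slices(((encoder_in, decoder_in), decoder_target))
            .shuffle(10000)
            .batch(128)
            .prefetch(tf.data.AUTOTUNE))

# model, epochs and keras_callbacks are the ones defined in the question;
# model.fit iterates the dataset and handles batching itself, so no hand-written
# generator and no steps_per_epoch bookkeeping is needed.
model.fit(train_ds,
          epochs=epochs,
          validation_data=val_ds,   # built the same way from X_test / y_test
          callbacks=keras_callbacks)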

