RuntimeError: Input tensor at index 3 has invalid shape [2, 2, 16, 128, 64] but expected [2, 4, 16, 128, 64]

Problem Description

A runtime error occurs when fine-tuning the pretrained GPT-2 medium model with the Hugging Face library on a SageMaker ml.p3.8xlarge instance.

finetuning_gpt2_script.py contains the following.

Libraries:

from transformers import Trainer, TrainingArguments
from transformers import EarlyStoppingCallback
from transformers import GPT2LMHeadModel, GPT2Tokenizer
from transformers import TextDataset, DataCollatorForLanguageModeling

Pretrained model:

gpt2_model = GPT2LMHeadModel.from_pretrained("gpt2-medium")
gpt2_tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
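The head count and head dimension of gpt2-medium are worth noting here, because they show up later in the error's tensor shape. A quick sanity check (a minimal sketch, using the model loaded above):

config = gpt2_model.config
print(config.n_layer)                  # 24 transformer layers
print(config.n_head)                   # 16 attention heads
print(config.n_embd // config.n_head)  # 64, the per-head dimension (1024 / 16)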

Training and test dataset construction:

train_dataset = TextDataset(
    tokenizer=gpt2_tokenizer,
    file_path=train_path,
    block_size=128)

test_dataset = TextDataset(
    tokenizer=gpt2_tokenizer,
    file_path=test_path,
    block_size=128)

data_collator = DataCollatorForLanguageModeling(
    tokenizer=gpt2_tokenizer, mlm=False,
)

train_path & test_path are unstructured text data files containing 1.45 million and 200,000 lines of data respectively.

Training arguments:

training_args = TrainingArguments(
        output_dir="./gpt2-finetuned-models", #The output directory
        overwrite_output_dir=True, #overwrite the content of the output directory
        num_train_epochs=1, # number of training epochs
        per_device_train_batch_size=8, # batch size for training #32
        per_device_eval_batch_size=8,  # batch size for evaluation #64
        save_steps=100, # a checkpoint is saved every 100 steps
        warmup_steps=500,# number of warmup steps for learning rate scheduler
        prediction_loss_only=True,
        metric_for_best_model = "eval_loss",
        load_best_model_at_end = True,
        evaluation_strategy="epoch",
    )

training_args holds the training arguments constructed for fine-tuning the model.

Trainer:

early_stop_callback = EarlyStoppingCallback(early_stopping_patience=3)

trainer = Trainer(
        model=gpt2_model,
        args=training_args,
        data_collator=data_collator,
        train_dataset=train_dataset,
        eval_dataset=test_dataset,
        callbacks=[early_stop_callback],
    )

Training:

trainer.train()
trainer.save_model(model_path)

Here, training was run for just 1 epoch on the 4 GPUs of the ml.p3.8xlarge instance.

Training was performed with torch.distributed, as shown below:

python -m torch.distributed.launch finetuning_gpt2_script.py
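For a 4-GPU instance, torch.distributed.launch normally needs to be told how many processes to spawn, one per GPU; otherwise only a single process is started. A sketch of the usual invocation (the --nproc_per_node flag is standard; the script name is the one above):

python -m torch.distributed.launch --nproc_per_node=4 finetuning_gpt2_script.py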

When training reached the end of the epoch, the following error was observed:

RuntimeError: Input tensor at index 3 has invalid shape [2, 2, 16, 128, 64] but expected [2, 4, 16, 128, 64]
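The shape in the message matches GPT-2's per-layer past key/value tensors, [key/value, batch, n_head, seq_len, head_dim]. The mismatch sits in the batch dimension (2 received where 4 was expected), which points at a short final batch on one process. A minimal sketch that reproduces the same class of failure (an assumed mechanism for illustration, not the library's exact call site):

import torch

expected = torch.zeros(2, 4, 16, 128, 64)  # full batch of 4, per the error message
ragged = torch.zeros(2, 2, 16, 128, 64)    # short final batch of 2

# Gathering results across devices requires equal shapes, so the ragged
# final batch fails with the same kind of RuntimeError:
torch.stack([expected, ragged])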

  1. Is the RuntimeError caused by the way train_dataset and test_dataset are constructed with TextDataset?
  2. Am I using torch.distributed incorrectly?

Tags: python, pytorch, amazon-sagemaker, huggingface-transformers, gpt-2

Solution


It is probably related to the batch-size mismatch suggested here (a batch of size 4 is expected, but one of size 2 is received). The fix offered there is to set the drop_last argument when building the DataLoader, like this:

from torch.utils.data import DataLoader

# drop_last=True discards the incomplete final batch, keeping batch shapes uniform
train_text = DataLoader(train_dataset, batch_size=args.batch_size, shuffle=True, drop_last=True)
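Since the question trains through Trainer, which constructs its own DataLoader internally, the equivalent switch (assuming a transformers version that exposes it) is dataloader_drop_last on TrainingArguments:

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./gpt2-finetuned-models",
    per_device_train_batch_size=8,
    dataloader_drop_last=True,  # drop the short final batch on each process
)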
