pytorch - RuntimeError: CUDA error: device-side assert triggered - BART model
Question
I am trying to run the BART language model for a text-generation task.
My code runs fine with another encoder-decoder model (T5), but with BART I get this error:
File "train_bart.py", line 89, in train
outputs = model(input_ids = ids, attention_mask = mask, decoder_input_ids=y_ids, labels=lm_labels)
File ".../venv/tf_23/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File ".../venv/tf_23/lib/python3.6/site-packages/transformers/models/bart/modeling_bart.py", line 1308, in forward
return_dict=return_dict,
File ".../venv/tf_23/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File ".../venv/tf_23/lib/python3.6/site-packages/transformers/models/bart/modeling_bart.py", line 1196, in forward
return_dict=return_dict,
File ".../venv/tf_23/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File ".../venv/tf_23/lib/python3.6/site-packages/transformers/models/bart/modeling_bart.py", line 985, in forward
attention_mask, input_shape, inputs_embeds, past_key_values_length
File ".../venv/tf_23/lib/python3.6/site-packages/transformers/models/bart/modeling_bart.py", line 866, in _prepare_decoder_attention_mask
).to(self.device)
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
This is where the error occurs:
for _, data in tqdm(enumerate(loader, 0), total=len(loader), desc='Processing batches..'):
y = data['target_ids'].to(device, dtype = torch.long)
y_ids = y[:, :-1].contiguous()
lm_labels = y[:, 1:].clone().detach()
lm_labels[y[:, 1:] == tokenizer.pad_token_id] = -100
ids = data['source_ids'].to(device, dtype = torch.long)
mask = data['source_mask'].to(device, dtype = torch.long)
outputs = model(input_ids = ids, attention_mask = mask, decoder_input_ids=y_ids, labels=lm_labels)
loss = outputs[0]
loader is the tokenized and preprocessed data.
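A device-side assert at this point is very often an out-of-range token id reaching an embedding lookup. A minimal pre-flight check over a batch can rule that out before the forward pass (a sketch: the helper name check_ids is hypothetical, and the vocab size 50265 is assumed from facebook/bart-base, so substitute len(tokenizer) for your checkpoint):

```python
import torch

def check_ids(batch: torch.Tensor, vocab_size: int) -> bool:
    """Return True if every token id is a valid row index of the
    embedding matrix, i.e. inside [0, vocab_size)."""
    return bool((batch >= 0).all() and (batch < vocab_size).all())

vocab_size = 50265  # assumed; use len(tokenizer) for your checkpoint

# Toy tensors: id 50265 would trip the CUDA assert on a 50265-row embedding.
good = torch.tensor([[0, 42, 50264]])
bad = torch.tensor([[0, 42, 50265]])
print(check_ids(good, vocab_size))  # True
print(check_ids(bad, vocab_size))   # False
```

Running such a check on ids and y_ids inside the loop pinpoints a bad batch without any CUDA involvement.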
Solution
I suggest changing the batch size to 1 and temporarily running the code on the CPU to get a more descriptive traceback.
That will tell you where the error is.
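The hint in the error message itself can also be applied from code. A minimal sketch (the environment variable must be set before the first CUDA call, e.g. at the very top of train_bart.py):

```python
import os

# Make CUDA kernel launches synchronous so the traceback points at the op
# that actually triggered the assert, not a later call such as .to(self.device).
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

# For the CPU debug run, switch the device instead; CPU ops raise ordinary
# Python exceptions (e.g. IndexError) that name the offending index:
# device = torch.device("cpu")
```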
Sarthak