python - Pytorch: RuntimeError: reduce failed to synchronize: cudaErrorAssert: device-side assert triggered
Problem description
Since this is the configuration published in the paper, I assume I am doing something wrong.
Every time I try to train, this error appears on a different image.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [6,0,0] Assertion `t >= 0 && t < n_classes` failed.
Traceback (most recent call last):
File "C:\Program Files\JetBrains\PyCharm Community Edition 2019.1.1\helpers\pydev\pydevd.py", line 1741, in <module>
main()
File "C:\Program Files\JetBrains\PyCharm Community Edition 2019.1.1\helpers\pydev\pydevd.py", line 1735, in main
globals = debugger.run(setup['file'], None, None, is_module)
File "C:\Program Files\JetBrains\PyCharm Community Edition 2019.1.1\helpers\pydev\pydevd.py", line 1135, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "C:\Program Files\JetBrains\PyCharm Community Edition 2019.1.1\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:/Noam/Code/vision_course/hopenet/deep-head-pose/code/original_code_augmented/train_hopenet_with_validation_holdout.py", line 187, in <module>
loss_reg_yaw = reg_criterion(yaw_predicted, label_yaw_cont)
File "C:\Noam\Code\vision_course\hopenet\venv\lib\site-packages\torch\nn\modules\module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "C:\Noam\Code\vision_course\hopenet\venv\lib\site-packages\torch\nn\modules\loss.py", line 431, in forward
return F.mse_loss(input, target, reduction=self.reduction)
File "C:\Noam\Code\vision_course\hopenet\venv\lib\site-packages\torch\nn\functional.py", line 2204, in mse_loss
ret = torch._C._nn.mse_loss(expanded_input, expanded_target, _Reduction.get_enum(reduction))
RuntimeError: reduce failed to synchronize: cudaErrorAssert: device-side assert triggered
Any ideas?
Solution
This kind of error generally occurs when using NLLLoss or CrossEntropyLoss, and when your dataset has negative labels (or labels greater than or equal to the number of classes). That is also exactly the assertion you are seeing: t >= 0 && t < n_classes failed.
This does not happen with MSELoss, but the OP mentioned that a CrossEntropyLoss is used elsewhere, and that is where the error actually originates (the program crashes asynchronously on some other line). The solution is to clean up the dataset and make sure t >= 0 && t < n_classes is satisfied (where t stands for the label).
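As a quick sanity check before training, you can validate the labels against the number of classes and see the same out-of-bounds failure reproduced on CPU (shapes and class count here are illustrative, not taken from the OP's code):

```python
import torch
import torch.nn as nn

n_classes = 66
logits = torch.randn(4, n_classes)
labels = torch.tensor([3, 65, 66, -1])  # 66 and -1 violate t >= 0 && t < n_classes

# Check every label before computing the loss
valid = (labels >= 0) & (labels < n_classes)
print("valid labels:", valid.tolist())      # [True, True, False, False]

criterion = nn.CrossEntropyLoss()
try:
    loss = criterion(logits, labels)
except (IndexError, RuntimeError):
    # On CPU this fails immediately; on CUDA the same condition triggers
    # the device-side assert, which surfaces later on an unrelated line.
    print("out-of-range labels:", labels[~valid].tolist())
```

Running the check on CPU is often the easiest way to debug, since CUDA reports the assert asynchronously and the traceback points at the wrong line.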
In addition, make sure your network output matches what the loss expects: NLLLoss expects log-probabilities (apply a log_softmax activation), and BCELoss expects values in the range 0 to 1 (apply a sigmoid activation). Note that this is not required for CrossEntropyLoss or BCEWithLogitsLoss, because they implement the activation function inside the loss function. (Thanks to @PouyaB for pointing this out.)
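A minimal sketch of that equivalence (with an assumed toy shape of 4 samples and 10 classes): CrossEntropyLoss on raw logits gives the same value as log_softmax followed by NLLLoss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(4, 10)            # raw network outputs (no activation)
labels = torch.randint(0, 10, (4,))

# NLLLoss needs log-probabilities, so apply log_softmax first
loss_nll = nn.NLLLoss()(F.log_softmax(logits, dim=1), labels)

# CrossEntropyLoss applies log_softmax internally, so it takes raw logits
loss_ce = nn.CrossEntropyLoss()(logits, labels)

print(torch.allclose(loss_nll, loss_ce))  # True
```

The same relationship holds between BCELoss with a sigmoid and BCEWithLogitsLoss on raw logits.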