OOM - Unable to run StyleGAN2 even after reducing the batch size

Problem description

I am trying to run StyleGAN2 on a cluster equipped with eight GPUs (NVIDIA GeForce RTX 2080). Currently I am using the following configuration in training_loop.py:

minibatch_size_dict     = {4: 512, 8: 256, 16: 128, 32: 64, 64: 32},       # Resolution-specific overrides.
minibatch_gpu_base      = 8,        # Number of samples processed at a time by one GPU.
minibatch_gpu_dict      = {},       # Resolution-specific overrides.
G_lrate_base            = 0.001,    # Learning rate for the generator.
G_lrate_dict            = {},       # Resolution-specific overrides.
D_lrate_base            = 0.001,    # Learning rate for the discriminator.
D_lrate_dict            = {},       # Resolution-specific overrides.
lrate_rampup_kimg       = 0,        # Duration of learning rate ramp-up.
tick_kimg_base          = 4,        # Default interval of progress snapshots.
tick_kimg_dict          = {4:10, 8:10, 16:10, 32:10, 64:10, 128:8, 256:6, 512:4}): # Resolution-specific overrides.
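As I understand it, the per-GPU batch in this file is controlled by minibatch_gpu_base (with per-resolution overrides in minibatch_gpu_dict), so lowering those values is what I mean by "reducing the batch size". For example (example values only, not what I actually ran):

minibatch_gpu_base      = 4,                  # Fewer samples per GPU per step -> lower peak VRAM per GPU.
minibatch_gpu_dict      = {256: 4, 512: 2},   # Example per-resolution overrides; tune to your GPUs.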

I am training on a set of 512x512-pixel images. After a few iterations I get the error message reported below, and the script appears to stop running (watching watch nvidia-smi, the GPUs' temperature and fan activity both drop). I have already reduced the batch size, but the problem seems to lie elsewhere. Do you have any tips on how to fix this?

I was able to run StyleGAN with the same dataset. In the paper they say that StyleGAN2 should be less heavy, so I am a bit surprised.

Here is the error message I get:

2019-12-16 18:22:54.909009: E tensorflow/stream_executor/cuda/cuda_driver.cc:828] failed to allocate 334.11M (350338048 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY: out of memory
2019-12-16 18:22:54.909087: W tensorflow/core/common_runtime/bfc_allocator.cc:314] Allocator (GPU_0_bfc) ran out of memory trying to allocate 129.00MiB (rounded to 135268352).  Current allocation summary follows.
2019-12-16 18:22:54.918750: W tensorflow/core/common_runtime/bfc_allocator.cc:319] **_***************************_*****x****x******xx***_******************************_***************
2019-12-16 18:22:54.918808: W tensorflow/core/framework/op_kernel.cc:1502] OP_REQUIRES failed at conv_grad_input_ops.cc:903 : Resource exhausted: OOM when allocating tensor with shape[4,128,257,257] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc

Tags: tensorflow, gpu

Solution


StyleGAN2's config-f model is actually larger than StyleGAN1's. Try a configuration that uses less VRAM, such as config-e. You can change the model configuration by passing a flag on the python command line, as shown here: https://github.com/NVlabs/stylegan2/blob/master/run_training.py#L144
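For example, adapting the launch command from the repo's README (the data directory and dataset name below are placeholders for your own prepared TFRecords), selecting config-e instead of the default config-f might look like:

# Placeholder paths and dataset name; config-e uses smaller networks than config-f, so it needs less VRAM.
python run_training.py --num-gpus=8 --data-dir=~/datasets --dataset=my_dataset --config=config-e --mirror-augment=true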

In my case, I was able to train StyleGAN2 with config-e on 2x RTX 2080 Ti.

