How can TensorFlow use 100% of GPU memory?

Problem description

I have a 32 GB graphics card, and at the start of my script I see:

2019-07-11 01:26:19.985367: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 95.16G (102174818304 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2019-07-11 01:26:19.988090: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 85.64G (91957338112 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2019-07-11 01:26:19.990806: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 77.08G (82761605120 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2019-07-11 01:26:19.993527: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 69.37G (74485440512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2019-07-11 01:26:19.996219: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 62.43G (67036893184 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2019-07-11 01:26:19.998911: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 56.19G (60333203456 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2019-07-11 01:26:20.001601: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 50.57G (54299881472 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2019-07-11 01:26:20.004296: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 45.51G (48869892096 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2019-07-11 01:26:20.006981: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 40.96G (43982901248 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2019-07-11 01:26:20.009660: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 36.87G (39584608256 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2019-07-11 01:26:20.012341: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 33.18G (35626147840 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY

After that, TF ends up using 96% of my memory. Later, when it runs out of memory, it tries to allocate 65 GB:

tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 65.30G (70111285248 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY

My question is: what about the remaining ~1300 MB (0.04 * 32480)? I wouldn't mind using them before hitting OOM.

How can I make TF use 99.9% of the memory instead of 96%?
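For reference, TF exposes a GPU option that controls how much of the card a single process tries to reserve. A minimal sketch, assuming TF 1.x and the standard tf.ConfigProto / tf.GPUOptions API (I have not confirmed this actually recovers the last few percent):

import tensorflow as tf

# Ask TF to try to reserve ~99.9% of the card instead of the default fraction.
# Whether the last chunk is actually usable depends on what the CUDA context
# itself reserves on the device.
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.999)
config = tf.ConfigProto(gpu_options=gpu_options)

with tf.Session(config=config) as sess:
    # ... build and run the graph as usual ...
    pass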

Update: nvidia-smi output

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.40.04    Driver Version: 418.40.04    CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla V100-SXM2...  On   | 00000000:00:16.0 Off |                    0 |
| N/A   66C    P0   293W / 300W |  31274MiB / 32480MiB |    100%      Default |

I am asking about the 1206 MiB (32480 MiB - 31274 MiB) that are left unused. Maybe they are there for a reason, or maybe they get used just before the OOM.

Tags: python, tensorflow

Solution


Monitoring a GPU is not as simple as monitoring a CPU. Many processes run in parallel, and any one of them can become a bottleneck for your GPU.

Various issues can come into play, for example:
1. Data read/write speed
2. The CPU or the disk causing a bottleneck

That said, I think using 96% is quite normal. Not to mention that nvidia-smi only shows a single snapshot in time.

You can install gpustat and use it to monitor the GPU in real time (you should hit 100% during the OOM):

pip install gpustat

gpustat -i

What can you do?
1. Use a data_iterator to process the data in parallel more quickly (see the sketch after this list).
2. Increase the batch size. (I don't think this applies in your case, since you are hitting OOM.)
3. You could overclock the GPU (not recommended).
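For point 1, here is a minimal sketch of a parallel input pipeline built with tf.data (the file names and parse_fn below are placeholders, not taken from your setup):

import tensorflow as tf

filenames = ["train-000.tfrecord", "train-001.tfrecord"]  # placeholder file list

def parse_fn(record):
    # placeholder: decode one serialized example here
    return record

dataset = (tf.data.TFRecordDataset(filenames)
           .map(parse_fn, num_parallel_calls=tf.data.experimental.AUTOTUNE)  # parallel CPU preprocessing
           .batch(64)
           .prefetch(tf.data.experimental.AUTOTUNE))  # overlap input work with GPU compute

The idea is to keep the CPU preparing the next batch while the GPU is busy, so the input side never becomes the bottleneck mentioned above.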

Here is a good article on hardware acceleration.

