TensorFlow MirroredStrategy() appears to run on only one GPU?

Problem description

I finally got a machine with 2 GPUs and tested https://pytorch.org/tutorials/beginner/blitz/data_parallel_tutorial.html and https://github.com/tensorflow/models/tree/master/tutorials/image/cifar10_estimator and confirmed that both GPUs were being used (power draw on both rose to 160-180 W, memory on both was nearly maxed out, and GPU-Util on both climbed to roughly 45% at the same time).

So I decided to try TensorFlow's MirroredStrategy() on an existing neural network that I had previously trained on a single GPU.

What I don't understand is that power draw goes up on both GPUs and memory is nearly maxed out on both, yet only one GPU shows about 98% utilization while the other sits at just 3%. Did I mess something up in my code, or is this working as designed?

import tensorflow

# Mirror the model across all visible GPUs: variables are replicated
# on each device and gradients are all-reduced across replicas each step.
strategy = tensorflow.distribute.MirroredStrategy()
with strategy.scope():
    model = tensorflow.keras.models.Sequential([
        tensorflow.keras.layers.Dense(units=427, kernel_initializer='uniform', activation='relu', input_dim=853),
        tensorflow.keras.layers.Dense(units=427, kernel_initializer='uniform', activation='relu'),
        tensorflow.keras.layers.Dense(units=1, kernel_initializer='uniform', activation='sigmoid')])
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    # batch_size here is the global batch; MirroredStrategy splits it across replicas.
    model.fit(X_train, y_train, batch_size=1000, epochs=100)
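For comparison, here is a minimal sketch of how the same model is often driven through MirroredStrategy with a tf.data pipeline, scaling the global batch by strategy.num_replicas_in_sync so each replica still sees about 1000 samples per step. The random X_train / y_train arrays and the batch scaling are illustrative assumptions, not part of the original post:

import numpy as np
import tensorflow as tf

# Hypothetical stand-ins for the original X_train / y_train arrays.
X_train = np.random.rand(10000, 853).astype("float32")
y_train = np.random.randint(0, 2, size=(10000, 1)).astype("float32")

strategy = tf.distribute.MirroredStrategy()

# Keras treats the batch size as the *global* batch, which MirroredStrategy
# splits across replicas, so scale it up to keep ~1000 samples per GPU per step.
global_batch = 1000 * strategy.num_replicas_in_sync
dataset = (tf.data.Dataset.from_tensor_slices((X_train, y_train))
           .shuffle(10000)
           .batch(global_batch)
           .prefetch(tf.data.experimental.AUTOTUNE))

with strategy.scope():
    model = tf.keras.models.Sequential([
        tf.keras.layers.Dense(427, kernel_initializer='uniform',
                              activation='relu', input_dim=853),
        tf.keras.layers.Dense(427, kernel_initializer='uniform', activation='relu'),
        tf.keras.layers.Dense(1, kernel_initializer='uniform', activation='sigmoid')])
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

model.fit(dataset, epochs=100)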

nvidia-smi:

Fri Nov 22 09:26:21 2019       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 435.21       Driver Version: 435.21       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  TITAN Xp COLLEC...  Off  | 00000000:0A:00.0 Off |                  N/A |
| 24%   47C    P2    81W / 250W |  11733MiB / 12196MiB |     98%      Default |
+-------------------------------+----------------------+----------------------+
|   1  TITAN Xp COLLEC...  Off  | 00000000:41:00.0  On |                  N/A |
| 28%   51C    P2    64W / 250W |  11736MiB / 12187MiB |      3%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      2506      C   python3                                    11721MiB |
|    1      1312      G   /usr/lib/xorg/Xorg                            18MiB |
|    1      1353      G   /usr/bin/gnome-shell                          51MiB |
|    1      1620      G   /usr/lib/xorg/Xorg                           108MiB |
|    1      1751      G   /usr/bin/gnome-shell                          72MiB |
|    1      2506      C   python3                                    11473MiB |
+-----------------------------------------------------------------------------+
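As a quick sanity check to run alongside nvidia-smi (not part of the original post), the following sketch confirms from inside Python that TensorFlow sees both devices and that MirroredStrategy actually created two replicas:

import tensorflow as tf

# List the physical GPUs TensorFlow can see; two entries are expected here.
print("Physical GPUs:", tf.config.experimental.list_physical_devices('GPU'))

strategy = tf.distribute.MirroredStrategy()
# With both GPUs visible this should report 2 replicas; a value of 1 would
# mean the strategy only picked up a single device.
print("Replicas in sync:", strategy.num_replicas_in_sync)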

Tags: python-3.x, tensorflow, gpu

Solution

