Training with a global batch size on a TPU (TensorFlow)

Problem Description

I recently started a neural network project on Google Colab, and I discovered that I can use a TPU. I have been researching how to use it, and I found TensorFlow's TPUStrategy (I am using tensorflow 2.2.0) and was able to successfully define a model and run a training step on the TPU.

However, I am not entirely sure what this means. It may be that I have not read Google's TPU guide carefully enough, but what I mean is that I do not know exactly what happens during a training step.

The guide has you define a GLOBAL_BATCH_SIZE, and the batch size taken by each TPU core is given by per_replica_batch_size = GLOBAL_BATCH_SIZE / strategy.num_replicas_in_sync, which means the batch size on each TPU core is smaller than the one you started with. On Colab, strategy.num_replicas_in_sync = 8, which means that if I start with a GLOBAL_BATCH_SIZE of 64, the per_replica_batch_size is 8.
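
For reference, a minimal sketch of that setup and calculation on a Colab TPU runtime (illustrative only, using the TF 2.2-era experimental TPUStrategy API) might look like this:

```python
import tensorflow as tf

# Connect to the Colab TPU and build the strategy (TF 2.2-era API;
# in later releases TPUStrategy moved out of the experimental namespace).
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)

GLOBAL_BATCH_SIZE = 64
# On a Colab TPU, strategy.num_replicas_in_sync is 8, so each core sees batches of 8.
per_replica_batch_size = GLOBAL_BATCH_SIZE // strategy.num_replicas_in_sync
print(strategy.num_replicas_in_sync, per_replica_batch_size)
```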

Now, what I do not understand is this: when I compute one training step, does the optimizer compute 8 different steps on batches of size per_replica_batch_size, updating the weights of the model 8 different times, or does it merely parallelize the computation of the training step in this way and, in the end, compute only 1 optimizer step on a batch of size GLOBAL_BATCH_SIZE? Thanks.

Tags: tensorflow, neural-network, tensorflow2.0, tpu, batchsize

Solution


This is a very good question, and it is closely related to how a Distribution Strategy works.

After reading this TensorFlow documentation, the TPU Strategy documentation, and this explanation of synchronous and asynchronous training, I can say that

> the optimizer computes 8 different steps on batches of size
> per_replica_batch_size, updating the weights of the model 8 different
> times

The following explanation from the TensorFlow documentation should clarify this:

> So, how should the loss be calculated when using a
> tf.distribute.Strategy?
> 
> For an example, let's say you have 4 GPU's and a batch size of 64. One
> batch of input is distributed across the replicas (4 GPUs), each
> replica getting an input of size 16.
> 
> The model on each replica does a forward pass with its respective
> input and calculates the loss. Now, instead of dividing the loss by
> the number of examples in its respective input (BATCH_SIZE_PER_REPLICA
> = 16), the loss should be divided by the GLOBAL_BATCH_SIZE (64).
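
As a hedged illustration of that loss-scaling rule (a sketch only; the loss function and GLOBAL_BATCH_SIZE value below are assumptions, not taken from the question's model), a custom training step typically keeps the per-example losses and divides by the global batch size, for example via tf.nn.compute_average_loss:

```python
import tensorflow as tf

GLOBAL_BATCH_SIZE = 64  # assumed value, matching the 4-GPU example above

# Reduction.NONE keeps one loss value per example instead of averaging.
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True,
    reduction=tf.keras.losses.Reduction.NONE)

def compute_loss(labels, logits):
    per_example_loss = loss_object(labels, logits)
    # Divide by GLOBAL_BATCH_SIZE (64), not BATCH_SIZE_PER_REPLICA (16),
    # so that summing the scaled losses across replicas yields the global mean.
    return tf.nn.compute_average_loss(
        per_example_loss, global_batch_size=GLOBAL_BATCH_SIZE)
```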

The explanations from the other links are reproduced below (in case they stop working in the future):

The TPU Strategy documentation states:

> In terms of distributed training architecture, `TPUStrategy` is the
> same as `MirroredStrategy` - it implements `synchronous` distributed
> training. `TPUs` provide their own implementation of efficient
> `all-reduce` and other collective operations across multiple `TPU`
> cores, which are used in `TPUStrategy`.

The explanation of synchronous and asynchronous training reads:

> `Synchronous vs asynchronous training`: These are two common ways of
> `distributing training` with `data parallelism`. In `sync training`, all
> `workers` train over different slices of input data in `sync`, and
> **`aggregating gradients`** at each step. In `async` training, all workers are
> independently training over the input data and updating variables
> `asynchronously`. Typically sync training is supported via all-reduce
> and `async` through parameter server architecture.
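
As an illustrative sketch of what one synchronous step looks like in a custom training loop (assumed names; this mirrors the pattern in TensorFlow's distributed-training guide, not the asker's actual code, and builds on the `strategy` and `compute_loss` sketches above), each replica runs the step function on its own slice and the per-replica results are then reduced:

```python
import tensorflow as tf

# Assumed to exist from the earlier sketches: `strategy` and `compute_loss`.
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
    optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

def train_step(inputs):
    features, labels = inputs
    with tf.GradientTape() as tape:
        logits = model(features, training=True)
        loss = compute_loss(labels, logits)
    grads = tape.gradient(loss, model.trainable_variables)
    # In synchronous training the gradients are aggregated (all-reduced) across
    # replicas, so every replica ends up applying the same update.
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

@tf.function
def distributed_train_step(inputs):
    # Runs `train_step` on every replica, each on its own slice of the global batch.
    per_replica_losses = strategy.run(train_step, args=(inputs,))
    # Sum the already-scaled per-replica losses to recover the global-batch loss.
    return strategy.reduce(tf.distribute.ReduceOp.SUM,
                           per_replica_losses, axis=None)
```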

You can also learn more about the All_Reduce concept in detail from this MPI tutorial.

The screenshot below shows how All_Reduce works:

[Image: diagram of the All_Reduce operation]
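
To make the all-reduce step concrete, here is a deliberately tiny, framework-free sketch (illustrative numbers only, not real TPU code): each replica computes a gradient on its own slice of the global batch, the gradients are summed element-wise across replicas, and every replica then sees the same aggregated result.

```python
# Toy illustration of an all-reduce with a "sum" reduction (not real TPU code).
# Each inner list is the gradient one replica computed on its slice of the batch.
per_replica_gradients = [
    [1, -2],   # replica 0
    [3,  0],   # replica 1
    [-1, 4],   # replica 2
    [1,  2],   # replica 3
]

# Element-wise sum across replicas: the result is identical on every replica.
aggregated = [sum(grad[i] for grad in per_replica_gradients)
              for i in range(len(per_replica_gradients[0]))]

print(aggregated)  # [4, 4]
```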

