Gradient Accumulation with Custom model.fit in TF.Keras?

Problem Description

Please add at least a minimal comment with your thoughts so that I can improve my question. Thank you. :-)


I am trying to train a tf.keras model with gradient accumulation (GA). But I don't want to use it in a custom training loop (like this); instead I want to customize the .fit() method by overriding train_step. Is that possible, and how can it be done? The reason is that if we want the benefit of keras built-in functionality such as fit and callbacks, we don't want to write a custom training loop, but at the same time, if we want to override train_step for some reason (like GA or something else), we can customize the fit method and still take advantage of those built-in functions.

Also, I know the advantages of using GA, but what are the main drawbacks of using it? Why is it an optional feature of the frameworks rather than the default?

# overriding train_step
# my attempt
# it's not appropriately implemented
# and needs fixing
class CustomTrainStep(tf.keras.Model):
    def __init__(self, n_gradients, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.n_gradients = n_gradients
        self.gradient_accumulation = [tf.zeros_like(this_var) for this_var in \
                                           self.trainable_variables]

    def train_step(self, data):
        x, y = data
        batch_size = tf.cast(tf.shape(x)[0], tf.float32)  
        # Gradient Tape
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)
            loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)
        # Calculate batch gradients
        gradients = tape.gradient(loss, self.trainable_variables)
        # Accumulate batch gradients
        accum_gradient = [(acum_grad+grad) for acum_grad, grad in \
               zip(self.gradient_accumulation, gradients)]
        accum_gradient = [this_grad/batch_size for this_grad in accum_gradient]
        # apply accumulated gradients
        self.optimizer.apply_gradients(zip(accum_gradient, self.trainable_variables))
        # TODO: reset self.gradient_accumulation 
        # update metrics
        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}

Please run and check it with the following toy setup.

# Model 
size = 32
input = tf.keras.Input(shape=(size,size,3))
efnet = tf.keras.applications.DenseNet121(weights=None,
                                          include_top = False, 
                                          input_tensor = input)
base_maps = tf.keras.layers.GlobalAveragePooling2D()(efnet.output) 
base_maps = tf.keras.layers.Dense(units=10, activation='softmax', 
                                             name='primary')(base_maps) 
custom_model = CustomTrainStep(n_gradients=10, inputs=[input], outputs=[base_maps])

# bind all
custom_model.compile(
    loss = tf.keras.losses.CategoricalCrossentropy(),
    metrics = ['accuracy'],
    optimizer = tf.keras.optimizers.Adam() )
# data 
(x_train, y_train), (_, _) = tf.keras.datasets.mnist.load_data()
x_train = tf.expand_dims(x_train, -1)
x_train = tf.repeat(x_train, 3, axis=-1)
x_train = tf.divide(x_train, 255)
x_train = tf.image.resize(x_train, [size,size]) # if we want to resize 
y_train = tf.one_hot(y_train , depth=10) 

# customized fit 
custom_model.fit(x_train, y_train, batch_size=64, epochs=3, verbose = 1)

Update

I have found that some others also tried to achieve this and ended up with the same issue. One of them has a workaround, here, but it is too messy, and I think there should be a better approach.

Tags: python, tensorflow, machine-learning, keras, deep-learning

Solution


Yes, it is possible to customize the .fit() method by overriding train_step without a custom training loop. The following simple example shows how to train a simple MNIST classifier with gradient accumulation:

import tensorflow as tf

# custom model: override train_step to accumulate gradients
# over n_gradients steps and apply them in a single optimizer update
class CustomTrainStep(tf.keras.Model):
    def __init__(self, n_gradients, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.n_gradients = tf.constant(n_gradients, dtype=tf.int32)
        self.n_acum_step = tf.Variable(0, dtype=tf.int32, trainable=False)
        self.gradient_accumulation = [tf.Variable(tf.zeros_like(v, dtype=tf.float32), trainable=False) for v in self.trainable_variables]

    def train_step(self, data):
        self.n_acum_step.assign_add(1)

        x, y = data
        # Gradient Tape
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)
            loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)
        # Calculate batch gradients
        gradients = tape.gradient(loss, self.trainable_variables)
        # Accumulate batch gradients
        for i in range(len(self.gradient_accumulation)):
            self.gradient_accumulation[i].assign_add(gradients[i])
 
        # If n_acum_step reaches n_gradients, apply the accumulated gradients to update the variables; otherwise do nothing
        tf.cond(tf.equal(self.n_acum_step, self.n_gradients), self.apply_accu_gradients, lambda: None)

        # update metrics
        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}

    def apply_accu_gradients(self):
        # apply accumulated gradients
        self.optimizer.apply_gradients(zip(self.gradient_accumulation, self.trainable_variables))

        # reset
        self.n_acum_step.assign(0)
        for i in range(len(self.gradient_accumulation)):
            self.gradient_accumulation[i].assign(tf.zeros_like(self.trainable_variables[i], dtype=tf.float32))

# Model 
input = tf.keras.Input(shape=(28, 28))
base_maps = tf.keras.layers.Flatten(input_shape=(28, 28))(input)
base_maps = tf.keras.layers.Dense(128, activation='relu')(base_maps)
base_maps = tf.keras.layers.Dense(units=10, activation='softmax', name='primary')(base_maps) 
custom_model = CustomTrainStep(n_gradients=10, inputs=[input], outputs=[base_maps])

# bind all
custom_model.compile(
    loss = tf.keras.losses.CategoricalCrossentropy(),
    metrics = ['accuracy'],
    optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3) )

# data 
(x_train, y_train), (_, _) = tf.keras.datasets.mnist.load_data()
x_train = tf.divide(x_train, 255)
y_train = tf.one_hot(y_train , depth=10) 

# customized fit 
custom_model.fit(x_train, y_train, batch_size=6, epochs=3, verbose = 1)

Output:

Epoch 1/3
10000/10000 [==============================] - 13s 1ms/step - loss: 0.5053 - accuracy: 0.8584
Epoch 2/3
10000/10000 [==============================] - 13s 1ms/step - loss: 0.1389 - accuracy: 0.9600
Epoch 3/3
10000/10000 [==============================] - 13s 1ms/step - loss: 0.0898 - accuracy: 0.9748
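
A side note of my own (not part of the original answer): compiled_loss is already mean-reduced within each mini-batch, so gradient_accumulation above holds the sum of n_gradients mini-batch gradients. If you prefer to apply their average, so that the update behaves more like a single large-batch step, you could scale the accumulated gradients before applying them, e.g. with this drop-in replacement for apply_accu_gradients:

    def apply_accu_gradients(self):
        # scale the summed gradients so they approximate the mean over n_gradients mini-batches
        scaled_gradients = [grad / tf.cast(self.n_gradients, tf.float32)
                            for grad in self.gradient_accumulation]
        self.optimizer.apply_gradients(zip(scaled_gradients, self.trainable_variables))

        # reset the step counter and the accumulators
        self.n_acum_step.assign(0)
        for i in range(len(self.gradient_accumulation)):
            self.gradient_accumulation[i].assign(tf.zeros_like(self.trainable_variables[i], dtype=tf.float32))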

Pros:

Gradient accumulation is a mechanism to split the batch of samples used for training a neural network into several mini-batches that are run sequentially.


Because GA computes the loss and gradients after each mini-batch but, instead of updating the model parameters right away, waits and accumulates the gradients over consecutive batches, it can overcome memory limitations, i.e. it trains the model with less memory while behaving as if a large batch size were used.

Example: if you run gradient accumulation with 5 steps and a batch size of 4 images, it serves almost the same purpose as running with a batch size of 20 images.
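
As a quick sanity check (a minimal sketch of my own, using a toy linear model with a mean-reduced MSE loss, not code from the original post), averaging the gradients of 5 mini-batches of 4 samples reproduces the gradient of a single batch of 20 samples:

import tensorflow as tf

w = tf.Variable(tf.ones([3, 1]))
x = tf.random.normal([20, 3])
y = tf.random.normal([20, 1])

def mse_grad(xb, yb):
    # gradient of a mean-reduced MSE loss w.r.t. w for one (mini-)batch
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(tf.matmul(xb, w) - yb))
    return tape.gradient(loss, w)

full_batch_grad = mse_grad(x, y)  # one batch of 20 samples
accumulated = tf.add_n([mse_grad(x[i:i + 4], y[i:i + 4]) for i in range(0, 20, 4)]) / 5.0
print(tf.reduce_max(tf.abs(full_batch_grad - accumulated)).numpy())  # ~0, up to float error

One known caveat is that layers such as BatchNormalization still compute their statistics per mini-batch, so GA is not exactly identical to true large-batch training.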

We can also parallelize the training when using GA, i.e. aggregate the gradients from multiple machines.

Things to consider:

This technique works so well that it is widely used. There are only a few things to consider before using it, and I don't think they should be called disadvantages; after all, all GA does is turn 4 + 4 into 2 + 2 + 2 + 2.

If your machine already has enough memory for a batch size that is large enough, there is no need to use GA: an overly large batch size is known to hurt generalization, and GA will certainly run slower than simply using the same batch size that your machine's memory can already handle.

Reference:

What is Gradient Accumulation in Deep Learning?

