Keras model on GPU: using Pandas in a custom loss function

Problem description

I'm trying to define the following (toy) custom loss function in Keras:

def flexed_distance_loss(y_true, y_pred):
    y_true_df = pd.DataFrame(y_true, columns=my_columns)

    # do something with y_true_df

    return categorical_crossentropy(y_true_df.values, y_pred)

I'm training on GPUs with `tf.distribute.MirroredStrategy()`.

Compiling the model raises no errors, but running `model.fit()` produces the following error:

>>> y_true_df = pd.DataFrame(y_true, columns=my_columns)

OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed:
AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.

It seems that Pandas is trying to iterate over the tensor `y_true`, which is forbidden in graph mode (the preferred mode when training on a GPU).
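The failure can be reproduced in isolation: constructing a DataFrame iterates over its input, and iterating over a symbolic tensor inside a `tf.function` (graph mode) is disallowed. A minimal sketch, with placeholder column names:

```python
import pandas as pd
import tensorflow as tf

@tf.function
def build_df(y_true):
    # pd.DataFrame() iterates over y_true, which graph mode forbids
    return pd.DataFrame(y_true, columns=["a", "b", "c"])

raised = False
try:
    build_df(tf.zeros((2, 3)))
except Exception as err:  # OperatorNotAllowedInGraphError in the traceback above
    raised = True
    print(type(err).__name__)

print(raised)
```

Outside `tf.function` (eager mode) the same construction works, which is why the error only appears at `model.fit()` time, once the loss is traced into a graph.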

Does this mean it's impossible to use Pandas in a loss function when training on a GPU?

Is there any viable alternative, other than doing everything directly in TensorFlow itself? I'm doing some heavy re-indexing and merging, and I can't imagine the pain of doing it all in native TensorFlow code.
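One escape hatch worth knowing about is `tf.py_function`, which runs arbitrary Python (including Pandas) eagerly inside the graph; the trade-off is that it executes on the CPU, single-threaded, and can interact poorly with `MirroredStrategy`. A minimal sketch, with a placeholder `transform` standing in for the real Pandas logic and hypothetical column names:

```python
import numpy as np
import pandas as pd
import tensorflow as tf

my_columns = ["a", "b", "c"]  # placeholder column names

def transform(y_true_np):
    # Stand-in for the real Pandas re-indexing/merging logic
    df = pd.DataFrame(y_true_np, columns=my_columns)
    return df.to_numpy(dtype=np.float32)

def flexed_distance_loss(y_true, y_pred):
    # tf.py_function executes `transform` eagerly, so Pandas is allowed inside
    y_true_flexed = tf.py_function(
        func=lambda t: transform(t.numpy()),
        inp=[y_true],
        Tout=tf.float32,
    )
    # py_function loses shape information; restore it for downstream ops
    y_true_flexed.set_shape(y_pred.shape)
    return tf.keras.losses.categorical_crossentropy(y_true_flexed, y_pred)

y_true = tf.constant([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
y_pred = tf.constant([[0.8, 0.1, 0.1], [0.2, 0.7, 0.1]])
loss = flexed_distance_loss(y_true, y_pred)
print(loss.numpy())
```

This keeps the Pandas code intact at the cost of graph optimisation, so it is more a stopgap than a fix.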

Note:

For reference, this is the operation I'm trying to perform:

def flexed_distance_loss(y_true, y_pred):
    y_true_df = pd.DataFrame(y_true, columns=my_columns)
    y_true_custom = y_true_df.idxmax(axis=1).to_frame(name='my_name')

    y_true_df = pd.concat([y_true_custom, y_true_df], axis=1)

    y_true_df = y_true_df.where(y_true_df != 0, np.NaN)
    y_true_df = y_true_df.reset_index().set_index('my_name')

    nearby = y_true_df.fillna(pivoted_df.reindex(y_true_df.index)) \
                            .fillna(0) \
                            .set_index('index').sort_index()

    nearby = np.expm1(nearby).div(np.sum(np.expm1(nearby), axis=1), axis=0)

    y_true_flexed = nearby.values

    return categorical_crossentropy(y_true_flexed, y_pred)

Tags: python, pandas, tensorflow, keras, gpu

Solution


Actually, I realised that all I'm doing within the custom loss function is transforming y_true. In the real case, I'm transforming it based on a random number (if random.random() > 0.1, then apply the transformation).

The most appropriate place to do this is not in a loss function, but in the batch generator instead.

import math
import random

import tensorflow as tf

# X, y and flex_distance_batch are defined at module level
class BatchGenerator(tf.keras.utils.Sequence):

    def __init__(self, indices, batch_size, mode):
        self.indices = indices
        self.batch_size = batch_size
        self.mode = mode

    def __len__(self):
        return math.ceil(len(self.indices) / self.batch_size)

    def __getitem__(self, idx):
        batch = self.indices[idx * self.batch_size:(idx + 1) * self.batch_size]
        X_batch = X[batch, :]
        y_batch = y[batch, :]

        if self.mode == 'train' and random.random() > 0.3:
            # pick y from regular batch
            return X_batch, y_batch
        else:
            # apply flex-distancing to y
            return X_batch, flex_distance_batch(y_batch)

batch_size = 512*4

train_generator = BatchGenerator(range(0, test_cutoff), batch_size, 'train')
test_generator = BatchGenerator(range(test_cutoff, len(y_df)), batch_size, 'test')

This way the transformations are applied directly from the batch generator, and Pandas is perfectly allowed here, since we're dealing only with NumPy arrays on the CPU.
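The `flex_distance_batch` helper isn't shown in the answer; a plausible sketch, assuming it reuses the Pandas logic from the question's loss function, with `my_columns` and `pivoted_df` as hypothetical stand-ins for the question's globals:

```python
import numpy as np
import pandas as pd

# Placeholder globals standing in for the question's real data
my_columns = ["a", "b", "c"]
pivoted_df = pd.DataFrame(0.5, index=my_columns, columns=my_columns)

def flex_distance_batch(y_batch):
    y_df = pd.DataFrame(y_batch, columns=my_columns)
    y_custom = y_df.idxmax(axis=1).to_frame(name="my_name")

    y_df = pd.concat([y_custom, y_df], axis=1)
    y_df = y_df.where(y_df != 0, np.nan)
    y_df = y_df.reset_index().set_index("my_name")

    # Fill each row's zeros from the lookup table, restore batch order,
    # then renormalise so each row sums to 1
    nearby = (y_df.fillna(pivoted_df.reindex(y_df.index))
                  .fillna(0)
                  .set_index("index").sort_index())
    nearby = np.expm1(nearby).div(np.sum(np.expm1(nearby), axis=1), axis=0)
    return nearby.values

y_batch = np.eye(3)[[0, 2]]  # two one-hot rows
out = flex_distance_batch(y_batch)
print(out.shape)
```

Since this now runs on plain NumPy arrays in the generator, no TensorFlow ops are involved at all.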

