Tensorflow TPU error: Shuffle buffer filled?

Problem description

I've been trying to train a computer vision model on TensorFlow's TPUs, but I keep getting errors when I submit the notebook in Kaggle's environment.

It's really strange, because when I run the notebook manually it works fine and finishes in under 20 minutes, but when I submit it, 8 times out of 10 it gets stuck and prints the following for 9 hours before the notebook dies.

The Kaggle notebook gets stuck on the following message:

2021-01-08 00:28:59.042056: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:221] Shuffle buffer filled.

What I've tried

  1. Lowering buffer_size
  2. Changing the order of the loading functions
  3. Tuning batch_size

If anyone knows what's going on, please let me know!

Data pipeline
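
The config below references a strategy object that is not defined in this excerpt; presumably it comes from the usual Kaggle TPU initialization, roughly like the sketch here (the try/except fallback is an assumption, not taken from the original notebook):

import tensorflow as tf

try:
    tpu = tf.distribute.cluster_resolver.TPUClusterResolver()  # locate the Kaggle TPU
    tf.config.experimental_connect_to_cluster(tpu)
    tf.tpu.experimental.initialize_tpu_system(tpu)
    strategy = tf.distribute.TPUStrategy(tpu)
except ValueError:
    strategy = tf.distribute.get_strategy()  # fall back to the default CPU/GPU strategy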

from kaggle_datasets import KaggleDatasets  # Kaggle helper that exposes datasets via GCS

AUTOTUNE = tf.data.experimental.AUTOTUNE
GCS_PATH = KaggleDatasets().get_gcs_path('cassava-leaf-disease-tfrecords-center-512x512')
BATCH_SIZE = 16 * strategy.num_replicas_in_sync  # 16 images per TPU core
IMAGE_SIZE = [512, 512]
TARGET_SIZE = 512
CLASSES = ['0', '1', '2', '3', '4']
EPOCHS = 20

def decode_image(image_data):
    image = tf.image.decode_jpeg(image_data, channels=3)  # decode JPEG bytes to a uint8 tensor
    image = tf.cast(image, tf.float32) / 255.0  # cast to float and normalize to [0, 1]
    image = tf.image.resize(image, [*IMAGE_SIZE])  # added this back to see if it does anything
    image = tf.reshape(image, [*IMAGE_SIZE, 3])  # give the tensor a static shape, as the TPU requires
    return image


def read_tfrecord(example, labeled=True):
    if labeled:
        TFREC_FORMAT = {
            'image': tf.io.FixedLenFeature([], tf.string), 
            'target': tf.io.FixedLenFeature([], tf.int64), 
        }
    else:
        TFREC_FORMAT = {
            'image': tf.io.FixedLenFeature([], tf.string), 
            'image_name': tf.io.FixedLenFeature([], tf.string), 
        }
    example = tf.io.parse_single_example(example, TFREC_FORMAT)
    image = decode_image(example['image'])
    if labeled:
        label_or_name = tf.cast(example['target'], tf.int32)
    else:
        label_or_name = example['image_name']
    return image, label_or_name

def load_dataset(filenames, labeled=True, ordered=False):
    ignore_order = tf.data.Options()
    if not ordered:
        ignore_order.experimental_deterministic = False  # stream records as they arrive instead of in file order

    dataset = tf.data.TFRecordDataset(filenames, num_parallel_reads=AUTOTUNE)
    dataset = dataset.with_options(ignore_order)
    dataset = dataset.map(lambda x: read_tfrecord(x, labeled=labeled), num_parallel_calls=AUTOTUNE)
    return dataset

def get_training_dataset():
    dataset = load_dataset(TRAINING_FILENAMES, labeled=True)
    dataset = dataset.map(transform, num_parallel_calls=AUTOTUNE)  # transform = augmentation fn defined elsewhere in the notebook
    dataset = dataset.repeat()  # the training dataset must repeat for several epochs
    dataset = dataset.shuffle(2048)  # set higher than input?
    dataset = dataset.batch(BATCH_SIZE)
    dataset = dataset.prefetch(AUTOTUNE)  # prefetch the next batch while the TPU works on the current one
    return dataset

Fitting the model:

history = model.fit(x=get_training_dataset(),
                    epochs=EPOCHS,
                    steps_per_epoch=STEPS_PER_EPOCH,
                    validation_steps=VALID_STEPS,
                    validation_data=get_validation_dataset(),
                    callbacks=[lr_callback, model_save, my_early_stopper],
                    verbose=1,
                    )
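
STEPS_PER_EPOCH and VALID_STEPS are not defined in the excerpt, but because the training dataset repeats indefinitely they are what makes each epoch finite. A typical definition in Kaggle TFRecord notebooks counts records from the shard filenames; the count_data_items helper and the filename convention below are assumptions, not code from the original:

import re

def count_data_items(filenames):
    # Kaggle TFRecord shards usually encode the record count in the filename,
    # e.g. 'train00-512x512-1423.tfrec' holds 1423 records.
    return sum(int(re.search(r'-(\d+)\.', f).group(1)) for f in filenames)

STEPS_PER_EPOCH = count_data_items(TRAINING_FILENAMES) // BATCH_SIZE
VALID_STEPS = count_data_items(VALIDATION_FILENAMES) // BATCH_SIZE  # VALIDATION_FILENAMES assumed to exist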

Tags: python, python-3.x, tensorflow, tensorflow-datasets, tfrecord

Solution
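
The quoted line is an INFO-level log (the I after the timestamp), emitted once tf.data has finished loading shuffle(2048) elements into host memory; it is typically the last message printed before a hang whose real cause sits downstream of it. If the buffer itself is suspected, one commonly tried mitigation is a much smaller shuffle buffer, which shortens the fill phase and reduces host memory pressure. A minimal sketch, with 512 as an illustrative value rather than a confirmed fix:

def get_training_dataset(shuffle_buffer=512):
    # Same pipeline as above, but with a smaller, configurable shuffle buffer
    # so the buffer-fill phase at startup finishes quickly.
    dataset = load_dataset(TRAINING_FILENAMES, labeled=True)
    dataset = dataset.map(transform, num_parallel_calls=AUTOTUNE)
    dataset = dataset.repeat()
    dataset = dataset.shuffle(shuffle_buffer)
    dataset = dataset.batch(BATCH_SIZE)
    dataset = dataset.prefetch(AUTOTUNE)
    return dataset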

