Loading Imagenet data onto multiple GPUs with ImageDataGenerator

Problem description

I am trying to load the Imagenet dataset from a folder to train a ResNet18 model on it. Since Imagenet is a large dataset, I am trying to distribute the data samples across multiple GPUs. When I use nvidia-smi to check whether training is running, it shows that training has started on those GPUs. However, training accuracy does not improve over time, and the loss does not appear to decrease either. I suspect this may be due to how my x_train and y_train are loaded when the data is distributed across the GPUs.

  1. I would like to know whether x_train, y_train = next(train_generator) actually iterates over the whole dataset each epoch. If not, then I am only training on 125 (batch_size=125) data samples.

  2. How can I efficiently feed tensor data to from_tensor_slices() when distributing data across multiple GPUs?

     import tensorflow
     from tensorflow.keras.preprocessing.image import ImageDataGenerator
     from tensorflow.keras.optimizers import SGD

     strategy = tensorflow.distribute.MirroredStrategy()
     init_lr = 0.1
     epochs = 60
     batch_size = 125
     global_batch_size = batch_size * strategy.num_replicas_in_sync  # batch across all replicas
     My_wd = 0.0001
     Loss = 'categorical_crossentropy'
     Optimizer = SGD(lr=init_lr, decay=0.0005, momentum=0.9, nesterov=False)

     def get_dataset():
         train_data_dir = 'Datasets/Imagenet/ILSVRC2012_img_train/ILSVRC2012_img_train'
         validation_data_dir = 'Datasets/Imagenet/ILSVRC2012_img_train/ILSVRC2012_img_train'
         datagen = ImageDataGenerator(rescale=1./255, horizontal_flip=True, validation_split=0.2)
         val_datagen = ImageDataGenerator(rescale=1./255)
         train_generator = datagen.flow_from_directory(train_data_dir, target_size=(224, 224), color_mode='rgb', batch_size=batch_size, subset="training", class_mode='categorical', shuffle=True, seed=42)
         val_generator = datagen.flow_from_directory(validation_data_dir, target_size=(224, 224), color_mode='rgb', batch_size=batch_size, subset="validation", class_mode='categorical', shuffle=True, seed=42)

         # next() yields a single batch from each generator
         x_train, y_train = next(train_generator)
         x_val, y_val = next(val_generator)

         return (
             tensorflow.data.Dataset.from_tensor_slices((x_train, y_train)).shuffle(1024977).repeat().batch(global_batch_size),
             tensorflow.data.Dataset.from_tensor_slices((x_val, y_val)).shuffle(256190).repeat().batch(global_batch_size),
         )

     train_generator, val_generator = get_dataset()

     with strategy.scope():
         model = resnet(input_shape=input_shape, num_classes=1000)
         model.compile(loss=catcross_entropy_logits_loss(), optimizer=Optimizer, metrics=['acc'])

     model.summary()
     history = model.fit(train_generator,
                         validation_data=val_generator,
                         epochs=epochs,
                         verbose=1,
                         use_multiprocessing=False,
                         workers=1,
                         callbacks=callbacks,
                         validation_steps=val_generator.n // batch_size,
                         steps_per_epoch=train_generator.n // batch_size)
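
Regarding point 1: a Keras generator is an infinite iterator over batches, so a single `next()` call returns exactly one batch, never the whole dataset. A minimal stand-in in plain Python (hypothetical sizes, no TensorFlow needed) makes this easy to verify:

```python
# Minimal stand-in for a Keras batch generator: an infinite iterator
# that yields ONE batch of (x, y) pairs per next() call.
def batch_generator(num_samples, batch_size):
    i = 0
    while True:  # Keras generators loop forever; fit() stops via steps_per_epoch
        batch = list(range(i, min(i + batch_size, num_samples)))
        yield batch, [f"label_{j}" for j in batch]
        i = (i + batch_size) % num_samples

gen = batch_generator(num_samples=1000, batch_size=125)
x_batch, y_batch = next(gen)   # one batch of 125 samples, not 1000
print(len(x_batch))            # 125
```

Calling `next()` once and then building a `from_tensor_slices` dataset out of the result, as in the code above, therefore trains on only those 125 samples, repeated every epoch.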
    

Tags: python, tensorflow, deep-learning, imagenet, multiple-gpu

Solution
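
One way to address both points (a sketch, not necessarily the accepted fix) is to drop the `next()` call entirely and wrap each already-batched Keras generator in `tf.data.Dataset.from_generator`, so every epoch streams all batches and `MirroredStrategy` can distribute them. The shapes below assume 224x224 RGB images and one-hot labels over 1000 classes, as in the question's code:

```python
import tensorflow as tf

# Sketch: wrap an already-batched Keras generator in tf.data so that
# every epoch streams all batches, instead of the single batch that
# next() returns. Shapes are assumptions taken from the question.
def make_dataset(generator, num_classes=1000):
    return tf.data.Dataset.from_generator(
        lambda: generator,  # callable returning the (infinite) batch iterator
        output_signature=(
            tf.TensorSpec(shape=(None, 224, 224, 3), dtype=tf.float32),
            tf.TensorSpec(shape=(None, num_classes), dtype=tf.float32),
        ),
    ).prefetch(tf.data.AUTOTUNE)

# Usage with the question's names (hypothetical):
# train_ds = make_dataset(train_generator)
# val_ds = make_dataset(val_generator)
# history = model.fit(train_ds, validation_data=val_ds,
#                     steps_per_epoch=train_generator.n // batch_size,
#                     validation_steps=val_generator.n // batch_size, ...)
```

Because the generator is infinite, `steps_per_epoch` and `validation_steps` must still be passed to `fit()` so each epoch knows when to stop.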
