Callback allocating GPU memory

Problem description

I wrote a custom callback, BatchHistory.

I store the BatchHistory object as a pickle file so that I can access the exact training history later. However, I observed that

1) the pickled callback object is about 10x the size it is when I pickle only logs, and

2) GPU memory is allocated while unpickling the BatchHistory object.

I don't understand why this happens. I looked at the callback source code, and these are basically simple classes with no logic tied to the Keras model. So where does the GPU memory allocation come from, and why is the pickle file so large when its size has nothing to do with the amount of data actually logged? There must be some data from the model being trained that ends up associated with the callback object and gets pickled along with it, producing the large pickle file. Is that right? If so: why, and where in the source code is the responsible code?
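
The pickling code itself is not shown here; the comparison behind observations 1) and 2) looks roughly like this (bh is the BatchHistory instance used during training, the file name is just an example):

import pickle

# bh is the BatchHistory instance that was passed to model.fit()
with open('batch_history.pkl', 'wb') as f:
    pickle.dump(bh, f)            # roughly 10x larger than pickle.dump(bh.logs, f)

with open('batch_history.pkl', 'rb') as f:
    restored = pickle.load(f)     # loading this file is what allocates GPU memory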

Here is the OOM error that occurs when unpickling the callback while the GPU is already under heavy use:

---------------------------------------------------------------------------
ResourceExhaustedError                    Traceback (most recent call last)
~/anaconda3/envs/neucores/lib/python3.6/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
   1333     try:
-> 1334       return fn(*args)
   1335     except errors.OpError as e:

~/anaconda3/envs/neucores/lib/python3.6/site-packages/tensorflow/python/client/session.py in _run_fn(feed_dict, fetch_list, target_list, options, run_metadata)
   1318       return self._call_tf_sessionrun(
-> 1319           options, feed_dict, fetch_list, target_list, run_metadata)
   1320 

~/anaconda3/envs/neucores/lib/python3.6/site-packages/tensorflow/python/client/session.py in _call_tf_sessionrun(self, options, feed_dict, fetch_list, target_list, run_metadata)
   1406         self._session, options, feed_dict, fetch_list, target_list,
-> 1407         run_metadata)
   1408 

ResourceExhaustedError: OOM when allocating tensor with shape[8704,1024] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
     [[{{node training/Adam/Variable_30/Assign}} = Assign[T=DT_FLOAT, _grappler_relax_allocator_constraints=true, use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](training/Adam/Variable_30, training/Adam/zeros_12)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

Here is my callback class. I don't think my own code has anything to do with it; it must be something about the base class. But as I said, I could not find anything in the source code that could cause GPU memory allocation.

import time
from keras.callbacks import Callback  # or: from tensorflow.keras.callbacks import Callback

class BatchHistory(Callback):

    def __init__(self):
        super().__init__()
        self.logs =  {'loss' : [],
                      'acc' : [],
                      'val_acc' : [],
                      'val_loss' : [],
                      'epoch_cnt' : 0,
                      'epoch_ends' : [],
                      'time_elapsed' : 0 # seconds
                     }
        self.start_time = time.time() 

    def on_train_begin(self, logs={}):
        pass

    def on_batch_end(self, batch, logs={}):
        self.logs['acc'].append(logs.get('acc'))
        self.logs['loss'].append(logs.get('loss'))
        self.logs['time_elapsed']=int(time.time()-self.start_time)

    def on_epoch_end(self, epochs, logs=None):
        self.logs['epoch_cnt']+=1
        self.logs['epoch_ends'].append(len(self.logs['loss']))
        self.logs['val_acc'].append(logs.get('val_acc'))
        self.logs['val_loss'].append(logs.get('val_loss'))
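
For completeness, the callback is attached to training in the usual way (the model, data, and epoch count below are placeholders; only callbacks=[bh] matters here):

bh = BatchHistory()
model.fit(x_train, y_train,
          validation_data=(x_val, y_val),
          epochs=10,
          callbacks=[bh])  # the callback records per-batch metrics during training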

Tags: python, tensorflow, memory, keras, allocation

Solution
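
The behaviour comes from the Callback base class. When model.fit() runs, Keras calls set_model() on every callback in the callbacks list, and set_model() simply stores a reference: self.model = model. After training, the BatchHistory instance therefore still points at the full Keras model. Pickling the callback drags the model (weights and optimizer state) into the pickle, which is why the file is roughly 10x the size of the logs alone, and unpickling it reconstructs that model, which is what allocates GPU memory.

The fix is to pickle only the data you care about, or to drop the model reference first. A minimal sketch (file names are examples; bh is the BatchHistory instance):

import pickle

# Option 1: persist only the recorded history; this is plain Python data,
# so the pickle stays small and loading it never touches TensorFlow
with open('batch_history_logs.pkl', 'wb') as f:
    pickle.dump(bh.logs, f)

# Option 2: strip the model reference (set by Callback.set_model() during fit)
# before pickling the callback object itself
bh.model = None
with open('batch_history.pkl', 'wb') as f:
    pickle.dump(bh, f)

Option 1 is usually preferable: the logs dict contains only lists of floats and counters, so nothing model-related can end up in the file.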

