Problem running the test script "local_test.sh" in DeepLab, while model_test.py works fine

Problem description

I cloned the DeepLab source code from GitHub and configured all the files as required. model_test.py runs fine, but when I try to run the local_test.sh test script, I get a series of errors.

I can't make sense of the error message, so I don't know what went wrong or where to start.

2019-08-23 10:39:16.486931: W tensorflow/core/common_runtime/bfc_allocator.cc:319] *************************************************____********___****____************************xxxx
2019-08-23 10:39:16.487253: W tensorflow/core/framework/op_kernel.cc:1502] OP_REQUIRES failed at depthwise_conv_op.cc:365 : Resource exhausted: OOM when allocating tensor with shape[4,128,257,257] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
Traceback (most recent call last):
  File "E:\anaconda\lib\site-packages\tensorflow\python\client\session.py", line 1356, in _do_call
    return fn(*args)
  File "E:\anaconda\lib\site-packages\tensorflow\python\client\session.py", line 1341, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "E:\anaconda\lib\site-packages\tensorflow\python\client\session.py", line 1429, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
  (0) Resource exhausted: OOM when allocating tensor with shape[4,128,257,257] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[{{node xception_65/entry_flow/block1/unit_1/xception_module/separable_conv2_depthwise/depthwise}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

         [[gradients/AddN_56/_12764]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

  (1) Resource exhausted: OOM when allocating tensor with shape[4,128,257,257] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[{{node xception_65/entry_flow/block1/unit_1/xception_module/separable_conv2_depthwise/depthwise}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

0 successful operations.
0 derived errors ignored.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "E:/models-master/research/deeplab/train.py", line 517, in <module>
    tf.app.run()
  ...
  ...
  File "E:\anaconda\lib\site-packages\tensorflow\python\client\session.py", line 1370, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
  (0) Resource exhausted: OOM when allocating tensor with shape[4,128,257,257] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node xception_65/entry_flow/block1/unit_1/xception_module/separable_conv2_depthwise/depthwise (defined at \models-master\research\deeplab\core\xception.py:175) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

         [[gradients/AddN_56/_12764]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

  (1) Resource exhausted: OOM when allocating tensor with shape[4,128,257,257] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
         [[node xception_65/entry_flow/block1/unit_1/xception_module/separable_conv2_depthwise/depthwise (defined at \models-master\research\deeplab\core\xception.py:175) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

0 successful operations.
0 derived errors ignored.

Errors may have originated from an input operation.
Input Source operations connected to node xception_65/entry_flow/block1/unit_1/xception_module/separable_conv2_depthwise/depthwise:
 xception_65/entry_flow/block1/unit_1/xception_module/Relu_1 (defined at \models-master\research\deeplab\core\xception.py:274)

Input Source operations connected to node xception_65/entry_flow/block1/unit_1/xception_module/separable_conv2_depthwise/depthwise:
 xception_65/entry_flow/block1/unit_1/xception_module/Relu_1 (defined at \models-master\research\deeplab\core\xception.py:274)

Original stack trace for 'xception_65/entry_flow/block1/unit_1/xception_module/separable_conv2_depthwise/depthwise':
  File "/models-master/research/deeplab/train.py", line 517, in <module>
    tf.app.run()
  ...
  ...
  File "\anaconda\lib\site-packages\tensorflow\python\framework\ops.py", line 2005, in __init__
    self._traceback = tf_stack.extract_stack()

Tags: python-3.x, tensorflow, deeplab

Solution


The batch size or image (crop) size you are using is larger than your GPU can handle, which is what "Resource exhausted: OOM when allocating tensor" means. The leading 4 in the failing tensor shape [4,128,257,257] is the batch dimension from the default --train_batch_size=4 in local_test.sh. Try running the model with a smaller batch size and a smaller crop size.
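As a concrete starting point, here is a sketch of the train.py call inside local_test.sh with memory-friendlier values. The flag and variable names (WORK_DIR, INIT_FOLDER, TF_INIT_CKPT, TRAIN_LOGDIR, PASCAL_DATASET, NUM_ITERATIONS) follow the local_test.sh that ships with tensorflow/models; check your own copy, since older versions pass --train_crop_size twice instead of as a "height,width" string, and the exact values below are only an example.

# Locate the train.py step in local_test.sh and shrink the flags that
# drive GPU memory use:
#  - train_batch_size: 1 or 2 instead of 4 cuts activation memory roughly
#    proportionally (the OOM tensor's first dimension is this batch size).
#  - train_crop_size: smaller crops (e.g. 321 instead of 513) shrink every
#    intermediate feature map.
#  - fine_tune_batch_norm: the flag's own help text recommends false when
#    the batch size is small.
python "${WORK_DIR}"/train.py \
  --logtostderr \
  --train_split="trainval" \
  --model_variant="xception_65" \
  --atrous_rates=6 \
  --atrous_rates=12 \
  --atrous_rates=18 \
  --output_stride=16 \
  --decoder_output_stride=4 \
  --train_crop_size="321,321" \
  --train_batch_size=1 \
  --fine_tune_batch_norm=false \
  --training_number_of_steps="${NUM_ITERATIONS}" \
  --tf_initial_checkpoint="${INIT_FOLDER}/${TF_INIT_CKPT}" \
  --train_logdir="${TRAIN_LOGDIR}" \
  --dataset_dir="${PASCAL_DATASET}"

If the OOM persists even at batch size 1, reduce the crop size further or run the script on a GPU with more memory.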

