TypeError: Cannot convert a symbolic Keras input/output to a numpy array when training a YOLO model

Problem description

I am following the well-known YOLO open-source example (linked below), using a single class of images and annotation files on Colab, and trying to train a YOLO object-detection model. However, after starting the train function, I get the error below. Can anyone point me in the right direction on how to fix this error or how to debug the problem? Thanks in advance.

I made only very minor changes to the reference source code: https://github.com/experiencor/keras-yolo2/blob/master/examples/Blood%20Cell%20Detection.ipynb

The last block of code:

model.fit(train_batch,
          steps_per_epoch  = len(train_batch),
          epochs           = 100,
          verbose          = 1,
          validation_data  = valid_batch,
          validation_steps = len(valid_batch),
          callbacks        = [early_stop, checkpoint, tensorboard],
          max_queue_size   = 3)
Epoch 1/100

TypeError                                 Traceback (most recent call last)
    <ipython-input-28-184363817955> in <module>()
         15 model.compile(loss=custom_loss, optimizer=optimizer)
         16 
    ---> 17 model.fit(train_batch, steps_per_epoch = len(train_batch),
                      epochs = 100, verbose = 1,
                      validation_data = valid_batch, validation_steps = len(valid_batch),
                      callbacks = [early_stop, checkpoint, tensorboard], max_queue_size = 3)

    9 frames
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py in fit(self, x, 
    y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
       1098                 _r=1):
       1099               callbacks.on_train_batch_begin(step)
    -> 1100               tmp_logs = self.train_function(iterator)
       1101               if data_handler.should_sync:
       1102                 context.async_wait()

    /usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
        826     tracing_count = self.experimental_get_tracing_count()
        827     with trace.Trace(self._name) as tm:
    --> 828       result = self._call(*args, **kwds)
        829       compiler = "xla" if self._experimental_compile else "nonXla"
        830       new_tracing_count = self.experimental_get_tracing_count()

    /usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
        869       # This is the first call of __call__, so we have to initialize.
        870       initializers = []
    --> 871       self._initialize(args, kwds, add_initializers_to=initializers)
        872     finally:
        873       # At this point we know that the initialization is complete (or less

    /usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
        724     self._concrete_stateful_fn = (
        725         self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
    --> 726             *args, **kwds))
        727 
        728     def invalid_creator_scope(*unused_args, **unused_kwds):

    /usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
       2967       args, kwargs = None, None
       2968     with self._lock:
    -> 2969       graph_function, _ = self._maybe_define_function(args, kwargs)
       2970     return graph_function
       2971 

    /usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
       3359 
       3360           self._function_cache.missed.add(call_context_key)
    -> 3361           graph_function = self._create_graph_function(args, kwargs)
       3362           self._function_cache.primary[cache_key] = graph_function
       3363 

    /usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
       3204             arg_names=arg_names,
       3205             override_flat_arg_shapes=override_flat_arg_shapes,
    -> 3206             capture_by_value=self._capture_by_value),
       3207         self._function_attributes,
       3208         function_spec=self.function_spec,

    /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
        988         _, original_func = tf_decorator.unwrap(python_func)
        989 
    --> 990       func_outputs = python_func(*func_args, **func_kwargs)
        991 
        992       # invariant: `func_outputs` contains only Tensors, CompositeTensors,

    /usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
        632             xla_context.Exit()
        633         else:
    --> 634           out = weak_wrapped_fn().__wrapped__(*args, **kwds)
        635         return out
        636 

    /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
       975           except Exception as e:  # pylint:disable=broad-except
       976             if hasattr(e, "ag_error_metadata"):
    --> 977               raise e.ag_error_metadata.to_exception(e)
        978             else:
        979               raise

    TypeError: in user code:

        /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:805 train_function  *
            return step_function(self, iterator)
        /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:795 step_function  **
            outputs = model.distribute_strategy.run(run_step, args=(data,))
        /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:1259 run
            return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
        /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:2730 call_for_each_replica
            return self._call_for_each_replica(fn, args, kwargs)
        /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:3417 _call_for_each_replica
            return fn(*args, **kwargs)
        /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:788 run_step  **
            outputs = model.train_step(data)
        /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:756 train_step
            y, y_pred, sample_weight, regularization_losses=self.losses)
        /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/compile_utils.py:238 __call__
            total_loss_metric_value, sample_weight=batch_dim)
         /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/utils/metrics_utils.py:90 decorated
            update_op = update_state_fn(*args, **kwargs)
    /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/metrics.py:177 update_state_fn
            return ag_update_state(*args, **kwargs)
        /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/metrics.py:364 update_state  **
    sample_weight, values)
        /usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/weights_broadcast_ops.py:155 broadcast_weights
            values = ops.convert_to_tensor(values, name="values")
        /usr/local/lib/python3.7/dist-packages/tensorflow/python/profiler/trace.py:163 wrapped
            return func(*args, **kwargs)
        /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py:1540 convert_to_tensor
            ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
        /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/constant_op.py:339 _constant_tensor_conversion_function
            return constant(v, dtype=dtype, name=name)
        /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/constant_op.py:265 constant
    allow_broadcast=True)
        /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/constant_op.py:283 _constant_impl
            allow_broadcast=allow_broadcast))
        /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/tensor_util.py:435 make_tensor_proto
            values = np.asarray(values)
        /usr/local/lib/python3.7/dist-packages/numpy/core/_asarray.py:83 asarray
             return array(a, dtype, copy=False, order=order)
        /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/keras_tensor.py:274 __array__
            'Cannot convert a symbolic Keras input/output to a numpy array. '

         TypeError: Cannot convert a symbolic Keras input/output to a numpy array.

         This error may indicate that you're trying to pass a symbolic value to a NumPy call, which is not supported. Or, you may be trying to pass Keras symbolic inputs/outputs to a TF API that does not register dispatching, preventing Keras from automatically converting the API call to a lambda layer in the Functional Model.

Tags: python, tensorflow, keras, object-detection, yolo

Solution


The fix for your problem is to disable eager execution mode. You need to disable it as shown below:

tf.compat.v1.disable_eager_execution()
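
For example (a minimal sketch only; the model, optimizer, and loss names below are placeholders rather than the notebook's actual variables), the call should go right after importing TensorFlow and before the model is built or compiled:

import tensorflow as tf

# Disable eager execution before any Keras model is created or compiled;
# Keras will then build a graph and run it once data is fed in.
tf.compat.v1.disable_eager_execution()

# ... define the YOLO model and custom_loss as in the notebook ...
# model.compile(loss=custom_loss, optimizer=optimizer)
# model.fit(train_batch, steps_per_epoch=len(train_batch), epochs=100)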

Eager execution mode tries to run TensorFlow operations immediately, whereas graph mode (eager execution disabled) builds a graph of TensorFlow operations that is executed once the data is available.

In your custom loss function you receive a symbolic tensor, which contains no data. That tensor appears to be getting converted to a numpy array (probably by some arithmetic you perform inside the custom loss), and the conversion raises an error precisely because a symbolic tensor holds no data:

TypeError: Cannot convert a symbolic Keras input/output to a numpy array.
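
To illustrate the point (this is not the notebook's actual loss, just a hypothetical sketch), a loss that pushes its inputs through NumPy fails on symbolic tensors, while one that stays inside TensorFlow ops does not:

import numpy as np
import tensorflow as tf

# Fails on symbolic Keras tensors: np.asarray() triggers
# "Cannot convert a symbolic Keras input/output to a numpy array."
def numpy_based_loss(y_true, y_pred):
    diff = np.asarray(y_true) - np.asarray(y_pred)
    return np.mean(np.square(diff))

# Works in both eager and graph mode: only TensorFlow ops are used,
# so symbolic tensors never have to be materialized as numpy arrays.
def tensor_based_loss(y_true, y_pred):
    return tf.reduce_mean(tf.square(y_true - y_pred))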

Disabling eager mode lets Keras build a graph that is executed once data is available; at that point the loss function receives concrete tensors (not symbolic ones), which can be converted to numpy arrays, so the error no longer appears.
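
If you want to confirm which mode is active, TensorFlow reports it via tf.executing_eagerly() (a quick check, assuming TensorFlow 2.x as in the Colab traceback above):

import tensorflow as tf

tf.compat.v1.disable_eager_execution()
print(tf.executing_eagerly())  # prints False once eager execution is disabled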

