Pre-trained models work well for ResNet and InceptionNet but fail to run for VGG16 and VGG19

Problem description

I ran into this problem while doing object classification with several pre-trained models. The code works with ResNet and Inception, but it throws a cuDNN error when I use VGG16 or VGG19.

I run the code in a conda virtual environment with tensorflow-gpu=2.2.0, cuda=10.1, and cudnn=7.6.5.

The cuDNN installed system-wide on my OS is 8.0.4. Could that be the problem? The same setup has worked fine for many other models, just not here.
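To double-check which CUDA/cuDNN build this TensorFlow actually uses inside the environment (the conda-installed cudnn=7.6.5 should take precedence over the system's 8.0.4 there), I can run a quick check like the sketch below; note that tf.sysconfig.get_build_info() only exists in newer TensorFlow releases, so it is guarded:

import tensorflow as tf

print("TF version:", tf.__version__)
print("Built with CUDA:", tf.test.is_built_with_cuda())
print("Visible GPUs:", tf.config.experimental.list_physical_devices('GPU'))

# get_build_info() reports the CUDA/cuDNN versions this TF wheel was compiled
# against, but it is only available in newer releases, so guard the call.
if hasattr(tf.sysconfig, "get_build_info"):
    print(dict(tf.sysconfig.get_build_info()))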

Here is my code:

import argparse
import numpy as np
from tensorflow.keras.applications import VGG16, VGG19, InceptionV3, Xception, ResNet50
from tensorflow.keras.applications import imagenet_utils
from tensorflow.keras.applications.inception_v3 import preprocess_input
from tensorflow.keras.preprocessing.image import load_img, img_to_array

ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
    help="path to the input image")
ap.add_argument("-model", "--model", type=str, default="vgg16",
    help="name of pre-trained network to use")
args = vars(ap.parse_args())

MODELS = {
    "vgg16": VGG16,
    "vgg19": VGG19,
    "inception": InceptionV3,
    "xception": Xception, # TensorFlow ONLY
    "resnet": ResNet50
}

if args["model"] not in MODELS.keys():
    raise AssertionError("The --model command line argument should "
        "be a key in the `MODELS` dictionary")
    
# default input size and preprocessing (VGG and ResNet)
inputShape = (224, 224)
preprocess = imagenet_utils.preprocess_input

# Inception and Xception expect 299x299 inputs and a different preprocessing
if args["model"] in ("inception", "xception"):
    inputShape = (299, 299)
    preprocess = preprocess_input
    

# load the selected network with pre-trained ImageNet weights
Network = MODELS[args["model"]]
model = Network(weights="imagenet")
#model = Network()
model.summary()

# load the image, convert it to an array, add a batch dimension,
# and apply the network-specific preprocessing
image = load_img(args["image"], target_size=inputShape)
image = img_to_array(image)
image = np.expand_dims(image, axis=0)
image = preprocess(image)


# classify the image and decode the top ImageNet predictions
preds = model.predict(image)
P = imagenet_utils.decode_predictions(preds)

for (i, (imagenetID, label, prob)) in enumerate(P[0]):
    print("{}. {}: {:.2f}%".format(i + 1, label, prob * 100))

Here is the log message:

2020-11-08 11:14:31.324751: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
2020-11-08 11:14:31.334392: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
Traceback (most recent call last):
  File "Classify_keras_applications.py", line 92, in <module>
    preds = model.predict(image)
  File "/home/phat/anaconda3/envs/DL/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 88, in _method_wrapper
    return method(self, *args, **kwargs)
  File "/home/phat/anaconda3/envs/DL/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 1268, in predict
    tmp_batch_outputs = predict_function(iterator)
  File "/home/phat/anaconda3/envs/DL/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 580, in __call__
    result = self._call(*args, **kwds)
  File "/home/phat/anaconda3/envs/DL/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 650, in _call
    return self._concrete_stateful_fn._filtered_call(canon_args, canon_kwds)  # pylint: disable=protected-access
  File "/home/phat/anaconda3/envs/DL/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 1661, in _filtered_call
    return self._call_flat(
  File "/home/phat/anaconda3/envs/DL/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 1745, in _call_flat
    return self._build_call_outputs(self._inference_function.call(
  File "/home/phat/anaconda3/envs/DL/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 593, in call
    outputs = execute.execute(
  File "/home/phat/anaconda3/envs/DL/lib/python3.8/site-packages/tensorflow/python/eager/execute.py", line 59, in quick_execute
    tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.UnknownError:  Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
     [[node vgg19/block1_conv1/Conv2D (defined at Classify_keras_applications.py:92) ]] [Op:__inference_predict_function_763]

Function call stack:
predict_function

Tags: tensorflow, keras, deep-learning, classification

Solution


Have you checked this issue: https://github.com/tensorflow/tensorflow/issues/34888

They suggest adding this snippet at the top of your script:

import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    # must run before the GPU is initialized
    tf.config.experimental.set_memory_growth(gpus[0], True)

With this setting, TensorFlow will not allocate all of your GPU memory at once; instead, the allocation grows as your model needs it. BUT, I bet the VGG models simply don't fit in your GPU memory, and even with this extra code I don't think they will fit.
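If memory growth alone is not enough, the TensorFlow GPU guide also documents putting a hard cap on how much memory TensorFlow may reserve. A minimal sketch of that alternative (the 4096 MB limit is only an illustrative value, adjust it to your card, and it must run before the GPU is initialized):

import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    # Restrict TensorFlow to a fixed slice of GPU memory instead of letting it
    # reserve everything up front. 4096 MB is only an example value.
    tf.config.experimental.set_virtual_device_configuration(
        gpus[0],
        [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=4096)])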

For reference, these are the documented model sizes:

  • VGG16: 528 MB
  • VGG19: 549 MB

And:

  • ResNet50: 98 MB
  • InceptionV3: 92 MB

The VGG models are roughly five times larger than the other ones.
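If you want to check the size gap yourself, a quick sketch that compares parameter counts makes the difference obvious (each network is instantiated with weights=None so nothing has to be downloaded):

from tensorflow.keras.applications import VGG16, VGG19, ResNet50, InceptionV3

# Build each network without pre-trained weights; the parameter count is the
# same either way, and no download is needed.
for name, net in [("VGG16", VGG16), ("VGG19", VGG19),
                  ("ResNet50", ResNet50), ("InceptionV3", InceptionV3)]:
    print("{}: {:,} parameters".format(name, net(weights=None).count_params()))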

