Heatmap on custom model with transfer learning

Problem Description

While trying to get a Grad-CAM heatmap for my custom model, I ran into a problem. I am fine-tuning a model for image classification using ResNet50 as the base. My model is defined in the following way:

IMG_SHAPE = (img_height,img_width) + (3,)

base_model = tf.keras.applications.ResNet50(input_shape=IMG_SHAPE, include_top=False, weights='imagenet')

and,

preprocess_input = tf.keras.applications.resnet50.preprocess_input

and finally,

input_layer = tf.keras.Input(shape=(img_height, img_width, 3),name="input_layer")
x = preprocess_input(input_layer)
x = base_model(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D(name="global_average_layer")(x)
x = tf.keras.layers.Dropout(0.2,name="dropout_layer")(x)
x = tf.keras.layers.Dense(4,name="training_layer")(x)
outputs = tf.keras.layers.Dense(4,name="prediction_layer")(x)
model = tf.keras.Model(input_layer, outputs)

Now, I am following the tutorial at https://keras.io/examples/vision/grad_cam/ in order to get a heatmap. The tutorial recommends using model.summary() to find the last convolutional layer and the classifier layers, but I am not sure how to do that for my model. If I run model.summary(), I get:

__________________________________________________________________________________________________
Layer (type)                                   Output Shape           Param #     Connected to
==================================================================================================
input_layer (InputLayer)                       [(None, 224, 224, 3)]  0
__________________________________________________________________________________________________
tf.__operators__.getitem_11 (SlicingOpLambda)  (None, 224, 224, 3)    0           input_layer[0][0]
__________________________________________________________________________________________________
tf.nn.bias_add_11 (TFOpLambda)                 (None, 224, 224, 3)    0           tf.__operators__.getitem_11[0][0]
__________________________________________________________________________________________________
resnet50 (Functional)                          (None, 7, 7, 2048)     23587712    tf.nn.bias_add_11[0][0]
__________________________________________________________________________________________________
global_average_layer (GlobalAveragePooling2D)  (None, 2048)           0           resnet50[0][0]
__________________________________________________________________________________________________
dropout_layer (Dropout)                        (None, 2048)           0           global_average_layer[0][0]
__________________________________________________________________________________________________
training_layer (Dense)                         (None, 4)              8196        dropout_layer[0][0]
__________________________________________________________________________________________________
prediction_layer (Dense)                       (None, 4)              20          training_layer[0][0]
==================================================================================================

However, if I run base_model.summary(), I get:

Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_29 (InputLayer)           [(None, 224, 224, 3)] 0                                            
__________________________________________________________________________________________________
conv1_pad (ZeroPadding2D)       (None, 230, 230, 3)  0           input_29[0][0]                   
__________________________________________________________________________________________________
conv1_conv (Conv2D)             (None, 112, 112, 64) 9472        conv1_pad[0][0]                  
__________________________________________________________________________________________________
conv1_bn (BatchNormalization)   (None, 112, 112, 64) 256         conv1_conv[0][0]                 
__________________________________________________________________________________________________
...   ...   ...           ...                                 
__________________________________________________________________________________________________
conv5_block3_3_bn (BatchNormali (None, 7, 7, 2048)   8192        conv5_block3_3_conv[0][0]        
__________________________________________________________________________________________________
conv5_block3_add (Add)          (None, 7, 7, 2048)   0           conv5_block2_out[0][0]           
                                                                 conv5_block3_3_bn[0][0]          
__________________________________________________________________________________________________
conv5_block3_out (Activation)   (None, 7, 7, 2048)   0           conv5_block3_add[0][0]           
==================================================================================================

If I follow the tutorial using 'resnet50' as the last convolutional layer, I get the following error:

Graph disconnected: cannot obtain value for tensor KerasTensor(type_spec=TensorSpec(shape=(None, 224, 224, 3), dtype=tf.float32, name='input_29'), name='input_29', description="created by layer 'input_29'") at layer "conv1_pad". The following previous layers were accessed without issue: []

but if I use 'conv5_block3_out', the program cannot find that layer in the model. How can I access the layers that seem to be hidden inside the resnet50 layer?

Tags: tensorflow, machine-learning, keras

Solution


I managed to find a solution to this problem. When defining "make_gradcam_heatmap", I added the line

input_layer = model.get_layer('resnet50').input

and changed the next line to

last_conv_layer = model.get_layer(last_conv_layer_name).get_layer("conv5_block3_out")
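
For context, here is a minimal sketch of how those two changes fit into the tutorial's two-model version of make_gradcam_heatmap (the variant that takes classifier_layer_names). Both the input tensor and the conv5_block3_out tensor are now taken from inside the nested resnet50 sub-model, so they belong to the same graph and the "graph disconnected" error goes away; taking base.input directly also avoids depending on the sub-model's auto-generated input layer name. The layer names below are the ones from the model definition above, so treat this as an illustration under those assumptions rather than a drop-in replacement.

import numpy as np
import tensorflow as tf

def make_gradcam_heatmap(img_array, model, last_conv_layer_name, classifier_layer_names):
    # The ResNet layers live inside the nested 'resnet50' sub-model, so the
    # conv5_block3_out tensor belongs to that sub-model's graph, not to the
    # outer model's graph.
    base = model.get_layer(last_conv_layer_name)          # e.g. 'resnet50'
    last_conv_layer = base.get_layer("conv5_block3_out")

    # Model 1: preprocessed image -> last conv feature maps,
    # built entirely inside the sub-model's graph.
    last_conv_layer_model = tf.keras.Model(base.input, last_conv_layer.output)

    # Model 2: conv feature maps -> class predictions, using the head layers
    # that sit after the resnet50 block in the outer model.
    classifier_input = tf.keras.Input(shape=last_conv_layer.output.shape[1:])
    x = classifier_input
    for layer_name in classifier_layer_names:
        x = model.get_layer(layer_name)(x)
    classifier_model = tf.keras.Model(classifier_input, x)

    with tf.GradientTape() as tape:
        last_conv_layer_output = last_conv_layer_model(img_array)
        tape.watch(last_conv_layer_output)
        preds = classifier_model(last_conv_layer_output)
        top_class_channel = preds[:, tf.argmax(preds[0])]

    # Per-channel importance weights: gradients of the winning class score,
    # averaged over the spatial dimensions.
    grads = tape.gradient(top_class_channel, last_conv_layer_output)
    pooled_grads = tf.reduce_mean(grads, axis=(0, 1, 2))

    # Weight the feature maps by those gradients, sum over channels,
    # then normalise to [0, 1] for display.
    heatmap = last_conv_layer_output[0] @ pooled_grads[..., tf.newaxis]
    heatmap = tf.squeeze(heatmap)
    heatmap = tf.maximum(heatmap, 0) / tf.math.reduce_max(heatmap)
    return heatmap.numpy()

Because the first sub-model starts at the resnet50 input, the ResNet preprocessing has to be applied to the image array by hand (in the full model it is handled by the lambda layers before the resnet50 block). Assuming img_array is a batch of shape (1, img_height, img_width, 3):

img_array = tf.keras.applications.resnet50.preprocess_input(np.copy(img_array))
heatmap = make_gradcam_heatmap(
    img_array, model, "resnet50",
    ["global_average_layer", "dropout_layer", "training_layer", "prediction_layer"],
)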
