InceptionResnetV2 STEM block Keras implementation mismatch with the original paper?

Problem description

I have been trying to compare the model summary of the Keras implementation of InceptionResnetV2 with the one specified in their paper, but when it comes to the filter_concat blocks there does not seem to be much resemblance.

The first lines of the model's summary() look like this. (In my case the input was changed to 512x512, but as far as I know this does not affect the number of filters in each layer, so we can still use it to follow the paper-to-code translation):

Model: "inception_resnet_v2"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_1 (InputLayer)            (None, 512, 512, 3)  0
__________________________________________________________________________________________________
conv2d_1 (Conv2D)               (None, 255, 255, 32) 864         input_1[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 255, 255, 32) 96          conv2d_1[0][0]
__________________________________________________________________________________________________
activation_1 (Activation)       (None, 255, 255, 32) 0           batch_normalization_1[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D)               (None, 253, 253, 32) 9216        activation_1[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 253, 253, 32) 96          conv2d_2[0][0]
__________________________________________________________________________________________________
activation_2 (Activation)       (None, 253, 253, 32) 0           batch_normalization_2[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D)               (None, 253, 253, 64) 18432       activation_2[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 253, 253, 64) 192         conv2d_3[0][0]
__________________________________________________________________________________________________
activation_3 (Activation)       (None, 253, 253, 64) 0           batch_normalization_3[0][0]
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D)  (None, 126, 126, 64) 0           activation_3[0][0]
__________________________________________________________________________________________________
conv2d_4 (Conv2D)               (None, 126, 126, 80) 5120        max_pooling2d_1[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 126, 126, 80) 240         conv2d_4[0][0]
__________________________________________________________________________________________________
activation_4 (Activation)       (None, 126, 126, 80) 0           batch_normalization_4[0][0]
__________________________________________________________________________________________________
conv2d_5 (Conv2D)               (None, 124, 124, 192 138240      activation_4[0][0]
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 124, 124, 192 576         conv2d_5[0][0]
__________________________________________________________________________________________________
activation_5 (Activation)       (None, 124, 124, 192 0           batch_normalization_5[0][0]
__________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D)  (None, 61, 61, 192)  0           activation_5[0][0]
__________________________________________________________________________________________________
conv2d_9 (Conv2D)               (None, 61, 61, 64)   12288       max_pooling2d_2[0][0]
__________________________________________________________________________________________________
batch_normalization_9 (BatchNor (None, 61, 61, 64)   192         conv2d_9[0][0]
__________________________________________________________________________________________________
activation_9 (Activation)       (None, 61, 61, 64)   0           batch_normalization_9[0][0]
__________________________________________________________________________________________________
conv2d_7 (Conv2D)               (None, 61, 61, 48)   9216        max_pooling2d_2[0][0]
__________________________________________________________________________________________________
conv2d_10 (Conv2D)              (None, 61, 61, 96)   55296       activation_9[0][0]
__________________________________________________________________________________________________
batch_normalization_7 (BatchNor (None, 61, 61, 48)   144         conv2d_7[0][0]
__________________________________________________________________________________________________
batch_normalization_10 (BatchNo (None, 61, 61, 96)   288         conv2d_10[0][0]
__________________________________________________________________________________________________
activation_7 (Activation)       (None, 61, 61, 48)   0           batch_normalization_7[0][0]
__________________________________________________________________________________________________
activation_10 (Activation)      (None, 61, 61, 96)   0           batch_normalization_10[0][0]
__________________________________________________________________________________________________
average_pooling2d_1 (AveragePoo (None, 61, 61, 192)  0           max_pooling2d_2[0][0]
__________________________________________________________________________________________________
.
.
. 
many more lines
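
For reference, the summary above can be reproduced with something along these lines (a minimal sketch; I am assuming the tf.keras.applications version of the model here, so the exact layer numbering may differ slightly from the standalone keras-applications package):

import tensorflow as tf

# Architecture only: no classifier head, so an arbitrary 512x512 input is accepted,
# and no pretrained weights need to be downloaded.
model = tf.keras.applications.InceptionResNetV2(
    include_top=False,
    weights=None,
    input_shape=(512, 512, 3),
)
model.summary()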

Figure 3 of their paper (shown below) describes how the STEM block of InceptionV4 and InceptionResnetV2 is built. In Figure 3 there are three filter concatenations inside the STEM block, but in the output I showed above the concatenations seem to be replaced by plain sequential max-pooling or something similar (the first concatenation should appear right after max_pooling2d_1). The number of filters grows the way a concatenation would make it grow, yet no concatenation is ever performed; the filters simply appear to be stacked sequentially! Does anyone know what is going on in this output? Does it do the same thing as described in the paper?
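
To make the difference concrete, here is a minimal sketch (not taken from any particular repository) of what the first filter_concat of the paper's Figure 3 stem would look like in Keras: a max-pool branch and a strided 3x3 convolution branch joined by a Concatenate layer, giving 64 + 96 = 160 filters. No such Concatenate layer appears in the InceptionResnetV2 summary above.

from tensorflow.keras import layers

def figure3_first_concat(x):
    # x is assumed to be the 64-filter feature map produced by the previous 3x3 conv
    branch_pool = layers.MaxPooling2D(3, strides=2)(x)                # keeps 64 filters
    branch_conv = layers.Conv2D(96, 3, strides=2, use_bias=False)(x)  # 96 filters
    branch_conv = layers.BatchNormalization(scale=False)(branch_conv)
    branch_conv = layers.Activation('relu')(branch_conv)
    # Figure 3: both branches are concatenated along the channel axis (160 filters),
    # whereas the Keras InceptionResnetV2 stem just applies the max-pool sequentially.
    return layers.Concatenate()([branch_pool, branch_conv])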

For comparison, I found an InceptionV4 Keras implementation, and there they do create a concatenate_1 (filter_concat) for the first concatenation of the STEM block. These are the first lines of its summary():

Model: "inception_v4"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_1 (InputLayer)            (None, 512, 512, 3)  0
__________________________________________________________________________________________________
conv2d_1 (Conv2D)               (None, 255, 255, 32) 864         input_1[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 255, 255, 32) 96          conv2d_1[0][0]
__________________________________________________________________________________________________
activation_1 (Activation)       (None, 255, 255, 32) 0           batch_normalization_1[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D)               (None, 253, 253, 32) 9216        activation_1[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 253, 253, 32) 96          conv2d_2[0][0]
__________________________________________________________________________________________________
activation_2 (Activation)       (None, 253, 253, 32) 0           batch_normalization_2[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D)               (None, 253, 253, 64) 18432       activation_2[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 253, 253, 64) 192         conv2d_3[0][0]
__________________________________________________________________________________________________
activation_3 (Activation)       (None, 253, 253, 64) 0           batch_normalization_3[0][0]
__________________________________________________________________________________________________
conv2d_4 (Conv2D)               (None, 126, 126, 96) 55296       activation_3[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 126, 126, 96) 288         conv2d_4[0][0]
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D)  (None, 126, 126, 64) 0           activation_3[0][0]
__________________________________________________________________________________________________
activation_4 (Activation)       (None, 126, 126, 96) 0           batch_normalization_4[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate)     (None, 126, 126, 160 0           max_pooling2d_1[0][0]
                                                                 activation_4[0][0]
__________________________________________________________________________________________________
conv2d_7 (Conv2D)               (None, 126, 126, 64) 10240       concatenate_1[0][0]
__________________________________________________________________________________________________
batch_normalization_7 (BatchNor (None, 126, 126, 64) 192         conv2d_7[0][0]
__________________________________________________________________________________________________
activation_7 (Activation)       (None, 126, 126, 64) 0           batch_normalization_7[0][0]
__________________________________________________________________________________________________
conv2d_8 (Conv2D)               (None, 126, 126, 64) 28672       activation_7[0][0]
__________________________________________________________________________________________________
.
.
.
and many more lines

So, as shown in the paper, both architectures should have exactly the same first layers. Or am I missing something?

EDIT: I found out that the Keras implementation of InceptionResnetV2 does not follow the STEM block of InceptionResnetV2, but rather the one of InceptionResnetV1 (Figure 14 of their paper, attached below). After the STEM block, it seems to follow the rest of the InceptionResnetV2 blocks correctly.
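
To illustrate what that means, here is a minimal sketch of the purely sequential stem that the Keras summary above actually implements (layers conv2d_1 through max_pooling2d_2): a plain chain of conv/BN/ReLU and max-pooling with no Concatenate anywhere, in contrast to the three filter concatenations of Figure 3.

from tensorflow.keras import layers

def conv_bn_relu(x, filters, kernel, strides=1, padding='valid'):
    x = layers.Conv2D(filters, kernel, strides=strides, padding=padding, use_bias=False)(x)
    x = layers.BatchNormalization(scale=False)(x)
    return layers.Activation('relu')(x)

def sequential_stem(x):                               # (512, 512, 3) input
    x = conv_bn_relu(x, 32, 3, strides=2)             # -> 255x255x32  (conv2d_1)
    x = conv_bn_relu(x, 32, 3)                        # -> 253x253x32  (conv2d_2)
    x = conv_bn_relu(x, 64, 3, padding='same')        # -> 253x253x64  (conv2d_3)
    x = layers.MaxPooling2D(3, strides=2)(x)          # -> 126x126x64  (max_pooling2d_1)
    x = conv_bn_relu(x, 80, 1)                        # -> 126x126x80  (conv2d_4)
    x = conv_bn_relu(x, 192, 3)                       # -> 124x124x192 (conv2d_5)
    return layers.MaxPooling2D(3, strides=2)(x)       # -> 61x61x192   (max_pooling2d_2)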

InceptionResnetV1 performs worse than InceptionResnetV2 (Figure 25), so I am skeptical about using a block from V1 instead of the full V2 that Keras claims to provide. I will try to cut the STEM off the InceptionV4 implementation I found and continue from there with InceptionResnetV2.

The same question was closed without explanation in the tf-models GitHub repository. I leave it here in case anyone is interested: https://github.com/tensorflow/models/issues/1235

EDIT 2: For some reason, Google AI (the creators of the Inception architectures) showed an image labelled "inception-resnet-v2" in the blog post that announced the code release, but its STEM block is the one from InceptionV3, not the one from InceptionV4 as the paper specifies. So either the paper is wrong, or for some reason the code does not follow the paper.

Figure 3 of the original paper: the schema for the stem of the pure Inception-v4 and Inception-ResNet-v2 networks. [...]

Figure 14 of the original paper: the stem of the Inception-ResNet-v1 network.

Figure 25: top-5 error evolution of all four models (single model, single crop). Larger model sizes show an improvement; although the residual versions converge faster, the final accuracy seems to depend mainly on the model size.

Tags: keras, deep-learning, neural-network, conv-neural-network

Solution


It achieves similar results

I just received an email from Alex Alemi, Senior Research Scientist at Google and original publisher of the blog post, confirming the mistake in the released InceptionResnetV2 code. It seems that during internal experimentation the STEM block got switched, and the release simply stayed that way.

Quote:

Dani Azemar,

It looks like you are right. I'm not entirely sure what happened, but the code is obviously the source of truth, in the sense that the released checkpoints work with the code that was also released. When we were developing the architecture we ran a bunch of internal experiments, and I imagine at some point the stems got switched. Not sure whether I'll have time to dig deeper into this right now, but like I said, the released checkpoint is a checkpoint for the released code, as you can verify yourself by running the evaluation pipeline. I agree with you that it appears to be using the original Inception V1 stem. Best regards,

Alex Alemi

I will keep updating this post with any changes on this topic.

UPDATE: Christian Szegedy, also one of the authors of the original paper, just sent me a message on Twitter:

The original experiments and models were created in DistBelief, which is a completely different framework from TensorFlow.

The TF version was added a year later and may have discrepancies with the original model, but it was made sure to achieve similar results.

So, since it achieves similar results, your experiments should turn out roughly the same.
