Adding dropout to a pre-trained Yolo v1 model

Problem description

I got the Yolo v1 code from https://github.com/lovish1234/YOLOv1 (if including this link raises any license issues, please contact me).

As far as I can tell, unlike the Yolo v1 paper, this code does not include dropout. According to the original paper, a dropout layer with rate = 0.5 is added after the first fully connected layer to prevent co-adaptation between layers.

So I changed the code to:

def build_graph(self):
    """Build the computational graph for the network"""
    # Print
    if self.verbose:
        print('Building Yolo Graph....')
    # Reset default graph
    tf.reset_default_graph()
    # Input placeholder
    self.x = tf.placeholder('float32', [None, 448, 448, 3])
    self.label_batch = tf.placeholder('float32', [None, 73])
    self.keep_prob = tf.placeholder('float32')

    # conv1, pool1
    self.conv1 = self.conv_layer(1, self.x, 64, 7, 2)
    self.pool1 = self.maxpool_layer(2, self.conv1, 2, 2)
    # size reduced to 64x112x112
    # conv2, pool2
    self.conv2 = self.conv_layer(3, self.pool1, 192, 3, 1)
    self.pool2 = self.maxpool_layer(4, self.conv2, 2, 2)

    # size reduced to 192x56x56
    # conv3, conv4, conv5, conv6, pool3
    self.conv3 = self.conv_layer(5, self.pool2, 128, 1, 1)
    self.conv4 = self.conv_layer(6, self.conv3, 256, 3, 1)
    self.conv5 = self.conv_layer(7, self.conv4, 256, 1, 1)
    self.conv6 = self.conv_layer(8, self.conv5, 512, 3, 1)
    self.pool3 = self.maxpool_layer(9, self.conv6, 2, 2)

    # size reduced to 512x28x28
    # conv7 - conv16, pool4
    self.conv7 = self.conv_layer(10, self.pool3, 256, 1, 1)
    self.conv8 = self.conv_layer(11, self.conv7, 512, 3, 1)
    self.conv9 = self.conv_layer(12, self.conv8, 256, 1, 1)
    self.conv10 = self.conv_layer(13, self.conv9, 512, 3, 1)
    self.conv11 = self.conv_layer(14, self.conv10, 256, 1, 1)
    self.conv12 = self.conv_layer(15, self.conv11, 512, 3, 1)
    self.conv13 = self.conv_layer(16, self.conv12, 256, 1, 1)
    self.conv14 = self.conv_layer(17, self.conv13, 512, 3, 1)
    self.conv15 = self.conv_layer(18, self.conv14, 512, 1, 1)
    self.conv16 = self.conv_layer(19, self.conv15, 1024, 3, 1)
    self.pool4 = self.maxpool_layer(20, self.conv16, 2, 2)

    # size reduced to 1024x14x14
    # conv17 - conv24
    self.conv17 = self.conv_layer(21, self.pool4, 512, 1, 1)
    self.conv18 = self.conv_layer(22, self.conv17, 1024, 3, 1)
    self.conv19 = self.conv_layer(23, self.conv18, 512, 1, 1)
    self.conv20 = self.conv_layer(24, self.conv19, 1024, 3, 1)
    self.conv21 = self.conv_layer(25, self.conv20, 1024, 3, 1)
    self.conv22 = self.conv_layer(26, self.conv21, 1024, 3, 2)
    self.conv23 = self.conv_layer(27, self.conv22, 1024, 3, 1)
    self.conv24 = self.conv_layer(28, self.conv23, 1024, 3, 1)

    # size reduced to 1024x7x7
    # fc1, fc2, fc3
    self.fc1 = self.fc_layer(29, self.conv24, 512,
                             flatten=True, linear=False)
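    # dropout after the first fully connected layer; the keep_prob placeholder
    # supplies the retention probability at run time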
    self.dropout = tf.nn.dropout(self.fc1, self.keep_prob)
    self.fc2 = self.fc_layer(
        30, self.dropout, 4096, flatten=False, linear=False)
    self.fc3 = self.fc_layer(
        31, self.fc2, 1470, flatten=False, linear=True)

I expected a positive result, but training with dropout added actually made the model worse. Some images show no boxes at all, and where boxes do appear they are wrong and have lower confidence.

I cannot figure out why I am getting these results. (My guess is that the pre-trained model already contains dropout somewhere, or that adding a dropout layer to a pre-trained model degrades it.)

This is an important project that I have to finish, but I am new to TensorFlow, so please forgive me if this is a simple question. If anyone knows the answer, please let me know. Thank you.

Tags: tensorflow, convolutional-neural-network, pre-trained-model, yolo, dropout

Solution


Dropout is a hyperparameter; sometimes it helps and sometimes it does not. Whether it helps depends on other factors, such as the values of the other hyperparameters (e.g., batch size, learning rate) and the properties of the dataset.
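For reference, here is a minimal, self-contained TF 1.x sketch of the keep_prob pattern the question's graph uses (all names here, x, fc, dropped, data, are illustrative and not from the repository). The retention probability fed during training is the hyperparameter being tuned, and feeding 1.0 disables dropout, which is what you want at evaluation time.

import numpy as np
import tensorflow as tf

tf.reset_default_graph()
x = tf.placeholder('float32', [None, 8])
keep_prob = tf.placeholder('float32')
fc = tf.layers.dense(x, 4, activation=tf.nn.relu)
# tf.nn.dropout keeps each element with probability keep_prob and scales the
# survivors by 1/keep_prob, so feeding keep_prob=1.0 disables dropout entirely
dropped = tf.nn.dropout(fc, keep_prob)

data = np.ones((2, 8), dtype=np.float32)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # training-style call: dropout active, roughly half the activations zeroed
    print(sess.run(dropped, feed_dict={x: data, keep_prob: 0.5}))
    # evaluation-style call: dropout disabled, activations pass through unchanged
    print(sess.run(dropped, feed_dict={x: data, keep_prob: 1.0}))

When sweeping the dropout rate as a hyperparameter, only the training-time keep_prob value changes; evaluation should always be run with keep_prob = 1.0.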

