How to clone a model with ReLU and concatenate layers in tf.keras 1.14?

Problem description

This code comes from the official NeRF (2020) team and uses TensorFlow 1.14. I am trying to clone the model with tf.keras.clone_model(). The problem I am running into with the current code is that the 'ReLU' object has no attribute '__name__'. The model is created with the code below; its summary can be found at the end of the post.

import tensorflow as tf

def init_nerf_model(D=8, W=256, input_ch=3, input_ch_views=3, output_ch=4, skips=[4], use_viewdirs=False):
    relu = tf.keras.layers.ReLU()
    def dense(W, act=relu): return tf.keras.layers.Dense(W, activation=act)

    print('MODEL', input_ch, input_ch_views, type(
        input_ch), type(input_ch_views), use_viewdirs)
    input_ch = int(input_ch)
    input_ch_views = int(input_ch_views)
    inputs = tf.keras.Input(shape=(input_ch + input_ch_views))
    inputs_pts, inputs_views = tf.split(inputs, [input_ch, input_ch_views], -1)
    inputs_pts.set_shape([None, input_ch])
    inputs_views.set_shape([None, input_ch_views])

    print(inputs.shape, inputs_pts.shape, inputs_views.shape)
    outputs = inputs_pts
    for i in range(D):
        outputs = dense(W)(outputs)
        if i in skips:
            outputs = tf.concat([inputs_pts, outputs], -1)

    if use_viewdirs:
        alpha_out = dense(1, act=None)(outputs)
        bottleneck = dense(256, act=None)(outputs)
        inputs_viewdirs = tf.concat(
            [bottleneck, inputs_views], -1)  # concat viewdirs
        outputs = inputs_viewdirs
        # The supplement to the paper states there are 4 hidden layers here, but this is an error since
        # the experiments were actually run with 1 hidden layer, so we will leave it as 1.
        for i in range(1):
            outputs = dense(W//2)(outputs)
        outputs = dense(3, act=None)(outputs)
        outputs = tf.concat([outputs, alpha_out], -1)
    else:
        outputs = dense(output_ch, act=None)(outputs)

    model = tf.keras.Model(inputs=inputs, outputs=outputs)
    return model
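If the aim is only to make clone_model work, one workaround (a sketch of an assumed fix, not the official NeRF code) is to pass the activation as the string 'relu' (or the function tf.nn.relu) instead of a ReLU layer instance: string and function activations are resolved through tf.keras.activations.get and serialize cleanly, whereas a layer object passed as `activation` is what trips the missing `__name__`. A minimal model in that style:

```python
import tensorflow as tf

def dense(units, act='relu'):
    # The string 'relu' is resolved via tf.keras.activations.get, which
    # clone_model can serialize; a ReLU *layer instance* passed here as
    # `activation` is what triggers the missing-__name__ error.
    return tf.keras.layers.Dense(units, activation=act)

inputs = tf.keras.Input(shape=(8,))
x = dense(16)(inputs)
outputs = dense(4, act=None)(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs)

clone = tf.keras.models.clone_model(model)  # clones without the __name__ error
```

The same substitution should drop into init_nerf_model by changing its inner `dense` helper, since nothing else in the graph references the `relu` object.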

When I searched, the only solution I found was to build a Sequential model like the one below, but I don't see how to reproduce the same structure that way: for example, how would I make the input layer and the concatenate layers? You can see the summary at the bottom.

model = Sequential()
model.add(Dense(W))
model.add(ReLU())
...
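For reference, Sequential cannot express this topology at all, because of the input split and the skip concatenation. Assuming the real goal is just a fully cloneable graph, one alternative (a sketch, with the layer widths taken from the summary and the depth shortened for brevity) is to stay with the functional API but wrap every raw op in a proper Keras layer, using Lambda for tf.split and Concatenate for the skip:

```python
import tensorflow as tf

input_ch, input_ch_views, W = 63, 27, 256

inputs = tf.keras.Input(shape=(input_ch + input_ch_views,))
# tf.split wrapped in a Lambda layer, so the op lives inside the Keras graph
# instead of becoming an anonymous TensorFlowOpLayer.
inputs_pts, inputs_views = tf.keras.layers.Lambda(
    lambda t: tf.split(t, [input_ch, input_ch_views], axis=-1))(inputs)
x = tf.keras.layers.Dense(W, activation='relu')(inputs_pts)
# The skip connection as a Concatenate layer rather than a raw tf.concat.
x = tf.keras.layers.Concatenate(axis=-1)([inputs_pts, x])
x = tf.keras.layers.Dense(W, activation='relu')(x)
outputs = tf.keras.layers.Dense(4)(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
```

This keeps the split/concat structure of the original summary while every node is a named, serializable layer.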

So I tried three other approaches:

  1. Build a standalone ReLU layer in the current construction style:
    ...
    outputs = tf.keras.layers.Dense(W)(outputs)
    outputs = tf.keras.layers.ReLU()(outputs)
  2. Use tf.keras.activations.relu instead. I tried setting it as the Dense layer's activation and also as a separate layer, but neither works; the error message is the same.
  3. Copy the model with "=". However, when I change new_grad_vars, the original model's weights change too, which is not what I want.
    new_model = model
    new_grad_vars = new_model.trainable_variables
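On point 3: `=` only binds a second name to the same Python object, so the two names share one set of variables. Assuming plain tf.keras APIs, the usual pattern is to let clone_model rebuild the architecture with freshly initialized variables and then copy the values over with set_weights, after which the two models diverge independently. A sketch:

```python
import tensorflow as tf

inp = tf.keras.Input(shape=(8,))
out = tf.keras.layers.Dense(4)(inp)
model = tf.keras.Model(inp, out)

# clone_model rebuilds the graph with newly initialized variables;
# set_weights then copies the weight *values* (not references) across.
new_model = tf.keras.models.clone_model(model)
new_model.set_weights(model.get_weights())

# Mutating the clone's variables now leaves the original untouched.
new_model.trainable_variables[0].assign_add(
    tf.ones_like(new_model.trainable_variables[0]))
```

Of course this only helps once the `__name__` error is out of the way, since clone_model must succeed first.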

Now I am looking for either answer:
a. how to create a cloneable ReLU layer in the current construction style, or
b. how to build the current model, with input and concatenate layers, using Sequential().
Any help is much appreciated!

__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            [(None, 90)]         0                                            
__________________________________________________________________________________________________
tf_op_layer_split (TensorFlowOp [(None, 63), (None,  0           input_1[0][0]                    
__________________________________________________________________________________________________
dense (Dense)                   (None, 256)          16384       tf_op_layer_split[0][0]          
__________________________________________________________________________________________________
dense_1 (Dense)                 (None, 256)          65792       dense[0][0]                      
__________________________________________________________________________________________________
dense_2 (Dense)                 (None, 256)          65792       dense_1[0][0]                    
__________________________________________________________________________________________________
dense_3 (Dense)                 (None, 256)          65792       dense_2[0][0]                    
__________________________________________________________________________________________________
dense_4 (Dense)                 (None, 256)          65792       dense_3[0][0]                    
__________________________________________________________________________________________________
tf_op_layer_concat (TensorFlowO [(None, 319)]        0           tf_op_layer_split[0][0]          
                                                                 dense_4[0][0]                    
__________________________________________________________________________________________________
dense_5 (Dense)                 (None, 256)          81920       tf_op_layer_concat[0][0]         
__________________________________________________________________________________________________
dense_6 (Dense)                 (None, 256)          65792       dense_5[0][0]                    
__________________________________________________________________________________________________
dense_7 (Dense)                 (None, 256)          65792       dense_6[0][0]                    
__________________________________________________________________________________________________
dense_9 (Dense)                 (None, 256)          65792       dense_7[0][0]                    
__________________________________________________________________________________________________
tf_op_layer_concat_1 (TensorFlo [(None, 283)]        0           dense_9[0][0]                    
                                                                 tf_op_layer_split[0][1]          
__________________________________________________________________________________________________
dense_10 (Dense)                (None, 128)          36352       tf_op_layer_concat_1[0][0]       
__________________________________________________________________________________________________
dense_11 (Dense)                (None, 3)            387         dense_10[0][0]                   
__________________________________________________________________________________________________
dense_8 (Dense)                 (None, 1)            257         dense_7[0][0]                    
__________________________________________________________________________________________________
tf_op_layer_concat_2 (TensorFlo [(None, 4)]          0           dense_11[0][0]                   
                                                                 dense_8[0][0]                    
==================================================================================================
Total params: 595,844
Trainable params: 595,844

Tags: python, tensorflow, keras

Solution
