How do I convert a CNN model input tensor from shape (?, 128, 128, 3) to (?, ?, ?, 3)?

Problem description

I am trying to visualize the filters of a CNN model with Keras. This is the code I am following: https://keras.io/examples/conv_filter_visualization/. Note: I am new to Keras and to CNNs.

That code works for the VGG-16 model, whose input has shape (?, ?, ?, 3). I want to make it work for a CNN model whose input has a fixed width and height, for example (?, 128, 128, 3). I tried to reshape the model input from (?, 128, 128, 3) to (?, ?, ?, 3), but I ended up with an error.

Background: I want to reshape it to (?, ?, ?, 3) so that I can run the progressive upscaling and tensor-resizing steps to improve the visualized image.
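For context, the (?, ?, ?, 3) input of the VGG-16 model in that example comes from building the model with unspecified spatial dimensions rather than from reshaping an existing tensor. Below is a minimal sketch of where such a shape comes from in Keras; the custom layers are purely illustrative and not part of my notebook.

from keras.applications import vgg16
from keras import layers, models

# VGG-16 without its dense top accepts any spatial size, so its input is (?, ?, ?, 3)
flexible_model = vgg16.VGG16(weights='imagenet', include_top=False)
print(flexible_model.inputs[0].shape)

# A custom CNN gets the same kind of input by passing None for height and width
inp = layers.Input(shape=(None, None, 3))
x = layers.Conv2D(16, (3, 3), activation='relu', padding='same')(inp)
custom_model = models.Model(inp, x)
print(custom_model.inputs[0].shape)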

Here is the code from my notebook:

# these are the parameters from other part of the code:
# input_img = model.inputs[0]
# layer_output = layer_dict[layer_name].output
# filter_index = 13  (can be any number within bounds)
# layer_name = 'conv2d_8'
# step=1.
# epochs=10
# upscaling_steps=9
# upscaling_factor=1.2
# output_dim=(180, 180)
# filter_range=(0, 2)
def _generate_filter_image(input_img,
                           layer_output,
                           filter_index):
        """Generates image for one particular filter.

        # Arguments
            input_img: The input-image Tensor.
            layer_output: The output-image Tensor.
            filter_index: The to be processed filter number.
                          Assumed to be valid.

        # Returns
            Either None if no image could be generated.
            or a tuple of the image (array) itself and the last loss.
        """
        s_time = time.time()
        input_img = tf.reshape(input_img,[-1,-1,-1,3]) 
        print("input image shape after reshape", input_img.shape)

        # we build a loss function that maximizes the activation
        # of the nth filter of the layer considered

        if K.image_data_format() == 'channels_first':
            loss = K.mean(layer_output[:, filter_index, :, :])
        else:
            loss = K.mean(layer_output[:, :, :, filter_index])

        # we compute the gradient of the input picture wrt this loss
        grads = K.gradients(loss, [input_img])[0]

        # normalization trick: we normalize the gradient
        grads = normalize(grads)

        # this function returns the loss and grads given the input picture
        iterate = K.function([input_img], [loss, grads])

        # we start from a gray image with some random noise
        intermediate_dim = tuple(
            int(x / (upscaling_factor ** upscaling_steps)) for x in output_dim)
        if K.image_data_format() == 'channels_first':
            input_img_data = np.random.random(
                (1, 3, intermediate_dim[0], intermediate_dim[1]))
        else:
            input_img_data = np.random.random(
                (1, intermediate_dim[0], intermediate_dim[1], 3))

        input_img_data = np.uint8(np.random.uniform(150, 180, (1, 128, 128, 3))) / 255

        # Slowly upscaling towards the original size prevents a dominating
        # high frequency in the structure being visualized, as would occur
        # if we computed the 412d-image directly. It also gives a better
        # starting point for each following dimension and therefore avoids
        # poor local minima.
        for up in reversed(range(upscaling_steps)):
            # we run gradient ascent for e.g. 20 steps
            t1 = time.time()
            for _ in range(epochs):

                loss_value, grads_value = iterate([input_img_data])
                input_img_data += grads_value * step


            # Calculate the upscaled dimension
            intermediate_dim = tuple(
                int(x / (upscaling_factor ** up)) for x in output_dim)
            # Upscale
            img = deprocess_image(input_img_data[0])
            img = np.array(pil_image.fromarray(img).resize(intermediate_dim, pil_image.BICUBIC))
            input_img_data = [process_image(img, input_img_data[0])]

        t2 = time.time()

I got this error:

ValueError: Tried to convert 'x' to a tensor and failed. Error: None values not supported.

Tags: tensorflow, keras, deep-learning

Solution


This problem shows up when normalizing gradients taken over values of a different shape.

The problem is here:

grads = normalize(K.gradients(loss, conv_output)[0])

Change it to:

grads = normalize(_compute_gradients(loss, [conv_output])[0])
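The _compute_gradients helper is not defined in this answer. A commonly used definition, sketched here as an assumption, simply wraps tf.gradients and substitutes zeros for the None gradients that trigger the "None values not supported" error:

import tensorflow as tf

def _compute_gradients(tensor, var_list):
    # tf.gradients returns None for variables the tensor does not depend on;
    # replace those with zero tensors so normalize() receives real values
    grads = tf.gradients(tensor, var_list)
    return [grad if grad is not None else tf.zeros_like(var)
            for var, grad in zip(var_list, grads)]

Note that var_list is expected to be a list, which is why conv_output is wrapped in brackets above; passing a bare tensor instead may be what produces the zip argument #1 error mentioned below.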

If that works, then all is well. Otherwise,
if you get the error zip argument #1 must support iteration, then use:

    grads = normalize(K.gradients(loss, conv_output)[0])
    # grads = normalize(_compute_gradients(loss, conv_output)[0])    
    gradient_function = K.function([model.inputs[0]], [conv_output, grads]) 
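
For reference, a function built with K.function like this is called with a list of concrete NumPy inputs and returns the corresponding outputs, roughly as follows (image_array is a hypothetical batch shaped to match model.inputs[0]):

    # image_array: a NumPy array shaped like the model's input batch
    conv_value, grads_value = gradient_function([image_array])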

Check this issue for more information!

