TensorFlow experiment - converting grayscale to BGR with a dense network

Problem description

Recently I was thinking about the nature of plain dense NNs and asked myself a question: can a dense NN convert a grayscale image to BGR using regression alone? The idea is simple: I feed in a single pixel value, and the NN returns the 3 values corresponding to BGR.
So I started implementing a very simple dense architecture with the functional API:

import tensorflow as tf

def test_model(input_shape, output_nodes):
    # Plain fully connected stack: one grayscale value in, output_nodes values out
    input = tf.keras.layers.Input(shape = input_shape)
    x = tf.keras.layers.Dense(64, activation = 'relu')(input)
    x = tf.keras.layers.Dense(128, activation = 'relu')(x)
    x = tf.keras.layers.Dense(512, activation = 'relu')(x)
    dropout = tf.keras.layers.Dropout(0.3)(x)
    x = tf.keras.layers.Dense(128, activation = 'relu')(dropout)
    output = tf.keras.layers.Dense(output_nodes, activation = 'relu')(x)
    model = tf.keras.Model(inputs = input, outputs = output)
    return model

model = test_model(1, 3)
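Just to make the idea concrete, a single normalized gray value can be pushed through the (still untrained) model and comes back as three outputs, one per BGR channel. A minimal illustration, not part of the original code:

# One normalized grayscale pixel, shaped (batch=1, features=1)
sample_pixel = tf.constant([[0.5]], dtype=tf.float32)
# The model maps it to 3 values, one per BGR channel
print(model(sample_pixel).shape)   # (1, 3)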

As training data I took a poster from a friend: I converted it to grayscale, so the grayscale pixels are used as training inputs and the original colors serve as labels. My preprocessing is just normalization and reshaping:
[images: the original poster and its grayscale version]
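The loading of img and img_gray is not shown above. A minimal sketch, assuming the poster is read from a local file (poster.jpg is a placeholder name) with OpenCV, which returns pixels in BGR order:

import cv2

# Hypothetical filename - the actual poster is not included in the post
img = cv2.imread('poster.jpg')                      # uint8 array, shape (H, W, 3), BGR order
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)    # uint8 array, shape (H, W)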

@tf.function
def gen_dataset(img, img_gray):
    # Normalize both images to [0, 1]: grayscale pixels are the inputs,
    # the original BGR pixels are the regression targets
    train_img = tf.cast(img_gray, tf.float32) / 255.
    res_img = tf.cast(img, tf.float32) / 255.
    return train_img, res_img

train_im, test_im = gen_dataset(img, img_gray)

# Flatten to one sample per pixel: 99160 grayscale inputs and 99160 BGR targets
train_im_rb = tf.reshape(train_im, (99160, 1))
test_im_rb = tf.reshape(test_im, (99160, 3))
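The hard-coded 99160 is presumably the pixel count (H * W) of this particular poster. An equivalent reshape that does not hard-code the size, in case the loading sketch above is run on a different image:

# -1 lets TensorFlow infer the number of pixels from the tensor itself
train_im_rb = tf.reshape(train_im, (-1, 1))   # grayscale inputs, shape (H*W, 1)
test_im_rb = tf.reshape(test_im, (-1, 3))     # BGR targets, shape (H*W, 3)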

Optimizer and loss:

model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss = tf.keras.losses.MeanSquaredError(),
              metrics = ['accuracy'])
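As a side note on the metrics argument: accuracy is primarily meant for classification targets, while this is a regression setup, so a metric such as mean absolute error is more directly interpretable. A sketch of an alternative compile call (not the one used above):

# Alternative: track MAE, which is better suited to continuous BGR targets
model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss=tf.keras.losses.MeanSquaredError(),
              metrics=[tf.keras.metrics.MeanAbsoluteError()])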

Then the .fit() call:

h3 = model.fit(train_im_rb, test_im_rb, 
               batch_size = 32,
               epochs = 10)
Epoch 1/10
3099/3099 [==============================] - 10s 3ms/step - loss: 0.0115 - accuracy: 0.6274
Epoch 2/10
3099/3099 [==============================] - 10s 3ms/step - loss: 0.0104 - accuracy: 0.6384
Epoch 3/10
3099/3099 [==============================] - 10s 3ms/step - loss: 0.0103 - accuracy: 0.6413
Epoch 4/10
3099/3099 [==============================] - 10s 3ms/step - loss: 0.0102 - accuracy: 0.6404
Epoch 5/10
3099/3099 [==============================] - 10s 3ms/step - loss: 0.0102 - accuracy: 0.6417
Epoch 6/10
3099/3099 [==============================] - 10s 3ms/step - loss: 0.0102 - accuracy: 0.6423
Epoch 7/10
3099/3099 [==============================] - 10s 3ms/step - loss: 0.0102 - accuracy: 0.6423
Epoch 8/10
3099/3099 [==============================] - 10s 3ms/step - loss: 0.0101 - accuracy: 0.6395
Epoch 9/10
3099/3099 [==============================] - 10s 3ms/step - loss: 0.0101 - accuracy: 0.6399
Epoch 10/10
3099/3099 [==============================] - 10s 3ms/step - loss: 0.0101 - accuracy: 0.6378

The resulting image is:

[image: the colorized output produced by the network]

I know that my way of approaching this task is probably wrong in general, but I'm just interested in the experiment. My question is about the barely changing accuracy and loss: how can I fix that?
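The post does not show how the output image above was rendered; a minimal sketch, reusing the hypothetical cv2-loaded img from earlier, could look like this:

import cv2
import numpy as np

# Predict BGR values for every grayscale pixel, then fold them back into image shape
pred = model.predict(train_im_rb, batch_size=1024)                # (H*W, 3), roughly in [0, 1]
colorized = np.clip(pred.reshape(img.shape) * 255, 0, 255).astype(np.uint8)
cv2.imwrite('colorized.png', colorized)                           # hypothetical output filename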

Tags: python, tensorflow, colors

Solution

