Model accuracy and validation accuracy remain at a constant value in a denoising autoencoder model

Problem description

I am building a denoising autoencoder (DAE) to denoise respiratory signals. I pass the noisy and clean versions of each signal through the model, in frames whose size is a multiple of 1024.

My model is set up as follows:

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Masking, Conv1D, Conv1DTranspose, Dense
from tensorflow.keras.constraints import max_norm

class NoiseReducer(tf.keras.Model):
    
    def __init__(self):
        super().__init__()
        
        self.encoder = tf.keras.Sequential([
#             Input(shape=(window_size, 1)),
            Masking(mask_value=np.nan, input_shape=(window_size, 1)),
            Conv1D(filters=128, kernel_size=32, strides=1, kernel_constraint=max_norm(max_norm_value), padding='same', kernel_initializer='glorot_normal', activation='elu'),
            Dense(128, activation='elu'),
            Conv1D(filters=32, kernel_size=16, strides=1, kernel_constraint=max_norm(max_norm_value), padding='same', kernel_initializer='glorot_normal', activation='elu'),
            Conv1D(filters=16, kernel_size=8, strides=1, kernel_constraint=max_norm(max_norm_value), padding='same', kernel_initializer='glorot_normal', activation='elu')
        ])
        
        self.decoder = tf.keras.Sequential([
            Conv1DTranspose(filters=16, kernel_size=8, strides=1, kernel_constraint=max_norm(max_norm_value), padding='same', kernel_initializer='glorot_normal', activation='elu'),
            Conv1DTranspose(filters=32, kernel_size=16, strides=1, kernel_constraint=max_norm(max_norm_value), padding='same', kernel_initializer='glorot_normal', activation='elu'),
            Dense(128, activation='elu'),
            Conv1DTranspose(filters=128, kernel_size=32, strides=1, kernel_constraint=max_norm(max_norm_value), padding='same', kernel_initializer='glorot_normal', activation='elu'),
            Conv1D(filters=1, kernel_size=2, strides=1, kernel_constraint=max_norm(max_norm_value), padding='same', activation='sigmoid')
        ])
        
    def call(self, x): 
        encoded = self.encoder(x) 
        decoded = self.decoder(encoded)
        return decoded

dae = NoiseReducer()

adam_optimizer=tf.keras.optimizers.Adam(
    learning_rate=learning_rate, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)
sgd_optimizer = tf.keras.optimizers.SGD(learning_rate=learning_rate)
dae.compile(optimizer=sgd_optimizer, loss='mean_squared_error', metrics='accuracy')

history = dae.fit(X_noisy_train, 
        X_clean_train,
        epochs=epochs,
        batch_size=batch_size,
        shuffle=False,
        validation_split=0.3,
        callbacks=[tb_callback]
)

Results:

    Epoch 1/100
    13/13 [==============================] - 16s 1s/step - loss: 0.2185 - accuracy: 0.8272 - val_loss: 0.2143 - val_accuracy: 0.8288
    Epoch 2/100
    13/13 [==============================] - 12s 898ms/step - loss: 0.2120 - accuracy: 0.8272 - val_loss: 0.2082 - val_accuracy: 0.8288
    Epoch 3/100
    13/13 [==============================] - 12s 908ms/step - loss: 0.2057 - accuracy: 0.8272 - val_loss: 0.2017 - val_accuracy: 0.8288
    Epoch 4/100
    13/13 [==============================] - 12s 906ms/step - loss: 0.1997 - accuracy: 0.8272 - val_loss: 0.1956 - val_accuracy: 0.8288
    Epoch 5/100
    13/13 [==============================] - 12s 907ms/step - loss: 0.1938 - accuracy: 0.8272 - val_loss: 0.1898 - val_accuracy: 0.8288

When running the model, both the accuracy and the validation accuracy stay at around 0.827 and do not change at all over the epochs (100 in total), which suggests the model is not learning anything. However, the MSE does decrease from epoch to epoch.
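One thing worth noting here: Keras's 'accuracy' metric on a continuous regression target effectively counts exact matches between predictions and targets, so it can sit at an arbitrary constant while the loss still improves. A regression metric such as MAE is more informative. A minimal sketch (the tiny model and random data below are placeholders, not the DAE above):

```python
import numpy as np
import tensorflow as tf

# Hypothetical one-layer regression model, just to illustrate the metric choice.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# For a continuous target, track MAE instead of 'accuracy': accuracy compares
# y_true and y_pred for equality, which is not meaningful for regression.
model.compile(optimizer='adam', loss='mean_squared_error', metrics=['mae'])

x = np.random.rand(32, 4).astype('float32')
y = np.random.rand(32, 1).astype('float32')
history = model.fit(x, y, epochs=2, verbose=0)
print(sorted(history.history.keys()))  # ['loss', 'mae']
```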

For my dataset, I have set any NaN values to 0.
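The NaN cleanup described above can be sketched with `np.nan_to_num` on a toy array (the real `X_noisy_train` / `X_clean_train` arrays would be treated the same way). Note also that `Masking(mask_value=np.nan)` in the model above can never match anything, because `np.nan != np.nan`; once NaNs are replaced with 0, `mask_value=0.0` would be the comparable setting:

```python
import numpy as np

# Toy signal with gaps; replace NaNs with 0 as described above.
signal = np.array([0.1, np.nan, 0.3, np.nan], dtype='float32')
cleaned = np.nan_to_num(signal, nan=0.0)
print(cleaned)  # [0.1 0.  0.3 0. ]

# np.nan compares unequal to itself, so a NaN-valued mask_value never fires.
assert np.nan != np.nan
assert not np.isnan(cleaned).any()
```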

In terms of solutions, I have tried the following changes to my model, without success:

None of these seemed to change the accuracy. After training finishes and the model reconstructs the signal (from the noisy input), I get a flat line through 0.345, indicating that the model has learned nothing and cannot reconstruct the signal.
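One detail that interacts with the flat reconstruction: the final layer uses a sigmoid activation, which can only produce values in (0, 1), so the clean target signal must lie in that range as well. A min-max rescaling sketch, fitted on the training split (the array below is placeholder data standing in for `X_clean_train`):

```python
import numpy as np

# Placeholder standing in for the real X_clean_train (frames of length 1024).
X_clean_train = np.random.randn(8, 1024).astype('float32')

# Rescale into [0, 1] so a sigmoid output layer can actually reach the targets.
lo, hi = X_clean_train.min(), X_clean_train.max()
X_scaled = (X_clean_train - lo) / (hi - lo)
```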

What other strategies/avenues should I explore around this?

Tags: python, tensorflow, training-data, autoencoder

Solution
