Incompatible shapes when building an LSTM VAE model

Problem description

I am trying to build a VAE with LSTM layers in Keras. The input shape is (sample_number, 20, 31).

However, an incompatible-shapes error occurs when I fit the model.

I am not sure which part of my code is wrong, so please forgive me for posting all of it.

My imports:

from keras.models import Sequential, Model
from keras.objectives import mse
from keras.layers import Dense, Dropout, Activation, Flatten, LSTM, TimeDistributed, RepeatVector, Input, Lambda
from keras.layers.normalization import BatchNormalization
from keras import backend as K

First, I created a function that samples from a normal distribution:

def sampling(args):
    z_mean, z_log_var = args
    batch = K.shape(z_mean)[0]
    dim = K.int_shape(z_mean)[1]
    epsilon = K.random_normal(shape=(batch, dim))
    return z_mean + K.exp(0.5 * z_log_var) * epsilon
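(The last line is the reparameterization trick: a sample is drawn as mean + std * noise, with std recovered from the log-variance. A minimal NumPy sketch of the same arithmetic, using hypothetical toy shapes rather than the model's real tensors:)

```python
import numpy as np

rng = np.random.default_rng(0)
z_mean = np.zeros((4, 60))                    # toy batch of 4, latent dim 60
z_log_var = np.zeros((4, 60))                 # log-variance 0 -> std = 1
epsilon = rng.standard_normal((4, 60))        # noise, like K.random_normal

z = z_mean + np.exp(0.5 * z_log_var) * epsilon  # same formula as sampling()
print(z.shape)                                 # (4, 60)
```

With zero mean and unit variance, z is just the noise itself; shifting z_mean or z_log_var shifts and scales the samples.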

Then I built the encoder and decoder:

"======Encoder====="
inputs = Input(shape=(20, 31), name='encoder_input')
x = LSTM(30, activation='relu', return_sequences=True)(inputs)
x = LSTM(60, activation='relu')(x)
z_mean = Dense(60, name='z_mean')(x)
z_log_var = Dense(60, name='z_log_var')(x)
z = Lambda(sampling, output_shape=(60,), name='z')([z_mean, z_log_var])
z = RepeatVector(20)(z)
encoder = Model(inputs, [z_mean, z_log_var, z], name='encoder')

"=====Decoder======="
latent_inputs = Input(shape=(20, 60), name='z_sampling')
x_2 = LSTM(60, activation='relu', return_sequences=True)(latent_inputs)
x_2 = LSTM(31, activation='relu')(x_2)
decoder = Model(latent_inputs, x_2, name='decoder')
outputs = decoder(encoder(inputs)[2])
vae = Model(inputs, outputs)

Finally, I defined a custom loss function and fit the model:

reconstruction_loss = mse(inputs, outputs)
kl_loss = 1 + z_log_var - K.square(z_mean) - K.exp(z_log_var)
kl_loss = K.sum(kl_loss, axis=-1)
kl_loss *= -0.5
vae_loss = K.mean(reconstruction_loss + kl_loss)
vae.add_loss(vae_loss)
vae.compile(optimizer='adam')
vae.fit(train, validation_data=(val, None), epochs=100)

It raises the error below, but I cannot find anywhere in my code where shapes like [32,31] or [32,20,31] would come from:

    InvalidArgumentError                      Traceback (most recent call last)
~\Anaconda3\lib\site-packages\tensorflow\python\client\session.py in _do_call(self, fn, *args)
   1326     try:
-> 1327       return fn(*args)
   1328     except errors.OpError as e:

~\Anaconda3\lib\site-packages\tensorflow\python\client\session.py in _run_fn(session, feed_dict, fetch_list, target_list, options, run_metadata)
   1305                                    feed_dict, fetch_list, target_list,
-> 1306                                    status, run_metadata)
   1307 

~\Anaconda3\lib\contextlib.py in __exit__(self, type, value, traceback)
     87             try:
---> 88                 next(self.gen)
     89             except StopIteration:

~\Anaconda3\lib\site-packages\tensorflow\python\framework\errors_impl.py in raise_exception_on_not_ok_status()
    465           compat.as_text(pywrap_tensorflow.TF_Message(status)),
--> 466           pywrap_tensorflow.TF_GetCode(status))
    467   finally:

InvalidArgumentError: Incompatible shapes: [32,20] vs. [32]

Thanks for your answers.

Tags: python, keras, lstm, autoencoder

Solution


The encoder's input shape is (32, 20, 31), where 32 is the default batch_size, but the decoder's output shape is (32, 31): its last LSTM layer returns only the final timestep. The mse function is complaining about these two shapes.

The problem should be solved by replacing

x_2 = LSTM(31, activation='relu')(x_2)

with

x_2 = LSTM(31, activation='relu', return_sequences=True)(x_2)
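A minimal NumPy sketch of why the fix matters (the arrays are stand-ins for the Keras tensors; mse begins with an element-wise difference, which is where the shape check fails):

```python
import numpy as np

inputs = np.zeros((32, 20, 31))    # encoder input: (batch, timesteps, features)
bad_out = np.zeros((32, 31))       # last LSTM without return_sequences: 2-D
good_out = np.zeros((32, 20, 31))  # with return_sequences=True: 3-D again

try:
    _ = inputs - bad_out           # (32, 31) cannot broadcast to (32, 20, 31)
    shapes_compatible = True
except ValueError:
    shapes_compatible = False

print(shapes_compatible)           # False
print((inputs - good_out).shape)   # (32, 20, 31)
```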

PS: you can also run encoder.summary() and decoder.summary() to inspect the output shape of each layer.

Edit: also replace kl_loss = K.sum(kl_loss, axis=-1) with kl_loss = K.sum(kl_loss)
