Error while concatenating keras layers: A `Concatenate` layer requires inputs with matching shapes

Problem description

I have this model, which is called a Hierarchical Attention Network:

[figure: Hierarchical Attention Network architecture]

It was proposed for document classification. I use word2vec embeddings for the sentence words, and I want to concatenate another sentence-level embedding at point A (see the figure).

I used it with documents that contain 3 sentences; the model summary:

[figure: model summary]

word_input = Input(shape=(self.max_senten_len,), dtype='float32')
word_sequences = self.get_embedding_layer()(word_input)
word_lstm = Bidirectional(self.hyperparameters['rnn'](self.hyperparameters['rnn_units'], return_sequences=True, kernel_regularizer=kernel_regularizer))(word_sequences)
word_dense = TimeDistributed(Dense(self.hyperparameters['dense_units'], kernel_regularizer=kernel_regularizer))(word_lstm)
word_att = AttentionWithContext()(word_dense)
wordEncoder = Model(word_input, word_att)
sent_input = Input(shape=(self.max_senten_num, self.max_senten_len), dtype='float32')
sent_encoder = TimeDistributed(wordEncoder)(sent_input)

""" I added these following 2 lines. The dimension of self.training_features is (number of training rows, 3, 512). 512 is the dimension of the sentence-level embedding.  """
USE = Input(shape=(self.training_features.shape[1], self.training_features.shape[2]), name='USE_branch')
merge = concatenate([sent_encoder, USE], axis=1)

sent_lstm = Bidirectional(self.hyperparameters['rnn'](self.hyperparameters['rnn_units'], return_sequences=True, kernel_regularizer=kernel_regularizer))(merge)
sent_dense = TimeDistributed(Dense(self.hyperparameters['dense_units'], kernel_regularizer=kernel_regularizer))(sent_lstm)
sent_att = Dropout(dropout_regularizer)(AttentionWithContext()(sent_dense))
preds = Dense(len(self.labelencoder.classes_))(sent_att)
self.model = Model(sent_input, preds)

When I compile the above code, I get the following error:

ValueError: A Concatenate layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 3, 128), (None, 3, 514)]

I specified the concatenation axis=1 to concatenate on the number of sentences (3), but I don't know why I'm still getting this error.

Tags: python-3.x, keras, deep-learning, concatenation

Solution


The error comes from two lines. `Concatenate` only lets the inputs differ along the concat axis, and here it is the last dimensions (128 vs. 514) that differ, so the concatenation has to happen on the last axis:

merge = concatenate([sent_encoder, USE], axis=1)
# should be:
merge = concatenate([sent_encoder, USE], axis=2) # or -1 as @mlRocks suggested
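
To see why the axis matters, here is a minimal standalone sketch using the shapes from the error message (the `Input` names are made up for illustration; `tensorflow.keras` is assumed, but the standalone `keras` API behaves the same):

# Concatenate only relaxes the concat axis; every other axis must match exactly.
from tensorflow.keras.layers import Input, concatenate

sent_branch = Input(shape=(3, 128))  # stands in for the TimeDistributed(wordEncoder) output
use_branch = Input(shape=(3, 514))   # stands in for the USE branch, as reported in the error

# axis=1 would require the last dims (128 vs 514) to match, which is exactly the ValueError above.
# axis=-1 (or axis=2) keeps the sentence axis (3) and stacks the feature dims instead:
merged = concatenate([sent_branch, use_branch], axis=-1)
print(merged.shape)  # (None, 3, 642)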

and the line below, because USE is a second `Input` and every input tensor has to be passed to the `Model`:

self.model = Model(sent_input, preds)
# should be:
self.model = Model([sent_input, USE], preds) # to define both inputs
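
As a rough sketch of how the corrected two-input model would then be trained (X_train, y_train, the optimizer, loss, batch size and epochs below are placeholders, not from the original code):

# The arrays are fed in the same order as the inputs in Model([sent_input, USE], preds):
# the first array goes to sent_input, the second to USE.
self.model.compile(optimizer='adam', loss='categorical_crossentropy')  # placeholder choices
self.model.fit(
    [X_train, self.training_features],  # X_train: (rows, max_senten_num, max_senten_len)
    y_train,                            # labels encoded to match len(self.labelencoder.classes_)
    batch_size=32,
    epochs=10,
)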
