Error when checking target: expected activation_29 to have shape (1,) but got array with shape (3,)

Problem Description

I'm trying to modify Keras's memory neural network for the bAbI dataset so that it outputs multiple words (three, in this case) instead of a single word. For context, this is an NLP model that uses an LSTM for question answering.

Here is a snippet of the model structure:

from keras.layers import (Input, Activation, Dense, Dropout, Embedding,
                          LSTM, Permute, add, concatenate, dot)
from keras.models import Model, Sequential

# placeholders
input_sequence = Input((story_maxlen,))
question = Input((query_maxlen,))

# encoders
# embed the input sequence into a sequence of vectors
input_encoder_m = Sequential()
input_encoder_m.add(Embedding(input_dim=vocab_size,
                              output_dim=64))
input_encoder_m.add(Dropout(0.3))
# output: (samples, story_maxlen, embedding_dim)

# embed the input into a sequence of vectors of size query_maxlen
input_encoder_c = Sequential()
input_encoder_c.add(Embedding(input_dim=vocab_size,
                              output_dim=query_maxlen))
input_encoder_c.add(Dropout(0.3))
# output: (samples, story_maxlen, query_maxlen)

# embed the question into a sequence of vectors
question_encoder = Sequential()
question_encoder.add(Embedding(input_dim=vocab_size,
                               output_dim=64,
                               input_length=query_maxlen))
question_encoder.add(Dropout(0.3))
# output: (samples, query_maxlen, embedding_dim)

# encode input sequence and questions (which are indices)
# to sequences of dense vectors
input_encoded_m = input_encoder_m(input_sequence)
input_encoded_c = input_encoder_c(input_sequence)
question_encoded = question_encoder(question)

# compute a 'match' between the first input vector sequence
# and the question vector sequence
# shape: `(samples, story_maxlen, query_maxlen)`
match = dot([input_encoded_m, question_encoded], axes=(2, 2))
match = Activation('softmax')(match)

# add the match matrix with the second input vector sequence
response = add([match, input_encoded_c])  # (samples, story_maxlen, query_maxlen)
response = Permute((2, 1))(response)  # (samples, query_maxlen, story_maxlen)

# concatenate the match matrix with the question vector sequence
answer = concatenate([response, question_encoded])

# the original paper uses a matrix multiplication for this reduction step.
# we choose to use a RNN instead.
answer = LSTM(32)(answer)  # (samples, 32)

# one regularization layer -- more would probably be needed.
answer = Dropout(0.3)(answer)
answer = Dense(vocab_size)(answer)  # (samples, vocab_size)
# we output a probability distribution over the vocabulary
answer = Activation('softmax')(answer)
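To make the tensor shapes in the comments above concrete, here is a small NumPy sketch (shapes only, with random stand-ins for the encoder outputs, not the Keras layers themselves) of what `dot([...], axes=(2, 2))` computes in the match step:

```python
import numpy as np

samples, story_maxlen, query_maxlen, embed_dim = 2, 552, 5, 64

# stand-ins for the encoder outputs; only the shapes matter here
input_encoded_m = np.random.rand(samples, story_maxlen, embed_dim)
question_encoded = np.random.rand(samples, query_maxlen, embed_dim)

# dot(..., axes=(2, 2)) contracts the embedding axis of both tensors,
# leaving one match score per (story position, query position) pair
match = np.einsum('bse,bqe->bsq', input_encoded_m, question_encoded)

print(match.shape)  # (2, 552, 5) -> (samples, story_maxlen, query_maxlen)
```

These are the same `(None, 552, 5)` shapes that show up for `dot_1` and `activation_1` in the `model.summary()` output below.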

Here is how it's compiled and trained:

model = Model([input_sequence, question], answer)
model.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit([inputs_train, queries_train], answers_train,
          batch_size=32,
          epochs=num_epochs,
          validation_data=([inputs_test, queries_test], answers_test))

In the example above, the answers_train variable is a 1×n matrix in which each item is the answer to one question. So, for example, the first three answers:

print(answers_train[:3])

Output:

[16 16 19]

My Question

Here is the change I made to the answers_train variable, where:

print(answers_train[:3])

Output:

[[ 0  0 16]
 [ 0  0 27]
 [ 0  0 16]]

Basically, I'm trying to predict up to three words instead of one.

When I do this and try to train the model, I get this error:

ValueError: Error when checking target: expected activation_29 to have shape (1,) but got array with shape (3,)
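For context on why the error names shape (1,): with `sparse_categorical_crossentropy`, Keras expects one integer class index per sample as the target for a `(vocab_size,)` softmax output. A minimal sketch of that loss for a single sample (plain NumPy, with a hypothetical uniform distribution standing in for the model's softmax):

```python
import numpy as np

vocab_size = 36  # matches dense_1/activation_2 in the summary below

# a hypothetical softmax output: one probability distribution over the vocabulary
probs = np.full(vocab_size, 1.0 / vocab_size)

# sparse categorical crossentropy consumes ONE integer index per sample...
label = 16
loss = -np.log(probs[label])
print(round(loss, 4))  # 3.5835, i.e. log(36)

# ...so a target row like [0, 0, 16] (shape (3,)) cannot be matched
# against a single (vocab_size,) distribution, hence the ValueError
```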

Here is the output of model.summary():

__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            (None, 552)          0                                            
__________________________________________________________________________________________________
input_2 (InputLayer)            (None, 5)            0                                            
__________________________________________________________________________________________________
sequential_1 (Sequential)       multiple             2304        input_1[0][0]                    
__________________________________________________________________________________________________
sequential_3 (Sequential)       (None, 5, 64)        2304        input_2[0][0]                    
__________________________________________________________________________________________________
dot_1 (Dot)                     (None, 552, 5)       0           sequential_1[1][0]               
                                                                 sequential_3[1][0]               
__________________________________________________________________________________________________
activation_1 (Activation)       (None, 552, 5)       0           dot_1[0][0]                      
__________________________________________________________________________________________________
sequential_2 (Sequential)       multiple             180         input_1[0][0]                    
__________________________________________________________________________________________________
add_1 (Add)                     (None, 552, 5)       0           activation_1[0][0]               
                                                                 sequential_2[1][0]               
__________________________________________________________________________________________________
permute_1 (Permute)             (None, 5, 552)       0           add_1[0][0]                      
__________________________________________________________________________________________________
concatenate_1 (Concatenate)     (None, 5, 616)       0           permute_1[0][0]                  
                                                                 sequential_3[1][0]               
__________________________________________________________________________________________________
lstm_1 (LSTM)                   (None, 32)           83072       concatenate_1[0][0]              
__________________________________________________________________________________________________
dropout_4 (Dropout)             (None, 32)           0           lstm_1[0][0]                     
__________________________________________________________________________________________________
dense_1 (Dense)                 (None, 36)           1188        dropout_4[0][0]                  
__________________________________________________________________________________________________
activation_2 (Activation)       (None, 36)           0           dense_1[0][0]                    
==================================================================================================
Total params: 89,048
Trainable params: 89,048
Non-trainable params: 0
__________________________________________________________________________________________________

My understanding is that the model was built to determine a single-word answer (i.e. shape (1,)), and that I need to modify the model because I now want it to determine a multi-word answer (in this case, shape (3,)). What I don't understand is how to change the model structure to accomplish this.

I don't see anywhere in the model summary that indicates where the shape (1,) is defined. All I see are the definitions of the maximum story size in words (552), the maximum query/question size in words (5), and the vocabulary size in words (36).

Can anyone help me figure out what I'm doing wrong?


Update #1

As I've continued to dig into this problem, I've learned a few more things. I may be wrong on all of these points, since I'm not familiar with the details of ML and NNs, so please feel free to call out anything that's off.

I guess, in summary: can the model be adjusted, or do I need to use an entirely different model? I would also appreciate it if you could clear up my confusion regarding the two points above.

Tags: python, tensorflow, keras, lstm

Solution


A simple solution (no promises about performance) would be to just add extra 'answer' layers, each with its own weights, and compile the model to output all three.

answer = Dropout(0.3)(answer)

answer_1 = Dense(vocab_size, activation='softmax')(answer)
answer_2 = Dense(vocab_size, activation='softmax')(answer)
answer_3 = Dense(vocab_size, activation='softmax')(answer)

model = Model([input_sequence, question], [answer_1, answer_2, answer_3])

Then pass your labels as a list of three (samples, 1) arrays; simply pass

first, second, third = answers_train.T

as your labels. This may not be good enough for your application; you may want to look into other sequence-to-sequence models.
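Concretely, transposing the (samples, 3) label matrix from the question yields one (samples,) index array per output head (NumPy sketch using the three example rows from the question):

```python
import numpy as np

# the three example label rows from the question: shape (samples, 3)
answers_train = np.array([[0, 0, 16],
                          [0, 0, 27],
                          [0, 0, 16]])

# .T has shape (3, samples); unpacking gives one (samples,) array per head
first, second, third = answers_train.T

print(first)  # [0 0 0]
print(third)  # [16 27 16]
```

When `model.compile` is given a single loss string for a multi-output model, Keras applies that loss to every output, so `loss='sparse_categorical_crossentropy'` can stay as-is; you would then call something like `model.fit([inputs_train, queries_train], [first, second, third], ...)`.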

