Keras: sample weights for multiple imbalanced outputs

Problem description

I have the following model with 28 outputs. Each output has 3 highly imbalanced classes, so I pass a dictionary of sample weights to the sample_weight argument of fit, with one array of sample weights per output. y_train and y_valid are lists of 28 one-hot encoded arrays.

Whether or not I use sample_weight, the model always predicts the most frequent class for all 28 labels, so I assume I am using sample_weight incorrectly. I have also tried lowering and raising the learning rate, but that had no effect... So something must be wrong with my weights, but what is it?
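For concreteness, the layout described above is roughly the following (a minimal sketch; the sample count and class probabilities are hypothetical, only the shapes matter):

    import numpy as np

    n_samples = 1000  # hypothetical
    # y_train: list of 28 one-hot encoded arrays, each of shape (n_samples, 3)
    y_train = [np.eye(3)[np.random.choice(3, n_samples, p=[0.9, 0.05, 0.05])]
               for _ in range(28)]
    # sample weights: one array of shape (n_samples,) per output
    train_weight = {i: np.ones(n_samples) for i in range(28)}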

#create sample weights
from sklearn.utils import compute_sample_weight

train_weight = {}
for i in range(0,28):
    train_weight[i] = compute_sample_weight('balanced', y_train[i])

valid_weight = {}
for i in range(0,28):
    valid_weight[i] = compute_sample_weight('balanced', y_valid[i])  

#Model:

from keras.models import Model, Sequential
from keras.layers import (Input, Dense, Flatten, Reshape, Conv2D, MaxPooling2D,
                          LSTM, TimeDistributed, LeakyReLU, concatenate)
from keras.callbacks import EarlyStopping, ModelCheckpoint
from keras import optimizers

Earlystop = EarlyStopping(monitor='val_loss', patience=20, mode='min', verbose=1, min_delta=0.02, restore_best_weights=True)
checkpoint = ModelCheckpoint('nn_statarb', monitor='val_loss', verbose=1, save_best_only=True, mode='min', save_weights_only=False)
optimizer = optimizers.Adam(lr=0.00001, beta_1=0.9, beta_2=0.999, epsilon=1e-8, decay=5e-8, amsgrad=False)

input_ = Input(batch_shape=(batch_size, 1, 1, X_train.shape[3], X_train.shape[4])) 

cnn = Sequential()
cnn.add(Conv2D(16, kernel_size=(5, X_train.shape[4]), padding='same',
               data_format="channels_first",
               input_shape=(1, X_train.shape[3], X_train.shape[4])))
cnn.add(LeakyReLU(alpha=0.01))
cnn.add(Conv2D(16, (1, 1)))
cnn.add(LeakyReLU(alpha=0.01))
cnn.add(Conv2D(16, (10, 2), padding='same'))
cnn.add(Flatten())
rnn = Sequential()
rnn.add(LSTM(X_train.shape[4], return_sequences=False, stateful=True, batch_size=batch_size))
rnn.add(Reshape((1, -1, X_train.shape[4])))

pricemodel = TimeDistributed(cnn)(input_) 
pricemodel = rnn(pricemodel) 
pricemodel = Model(inputs=input_, outputs=pricemodel)
out_ = pricemodel(input_)

tower_1 = Conv2D(64, (1,1), padding='same', activation='relu')(out_)
tower_1 = Conv2D(64, (3,3), padding='same', activation='relu')(tower_1)
tower_2 = Conv2D(64, (1,1), padding='same', activation='relu')(out_)
tower_2 = Conv2D(64, (5,5), padding='same', activation='relu')(tower_2)
tower_3 = MaxPooling2D((3,3), strides=(1,1), padding='same')(out_)
tower_3 = Conv2D(64, (1,1), padding='same', activation='relu')(tower_3)

output = concatenate([tower_1, tower_2, tower_3], axis=1)
output = Flatten()(output)

# 28 output heads, one per label, each a 3-class softmax over the shared features
heads = [Dense(3, activation='softmax')(output) for _ in range(28)]

model = Model(inputs=input_, outputs=heads)
model.compile(loss='categorical_crossentropy', optimizer=optimizer,  metrics=['accuracy'])
history = model.fit(X_train, y_train, epochs=200, batch_size=batch_size, verbose=2, shuffle=False, validation_data=[X_valid, y_valid, valid_weight], callbacks=[Earlystop, checkpoint], sample_weight=train_weight)

Tags: python, keras, neural-network, multilabel-classification

Solution

I think the issue is how you are using compute_sample_weight (https://scikit-learn.org/stable/modules/generated/sklearn.utils.class_weight.compute_sample_weight.html): you seem to be calling it per sample. Instead, you want to call it once on your entire set of labels, for example:

    weights = compute_sample_weight(class_weight="balanced", y=y_train)

Otherwise, if you call compute_sample_weight on each sample separately, how would it know anything about the class distribution?
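As an illustration, here is a minimal sketch (with hypothetical toy labels) of what a single "balanced" call over a full label vector produces; with class_weight="balanced", each sample gets a weight of roughly n_samples / (n_classes * count_of_its_class), so samples from rare classes are weighted up:

    import numpy as np
    from sklearn.utils import compute_sample_weight

    # hypothetical, heavily imbalanced labels for one output
    y = np.array([0, 0, 0, 0, 0, 0, 1, 1, 2])

    # one call over the whole label vector, so the class distribution is known
    weights = compute_sample_weight(class_weight="balanced", y=y)

    # n_samples / (n_classes * class_count):
    # class 0 -> 9 / (3 * 6) = 0.5, class 1 -> 1.5, class 2 -> 3.0
    print(weights)  # [0.5 0.5 0.5 0.5 0.5 0.5 1.5 1.5 3. ]

The resulting per-sample weight array is what then gets passed to Keras's fit via its sample_weight argument.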

