Keras: why is binary classification less accurate than categorical classification?

Problem description

I am trying to create a model that tells whether an image contains a bird.

I first trained the model with categorical classification to recognize birds versus flowers, and it became very accurate at distinguishing those two classes.

However, when I changed it to binary classification to detect whether a bird is present in the image, accuracy dropped sharply.

The reason I switched to binary classification is that when I feed a dog to my categorically trained model, it identifies the dog as a bird.

By the way, this is my dataset structure:

Training: 5000 bird images and 2000 non-bird images

Validation: 1000 bird images and 500 non-bird images
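For reference, `flow_from_directory` derives one class per immediate subfolder of the train/validation directories, so it can be worth printing the per-class counts to confirm the folder layout matches the numbers above. A small sketch; the 'train'/'validation' paths are placeholders, not taken from the question:

import os

# Hypothetical paths; point these at the actual dataset folders.
for split in ('train', 'validation'):
    for cls in sorted(os.listdir(split)):
        cls_dir = os.path.join(split, cls)
        if os.path.isdir(cls_dir):
            # Each immediate subfolder becomes one class for flow_from_directory.
            print(split, cls, len(os.listdir(cls_dir)))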


Someone said that an imbalanced dataset can also cause problems. Is that true?
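For reference, the documented form of Keras's class_weight argument is a per-class dict rather than the string 'auto'. A minimal sketch that builds such a dict from the counts above, weighting each class inversely to its frequency (the same formula scikit-learn uses for "balanced" weights); the index-to-class mapping is an assumption, check train_generator.class_indices for the real one:

# Counts from the question: 5000 bird vs 2000 non-bird training images.
counts = {0: 5000, 1: 2000}
total = sum(counts.values())
class_weight = {idx: total / (len(counts) * n) for idx, n in counts.items()}
print(class_weight)   # {0: 0.7, 1: 1.75}

# Pass the dict to training instead of the string 'auto':
# model.fit_generator(..., class_weight=class_weight, ...)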

Can someone point out where I am going wrong in the following code?

import os

from keras.applications.inception_v3 import InceptionV3, preprocess_input
from keras.callbacks import EarlyStopping
from keras.layers import BatchNormalization, Dense, GlobalAveragePooling2D
from keras.models import Model, Sequential
from keras.optimizers import Adam
from keras.preprocessing.image import ImageDataGenerator

def get_num_files(path):
    # Count every file under path (recursively).
    if not os.path.exists(path):
        return 0
    return sum([len(files) for r, d, files in os.walk(path)])

def get_num_subfolders(path):
    # Count the subfolders under path, i.e. the number of classes.
    if not os.path.exists(path):
        return 0
    return sum([len(d) for r, d, files in os.walk(path)])

def create_img_generator():
    return ImageDataGenerator(
        preprocessing_function=preprocess_input,
        rotation_range=30,
        width_shift_range=0.2,
        height_shift_range=0.2,
        shear_range=0.2,
        zoom_range=0.2,
        horizontal_flip=True
    )

INIT_LT = 1e-3                          # initial learning rate
Image_width, Image_height = 299, 299    # InceptionV3's native input size
Training_Epochs = 30
Batch_Size = 32
Number_FC_Neurons = 1024
Num_Classes = 2

train_dir = 'to my train folder'
validate_dir = 'to my validation folder'


num_train_samples = get_num_files(train_dir)
num_classes = get_num_subfolders(train_dir)
num_validate_samples = get_num_files(validate_dir)

num_epoch = Training_Epochs
batch_size = Batch_Size

train_image_gen = create_img_generator()
test_image_gen = create_img_generator()   # validation reuses the same augmenting generator

# class_mode is left at its default ('categorical'), so both generators
# yield one-hot encoded labels.
train_generator = train_image_gen.flow_from_directory(
    train_dir,
    target_size=(Image_width, Image_height),
    batch_size=batch_size,
    seed=42
)

validation_generator = test_image_gen.flow_from_directory(
    validate_dir,
    target_size=(Image_width, Image_height),
    batch_size=batch_size,
    seed=42
)

Inceptionv3_model = InceptionV3(weights='imagenet', include_top=False)
print('Inception v3 model without last FC loaded')

# New classification head on top of the InceptionV3 base:
# global average pooling -> 1024-unit ReLU layer -> softmax over the classes.
x = Inceptionv3_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(Number_FC_Neurons, activation='relu')(x)
predictions = Dense(num_classes, activation='softmax')(x)

# model = Model(inputs=Inceptionv3_model.input, outputs=predictions)
v3model = Model(inputs=Inceptionv3_model.input, outputs=predictions)
# Use a new Sequential model to wrap v3model and add a batch-normalization layer after it
model = Sequential()
model.add(v3model)
model.add(BatchNormalization()) # added normalization
print(model.summary())

print('\nFine tuning existing model')

Layers_To_Freeze = 172
# Note: model here is the 2-layer Sequential wrapper (v3model + BatchNormalization),
# so this slice freezes both of those layers; it does not index into the
# individual InceptionV3 layers that 172 suggests.
for layer in model.layers[:Layers_To_Freeze]:
    layer.trainable = False
for layer in model.layers[Layers_To_Freeze:]:
    layer.trainable = True

optimizer = Adam(lr=INIT_LT, decay=INIT_LT / Training_Epochs)
# optimizer = SGD(lr=0.0001, momentum=0.9)
model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['accuracy'])

# Stops as soon as val_acc fails to improve (default patience=0).
cbk_early_stopping = EarlyStopping(monitor='val_acc', mode='max')

history_transfer_learning = model.fit_generator(
    train_generator,
    steps_per_epoch = num_train_samples,
    epochs=num_epoch,
    validation_data=validation_generator,
    validation_steps = num_validate_samples,
    class_weight='auto',
    callbacks=[cbk_early_stopping]
)

model.save('incepv3_transfer_mini_binary.h5', overwrite=True, include_optimizer=True)
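One detail separate from the accuracy question: fit_generator treats steps_per_epoch and validation_steps as numbers of batches, not samples, so passing the raw sample counts makes each "epoch" draw roughly batch_size times more data than intended. A typical computation looks like this (a sketch, not part of the original code):

import math

# Batches per epoch, so that one epoch sees each sample about once.
steps_per_epoch = math.ceil(num_train_samples / batch_size)
validation_steps = math.ceil(num_validate_samples / batch_size)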

Tags: tensorflow, machine-learning, keras, deep-learning

Solution


Categorical

  • Use Num_Classes = 2
  • Use one-hot-encoded targets (e.g. Bird = [1, 0], Flower = [0, 1])
  • Use a 'softmax' activation
  • Use 'categorical_crossentropy' as the loss

Binary

  • Use Num_Classes = 1
  • Use binary targets (e.g. is flower = 1 | not flower = 0)
  • Use a 'sigmoid' activation
  • Use 'binary_crossentropy' as the loss

More details here: Using categorical_crossentropy for only two classes
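To make the two recipes concrete, here is a minimal sketch of both output heads on the same InceptionV3 base (a sketch following the points above, not the poster's exact model; the generator change for the binary case is noted in the comments):

from keras.applications.inception_v3 import InceptionV3
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model

base = InceptionV3(weights='imagenet', include_top=False)
x = GlobalAveragePooling2D()(base.output)
x = Dense(1024, activation='relu')(x)

# Categorical: 2-unit softmax head with one-hot targets.
# flow_from_directory's default class_mode='categorical' already yields them.
cat_model = Model(inputs=base.input, outputs=Dense(2, activation='softmax')(x))
cat_model.compile(optimizer='adam', loss='categorical_crossentropy',
                  metrics=['accuracy'])

# Binary: single sigmoid unit with 0/1 targets.
# Build the generators with class_mode='binary' so labels come out as 0/1.
bin_model = Model(inputs=base.input, outputs=Dense(1, activation='sigmoid')(x))
bin_model.compile(optimizer='adam', loss='binary_crossentropy',
                  metrics=['accuracy'])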

