Why isn't this Keras network "learning"?

Problem description

I'm trying to build a convolutional neural network to classify cats and dogs (a very basic problem, since I'm learning). One approach I'm trying is to use 2 output neurons, one per class (instead of a single neuron with, for example, 0 --> cat and 1 --> dog), but for some reason the network isn't learning. Can anyone help?

Here is the model:

from keras.models import Sequential
from keras.layers import Input, Dropout, Flatten, Conv2D, MaxPooling2D, Dense, Activation
from keras.optimizers import RMSprop,Adam
from keras.callbacks import ModelCheckpoint, Callback, EarlyStopping
from keras.utils import np_utils

optimizer = Adam(lr=1e-4)
objective = 'categorical_crossentropy'


def classifier():
    
    model = Sequential()
    
    model.add(Conv2D(64, 3, padding='same',input_shape=train.shape[1:],activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2), data_format="channels_first"))

    model.add(Conv2D(256, 3, padding='same',activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2), data_format="channels_first"))
    
    model.add(Conv2D(256, 3, padding='same',activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2), data_format="channels_first"))
    
    model.add(Conv2D(256, 3, padding='same',activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2), data_format="channels_first"))
    

    model.add(Flatten())
    model.add(Dense(256, activation='relu'))
    model.add(Dropout(0.5))
    
    model.add(Dense(256, activation='relu'))
    model.add(Dropout(0.5))
    
    model.add(Dense(256, activation='relu'))
    model.add(Dropout(0.5))

    model.add(Dense(2))
    model.add(Activation('softmax'))
    
    print("Compiling model...")
    model.compile(loss=objective, optimizer=optimizer, metrics=['accuracy'])
    return model

print("Creating model:")
model = classifier()

Here is the main loop:

from keras.models import Sequential
from keras.layers import Input, Dropout, Flatten, Conv2D, MaxPooling2D, Dense, Activation
from keras.optimizers import RMSprop
from keras.callbacks import ModelCheckpoint, Callback, EarlyStopping
from keras.utils import np_utils

epochs = 5000
batch_size = 16

class LossHistory(Callback):
    def on_train_begin(self, logs={}):
        self.losses = []
        self.val_losses = []
        
    def on_epoch_end(self, epoch, logs={}):
        self.losses.append(logs.get('loss'))
        self.val_losses.append(logs.get('val_loss'))

early_stopping = EarlyStopping(monitor='val_loss', patience=4, verbose=1, mode='min')        
       

def run():
    
    history = LossHistory()
    print("running model...")
    model.fit(train, labels, batch_size=batch_size, epochs=epochs,
              validation_split=0.10, verbose=2, shuffle=True, callbacks=[history, early_stopping])
    
    print("making predictions on test set...")
    predictions = model.predict(test, verbose=0)
    return predictions, history

predictions, history = run()

loss = history.losses
val_loss = history.val_losses
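
With the per-epoch losses collected, plotting the training and validation curves is a quick way to confirm whether the network is learning at all. A minimal sketch, assuming matplotlib is installed:

import matplotlib.pyplot as plt

# Plot training vs. validation loss per epoch
plt.plot(loss, label='train loss')
plt.plot(val_loss, label='val loss')
plt.xlabel('epoch')
plt.ylabel('categorical cross-entropy')
plt.legend()
plt.show()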

Here is a sample of the input labels:

array([[1, 0],
       [0, 1],
       [1, 0],
       ..., 
       [0, 1],
       [0, 1],
       [0, 1]])

PS: Don't worry about the input format, because the same input works fine with a binary classifier.
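
For reference, labels in this one-hot layout are usually produced with to_categorical from keras.utils; the snippet below is only a sketch with illustrative variable names, not the exact preprocessing used here.

from keras.utils import np_utils
import numpy as np

# Integer class indices: 0 --> cat, 1 --> dog (illustrative data)
int_labels = np.array([0, 1, 0, 1, 1, 1])

# One-hot encode: 0 -> [1, 0], 1 -> [0, 1], matching the array shown above
labels = np_utils.to_categorical(int_labels, num_classes=2)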

Tags: python, python-3.x, tensorflow, keras, deep-learning

Solution


The rate argument of your Dropout layers is too large. Dropout is used as a regularization technique in deep neural networks to combat overfitting. The rate parameter specifies what fraction of the previous layer's activations is dropped during training: a rate of 0.5 means 50% of the previous layer's activations are discarded. Although such a large rate can sometimes work, it can also hold back the network's learning, so choose the rate argument of your Dropout layers carefully.
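
As a concrete illustration, the dense head of the model above could use a smaller rate, e.g. 0.2 instead of 0.5 (a sketch only; the right value depends on the data and typically needs tuning):

# Same dense head as in the question, but with 20% dropout instead of 50%
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.2))

model.add(Dense(256, activation='relu'))
model.add(Dropout(0.2))

model.add(Dense(256, activation='relu'))
model.add(Dropout(0.2))

model.add(Dense(2))
model.add(Activation('softmax'))

If the network then starts to learn, the rate can be raised gradually until the validation loss stops improving.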

