Keras loss value not changing

Problem description

I am trying to apply a deep-learning network to a loan-status dataset to see whether I can get better results than with traditional machine-learning algorithms.

The accuracy seems very low (even lower than plain logistic regression). How can I improve it?

Things I have tried:

- changing the learning rate
- adding more layers
- increasing/decreasing the number of nodes

from keras.models import Sequential
from keras.layers import Dense
from keras import optimizers

X = df_dummies.drop('Loan_Status', axis=1).values
y = df_dummies['Loan_Status'].values

model = Sequential()

model.add(Dense(50, input_dim = 17, activation = 'relu'))
model.add(Dense(100, activation = 'relu'))
model.add(Dense(100, activation = 'relu'))
model.add(Dense(100, activation = 'relu'))
model.add(Dense(100, activation = 'relu'))
model.add(Dense(1, activation = 'sigmoid'))

sgd = optimizers.SGD(lr = 0.00001)

model.compile(optimizer=sgd, loss='binary_crossentropy', metrics=['accuracy'])

model.fit(X, y, epochs = 50, shuffle=True, verbose=2)
model.summary()

Epoch 1/50 - 1s - loss: 4.9835 - acc: 0.6873
Epoch 2/50 - 0s - loss: 4.9830 - acc: 0.6873
Epoch 3/50 - 0s - loss: 4.9821 - acc: 0.6873
Epoch 4/50 - 0s - loss: 4.9815 - acc: 0.6873
Epoch 5/50 - 0s - loss: 4.9807 - acc: 0.6873
Epoch 6/50 - 0s - loss: 4.9800 - acc: 0.6873
Epoch 7/50 - 0s - loss: 4.9713 - acc: 0.6873
Epoch 8/50 - 0s - loss: 8.5354 - acc: 0.4397
Epoch 9/50 - 0s - loss: 4.8322 - acc: 0.6743
Epoch 10/50 - 0s - loss: 4.9852 - acc: 0.6873
...(epochs 11-49 identical: loss: 4.9852 - acc: 0.6873)...
Epoch 50/50 - 0s - loss: 4.9852 - acc: 0.6873

Layer (type)                 Output Shape              Param #   
=================================================================
dense_19 (Dense)             (None, 50)                900       
_________________________________________________________________
dense_20 (Dense)             (None, 100)               5100      
_________________________________________________________________
dense_21 (Dense)             (None, 100)               10100     
_________________________________________________________________
dense_22 (Dense)             (None, 100)               10100     
_________________________________________________________________
dense_23 (Dense)             (None, 100)               10100     
_________________________________________________________________
dense_24 (Dense)             (None, 1)                 101       
=================================================================
Total params: 36,401
Trainable params: 36,401
Non-trainable params: 0
_________________________________________________________________
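One likely culprit (an assumption on my part, since the post does not say whether the features were scaled): one-hot dummy columns mixed with raw numeric columns such as loan amount give the inputs wildly different scales, and with SGD at lr = 0.00001 the weights attached to the large-scale columns barely move. A minimal standardization sketch in plain NumPy (the column layout shown is hypothetical):

```python
import numpy as np

def standardize(X, eps=1e-8):
    """Scale each column to zero mean and unit variance.

    Mixing 0/1 dummy columns with raw numeric columns (e.g. a loan
    amount in the tens of thousands) leaves gradients dominated by
    the large-scale features; standardizing puts them on one scale.
    """
    mean = X.mean(axis=0)
    std = X.std(axis=0)
    return (X - mean) / (std + eps)

# Toy example: one large-scale column next to one dummy column.
X = np.array([[5000.0, 1.0],
              [120000.0, 0.0],
              [45000.0, 1.0]])
Xs = standardize(X)
```

In the original code this would be applied to `X` before `model.fit` (e.g. via `sklearn.preprocessing.StandardScaler`), ideally together with a larger learning rate or an adaptive optimizer such as Adam.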

Tags: python, keras

Solution


By making the network deeper and adding dropout I got a slight improvement, but I still think this can be improved further, since plain logistic regression gives better accuracy (80%+).

Does anyone know how to improve it further?
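A diagnostic worth noting (my own reading of the numbers, not stated in the original post): a flat loss near 4.99 alongside accuracy 0.6873 is roughly what Keras reports when the sigmoid output saturates to hard 0/1 predictions. `binary_crossentropy` clips predicted probabilities to about 1e-7, so each fully-wrong saturated prediction costs about -log(1e-7) ≈ 16.12, while a confident correct prediction costs almost nothing:

```python
import math

EPSILON = 1e-7                             # Keras clips predictions to [eps, 1 - eps]
max_per_sample_loss = -math.log(EPSILON)   # ~16.12: cost of a saturated wrong answer
acc = 0.6873                               # reported training accuracy
# If every prediction is a saturated 0 or 1, the mean loss is approximately
# the misclassified fraction times the clipped maximum per-sample loss:
approx_loss = (1 - acc) * max_per_sample_loss
```

`approx_loss` comes out near 5.0, close to the reported 4.9852, which suggests the network is emitting saturated, near-constant predictions rather than learning — consistent with the unscaled-input / too-small-learning-rate hypothesis above.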

from keras.layers import Dropout

model = Sequential()

model.add(Dense(1000, input_dim = 17, activation = 'relu'))
model.add(Dropout(0.2))
model.add(Dense(1000, activation = 'relu'))
model.add(Dropout(0.2))
model.add(Dense(1000, activation = 'relu'))
model.add(Dropout(0.2))
model.add(Dense(1000, activation = 'relu'))
model.add(Dropout(0.2))
model.add(Dense(1000, activation = 'relu'))
model.add(Dropout(0.2))
model.add(Dense(1000, activation = 'relu'))
model.add(Dense(1, activation = 'sigmoid'))

sgd = optimizers.SGD(lr = 0.0001)

model.compile(optimizer=sgd, loss='binary_crossentropy', metrics=['accuracy'])



model.fit(X_train, y_train, epochs = 20, shuffle=True, verbose=2, batch_size=30)



Epoch 1/20
 - 2s - loss: 4.8965 - acc: 0.6807
Epoch 2/20
 - 1s - loss: 4.6824 - acc: 0.7063
Epoch 3/20
 - 1s - loss: 4.6091 - acc: 0.7040
Epoch 4/20
 - 1s - loss: 4.5642 - acc: 0.7040
Epoch 5/20
 - 1s - loss: 4.6937 - acc: 0.7040
Epoch 6/20
 - 1s - loss: 4.6830 - acc: 0.7063
Epoch 7/20
 - 1s - loss: 4.6824 - acc: 0.7063
Epoch 8/20
 - 1s - loss: 4.6824 - acc: 0.7063
Epoch 9/20
 - 1s - loss: 4.6824 - acc: 0.7063
Epoch 10/20
 - 1s - loss: 4.6452 - acc: 0.7086
Epoch 11/20
 - 1s - loss: 4.6824 - acc: 0.7063
Epoch 12/20
 - 1s - loss: 4.6824 - acc: 0.7063
Epoch 13/20
 - 1s - loss: 4.7200 - acc: 0.7040
Epoch 14/20
 - 1s - loss: 4.6608 - acc: 0.7063
Epoch 15/20
 - 1s - loss: 4.6940 - acc: 0.7040
Epoch 16/20
 - 1s - loss: 4.7136 - acc: 0.7040
Epoch 17/20
 - 1s - loss: 4.6056 - acc: 0.7063
Epoch 18/20
 - 1s - loss: 4.5640 - acc: 0.7016
Epoch 19/20
 - 1s - loss: 4.7009 - acc: 0.7040
Epoch 20/20
 - 1s - loss: 4.6892 - acc: 0.7040
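The second run still plateaus around 0.70, which looks like the majority-class rate of the dataset (the exact class balance is my inference from the reported accuracies, not given in the post). A quick sanity check is to compare training accuracy against a classifier that always predicts the most frequent class:

```python
import numpy as np

def majority_baseline_accuracy(y):
    """Accuracy of always predicting the most common class label."""
    y = np.asarray(y).astype(int)
    counts = np.bincount(y)
    return counts.max() / len(y)

# Hypothetical labels with roughly the balance the logs imply (~69% positives).
y = np.array([1] * 69 + [0] * 31)
baseline = majority_baseline_accuracy(y)
```

If the network's training accuracy never exceeds this baseline, it is effectively predicting a single class, and scaling the inputs or switching to an adaptive optimizer such as Adam is usually a better lever than adding more layers or dropout.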
