Why does model.fit() not show the same values when printing the history?

Problem description

So I compile and train my model with this code:

# EarlyStopping needs to be imported; `adm` (the optimizer) and `epoch` are defined earlier
from tensorflow.keras.callbacks import EarlyStopping

model.compile(loss="categorical_crossentropy", optimizer=adm, metrics=["accuracy"])
es = EarlyStopping(monitor="val_loss", mode="min", patience=5,
                   restore_best_weights=True, verbose=1)
history = model.fit(train_images, train_labels, batch_size=128, epochs=epoch,
                    validation_data=(val_images, val_labels), callbacks=[es])
model.save("model.h5")

which produces this output:

Epoch 1/100
63/63 [==============================] - 18s 184ms/step - loss: 2.1535 - accuracy: 0.2494 - val_loss: 1.4032 - val_accuracy: 0.2500
Epoch 2/100
63/63 [==============================] - 8s 127ms/step - loss: 1.4662 - accuracy: 0.2563 - val_loss: 1.3439 - val_accuracy: 0.3350
Epoch 3/100
63/63 [==============================] - 8s 127ms/step - loss: 1.2183 - accuracy: 0.4206 - val_loss: 0.6709 - val_accuracy: 0.7115
Epoch 4/100
63/63 [==============================] - 8s 128ms/step - loss: 0.6172 - accuracy: 0.7542 - val_loss: 0.4356 - val_accuracy: 0.8320
Epoch 5/100
63/63 [==============================] - 8s 128ms/step - loss: 0.3808 - accuracy: 0.8631 - val_loss: 0.2665 - val_accuracy: 0.9095
Epoch 6/100
63/63 [==============================] - 8s 128ms/step - loss: 0.2290 - accuracy: 0.9242 - val_loss: 0.2111 - val_accuracy: 0.9290
Epoch 7/100
63/63 [==============================] - 8s 128ms/step - loss: 0.1597 - accuracy: 0.9472 - val_loss: 0.1780 - val_accuracy: 0.9400
Epoch 8/100
63/63 [==============================] - 8s 128ms/step - loss: 0.1057 - accuracy: 0.9660 - val_loss: 0.1328 - val_accuracy: 0.9575
Epoch 9/100
63/63 [==============================] - 8s 128ms/step - loss: 0.0816 - accuracy: 0.9734 - val_loss: 0.1259 - val_accuracy: 0.9620
Epoch 10/100
63/63 [==============================] - 8s 128ms/step - loss: 0.0551 - accuracy: 0.9809 - val_loss: 0.0991 - val_accuracy: 0.9695
Epoch 11/100
63/63 [==============================] - 8s 128ms/step - loss: 0.0302 - accuracy: 0.9914 - val_loss: 0.0835 - val_accuracy: 0.9725
Epoch 12/100
63/63 [==============================] - 8s 128ms/step - loss: 0.0414 - accuracy: 0.9868 - val_loss: 0.1061 - val_accuracy: 0.9690
Epoch 13/100
63/63 [==============================] - 8s 128ms/step - loss: 0.0349 - accuracy: 0.9882 - val_loss: 0.1135 - val_accuracy: 0.9670
Epoch 14/100
63/63 [==============================] - 8s 128ms/step - loss: 0.0221 - accuracy: 0.9927 - val_loss: 0.0792 - val_accuracy: 0.9755
Epoch 15/100
63/63 [==============================] - 8s 129ms/step - loss: 0.0125 - accuracy: 0.9966 - val_loss: 0.1230 - val_accuracy: 0.9670
Epoch 16/100
63/63 [==============================] - 8s 128ms/step - loss: 0.0188 - accuracy: 0.9937 - val_loss: 0.1206 - val_accuracy: 0.9700
Epoch 17/100
63/63 [==============================] - 8s 128ms/step - loss: 0.0105 - accuracy: 0.9975 - val_loss: 0.1434 - val_accuracy: 0.9710
Epoch 18/100
63/63 [==============================] - 8s 128ms/step - loss: 0.0126 - accuracy: 0.9969 - val_loss: 0.1210 - val_accuracy: 0.9740
Epoch 19/100
63/63 [==============================] - 8s 128ms/step - loss: 0.0107 - accuracy: 0.9971 - val_loss: 0.0933 - val_accuracy: 0.9765
Restoring model weights from the end of the best epoch.
Epoch 00019: early stopping
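The restore message above means the saved weights come from the epoch with the lowest val_loss, not the last one. With monitor="val_loss" and mode="min", that epoch can be recovered from the logged values; a minimal sketch, using the per-epoch val_loss numbers from the training log above (truncated to four decimals as displayed):

```python
# val_loss per epoch, as printed in the training log above
val_loss = [1.4032, 1.3439, 0.6709, 0.4356, 0.2665, 0.2111, 0.1780,
            1.3279e-1, 0.1259, 0.0991, 0.0835, 0.1061, 0.1135, 0.0792,
            0.1230, 0.1206, 0.1434, 0.1210, 0.0933]

# EarlyStopping(monitor="val_loss", mode="min") tracks the minimum;
# restore_best_weights=True rolls the model back to this epoch.
best_epoch = min(range(len(val_loss)), key=val_loss.__getitem__) + 1  # 1-based

print(best_epoch)      # 14: lowest val_loss (0.0792)
print(best_epoch + 5)  # 19: with patience=5, training stops here
```

This is consistent with the log: the minimum val_loss is at epoch 14, and with patience=5 training stops at epoch 19.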

Then I print the history with:

print(history.history)

Output:

{'loss': [1.7359973192214966, 1.4568417072296143, 1.0492748022079468, 0.556835949420929, 0.3225175142288208, 0.21631713211536407, 0.15734265744686127, 0.09740655869245529, 0.08705899864435196, 0.059068288654088974, 0.037155844271183014, 0.04529304429888725, 0.032098811119794846, 0.026531102135777473, 0.014811701141297817, 0.01594153791666031, 0.007752457167953253, 0.01829242706298828, 0.010238146409392357], 

'accuracy': [0.25437501072883606, 0.26374998688697815, 0.5195000171661377, 0.7832499742507935, 0.8845000267028809, 0.9258750081062317, 0.9482499957084656, 0.96875, 0.9710000157356262, 0.9808750152587891, 0.9890000224113464, 0.9848750233650208, 0.9894999861717224, 0.9913750290870667, 0.9952499866485596, 0.9944999814033508, 0.9981250166893005, 0.9952499866485596, 0.996749997138977], 

'val_loss': [1.403160810470581, 1.3439414501190186, 0.670893132686615, 0.4356003701686859, 0.2664510905742645, 0.21105702221393585, 0.17804428935050964, 0.13279731571674347, 0.12593801319599152, 0.09910984337329865, 0.08349543809890747, 0.10606017708778381, 0.11349570006132126, 0.07923758029937744, 0.12297631800174713, 0.12059397250413895, 0.14337044954299927, 0.12097910046577454, 0.09327349811792374], 

'val_accuracy': [0.25, 0.33500000834465027, 0.7114999890327454, 0.8320000171661377, 0.909500002861023, 0.9290000200271606, 0.9399999976158142, 0.9574999809265137, 0.9620000123977661, 0.9695000052452087, 0.9725000262260437, 0.968999981880188, 0.9670000076293945, 0.9754999876022339, 0.9670000076293945, 0.9700000286102295, 0.9710000157356262, 0.9739999771118164, 0.9764999747276306]}

The question is: why do the accuracy and loss values printed by model.fit() differ from the values stored in history.history for each epoch?
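Since matplotlib appears in the tags, presumably the goal is to plot these curves; a minimal sketch, assuming the history.history dict printed above (abbreviated here to the first five epochs):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the script runs headless
import matplotlib.pyplot as plt

# Abbreviated values from the printed history.history above
history_dict = {
    "loss": [1.7360, 1.4568, 1.0493, 0.5568, 0.3225],
    "val_loss": [1.4032, 1.3439, 0.6709, 0.4356, 0.2665],
}

epochs = range(1, len(history_dict["loss"]) + 1)
plt.plot(epochs, history_dict["loss"], label="loss")
plt.plot(epochs, history_dict["val_loss"], label="val_loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.savefig("history.png")
```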

Tags: python, tensorflow, matplotlib, keras, artificial-intelligence

Solution
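A useful first diagnostic, sketched here using only the numbers from the question itself, is to compare the progress-bar values against history.history epoch by epoch. On this data the validation metrics agree up to display rounding while the training metrics do not, so the discrepancy is confined to the training-side numbers:

```python
# Progress-bar values vs. history.history values for the first three epochs,
# copied from the question (training log vs. printed dict).
shown  = {"loss": [2.1535, 1.4662, 1.2183],
          "val_loss": [1.4032, 1.3439, 0.6709]}
stored = {"loss": [1.735997, 1.456842, 1.049275],
          "val_loss": [1.403161, 1.343941, 0.670893]}

def metrics_agree(a, b, tol=5e-4):
    """True when every per-epoch pair agrees up to display rounding."""
    return all(abs(x - y) < tol for x, y in zip(a, b))

train_match = metrics_agree(shown["loss"], stored["loss"])        # False
val_match = metrics_agree(shown["val_loss"], stored["val_loss"])  # True
print(train_match, val_match)
```

Because the validation numbers line up exactly, both outputs clearly come from the same run; whatever causes the gap affects only the training metrics, which narrows down where to look next (e.g. how the training metrics are accumulated over batches during the epoch).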
