I copy-pasted code from Chapter 10 of Aurélien Géron's "Hands-On ML" book, but I get completely different loss values?

Problem description

I have been working through the California housing example in Chapter 10 (page 300) of *Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems* by Aurélien Géron.

I copy-pasted the following code from his Colab notebook into my Jupyter notebook (the `numpy`, `tensorflow`, and `keras` imports come from an earlier cell):

import numpy as np
import tensorflow as tf
from tensorflow import keras

from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

housing = fetch_california_housing()

X_train_full, X_test, y_train_full, y_test = train_test_split(housing.data, housing.target, random_state=42)
X_train, X_valid, y_train, y_valid = train_test_split(X_train_full, y_train_full, random_state=42)

scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_valid = scaler.transform(X_valid)
X_test = scaler.transform(X_test)
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential([
    keras.layers.Dense(30, activation="relu", input_shape=X_train.shape[1:]),
    keras.layers.Dense(1)
])
model.compile(loss="mean_squared_error", optimizer=keras.optimizers.SGD(learning_rate=1e-3))
history = model.fit(X_train, y_train, epochs=20, validation_data=(X_valid, y_valid))
mse_test = model.evaluate(X_test, y_test)
X_new = X_test[:3]
y_pred = model.predict(X_new)

My losses look like this:

Epoch 1/20
363/363 [==============================] - 1s 2ms/step - loss: 22974151580975104.0000 - val_loss: 14617883443200.0000
Epoch 2/20
363/363 [==============================] - 0s 1ms/step - loss: 7723970199552.0000 - val_loss: 3417093963776.0000
Epoch 3/20
363/363 [==============================] - 0s 1ms/step - loss: 1805568049152.0000 - val_loss: 798785339392.0000
Epoch 4/20
363/363 [==============================] - 0s 1ms/step - loss: 422071828480.0000 - val_loss: 186725318656.0000
Epoch 5/20
363/363 [==============================] - 0s 1ms/step - loss: 98664218624.0000 - val_loss: 43649114112.0000
Epoch 6/20
363/363 [==============================] - 1s 2ms/step - loss: 23063846912.0000 - val_loss: 10203475968.0000
Epoch 7/20
363/363 [==============================] - 1s 2ms/step - loss: 5391437312.0000 - val_loss: 2385182720.0000
Epoch 8/20
363/363 [==============================] - 1s 1ms/step - loss: 1260309632.0000 - val_loss: 557565056.0000
Epoch 9/20
363/363 [==============================] - 0s 1ms/step - loss: 294611968.0000 - val_loss: 130337808.0000
Epoch 10/20
363/363 [==============================] - 1s 1ms/step - loss: 68868872.0000 - val_loss: 30468182.0000
Epoch 11/20
363/363 [==============================] - 1s 2ms/step - loss: 16098886.0000 - val_loss: 7122408.0000
Epoch 12/20
363/363 [==============================] - 1s 2ms/step - loss: 3763294.0000 - val_loss: 1665006.2500
Epoch 13/20
363/363 [==============================] - 0s 1ms/step - loss: 879712.9375 - val_loss: 389246.1250
Epoch 14/20
363/363 [==============================] - 1s 1ms/step - loss: 205644.4062 - val_loss: 91006.7344
Epoch 15/20
363/363 [==============================] - 1s 1ms/step - loss: 48073.1602 - val_loss: 21282.1250
Epoch 16/20
363/363 [==============================] - 1s 1ms/step - loss: 11238.7031 - val_loss: 4979.3115
Epoch 17/20
363/363 [==============================] - 1s 2ms/step - loss: 2628.1484 - val_loss: 1166.6659
Epoch 18/20
363/363 [==============================] - 1s 2ms/step - loss: 615.3716 - val_loss: 274.4792
Epoch 19/20
363/363 [==============================] - 1s 2ms/step - loss: 144.8725 - val_loss: 65.5832
Epoch 20/20
363/363 [==============================] - 1s 2ms/step - loss: 34.9049 - val_loss: 16.5316
162/162 [==============================] - 0s 1ms/step - loss: 16.3210

But his losses look like this:

Epoch 1/20
363/363 [==============================] - 1s 2ms/step - loss: 1.6419 - val_loss: 0.8560
Epoch 2/20
363/363 [==============================] - 1s 2ms/step - loss: 0.7047 - val_loss: 0.6531
Epoch 3/20
363/363 [==============================] - 1s 2ms/step - loss: 0.6345 - val_loss: 0.6099
Epoch 4/20
363/363 [==============================] - 1s 2ms/step - loss: 0.5977 - val_loss: 0.5658
Epoch 5/20
363/363 [==============================] - 1s 2ms/step - loss: 0.5706 - val_loss: 0.5355
Epoch 6/20
363/363 [==============================] - 1s 2ms/step - loss: 0.5472 - val_loss: 0.5173
Epoch 7/20
363/363 [==============================] - 1s 2ms/step - loss: 0.5288 - val_loss: 0.5081
Epoch 8/20
363/363 [==============================] - 1s 2ms/step - loss: 0.5130 - val_loss: 0.4799
Epoch 9/20
363/363 [==============================] - 1s 2ms/step - loss: 0.4992 - val_loss: 0.4690
Epoch 10/20
363/363 [==============================] - 1s 2ms/step - loss: 0.4875 - val_loss: 0.4656
Epoch 11/20
363/363 [==============================] - 1s 2ms/step - loss: 0.4777 - val_loss: 0.4482
Epoch 12/20
363/363 [==============================] - 1s 2ms/step - loss: 0.4688 - val_loss: 0.4479
Epoch 13/20
363/363 [==============================] - 1s 2ms/step - loss: 0.4615 - val_loss: 0.4296
Epoch 14/20
363/363 [==============================] - 1s 2ms/step - loss: 0.4547 - val_loss: 0.4233
Epoch 15/20
363/363 [==============================] - 1s 2ms/step - loss: 0.4488 - val_loss: 0.4176
Epoch 16/20
363/363 [==============================] - 1s 2ms/step - loss: 0.4435 - val_loss: 0.4123
Epoch 17/20
363/363 [==============================] - 1s 2ms/step - loss: 0.4389 - val_loss: 0.4071
Epoch 18/20
363/363 [==============================] - 1s 2ms/step - loss: 0.4347 - val_loss: 0.4037
Epoch 19/20
363/363 [==============================] - 1s 2ms/step - loss: 0.4306 - val_loss: 0.4000
Epoch 20/20
363/363 [==============================] - 1s 2ms/step - loss: 0.4273 - val_loss: 0.3969
162/162 [==============================] - 0s 1ms/step - loss: 0.4212

Why are my losses so large???

Tags: python, tensorflow, keras, scikit-learn, neural-network

Solution
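A first sanity check for a loss this large (a hypothetical debugging sketch, not a confirmed fix): verify that the arrays the model actually trains on are standardized. `StandardScaler` leaves each feature with mean ≈ 0 and standard deviation ≈ 1; if the scaling cell is skipped or executed out of order, features such as `Population` keep their raw ranges and the squared errors blow up. The small array below is a stand-in for `housing.data`:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Stand-in for housing.data: two features on very different scales,
# roughly like MedInc (~1-15) vs Population (hundreds to tens of thousands).
X = np.array([[3.0, 1200.0],
              [8.0, 35000.0],
              [1.5, 600.0],
              [5.2, 9800.0]])

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# After fit_transform, every column has mean ~0 and std ~1.
print(X_scaled.mean(axis=0))
print(X_scaled.std(axis=0))
```

If the same check on the arrays passed to `model.fit` shows raw ranges instead of mean ≈ 0 / std ≈ 1, the scaler was never applied to them, which is one common way to end up with astronomically large MSE values.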

