Unclear difference in progress output between TensorFlow 2.3.0 and 1.15.0 for the same code

Problem description

I'm new to ML. I installed TensorFlow into two Anaconda environments, one with 1.15.0 and one with 2.3.0 (1.15.0 can still use my old GTX 660 graphics card), and I noticed a difference in the progress output when training the same model.

Code from François Chollet's book "Deep Learning with Python":

import numpy as np

import os
data_dir='C:/Users/Username/_JupyterDocs/sund/data'
fname = os.path.join(data_dir, 'jena_climate_2009_2016.csv')

os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"]="-1"

with open(fname) as f:
    data = f.read()

lines = data.split('\n')
header = lines[0].split(',')
lines = lines[1:]

float_data = np.zeros((len(lines), len(header) - 1))

for i, line in enumerate(lines):
    values = [float(x) for x in line.split(',')[1:]]
    float_data[i, :] = values
    
mean = float_data[:200000].mean(axis=0)
float_data -= mean
std = float_data[:200000].std(axis=0)
float_data /= std

def generator(data, lookback, delay, min_index, max_index, shuffle=False, batch_size=128, step=6):
    if max_index is None:
        max_index = len(data) - delay - 1
    i = min_index + lookback
    while True:
        if shuffle:
            rows = np.random.randint(min_index + lookback, max_index, size=batch_size)
        else:
            if i + batch_size >= max_index:
                i = min_index + lookback
            rows = np.arange(i, min(i + batch_size, max_index))
            i += len(rows)
        samples = np.zeros((len(rows), lookback // step, data.shape[-1]))
        targets = np.zeros((len(rows),))
        for j, row in enumerate(rows):
            indices = range(rows[j] - lookback, rows[j], step)
            samples[j] = data[indices]
            targets[j] = data[rows[j] + delay][1]
        yield samples, targets
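
# Each batch yielded above has shapes:
#   samples: (batch_size, lookback // step, num_features), e.g. (128, 240, 14) on the Jena data
#   targets: (batch_size,), the temperature (feature column 1) 'delay' timesteps ahead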

lookback = 1440
step = 6
delay = 144
batch_size = 128

train_gen = generator(float_data,
                      lookback=lookback,
                      delay=delay,
                      min_index=0,
                      max_index=200000,
                      shuffle=True,
                      step=step,
                      batch_size=batch_size)
val_gen = generator(float_data,
                    lookback=lookback,
                    delay=delay,
                    min_index=200001,
                    max_index=300000,
                    step=step,
                    batch_size=batch_size)
test_gen = generator(float_data,
                     lookback=lookback,
                     delay=delay,
                     min_index=300001,
                     max_index=None,
                     step=step,
                     batch_size=batch_size)

val_steps = (300000 - 200001 - lookback) // batch_size
test_steps = (len(float_data) - 300001 - lookback) // batch_size
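
# Note: with these settings, val_steps = (300000 - 200001 - 1440) // 128 = 769,
# the same step count that appears in the extra "769/500" bars in the TF 1.15.0 output below.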


import time

from tensorflow.keras.models import Sequential
from tensorflow.keras import layers
from tensorflow.keras.optimizers import RMSprop

model = Sequential()
model.add(layers.GRU(32, input_shape=(None, float_data.shape[-1])))
model.add(layers.Dense(1))
model.compile(optimizer=RMSprop(), loss='mae')

start = time.perf_counter()
history = model.fit_generator(train_gen,
                              steps_per_epoch=500,
                              epochs=20,
                              validation_data=val_gen,
                              validation_steps=val_steps,
                              verbose=1)
elapsed = time.perf_counter() - start

f = open("C:/Users/Username/Desktop/log1.txt", "a")
f.write('Elapsed %.3f seconds.' % elapsed)
f.close()

print('Elapsed %.3f seconds.' % elapsed)

TF 2.3.0 progress output:

- Deprecation warning in the output:

WARNING:tensorflow:From C:\Users\Username\AppData\Local\Temp/ipykernel_10804/2601851929.py:13: Model.fit_generator (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version. Instructions for updating: Please use Model.fit, which supports generators.
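
As that warning suggests, the same training call can be written with Model.fit, which accepts Python generators in TF 2.x. A minimal sketch using the variables from the code above:

history = model.fit(train_gen,
                    steps_per_epoch=500,
                    epochs=20,
                    validation_data=val_gen,
                    validation_steps=val_steps,
                    verbose=1)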

- Output:

Epoch 1/20
500/500 [==============================] - 45s 89ms/step - loss: 0.3050 - val_loss: 0.2686
Epoch 2/20
500/500 [==============================] - 45s 90ms/step - loss: 0.2841 - val_loss: 0.2658
Epoch 3/20
500/500 [==============================] - 46s 92ms/step - loss: 0.2771 - val_loss: 0.2653
Epoch 4/20
500/500 [==============================] - 46s 91ms/step - loss: 0.2729 - val_loss: 0.2795
Epoch 5/20
500/500 [==============================] - 45s 90ms/step - loss: 0.2690 - val_loss: 0.2644
Epoch 6/20
500/500 [==============================] - 45s 90ms/step - loss: 0.2632 - val_loss: 0.2673
Epoch 7/20
500/500 [==============================] - 45s 90ms/step - loss: 0.2602 - val_loss: 0.2641
Epoch 8/20
500/500 [==============================] - 45s 90ms/step - loss: 0.2549 - val_loss: 0.2667
Epoch 9/20
500/500 [==============================] - 45s 91ms/step - loss: 0.2507 - val_loss: 0.2768
Epoch 10/20
500/500 [==============================] - 45s 90ms/step - loss: 0.2447 - val_loss: 0.2785
Epoch 11/20
500/500 [==============================] - 45s 90ms/step - loss: 0.2422 - val_loss: 0.2763
Epoch 12/20
500/500 [==============================] - 45s 90ms/step - loss: 0.2354 - val_loss: 0.2794
Epoch 13/20
500/500 [==============================] - 46s 92ms/step - loss: 0.2320 - val_loss: 0.2807
Epoch 14/20
500/500 [==============================] - 45s 89ms/step - loss: 0.2277 - val_loss: 0.2848
Epoch 15/20
500/500 [==============================] - 45s 90ms/step - loss: 0.2222 - val_loss: 0.2909
Epoch 16/20
500/500 [==============================] - 45s 90ms/step - loss: 0.2178 - val_loss: 0.2910
Epoch 17/20
500/500 [==============================] - 45s 89ms/step - loss: 0.2152 - val_loss: 0.2918
Epoch 18/20
500/500 [==============================] - 45s 90ms/step - loss: 0.2112 - val_loss: 0.2917
Epoch 19/20
500/500 [==============================] - 44s 89ms/step - loss: 0.2103 - val_loss: 0.2979
Epoch 20/20
500/500 [==============================] - 45s 89ms/step - loss: 0.2068 - val_loss: 0.2986
Elapsed 904.779 seconds.

TF 1.15.0 progress output:

- Deprecation warning in the output:

WARNING:tensorflow:From C:\Users\Username\anaconda3\envs\tf-gpu\lib\site-packages\tensorflow_core\python\ops\resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version. Instructions for updating: If using Keras pass *_constraint arguments to layers.

- Output:

Epoch 1/20
WARNING:tensorflow:From C:\Users\Username\anaconda3\envs\tf-gpu\lib\site-packages\tensorflow_core\python\ops\math_grad.py:1424: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
499/500 [============================>.] - ETA: 0s - loss: 0.3014Epoch 1/20
769/500 [==============================================] - 17s 22ms/step - loss: 0.2285
500/500 [==============================] - 63s 126ms/step - loss: 0.3014 - val_loss: 0.2686
Epoch 2/20
499/500 [============================>.] - ETA: 0s - loss: 0.2836Epoch 1/20
769/500 [==============================================] - 17s 22ms/step - loss: 0.2225
500/500 [==============================] - 62s 123ms/step - loss: 0.2836 - val_loss: 0.2667
Epoch 3/20
499/500 [============================>.] - ETA: 0s - loss: 0.2761Epoch 1/20
769/500 [==============================================] - 17s 22ms/step - loss: 0.3162
500/500 [==============================] - 62s 123ms/step - loss: 0.2762 - val_loss: 0.2721
Epoch 4/20
499/500 [============================>.] - ETA: 0s - loss: 0.2731Epoch 1/20
769/500 [==============================================] - 16s 21ms/step - loss: 0.2422
500/500 [==============================] - 62s 124ms/step - loss: 0.2730 - val_loss: 0.2667
Epoch 5/20
499/500 [============================>.] - ETA: 0s - loss: 0.2667Epoch 1/20
769/500 [==============================================] - 16s 21ms/step - loss: 0.3732
500/500 [==============================] - 61s 122ms/step - loss: 0.2667 - val_loss: 0.2663
Epoch 6/20
499/500 [============================>.] - ETA: 0s - loss: 0.2613Epoch 1/20
769/500 [==============================================] - 17s 22ms/step - loss: 0.2088
500/500 [==============================] - 62s 124ms/step - loss: 0.2613 - val_loss: 0.2648
Epoch 7/20
499/500 [============================>.] - ETA: 0s - loss: 0.2544Epoch 1/20
769/500 [==============================================] - 17s 22ms/step - loss: 0.3043
500/500 [==============================] - 62s 125ms/step - loss: 0.2544 - val_loss: 0.2710
Epoch 8/20
499/500 [============================>.] - ETA: 0s - loss: 0.2493Epoch 1/20
769/500 [==============================================] - 17s 22ms/step - loss: 0.2767
500/500 [==============================] - 63s 127ms/step - loss: 0.2493 - val_loss: 0.2717
Epoch 9/20
499/500 [============================>.] - ETA: 0s - loss: 0.2455Epoch 1/20
769/500 [==============================================] - 17s 22ms/step - loss: 0.2336
500/500 [==============================] - 62s 124ms/step - loss: 0.2455 - val_loss: 0.2743
Epoch 10/20
499/500 [============================>.] - ETA: 0s - loss: 0.2406Epoch 1/20
769/500 [==============================================] - 17s 22ms/step - loss: 0.3041
500/500 [==============================] - 63s 126ms/step - loss: 0.2406 - val_loss: 0.2776
Epoch 11/20
499/500 [============================>.] - ETA: 0s - loss: 0.2345Epoch 1/20
769/500 [==============================================] - 17s 22ms/step - loss: 0.2655
500/500 [==============================] - 62s 124ms/step - loss: 0.2344 - val_loss: 0.2779
Epoch 12/20
499/500 [============================>.] - ETA: 0s - loss: 0.2310Epoch 1/20
769/500 [==============================================] - 17s 22ms/step - loss: 0.3085
500/500 [==============================] - 62s 124ms/step - loss: 0.2310 - val_loss: 0.2800
Epoch 13/20
499/500 [============================>.] - ETA: 0s - loss: 0.2271Epoch 1/20
769/500 [==============================================] - 17s 22ms/step - loss: 0.3029
500/500 [==============================] - 64s 127ms/step - loss: 0.2271 - val_loss: 0.2839
Epoch 14/20
499/500 [============================>.] - ETA: 0s - loss: 0.2226Epoch 1/20
769/500 [==============================================] - 17s 22ms/step - loss: 0.3110
500/500 [==============================] - 62s 125ms/step - loss: 0.2226 - val_loss: 0.2886
Epoch 15/20
499/500 [============================>.] - ETA: 0s - loss: 0.2190Epoch 1/20
769/500 [==============================================] - 17s 22ms/step - loss: 0.3329
500/500 [==============================] - 62s 123ms/step - loss: 0.2190 - val_loss: 0.2919
Epoch 16/20
499/500 [============================>.] - ETA: 0s - loss: 0.2170Epoch 1/20
769/500 [==============================================] - 17s 22ms/step - loss: 0.3022
500/500 [==============================] - 62s 125ms/step - loss: 0.2170 - val_loss: 0.2937
Epoch 17/20
499/500 [============================>.] - ETA: 0s - loss: 0.2132Epoch 1/20
769/500 [==============================================] - 17s 22ms/step - loss: 0.2463
500/500 [==============================] - 62s 124ms/step - loss: 0.2132 - val_loss: 0.3004
Epoch 18/20
499/500 [============================>.] - ETA: 0s - loss: 0.2101Epoch 1/20
769/500 [==============================================] - 17s 22ms/step - loss: 0.3423
500/500 [==============================] - 62s 124ms/step - loss: 0.2101 - val_loss: 0.3018
Epoch 19/20
499/500 [============================>.] - ETA: 0s - loss: 0.2072Epoch 1/20
769/500 [==============================================] - 17s 23ms/step - loss: 0.2689
500/500 [==============================] - 62s 125ms/step - loss: 0.2073 - val_loss: 0.3045
Epoch 20/20
499/500 [============================>.] - ETA: 0s - loss: 0.2066Epoch 1/20
769/500 [==============================================] - 17s 22ms/step - loss: 0.2809
500/500 [==============================] - 62s 124ms/step - loss: 0.2066 - val_loss: 0.2978
Elapsed 1245.008 seconds.

What are the two additional progress bars in each epoch of the TF 1.15.0 output?

Tags: tensorflow, keras

Solution


From the documentation:

verbose: Integer. 0, 1, or 2. Verbosity mode. 0 = silent, 1 = progress bar, 2 = one line per epoch.

The default is 1.

In TF 1.15, fit_generator also runs the validation pass with its own progress bar at this default verbosity, which appears to be where the extra bars come from: the "769/500" count matches val_steps, since (300000 - 200001 - 1440) // 128 = 769. TF 2.3 instead folds the validation result (val_loss) into the main epoch line, so only one bar appears per epoch.
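
For one summary line per epoch instead of live progress bars, the verbosity can be set explicitly. A minimal sketch based on the call from the question:

history = model.fit_generator(train_gen,
                              steps_per_epoch=500,
                              epochs=20,
                              validation_data=val_gen,
                              validation_steps=val_steps,
                              verbose=2)  # 2 = one line per epoch, no in-place bars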

As for the deprecation messages, these are internal TensorFlow warnings that you can safely ignore. They only announce changes coming in future versions of TensorFlow and require no action on your part.
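
If the deprecation messages themselves bother you, raising TensorFlow's log level usually hides them. A sketch, assuming the tf.get_logger() helper available in recent 1.x and 2.x releases:

import tensorflow as tf

tf.get_logger().setLevel('ERROR')  # suppress WARNING-level messages such as deprecation notices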

