PyCharm: changes made have no effect, how to explain this behavior?

Problem description

This problem only occurs in PyCharm:

I built a very simple NN from a TF 2.0 web tutorial. The strange thing is that when I change batch_size, it keeps using the old one, as if I had done nothing. In fact, nothing I change seems to have any effect.

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Load MNIST and flatten the training images to (60000, 784), scaled to [0, 1].
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255


class Prototype(tf.keras.models.Model):
    def __init__(self, **kwargs):
        super(Prototype, self).__init__(**kwargs)
        self.l1 = layers.Dense(64, activation='relu', name='dense_1')
        self.l2 = layers.Dense(64, activation='relu', name='dense_2')
        self.l3 = layers.Dense(10, activation='softmax', name='predictions')

    def call(self, ip):
        x = self.l1(ip)
        x = self.l2(x)
        return self.l3(x)


model = Prototype()
model.build(input_shape=(None, 784))
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
loss_fn = keras.losses.SparseCategoricalCrossentropy()

batch_size = 250
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(batch_size)


def train_one_epoch():
    for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
        print(x_batch_train.shape)  # should reflect the current batch_size
        with tf.GradientTape() as tape:
            logits = model(x_batch_train)  # logits for this minibatch
            loss_value = loss_fn(y_batch_train, logits)
        grads = tape.gradient(loss_value, model.trainable_weights)
        optimizer.apply_gradients(zip(grads, model.trainable_weights))

I run train_one_epoch() and it trains for one epoch. Then I change the batch size, and with it the dataset object, so that it should deliver chunks of the new size; but when I run train_one_epoch() again, it keeps using the old batch_size.
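For concreteness, the re-run presumably looks something like the following, executed in the same session (a reconstruction from the prose above, not code from the original post; the value 500 is invented):

batch_size = 500  # changed from 250 (hypothetical new value)
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(batch_size)
train_one_epoch()  # per the report, this still prints the old batch shape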

Proof: (screenshot omitted: console output still showing the old batch shape)

Tags: tensorflow, keras, pycharm

Solution
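The answer text is missing from this copy of the page, so what follows is a plausible explanation consistent with the symptoms, not the original accepted answer. The behavior matches running the file with PyCharm's "Run with Python Console" option: the console session is long-lived, so module-level objects persist between runs, and re-running edited code in the same session can leave the old train_dataset bound. Note also that .batch(batch_size) bakes the batch size into the tf.data pipeline when the dataset is constructed, so rebinding the Python variable batch_size by itself never changes an existing dataset. Restarting the console (or running the script normally) makes the edits take effect. A more defensive pattern is to rebuild the dataset inside the training function, as in this sketch (make_dataset is a hypothetical helper, not from the original code):

def make_dataset(batch_size):
    # Rebuild the pipeline each call so the current batch_size is always used.
    return tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(batch_size)

def train_one_epoch(batch_size):
    for step, (x_batch_train, y_batch_train) in enumerate(make_dataset(batch_size)):
        print(x_batch_train.shape)  # now always matches the requested batch_size
        with tf.GradientTape() as tape:
            logits = model(x_batch_train)
            loss_value = loss_fn(y_batch_train, logits)
        grads = tape.gradient(loss_value, model.trainable_weights)
        optimizer.apply_gradients(zip(grads, model.trainable_weights))

With this version, calling train_one_epoch(500) iterates batches of the requested size regardless of any stale module-level state.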

