python - Training accuracy and validation accuracy stay constant from the first epoch
Question
This is my first time posting a question, so please forgive me if it is poorly written or structured.
The dataset consists of images in TIF format, which means I am running a 3D CNN.
The images are simulated X-ray images, and the dataset has 2 classes: Normal and Abnormal. Their labels are '0' for Normal and '1' for Abnormal.
My folder tree looks like this:
train
- Normal
- Abnormal
validation
- Normal
- Abnormal
What I did was initialize 2 arrays: train and y_train.
I ran a FOR loop that imports the Normal images and appends them to train, appending a '0' to y_train for each appended image. So if I have 10 Normal images in train, I will also have 10 '0's in y_train.
This is repeated for the Abnormal images, which are appended to train while '1's are appended to y_train. This means train consists of the Normal images followed by the Abnormal images, and y_train consists of '0's followed by '1's.
Another FOR loop does the same for the validation folder, where my arrays are test and y_test.
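The loops above can be sketched as follows. This is a minimal illustration, not the poster's actual code: the function name load_split and the read_fn parameter (e.g. tifffile.imread or skimage.io.imread, which return one (depth, height, width) volume per TIF file) are assumptions for the example.

```python
import os
import numpy as np

def load_split(split_dir, read_fn, classes=("Normal", "Abnormal")):
    """Build (x, y) for one split: volumes from split_dir/<class>, label = class index.

    read_fn is a TIF reader such as tifffile.imread; labels come out as
    0 for Normal and 1 for Abnormal, in folder order, matching the post.
    """
    images, labels = [], []
    for label, class_name in enumerate(classes):
        class_dir = os.path.join(split_dir, class_name)
        for fname in sorted(os.listdir(class_dir)):
            if fname.lower().endswith((".tif", ".tiff")):
                images.append(read_fn(os.path.join(class_dir, fname)))
                labels.append(label)  # one label appended per image
    # Stack to (n, depth, height, width) and add the trailing channel axis
    x = np.asarray(images, dtype="float32")[..., np.newaxis]
    return x, np.asarray(labels)
```

Called once with the train folder and once with the validation folder, this yields the same layout as described: all Normal samples first, then all Abnormal ones.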
Here is my neural network code:
from keras.models import Sequential
from keras.layers import Conv3D, MaxPooling3D, Dropout, Flatten, Dense

def vgg1():
    model = Sequential()
    model.add(Conv3D(16, (3, 3, 3), activation="relu", padding="same", name="block1_conv1", input_shape=(128, 128, 128, 1), data_format="channels_last"))  # 64
    model.add(Conv3D(16, (3, 3, 3), activation="relu", padding="same", name="block1_conv2", data_format="channels_last"))  # 64
    model.add(MaxPooling3D((2, 2, 2), strides=(2, 2, 2), padding='same', name='block1_pool'))
    model.add(Dropout(0.5))
    model.add(Conv3D(32, (3, 3, 3), activation="relu", padding="same", name="block2_conv1", data_format="channels_last"))  # 128
    model.add(Conv3D(32, (3, 3, 3), activation="relu", padding="same", name="block2_conv2", data_format="channels_last"))  # 128
    model.add(MaxPooling3D((2, 2, 2), strides=(2, 2, 2), padding='same', name='block2_pool'))
    model.add(Dropout(0.5))
    model.add(Conv3D(64, (3, 3, 3), activation="relu", padding="same", name="block3_conv1", data_format="channels_last"))  # 256
    model.add(Conv3D(64, (3, 3, 3), activation="relu", padding="same", name="block3_conv2", data_format="channels_last"))  # 256
    model.add(Conv3D(64, (3, 3, 3), activation="relu", padding="same", name="block3_conv3", data_format="channels_last"))  # 256
    model.add(MaxPooling3D((2, 2, 2), strides=(2, 2, 2), padding='same', name='block3_pool'))
    model.add(Dropout(0.5))
    model.add(Conv3D(128, (3, 3, 3), activation="relu", padding="same", name="block4_conv1", data_format="channels_last"))  # 512
    model.add(Conv3D(128, (3, 3, 3), activation="relu", padding="same", name="block4_conv2", data_format="channels_last"))  # 512
    model.add(Conv3D(128, (3, 3, 3), activation="relu", padding="same", name="block4_conv3", data_format="channels_last"))  # 512
    model.add(MaxPooling3D((2, 2, 2), strides=(2, 2, 2), padding='same', name='block4_pool'))
    model.add(Dropout(0.5))
    model.add(Conv3D(128, (3, 3, 3), activation="relu", padding="same", name="block5_conv1", data_format="channels_last"))  # 512
    model.add(Conv3D(128, (3, 3, 3), activation="relu", padding="same", name="block5_conv2", data_format="channels_last"))  # 512
    model.add(Conv3D(128, (3, 3, 3), activation="relu", padding="same", name="block5_conv3", data_format="channels_last"))  # 512
    model.add(MaxPooling3D((2, 2, 2), strides=(2, 2, 2), padding='same', name='block5_pool'))
    model.add(Dropout(0.5))
    model.add(Flatten(name='flatten'))
    model.add(Dense(4096, activation='relu', name='fc1'))
    model.add(Dense(4096, activation='relu', name='fc2'))
    model.add(Dense(2, activation='softmax', name='predictions'))
    model.summary()
    return model
The following code initializes x_train, y_train, x_test, y_test and calls model.compile.
I also convert my labels to one-hot encoding.
import numpy as np
from keras.utils import to_categorical

model = vgg1()
model.compile(
    loss='categorical_crossentropy',
    optimizer='adam',
    metrics=['accuracy']
)

x_train = np.load('/content/drive/My Drive/3D Dataset v2/x_train.npy')
y_train = np.load('/content/drive/My Drive/3D Dataset v2/y_train.npy')
y_train = to_categorical(y_train)

x_test = np.load('/content/drive/My Drive/3D Dataset v2/x_test.npy')
y_test = np.load('/content/drive/My Drive/3D Dataset v2/y_test.npy')
y_test = to_categorical(y_test)

# Scale pixel values to [0, 1]
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
This is the problem I want to highlight: the constant training accuracy and validation accuracy.
Train on 127 samples, validate on 31 samples
Epoch 1/25
127/127 [==============================] - 1700s 13s/step - loss: 1.0030 - accuracy: 0.7480 - val_loss: 0.5842 - val_accuracy: 0.7419
Epoch 2/25
127/127 [==============================] - 1708s 13s/step - loss: 0.5813 - accuracy: 0.7480 - val_loss: 0.5728 - val_accuracy: 0.7419
Epoch 3/25
127/127 [==============================] - 1693s 13s/step - loss: 0.5758 - accuracy: 0.7480 - val_loss: 0.5720 - val_accuracy: 0.7419
Epoch 4/25
127/127 [==============================] - 1675s 13s/step - loss: 0.5697 - accuracy: 0.7480 - val_loss: 0.5711 - val_accuracy: 0.7419
Epoch 5/25
127/127 [==============================] - 1664s 13s/step - loss: 0.5691 - accuracy: 0.7480 - val_loss: 0.5785 - val_accuracy: 0.7419
Epoch 6/25
127/127 [==============================] - 1666s 13s/step - loss: 0.5716 - accuracy: 0.7480 - val_loss: 0.5710 - val_accuracy: 0.7419
Epoch 7/25
127/127 [==============================] - 1676s 13s/step - loss: 0.5702 - accuracy: 0.7480 - val_loss: 0.5718 - val_accuracy: 0.7419
Epoch 8/25
127/127 [==============================] - 1664s 13s/step - loss: 0.5775 - accuracy: 0.7480 - val_loss: 0.5718 - val_accuracy: 0.7419
Epoch 9/25
127/127 [==============================] - 1660s 13s/step - loss: 0.5753 - accuracy: 0.7480 - val_loss: 0.5711 - val_accuracy: 0.7419
Epoch 10/25
127/127 [==============================] - 1681s 13s/step - loss: 0.5756 - accuracy: 0.7480 - val_loss: 0.5714 - val_accuracy: 0.7419
Epoch 11/25
127/127 [==============================] - 1679s 13s/step - loss: 0.5675 - accuracy: 0.7480 - val_loss: 0.5710 - val_accuracy: 0.7419
Epoch 12/25
127/127 [==============================] - 1681s 13s/step - loss: 0.5779 - accuracy: 0.7480 - val_loss: 0.5741 - val_accuracy: 0.7419
Epoch 13/25
127/127 [==============================] - 1682s 13s/step - loss: 0.5763 - accuracy: 0.7480 - val_loss: 0.5723 - val_accuracy: 0.7419
Epoch 14/25
127/127 [==============================] - 1685s 13s/step - loss: 0.5732 - accuracy: 0.7480 - val_loss: 0.5714 - val_accuracy: 0.7419
Epoch 15/25
127/127 [==============================] - 1685s 13s/step - loss: 0.5701 - accuracy: 0.7480 - val_loss: 0.5710 - val_accuracy: 0.7419
Epoch 16/25
127/127 [==============================] - 1678s 13s/step - loss: 0.5704 - accuracy: 0.7480 - val_loss: 0.5733 - val_accuracy: 0.7419
Epoch 17/25
127/127 [==============================] - 1663s 13s/step - loss: 0.5692 - accuracy: 0.7480 - val_loss: 0.5710 - val_accuracy: 0.7419
Epoch 18/25
127/127 [==============================] - 1657s 13s/step - loss: 0.5731 - accuracy: 0.7480 - val_loss: 0.5717 - val_accuracy: 0.7419
Epoch 19/25
127/127 [==============================] - 1674s 13s/step - loss: 0.5708 - accuracy: 0.7480 - val_loss: 0.5712 - val_accuracy: 0.7419
Epoch 20/25
127/127 [==============================] - 1666s 13s/step - loss: 0.5795 - accuracy: 0.7480 - val_loss: 0.5730 - val_accuracy: 0.7419
Epoch 21/25
127/127 [==============================] - 1671s 13s/step - loss: 0.5635 - accuracy: 0.7480 - val_loss: 0.5753 - val_accuracy: 0.7419
Epoch 22/25
127/127 [==============================] - 1672s 13s/step - loss: 0.5713 - accuracy: 0.7480 - val_loss: 0.5718 - val_accuracy: 0.7419
Epoch 23/25
127/127 [==============================] - 1672s 13s/step - loss: 0.5666 - accuracy: 0.7480 - val_loss: 0.5711 - val_accuracy: 0.7419
Epoch 24/25
127/127 [==============================] - 1669s 13s/step - loss: 0.5695 - accuracy: 0.7480 - val_loss: 0.5724 - val_accuracy: 0.7419
Epoch 25/25
127/127 [==============================] - 1663s 13s/step - loss: 0.5675 - accuracy: 0.7480 - val_loss: 0.5721 - val_accuracy: 0.7419
What is going wrong, and what can I do to correct it? I assume a constant accuracy is not desirable.
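One diagnostic worth running (an observation on the logs above, not something from the original post): 0.7480 on 127 training samples is exactly 95/127, and 0.7419 on 31 validation samples is exactly 23/31, which suggests the network is outputting the majority class for every sample. A quick sanity check against the one-hot labels, with the class counts here assumed purely for illustration:

```python
import numpy as np

def majority_fraction(y_onehot):
    """Fraction of samples belonging to the most common class in a one-hot label array."""
    counts = y_onehot.sum(axis=0)  # per-class sample counts
    return counts.max() / counts.sum()

# Hypothetical splits matching the logged accuracies: 95/127 train, 23/31 validation.
y_train = np.array([[1, 0]] * 95 + [[0, 1]] * 32)
y_test = np.array([[1, 0]] * 23 + [[0, 1]] * 8)
print(round(majority_fraction(y_train), 4))  # 0.748  -> matches the stuck accuracy
print(round(majority_fraction(y_test), 4))   # 0.7419 -> matches val_accuracy
```

If your real labels show the same fractions, comparing `model.predict(x_test).argmax(axis=1)` against `y_test.argmax(axis=1)` would confirm whether every prediction collapses to class 0.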