Simple neural network gives NaN loss

Problem description

I'm practicing deep learning, and this is one of my first practice projects. I took a dataset from Kaggle containing two folders: one with images of people wearing masks, and one with images of people without masks. I tried to build a simple NN, but when fitting the data it reports loss: nan, and the accuracy sits at 0.4903 for every epoch right from the start. Can someone help me find which part of the code is wrong?

Here is the code:

#!/usr/bin/env python
# coding: utf-8

# In[1]:


import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
import sklearn
import matplotlib.pyplot as plt
import tensorflow as tf
import glob
import tensorflow.keras.layers as layers
import cv2


# In[2]:


# Mapping Mask images

filenames_mask = [i for i in glob.glob('with_mask\*.jpg')]
filenames_mask = np.array(filenames_mask)
filenames_mask = pd.DataFrame(filenames_mask,columns=['image_path'])
filenames_mask.head()
is_mask = np.ones(filenames_mask.shape[0])
filenames_nomask = [i for i in glob.glob('without_mask\*.jpg')]
filenames_nomask = np.array(filenames_nomask)
filenames_nomask = pd.DataFrame(filenames_nomask,columns=['image_path'])
filenames_nomask.head()
not_mask = np.zeros(filenames_nomask.shape[0])

data = pd.concat([filenames_mask,filenames_nomask],ignore_index=True)
is_mask = pd.DataFrame(is_mask,columns=['is_mask'])
no_mask = pd.DataFrame(not_mask,columns=['is_mask'])
mask = pd.concat([is_mask,no_mask],ignore_index=True)
mask=mask.astype(int)
Data = data.join(mask)
Data = Data.sample(frac=1)


# In[3]:


# Mapping Images to X
X=[]
for i in Data['image_path']:
    img = cv2.imread(i)
    img = cv2.resize(img,(32,32))
#     img = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
#     img = img/255
    X.append(img)
X = np.array(X)
X.shape


# In[4]:


# Mapping Image Prediction to Y
Y = Data['is_mask']


# In[5]:


X_train,X_test,Y_train,Y_test = sklearn.model_selection.train_test_split(X,Y,random_state=2)


# In[23]:


# Building Simple Neural Network

SNN_Model = tf.keras.Sequential([
#     layers.Conv2D(38,(3,3),activation='relu',input_shape=(32,32,3)),
#     layers.MaxPooling2D(2,2),
#     layers.Conv2D(64,(3,3),activation='relu'),
#     layers.MaxPooling2D(2,2),
#     layers.Conv2D(64,(3,3),activation='relu'),
    
    layers.Flatten(),
    layers.Dense(1024,activation='relu'),
    layers.Dense(512,activation='relu'),
    layers.Dense(126,activation='relu'),
    layers.Dense(1,activation='softmax'),
])


# In[24]:


SNN_Model.compile(optimizer='sgd',loss='CategoricalCrossentropy',metrics=['accuracy'])


# In[25]:


SNN_Model.fit(X_train,Y_train,epochs=5)



Tags: python, tensorflow, deep-learning

Solution


The error is probably because you are using a softmax activation instead of sigmoid on the single output unit. Try the following code:

SNN_Model = tf.keras.Sequential([
    layers.Conv2D(32,(3,3),activation='relu',input_shape=(32,32,3)),
    layers.MaxPooling2D(2,2),
    layers.Conv2D(64,(3,3),activation='relu'),
    layers.MaxPooling2D(2,2),

    layers.Flatten(),
    layers.Dense(1024, activation='relu'),
    layers.Dense(512, activation='relu'),
    layers.Dense(128, activation='relu'),
    layers.Dense(1, activation='sigmoid'),
])
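To see why this change matters: softmax normalizes across the output units, so with a single unit it always returns 1.0 regardless of the logit, and the network can never discriminate between the two classes (which is consistent with the accuracy being stuck at the class prior). A minimal NumPy sketch, not part of the original answer, comparing the two activations on one logit:

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability, then normalize over the vector.
    e = np.exp(z - np.max(z))
    return e / e.sum()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for logit in (-5.0, 0.0, 7.3):
    # Softmax over a single unit is constant: e^z / e^z == 1 for any z.
    print(softmax(np.array([logit]))[0])  # always 1.0
    # Sigmoid actually varies with the logit, so the output can carry information.
    print(sigmoid(logit))
```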

binary_crossentropy works better in this case:

SNN_Model.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
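Separately from the activation fix, the commented-out `img = img/255` line in the question is worth restoring: raw pixel values in [0, 255] produce very large pre-activations in the dense layers, which can overflow to inf/NaN during training. A hedged sketch of the scaling step, using random dummy data in place of the real images:

```python
import numpy as np

# Dummy batch standing in for the resized BGR images (uint8 values 0-255).
X = np.random.randint(0, 256, size=(4, 32, 32, 3)).astype(np.float32)

# Scale to [0, 1] before feeding the network, as the commented-out line intended.
X_scaled = X / 255.0
print(X_scaled.min(), X_scaled.max())  # both within [0.0, 1.0]
```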
