Constant loss across epochs

Problem description

I coded this neural network to do a Gaussian regression, but I don't understand why my loss does not change from epoch to epoch. I set the learning rate to 1 so I could watch the loss decrease, but it does not. I chose to train the network on 2000 points. I have looked at several algorithms on this site and I really don't understand why mine doesn't behave as I expect. I have already imported all the required libraries.

Thank you for your help.


import numpy as np
import torch
from torch import nn
import torch.nn.functional as F
from torch.utils import data
from math import sqrt


def f(x):
    return x * np.sin(x)  # function to predict


m = 2000
X_bis = np.zeros((1, m), dtype=float)
X_bis = np.random.random(m) * 10

## Create my training, validation and test sets

X_train = X_bis[0:600]
X_val = X_bis[600:800]
X_test = X_bis[800:]

y_train = f(X_train) 
y_val = f(X_val)
y_test = f(X_test)
    
mean_X_train = np.mean(X_train)
std_X_train = np.std(X_train)

mean_y_train = np.mean(y_train)
std_y_train = np.std(y_train)


class MyDataset(data.Dataset):

    def __init__(self, data_feature, data_target):
        self.data_feature = data_feature
        self.data_target = data_target
       
    def __len__(self):
        return len(self.data_feature)
    
    def __getitem__(self, index):
        # normalize each sample with the training-set statistics
        X_train_normalized = (self.data_feature[index] - mean_X_train) / std_X_train
        y_train_normalized = (self.data_target[index] - mean_y_train) / std_y_train
        return (torch.from_numpy(np.array(X_train_normalized, ndmin=1)).float(),
                torch.from_numpy(np.array(y_train_normalized, ndmin=1)).float())
                    
  
training_set = MyDataset(X_train, y_train)
train_loading = torch.utils.data.DataLoader(training_set, batch_size=100)

val_set = MyDataset(X_val, y_val)
val_loading = torch.utils.data.DataLoader(val_set, batch_size=10)

test_set = MyDataset(X_test, y_test)
test_loading = torch.utils.data.DataLoader(test_set, batch_size=100)


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.FC1 = nn.Linear(1, 10)
        self.FC2 = nn.Linear(10, 1)

    def forward(self, x):
        x = F.relu(self.FC1(x))
        x = self.FC2(x)
        return x

model = Net()

criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(),
                            lr=1, weight_decay=0.01, momentum=0.9)



def train(net, train_loader, optimizer, epoch):
    net.train()
    total_loss=0
    for idx, (data, target) in enumerate(train_loader, 0):
        outputs = net(data)
        loss = criterion(outputs, target)
        total_loss += loss.cpu().item()
        optimizer.step()
    print('Epoch:', epoch, 'average training loss', total_loss / len(train_loader))


def test(net,test_loader):
    net.eval()
    total_loss = 0
    for idx, (data, target) in enumerate(test_loader, 0):
        outputs = net(data)
        outputs = outputs * std_X_train + mean_X_train
        target = target * std_y_train + mean_y_train
        loss = criterion(outputs, target)
        total_loss += sqrt(loss.cpu().item())
    print('average testing loss', total_loss / len(test_loader))
    
        
for epoch in range(50):
    train(model, train_loading, optimizer, epoch)
    test(model, val_loading)

   

Tags: neural-network, pytorch

Solution


I wonder why there is no loss.backward() right after the line that computes the loss, i.e. loss = criterion(outputs, target), in your training snippet. That call performs backpropagation and is what allows optimizer.step() to actually update your network parameters. Also, try a lower learning rate: lr=1 is usually far too high when training this kind of network. Use a learning rate between 0.001 and 0.01 and check whether your network starts learning the mapping between the input X and the target Y.
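For reference, here is a minimal sketch of the training step with those two changes applied. It reuses the model, criterion and train_loading defined in the question; zeroing the gradients at the start of each batch is standard PyTorch practice (not mentioned above), and lr=0.01 is just one assumed value from the suggested 0.001-0.01 range.

# lower learning rate; 0.01 is an assumed value within the suggested 0.001-0.01 range
optimizer = torch.optim.SGD(model.parameters(),
                            lr=0.01, weight_decay=0.01, momentum=0.9)

def train(net, train_loader, optimizer, epoch):
    net.train()
    total_loss = 0
    for idx, (data, target) in enumerate(train_loader, 0):
        optimizer.zero_grad()              # clear gradients left over from the previous batch
        outputs = net(data)
        loss = criterion(outputs, target)
        loss.backward()                    # backpropagate: compute d(loss)/d(parameters)
        optimizer.step()                   # update the parameters using those gradients
        total_loss += loss.cpu().item()
    print('Epoch:', epoch, 'average training loss', total_loss / len(train_loader))

for epoch in range(50):
    train(model, train_loading, optimizer, epoch)
    test(model, val_loading)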

