XOR neural network does not learn

Problem description

I am trying to solve a very simple non-linear problem: the XOR gate. From what I remember from school, XOR can be solved with 2 input nodes, 2 hidden-layer nodes, and 1 output. It is a binary classification problem.

I generate 1000 random integer samples and then run backpropagation. But for some unknown reason my network does not learn anything: the training accuracy stays constant at 0.0150.
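For reference, the XOR truth table the network has to fit:

a b | a XOR b
0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 0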

# coding: utf-8
import matplotlib
import torch
import torch.nn as nn
from torch.autograd import Variable

matplotlib.use('TkAgg')  # My buggy OSX 10.13.6 requires this
import matplotlib.pyplot as plt
from torch.utils.data import Dataset
from tqdm import tqdm
import random

N = 1000
batch_size = 10
epochs = 40
hidden_size = 2
output_size = 1
lr = 0.1


def return_xor(N):
    tmp_x = []
    tmp_y = []
    for i in range(N):
        a = (random.randint(0, 1) == 1)
        b = (random.randint(0, 1) == 1)
        if (a and not b) or (not a and b):
            q = True
        else:
            q = False
        input_features = (a, b)
        output_class = q
        tmp_x.append(input_features)
        tmp_y.append(output_class)
    return tmp_x, tmp_y




# Training set
x, y = return_xor(N)
x = torch.tensor(x, dtype=torch.float, requires_grad=True)
y = torch.tensor(y, dtype=torch.float, requires_grad=True)
# Test dataset
x_test, y_test = return_xor(100)
x_test = torch.tensor(x_test)
y_test = torch.tensor(y_test)


class MyDataset(Dataset):
    """Define my own `Dataset` in order to use `Variable` with `autograd`"""

    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __getitem__(self, index):
        return self.x[index], self.y[index]

    def __len__(self):
        return len(self.x)


dataset = MyDataset(x, y)
test_dataset = MyDataset(x_test, y_test)

print(dataset.x.shape)
print(dataset.y.shape)

# Make the data iterable with a DataLoader. I pass the shuffle and batch_size kwargs explicitly as a reminder to myself
train_loader = torch.utils.data.DataLoader(dataset=dataset, batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=False)

print(f"They are {len(train_loader)} batches in the dataset")
shown = 0
for (x, y) in train_loader:
    if shown == 1:
        break
    print(f"{x.shape} {x.dtype}")
    print(f"{y.shape} {y.dtype}")
    shown += 1


class MyModel(nn.Module):
    """
    Binary classification
    2 input nodes
    2 hidden nodes
    1 output node
    """

    def __init__(self, input_size, hidden_size, output_size):
        super().__init__()
        self.fc1 = torch.nn.Linear(input_size, hidden_size)
        self.fc2 = torch.nn.Linear(hidden_size, output_size)
        self.sigmoid = torch.nn.Sigmoid()

    def forward(self, out):
        out = self.fc1(out)
        out = self.fc2(out)
        out = self.sigmoid(out)
        return out


# Create my network
net = MyModel(dataset.x.shape[1], hidden_size, output_size)
CUDA = torch.cuda.is_available()
if CUDA:
    net = net.cuda()
criterion = torch.nn.BCELoss(reduction='elementwise_mean')
optimizer = torch.optim.SGD(net.parameters(), lr=lr)

# Train the network
correct_train = 0
total_train = 0
for epoch in range(epochs):
    for i, (batches, labels) in enumerate(train_loader):
        batcesh = Variable(batches.float())
        labels = Variable(labels.float())
        output = net(batches)  # Forward pass
        optimizer.zero_grad()

        loss = criterion(output, labels.view(10, 1))
        loss.backward()
        optimizer.step()
        total_train += labels.size(0)
        correct_train += (predicted == labels.long()).sum()
        if (i + 1) % 10 == 0:
            print(f"""
                Epoch {epoch+1}/{epochs}, 
                Iteration {i+1}/{len(dataset)//batch_size}, 
                Training Loss: {loss.item()},
                Training Accuracy: {100*correct_train/total_train}
              """)

My solution:
I got it working by initializing the weights and using an adaptive learning rate: https://github.com/elcolie/nnbootcamp/blob/master/Study-XOR.ipynb
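A minimal sketch of what that could look like (the exact code is in the linked notebook; Xavier initialization and Adam as the adaptive-learning-rate optimizer are my assumptions here):

def init_weights(m):
    # Assumed scheme: Xavier-uniform weights, zero biases, for every Linear layer
    if isinstance(m, torch.nn.Linear):
        torch.nn.init.xavier_uniform_(m.weight)
        torch.nn.init.constant_(m.bias, 0.0)

net.apply(init_weights)
optimizer = torch.optim.Adam(net.parameters(), lr=0.01)  # adaptive learning rate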

Tags: python, neural-network, pytorch, xor, backpropagation

Solution


I am not sure what results you are getting, because the code you posted in the question does not run (it raises errors on pytorch 0.4.1, e.g. `predicted` is not defined, etc.). But besides the syntax problems, there are other issues.
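As an aside, `predicted` has to be computed somewhere; a common choice for a sigmoid output (my assumption, since the question never defines it) is to threshold at 0.5:

predicted = (output > 0.5).float()                        # threshold the sigmoid probabilities
correct_train += (predicted.view(-1) == labels).sum().item()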

Your model is not really two layers, because it applies no non-linearity after the first layer's output. Effectively it is a one-layer network, and to fix that you can modify your model's `forward` as follows:

def forward(self, out):
    out = torch.nn.functional.relu(self.fc1(out))
    out = self.fc2(out)
    out = self.sigmoid(out)
    return out

You could also try a sigmoid or tanh non-linearity... but the non-linearity is a must. That should fix the problem.
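For example, the same forward with tanh instead of ReLU (either non-linearity can solve XOR):

def forward(self, out):
    out = torch.tanh(self.fc1(out))  # tanh non-linearity instead of ReLU
    out = self.fc2(out)
    out = self.sigmoid(out)
    return out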

I also see that you are using only 2 hidden units. That can be restrictive; you may want to increase it to 5 or 10.
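Putting those two suggestions together, here is a minimal self-contained sketch that learns XOR reliably (the hidden size of 10, the learning rate, and the epoch count are my own choices for illustration, not values from your code):

import torch
import torch.nn as nn

torch.manual_seed(0)  # for reproducibility

# The four XOR patterns are enough to train on directly
x = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

model = nn.Sequential(
    nn.Linear(2, 10),  # wider hidden layer than the original 2 units
    nn.ReLU(),         # the non-linearity that was missing
    nn.Linear(10, 1),
    nn.Sigmoid(),
)
criterion = nn.BCELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.5)

for epoch in range(2000):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

print(model(x).detach().round().view(-1))  # should print tensor([0., 1., 1., 0.])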

