neural-network - Backpropagation error when fitting an ANN to a function
Problem description
I have been reading through Michael Nielsen's neural networks tutorial here: http://neuralnetworksanddeeplearning.com/
I have been playing around with the code he provides at the end. I tried to fit a shallow ANN to the function y = 1/x, and it worked. However, whenever I create an architecture whose input layer has 1 neuron and whose total number of layers is greater than 3, there seems to be an error in the backpropagation code.
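Concretely, with the Network class below:

net = Network([1, 3, 1])     # shallow net: trains without error
net = Network([1, 3, 3, 1])  # one extra layer: backprop raises a ValueError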
Here is the code I am running:
import random

import numpy as np


class Network(object):

    def __init__(self, sizes):
        """The list ``sizes`` contains the number of neurons in the
        respective layers of the network. For example, if the list
        was [2, 3, 1] then it would be a three-layer network, with the
        first layer containing 2 neurons, the second layer 3 neurons,
        and the third layer 1 neuron. The biases and weights for the
        network are initialized randomly, using a Gaussian
        distribution with mean 0, and variance 1. Note that the first
        layer is assumed to be an input layer, and by convention we
        won't set any biases for those neurons, since biases are only
        ever used in computing the outputs from later layers."""
        self.num_layers = len(sizes)
        self.sizes = sizes
        self.biases = [np.random.randn(y, 1) for y in sizes[1:]]
        self.weights = [np.random.randn(y, x)
                        for x, y in zip(sizes[:-1], sizes[1:])]

    def feedforward(self, a):
        """Return the output of the network if ``a`` is input."""
        for b, w in zip(self.biases, self.weights):
            a = sigmoid(np.dot(w, a)+b)
        return a

    def SGD(self, training_data, epochs, mini_batch_size, eta,
            test_data=None):
        """Train the neural network using mini-batch stochastic
        gradient descent. The ``training_data`` is a list of tuples
        ``(x, y)`` representing the training inputs and the desired
        outputs. The other non-optional parameters are
        self-explanatory. If ``test_data`` is provided then the
        network will be evaluated against the test data after each
        epoch, and partial progress printed out. This is useful for
        tracking progress, but slows things down substantially."""
        if test_data: n_test = len(test_data)
        n = len(training_data)
        for j in range(epochs):
            random.shuffle(training_data)
            mini_batches = [
                training_data[k:k+mini_batch_size]
                for k in range(0, n, mini_batch_size)]
            for mini_batch in mini_batches:
                self.update_mini_batch(mini_batch, eta)

    def update_mini_batch(self, mini_batch, eta):
        """Update the network's weights and biases by applying
        gradient descent using backpropagation to a single mini batch.
        The ``mini_batch`` is a list of tuples ``(x, y)``, and ``eta``
        is the learning rate."""
        nabla_b = [np.zeros(b.shape) for b in self.biases]
        nabla_w = [np.zeros(w.shape) for w in self.weights]
        for x, y in mini_batch:
            delta_nabla_b, delta_nabla_w = self.backprop(x, y)
            nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
            nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
        self.weights = [w-(eta/len(mini_batch))*nw
                        for w, nw in zip(self.weights, nabla_w)]
        self.biases = [b-(eta/len(mini_batch))*nb
                       for b, nb in zip(self.biases, nabla_b)]

    def backprop(self, x, y):
        """Return a tuple ``(nabla_b, nabla_w)`` representing the
        gradient for the cost function C_x. ``nabla_b`` and
        ``nabla_w`` are layer-by-layer lists of numpy arrays, similar
        to ``self.biases`` and ``self.weights``."""
        nabla_b = [np.zeros(b.shape) for b in self.biases]
        nabla_w = [np.zeros(w.shape) for w in self.weights]
        # feedforward
        activation = x
        activations = [x]  # list to store all the activations, layer by layer
        zs = []  # list to store all the z vectors, layer by layer
        for b, w in zip(self.biases, self.weights):
            z = np.dot(w, activation)+b
            zs.append(z)
            activation = sigmoid(z)
            activations.append(activation)
        # backward pass
        delta = self.cost_derivative(activations[-1], y) * \
            sigmoid_prime(zs[-1])
        nabla_b[-1] = delta
        nabla_w[-1] = np.dot(delta, activations[-2].transpose())
        # Note that the variable l in the loop below is used a little
        # differently to the notation in Chapter 2 of the book. Here,
        # l = 1 means the last layer of neurons, l = 2 is the
        # second-last layer, and so on. It's a renumbering of the
        # scheme in the book, used here to take advantage of the fact
        # that Python can use negative indices in lists.
        for l in range(2, self.num_layers):
            z = zs[-l]
            sp = sigmoid_prime(z)
            delta = np.dot(self.weights[-l+1].transpose(), delta) * sp
            nabla_b[-l] = delta
            nabla_w[-l] = np.dot(delta, activations[-l-1])
        return (nabla_b, nabla_w)

    def cost_derivative(self, output_activations, y):
        """Return the vector of partial derivatives \partial C_x /
        \partial a for the output activations."""
        return (output_activations-y)

#### Miscellaneous functions
def sigmoid(z):
    """The sigmoid function."""
    return 1.0/(1.0+np.exp(-z))

def sigmoid_prime(z):
    """Derivative of the sigmoid function."""
    return sigmoid(z)*(1-sigmoid(z))

training_data = [(1, 1), (2, 1/2), (3, 1/3), (4, 1/4), (5, 1/5), (7, 1/7),
                 (8, 1/8), (10, 1/10), (11, 1/11), (12, 1/12), (12, 1/13),
                 (15, 1/15), (30, 1/30)]
net = Network([1, 3, 3, 1])
net.SGD(training_data, 500, len(training_data), 5)
for i in [6, 9, 14]:
    print(net.feedforward(i))
print(abs(0.16-net.feedforward(6))+abs(0.11-net.feedforward(9))+abs(0.07-net.feedforward(14)))
input()

import matplotlib.pyplot as plt
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30]
print(x)
y = [float(net.feedforward(i)) for i in x]
print(y)
plt.plot(x, y)
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30]
y = [float(1/i) for i in x]
plt.plot(x, y)
plt.show()
input()
The error occurs in the line nabla_w[-l] = np.dot(delta, activations[-l-1]). It says the dimensions of delta and activations[-l-1] do not match.
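For the Network([1, 3, 3, 1]) used above, the first pass through that loop (l = 2) tries to np.dot two (3, 1) column vectors, whose inner dimensions do not match. A minimal standalone reproduction of the shape error:

import numpy as np

delta = np.random.randn(3, 1)       # error term of the 3-neuron hidden layer: shape (3, 1)
activation = np.random.randn(3, 1)  # activation of the previous 3-neuron layer: shape (3, 1)

try:
    np.dot(delta, activation)       # (3, 1) x (3, 1): inner dimensions 1 and 3 don't align
except ValueError as e:
    print(e)                        # shapes (3,1) and (3,1) not aligned: 1 (dim 1) != 3 (dim 0)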
Solution
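Compared with Nielsen's original network.py, the loop body in backprop has lost a .transpose(): the output-layer line just above it reads nabla_w[-1] = np.dot(delta, activations[-2].transpose()), and the same transpose belongs on the per-layer line. Because the training inputs here are plain Python numbers rather than (1, 1) column vectors, np.dot(delta, activations[-l-1]) happens to work whenever the previous activation is the scalar input, which is why a single hidden layer trains fine and the dimension error only appears once the total number of layers exceeds 3. The fix:

for l in range(2, self.num_layers):
    z = zs[-l]
    sp = sigmoid_prime(z)
    delta = np.dot(self.weights[-l+1].transpose(), delta) * sp
    nabla_b[-l] = delta
    # transpose restored: (n, 1) x (1, m) -> (n, m), matching the weight matrix
    nabla_w[-l] = np.dot(delta, activations[-l-1].transpose())

It is also safer to feed the network (1, 1) numpy column vectors instead of bare numbers, so every activation has the shape the rest of the code assumes. A sketch of the reshaped training set (note that the (12, 1/13) entry in the original list looks like a typo for (13, 1/13)):

training_data = [(np.array([[x]]), np.array([[1.0/x]]))
                 for x in [1, 2, 3, 4, 5, 7, 8, 10, 11, 12, 13, 15, 30]]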