python - Neural network with a sine function: the error does not decrease
Problem Description
I took a Python neural network from Tariq Rashid that predicts handwritten digits and tried to adapt it to predict the sine function. It works fine for the handwritten digits, but with the sine function the error does not decrease.
My setup for the sine function is as follows:
- A 1-10-1 layer NN with backpropagation.
- Input data from 0 to 2*pi.
- The target function is the sine.
- 2880 input samples, shuffled for training. Input and target values are normalized to [0, 1].
- Between 1 and 100 epochs.
- The activation function is the sigmoid.
- Initial weights drawn from a normal distribution with mean 0 and standard deviation 1/sqrt(10) ≈ 0.31 (10 hidden nodes).
- Learning rates between 0.00001 and 0.5 (no improvement).
Here you can see the error curve for the handwritten-digit prediction; it decreases over time, so that case works fine:
Here you can see the error curve for my sine prediction. It simply does not decrease. I have tried many different settings, but I cannot figure out what the problem is.
Does anyone know how to track down the error? I have spent several days on it and have no idea what could be wrong. From the examples I have seen, it should in principle work with a sigmoid activation function.
Here is the code. This is the training data: https://www.dropbox.com/s/c9xe1wq7b0i9h2y/train-sin-pi-norm_shuffle.csv?dl=0
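In case the Dropbox file is unavailable, here is a minimal sketch of how training data in the format the code expects could be generated. The semicolon-separated "input;target" layout is inferred from the record.split(';') parsing further down; the exact normalization (x / 2*pi for the input, (sin(x) + 1) / 2 for the target) is an assumption based on the description above, not taken from the original file:

import numpy

# sketch: 2880 shuffled samples of x in [0, 2*pi] and sin(x),
# both normalized to [0, 1], written as "input;target" per line
n_samples = 2880
x = numpy.linspace(0.0, 2.0 * numpy.pi, n_samples)
x_norm = x / (2.0 * numpy.pi)            # input normalized to [0, 1]
y_norm = (numpy.sin(x) + 1.0) / 2.0      # sin(x) shifted and scaled into [0, 1]

order = numpy.random.permutation(n_samples)
with open("train-sin-pi-norm_shuffle.csv", "w") as f:
    for i in order:
        f.write("{};{}\n".format(x_norm[i], y_norm[i]))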
# python notebook for Make Your Own Neural Network
# original code for a 3-layer neural network, and code for learning the MNIST dataset
# (c) Tariq Rashid, 2016
# license is GPLv2
# code was adapted by a random user to learn the sine function
# numpy provides arrays and useful functions for working with them
import numpy
# scipy.special for the sigmoid function expit()
import scipy.special
import csv
import matplotlib.pyplot as plt
# neural network class definition
class neuralNetwork:
    # initialise the neural network
    def __init__(self, inputnodes, hiddennodes, outputnodes, learningrate):
        # set number of nodes in each input, hidden, output layer
        self.inodes = inputnodes
        self.hnodes = hiddennodes
        self.onodes = outputnodes
        # link weight matrices, wih and who
        # weights inside the arrays are w_i_j, where link is from node i to node j in the next layer
        # w11 w21
        # w12 w22 etc
        self.wih = numpy.random.normal(0.0, pow(self.hnodes, -0.5), (self.hnodes, self.inodes))
        self.who = numpy.random.normal(0.0, pow(self.onodes, -0.5), (self.onodes, self.hnodes))
        # learning rate
        self.lr = learningrate
        # activation function is the sigmoid function
        self.activation_function = lambda x: scipy.special.expit(x)
        pass

    # train the neural network
    def train(self, inputs_list, targets_list):
        # convert inputs list to 2d array (probably not necessary for the sine curve model)
        inputs = numpy.array(inputs_list, ndmin=2).T
        targets = numpy.array(targets_list, ndmin=2).T
        # calculate signals into hidden layer
        hidden_inputs = numpy.dot(self.wih, inputs)
        # calculate the signals emerging from hidden layer
        hidden_outputs = self.activation_function(hidden_inputs)
        # calculate signals into final output layer
        final_inputs = numpy.dot(self.who, hidden_outputs)
        # calculate the signals emerging from final output layer
        final_outputs = self.activation_function(final_inputs)
        # output layer error is the (target - actual)
        output_errors = targets - final_outputs
        error_array.append(float(output_errors))
        # hidden layer error is the output_errors, split by weights, recombined at hidden nodes
        hidden_errors = numpy.dot(self.who.T, output_errors)
        # update the weights for the links between the hidden and output layers
        self.who += self.lr * numpy.dot((output_errors * final_outputs * (1.0 - final_outputs)), numpy.transpose(hidden_outputs))
        # update the weights for the links between the input and hidden layers
        self.wih += self.lr * numpy.dot((hidden_errors * hidden_outputs * (1.0 - hidden_outputs)), numpy.transpose(inputs))
        pass

    # query the neural network
    def query(self, inputs_list):
        # convert inputs list to 2d array
        inputs = numpy.array(inputs_list, ndmin=2).T
        # calculate signals into hidden layer
        hidden_inputs = numpy.dot(self.wih, inputs)
        # calculate the signals emerging from hidden layer
        hidden_outputs = self.activation_function(hidden_inputs)
        # calculate signals into final output layer
        final_inputs = numpy.dot(self.who, hidden_outputs)
        # calculate the signals emerging from final output layer
        final_outputs = self.activation_function(final_inputs)
        return final_outputs
# number of input, hidden and output nodes
input_nodes = 1
hidden_nodes = 10
output_nodes = 1
learning_rate = 0.1
error_array = []
# create instance of neural network
n = neuralNetwork(input_nodes,hidden_nodes,output_nodes, learning_rate)
# load the training data CSV file into a list
training_data_file = open("train-sin-pi-norm_shuffle.csv", 'r')
training_data_list = training_data_file.readlines()
training_data_file.close()
# train the neural network
# epochs is the number of times the training data set is used for training
epochs = 10
for e in range(epochs):
    # go through all records in the training data set
    for record in training_data_list:
        # split the record at the ';' separator
        all_values = record.split(';')
        inputs = numpy.asfarray(all_values[0])
        targets = float(all_values[1])
        n.train(inputs, targets)
        pass
    pass
# normally here would be the testing of a test data set, but this is not important at the moment.
plt.plot(error_array, label="Error")
plt.legend()
plt.show()
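To see the shape of what the network has actually learned, not just the raw per-sample error, one could also query the trained network over the whole normalized input range and plot its output against the normalized sine target. This is only an illustrative sketch, not part of the original post; it reuses the n instance from above and assumes the same normalization as in the data-generation sketch:

# sketch: compare the network output with the normalized sine target
xs = numpy.linspace(0.0, 1.0, 200)
predictions = [n.query(x).item() for x in xs]
true_values = (numpy.sin(xs * 2.0 * numpy.pi) + 1.0) / 2.0

plt.plot(xs, true_values, label="normalized sin(x)")
plt.plot(xs, predictions, label="network output")
plt.legend()
plt.show()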
Solution