c++ - Developing a neural network with OpenNN: program crashes after trying to add layers to a regression neural network
Problem description
I am developing a neural network for a regression task with OpenNN, and I am having a problem with my network's layers. A few weeks ago I cloned OpenNN from its master branch. Every time I try to add a layer, my program crashes without any error message. Since I am implementing a neural network for a regression problem, I looked at the Yacht_hydrodynamics_design example in OpenNN, but after copying that code into my own I ran into this problem. So far I have tried to add a Scaling layer and an Unscaling layer, and neither works. Here is my code so far:
bool NNetwork::preparationForTraining(const string& filedata) {
    int inputLayerSize = 5;
    int outputLayerSize = 1;
    int hiddenLayerSize = static_cast<int>(round(sqrt((inputLayerSize * inputLayerSize) + (outputLayerSize * outputLayerSize))));
    int layers = 3;

    try {
        Tensor<Index, 1> neural_network_architecture(layers);
        neural_network_architecture.setValues({inputLayerSize, hiddenLayerSize, outputLayerSize});
        neuralnetwork = NeuralNetwork(NeuralNetwork::Approximation, neural_network_architecture);
    }
    catch (...) {
        cerr << "Failed to initialize Neural Network" << endl;
        return false;
    }

    try {
        dataset = DataSet(filedata, ';', true);
    }
    catch (...) {
        cerr << "Can not read Feature File" << endl;
        return false;
    }

    if (dataset.get_input_variables_number() != inputLayerSize) {
        cerr << "Wrong size of input layer" << endl;
        return false;
    }
    if (dataset.get_target_variables_number() != outputLayerSize) {
        cerr << "Wrong size of output layer" << endl;
        return false;
    }

    // prepare the data set:
    // get the information of the variables, such as names and statistical descriptives
    Tensor<string, 1> inputs_names = dataset.get_input_variables_names();
    Tensor<string, 1> targets_names = dataset.get_target_variables_names();

    // the instances are divided into training, selection and testing subsets
    dataset.split_samples_random();

    // get the number of input and target variables
    Index input_variables_number = dataset.get_input_variables_number();
    Index target_variables_number = dataset.get_target_variables_number();

    // scale the data set with the minimum-maximum scaling method
    Tensor<string, 1> scaling_inputs_methods(input_variables_number);
    scaling_inputs_methods.setConstant("MinimumMaximum");
    Tensor<Descriptives, 1> inputs_descriptives = dataset.scale_input_variables(scaling_inputs_methods);

    Tensor<string, 1> scaling_target_methods(target_variables_number);
    scaling_target_methods.setConstant("MinimumMaximum");
    Tensor<Descriptives, 1> targets_descriptives = dataset.scale_target_variables(scaling_target_methods);

    // prepare the neural network:
    // introduce information in the layers for a more precise calibration
    neuralnetwork.set_inputs_names(inputs_names);
    neuralnetwork.set_outputs_names(targets_names);
    cout << "inputs names: " << inputs_names << endl;
    cout << "targets names: " << targets_names << endl;

    // configure the scaling layer of the neural network
    ScalingLayer* scaling_layer_pointer = neuralnetwork.get_scaling_layer_pointer(); // program crashes here
    scaling_layer_pointer->set_scaling_methods(ScalingLayer::MinimumMaximum);
    scaling_layer_pointer->set_descriptives(inputs_descriptives);

    // configure the unscaling layer of the neural network
    UnscalingLayer* unscaling_layer_pointer = neuralnetwork.get_unscaling_layer_pointer();
    unscaling_layer_pointer->set_unscaling_methods(UnscalingLayer::MinimumMaximum);
    unscaling_layer_pointer->set_descriptives(targets_descriptives);

    return true;
}
As you can see, I have a class called NNetwork, which is declared as follows (header file):
using namespace OpenNN;
using namespace Eigen;

namespace covid {
    class NNetwork {
    public:
        explicit NNetwork();
        ~NNetwork() = default;

        bool preparationForTraining(const string& filedata);
        bool training();
        bool testing();
        bool predict(const string& filedata, std::vector<double>& prediction);
        bool loadNN();

    private:
        OpenNN::NeuralNetwork neuralnetwork;
        OpenNN::DataSet dataset;
    };
}
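One detail worth checking (an assumption on my part, not something confirmed above): the header holds NeuralNetwork and DataSet by value, so preparationForTraining first default-constructs them and later copy-assigns fully built temporaries (neuralnetwork = NeuralNetwork(...)). If a library type's copy assignment does not deep-copy its internal layer pointers, that pattern can leave the member in a broken state. Holding the member behind std::unique_ptr and constructing it in place avoids assignment entirely; a minimal sketch with a stand-in Network type (not the real OpenNN class):

```cpp
#include <memory>
#include <utility>
#include <vector>

// Stand-in for a library type that may be unsafe to copy-assign.
struct Network {
    std::vector<long> architecture;
    explicit Network(std::vector<long> arch) : architecture(std::move(arch)) {}
};

class NNetwork {
public:
    // Build the member in place instead of default-constructing it
    // and then copy-assigning a temporary over it.
    bool preparationForTraining() {
        network_ = std::make_unique<Network>(std::vector<long>{5, 5, 1});
        return network_ != nullptr;
    }

    const Network* network() const { return network_.get(); }

private:
    std::unique_ptr<Network> network_;  // no copy assignment of Network ever runs
};
```

The same pattern would apply to the DataSet member; the unique_ptr is reset with a freshly constructed object, so the library type's copy semantics never come into play.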
When I remove the last six lines of code in preparationForTraining, the program runs until the next crash occurs in the function training(), which is called immediately after preparationForTraining:
bool NNetwork::training() {
    // set up the training strategy, which is composed of a loss index and an optimization algorithm
    TrainingStrategy training_strategy(&neuralnetwork, &dataset); // program crashes here next
    training_strategy.set_loss_method(TrainingStrategy::NORMALIZED_SQUARED_ERROR);
    training_strategy.set_optimization_method(TrainingStrategy::ADAPTIVE_MOMENT_ESTIMATION);

    // configure the optimization algorithm
    AdaptiveMomentEstimation* adam = training_strategy.get_adaptive_moment_estimation_pointer();
    adam->set_loss_goal(1.0e-3);
    adam->set_maximum_epochs_number(10000);
    adam->set_display_period(1000);

    try {
        // start the training process
        const OptimizationAlgorithm::Results optimization_algorithm_results = training_strategy.perform_training();
        optimization_algorithm_results.save("E:/vitalib/vitalib/optimization_algorithm_results.dat");
    }
    catch (...) {
        return false;
    }

    return true;
}
I have the feeling I am missing something, probably a crucial line of code or similar. It would be great if anyone with OpenNN experience could help me.
Update: I moved all of the code from preparationForTraining into main, and now the program no longer crashes. But that is not what I am looking for, since I would rather do this inside a member function.
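For what it's worth, one pattern that produces exactly this behaviour (no crash when the object lives directly in main, a crash when a member is assigned inside a function) is a copy assignment that only copies raw pointers. Whether OpenNN's NeuralNetwork is actually affected is an assumption, but the mechanism is easy to reproduce with a toy layer owner. The class below is written with the rule of three; with the compiler-generated shallow copy assignment instead, `net = DeepNet(...)` would leave `net` sharing a pointer with a destroyed temporary, and the later access or second delete crashes:

```cpp
// Toy owner of a raw "layer" pointer, written with the rule of three.
struct Layer {
    int neurons;
};

class DeepNet {
public:
    explicit DeepNet(int neurons) : layer_(new Layer{neurons}) {}
    ~DeepNet() { delete layer_; }

    // Deep copy: allocate/copy the pointee instead of copying the pointer.
    DeepNet(const DeepNet& other) : layer_(new Layer{*other.layer_}) {}
    DeepNet& operator=(const DeepNet& other) {
        if (this != &other) {
            *layer_ = *other.layer_;  // no pointer is shared with `other`
        }
        return *this;
    }

    int neurons() const { return layer_->neurons; }

private:
    Layer* layer_;
};
```

With the deep copy in place, assigning a temporary over an existing member is safe, which is the same shape as `neuralnetwork = NeuralNetwork(...)` in preparationForTraining.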