Tensorflow: Predicting data labels from a trained dense layer

Problem description

I have trained a simple feed-forward neural network using tf.layers.dense. However, after setting up the optimizer and training the layers, I don't know how to use the trained layers to predict labels for new data that I want to evaluate. I have searched Stack Overflow and Google for how to do this, and the closest answer I found is "Making predictions with a created TensorFlow model". Is there really no simpler way to use the trained layers than saving the model and loading it again?
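For reference, the save-and-restore route from that answer looks roughly like this (an illustrative sketch, not my actual code; the stand-in layer and checkpoint path are made up):

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 3])
logits = tf.layers.dense(x, 2)          # stand-in for the trained network
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... training would happen here ...
    saver.save(sess, './model.ckpt')

# Later (possibly in another script, after rebuilding the same graph):
with tf.Session() as sess:
    saver.restore(sess, './model.ckpt')
    print(sess.run(logits, feed_dict={x: [[0.1, 0.2, 0.3]]}))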

The network is trained to predict the output concentrations of a chemical reaction. The reaction outputs can be modeled with coupled ODEs and solved easily, but I am trying to use a neural network to give an approximate solution.

import numpy as np
import pandas as pd
import tensorflow as tf

#Making Data (conc_out and trans come from the chemical-reaction model, defined elsewhere)
training_features=pd.DataFrame(data=np.random.random_sample([500,3]),columns=['ca','t','T'])
training_labels=conc_out(training_features,trans)
validation_features=pd.DataFrame(data=np.random.random_sample([30,3]),columns=['ca','t','T'])
validation_labels=conc_out(validation_features,trans)

def my_input_fn(features, targets, batch_size=1,num_epochs=None,shuffle=True):
    #Creating Dataset importing function, returning get_next from iterator
    #features=features.to_dict('list')

    ds=tf.data.Dataset.from_tensor_slices((features,targets))
    if shuffle:
        ds=ds.shuffle(buffer_size=10000)

    ds=ds.batch(batch_size).repeat(num_epochs)

    features,labels=ds.make_one_shot_iterator().get_next()
    return features,labels

def nn(input_features,hidden_layers=[10]):
    net=tf.layers.dense(input_features,hidden_layers[0],activation=tf.nn.sigmoid)
    if len(hidden_layers)>1:
        for i in range(len(hidden_layers)-1):
            net=tf.layers.dense(net,hidden_layers[i+1],activation=tf.nn.sigmoid)
    logits=tf.layers.dense(net,2,activation=None)
    return logits

def train_nn_regression_model(
        learning_rate,
        epochs,
        batch_size,
        hidden_units,
        training_examples,
        training_targets,
        validation_examples,
        validation_targets):

    # Create input functions.

    features,labels=my_input_fn(training_examples,training_targets,batch_size=batch_size)
    training_predictions=nn(features,hidden_units)
    loss=tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=training_predictions,labels=labels))
    train_op=tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss)

    with tf.Session() as sess:
        #Training neural network
        sess.run(tf.global_variables_initializer())
        training_predictions_before = pd.DataFrame(data=sess.run(training_predictions), columns=['TP_Ca', 'TP_Cb'])
        print(training_predictions_before.head(10))
        for epoch in range(epochs):
            epoch_loss=0
            for _ in range(int(training_examples.shape[0]/batch_size)):
                _,loss_value=sess.run([train_op,loss])
                epoch_loss+=loss_value
            print('Epoch : ',epoch+1, ' out of ', epochs,' . Epoch loss = ',epoch_loss)


        training_predictions=pd.DataFrame(data=sess.run(training_predictions),columns=['TP_Ca','TP_Cb'])
        print(training_predictions.head(10))

train_nn_regression_model(
        learning_rate=0.002,
        epochs=10,
        batch_size=20,
        hidden_units=[100],
        training_examples=training_features,
        training_targets=training_labels,
        validation_examples=validation_features,
        validation_targets=validation_labels)

When I run the code, training_predictions_before and training_predictions give exactly the same answer. Shouldn't training_predictions give a different answer, since it is evaluated after the training op has run?
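That is what I would expect: even a tiny standalone example shows the output of a dense layer changing after a single optimizer step (illustrative toy data, not my actual model):

import numpy as np
import tensorflow as tf

x = tf.constant(np.random.random_sample([8, 3]), dtype=tf.float32)
y = tf.constant(np.random.random_sample([8, 2]), dtype=tf.float32)
pred = tf.layers.dense(x, 2)
step = tf.train.AdamOptimizer(0.01).minimize(tf.losses.mean_squared_error(y, pred))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    before = sess.run(pred)
    sess.run(step)
    after = sess.run(pred)
    print('max change after one step:', np.abs(after - before).max())  # should be > 0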

Thanks!

Edit: I have edited the code to make sure everything runs in the same session.

Tags: tensorflow, machine-learning, neural-network, layer

Solution
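
One way to predict labels for new data without saving and reloading the model is to share the trained weights between a training path and a prediction path: give the dense layers fixed names inside a variable scope with reuse=tf.AUTO_REUSE, build the prediction path on a placeholder, and feed the new samples through it in the same session. Below is a minimal sketch, assuming my_input_fn, training_features and training_labels from the question are already defined; the mean-squared-error loss, the scope/layer names and the unseen sample data are my own choices, not from the question.

import numpy as np
import tensorflow as tf

def nn(x, hidden_layers=[10]):
    # Fixed layer names + AUTO_REUSE: every call to nn() shares one set of weights.
    with tf.variable_scope('nn', reuse=tf.AUTO_REUSE):
        net = x
        for i, units in enumerate(hidden_layers):
            net = tf.layers.dense(net, units, activation=tf.nn.sigmoid, name='hidden_%d' % i)
        return tf.layers.dense(net, 2, activation=None, name='output')

# Training path: the dataset iterator feeds the graph, as in the question.
features, labels = my_input_fn(training_features, training_labels, batch_size=20)
training_predictions = nn(features, [100])
loss = tf.losses.mean_squared_error(labels, training_predictions)
train_op = tf.train.AdamOptimizer(learning_rate=0.002).minimize(loss)

# Prediction path: a placeholder that reuses the weights trained above
# (float64 because the numpy data built above is float64).
new_features = tf.placeholder(tf.float64, shape=[None, 3])
new_predictions = nn(new_features, [100])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(10 * 500 // 20):            # epochs * examples / batch_size steps
        sess.run(train_op)
    # Predict labels for data the network has never seen, in the same session.
    unseen = np.random.random_sample([5, 3])
    print(sess.run(new_predictions, feed_dict={new_features: unseen}))

The same new_predictions tensor can also be evaluated on the validation set, e.g. with feed_dict={new_features: validation_features.values}.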

