Gradient descent not converging with polynomial regression

Problem description

Background: I am trying to write a general function that optimizes the cost of any regression problem using polynomial regression (of any specified degree). I am trying to fit my model to the load_boston dataset (with the house price as the label and 13 features).

I have tried several polynomial degrees, as well as multiple learning rates and epoch counts (using gradient descent), yet the MSE ends up extremely high even on the training dataset (I use 100% of the data to train the model and evaluate the cost on that same data, but the MSE cost is still very high).

import tensorflow as tf
from sklearn.datasets import load_boston

def polynomial(x, coeffs):
    y = 0
    for i in range(len(coeffs)):
        y += coeffs[i]*x**i
    return y

def initial_parameters(dimensions, data_type, degree): # list number of dims/features and degree
    thetas = [tf.Variable(0, dtype=data_type)] # the constant theta/bias
    for i in range(degree):
        thetas.append(tf.Variable( tf.zeros([dimensions, 1], dtype=data_type)))
    return thetas

def regression_error(x, y, thetas):
    hx = thetas[0] # constant thetas - no need to have 1 for each variable (e.g x^0*th + y^0*th...)
    for i in range(1, len(thetas)):
        hx = tf.add(hx, tf.matmul( tf.pow(x, i), thetas[i]))
    return tf.reduce_mean(tf.squared_difference(hx, y))

def polynomial_regression(x, y, data_type, degree, learning_rate, epoch): #features=dimensions=variables
    thetas = initial_parameters(x.shape[1], data_type, degree)
    cost = regression_error(x, y, thetas)
    init = tf.initialize_all_variables()
    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
    with tf.Session() as sess:
        sess.run(init)
        for _ in range(epoch):
            sess.run(optimizer)
        return cost.eval()

x, y = load_boston(True) # yes just use the entire dataset
for deg in range(1, 2):
    for lr in range(-8, -5):
        error = polynomial_regression(x, y, tf.float64, deg, 10**lr, 100 )
        print (deg, lr, error)

It outputs 97.3 (for degree = 1, learning rate = 10^-6) even though most labels are around 30. What is wrong with the code?

Tags: python, python-3.x, tensorflow, machine-learning, data-science

Solution


The problem is that the different features have widely different orders of magnitude, which is incompatible with a single learning rate shared by all features. More importantly, when using a non-zero variable initialization, you have to make sure those initial values are compatible with the feature values.

In [1]: from sklearn.datasets import load_boston

In [2]: x, y = load_boston(True)

In [3]: x.std(axis=0)
Out[3]: 
array([8.58828355e+00, 2.32993957e+01, 6.85357058e+00, 2.53742935e-01,
       1.15763115e-01, 7.01922514e-01, 2.81210326e+01, 2.10362836e+00,
       8.69865112e+00, 1.68370495e+02, 2.16280519e+00, 9.12046075e+01,
       7.13400164e+00])

In [4]: x.mean(axis=0)
Out[4]: 
array([3.59376071e+00, 1.13636364e+01, 1.11367787e+01, 6.91699605e-02,
       5.54695059e-01, 6.28463439e+00, 6.85749012e+01, 3.79504269e+00,
       9.54940711e+00, 4.08237154e+02, 1.84555336e+01, 3.56674032e+02,
       1.26530632e+01])
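
To make the scale problem concrete: with the question's all-zero initialization, the gradient of the MSE with respect to each degree-1 weight is proportional to the corresponding feature column, so the per-feature gradient magnitudes differ by several orders of magnitude. A minimal NumPy sketch (illustrative only, not part of the original answer):

import numpy as np
from sklearn.datasets import load_boston

x, y = load_boston(True)
residual = 0.0 - y  # prediction minus target when every theta starts at zero
grad = 2 * (x * residual[:, None]).mean(axis=0)  # d(MSE)/d(theta_j) for degree 1
print(np.abs(grad))  # spans roughly 1e0 (CHAS) up to 1e4 (TAX, B)

Any single learning rate small enough for the large-gradient features barely moves the weights of the small-scale ones, which is exactly the stagnation observed in the question.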

A common approach is to normalize the input data (e.g., to zero mean and unit variance) and to pick the initial weights at random (e.g., from a normal distribution with standard deviation 1). sklearn.preprocessing provides various utilities for these tasks.

The polynomial_regression function then simplifies to the following. (Note that PolynomialFeatures prepends a constant column of ones, which StandardScaler would center to all zeros and thereby silently drop the intercept; the intercept is therefore modeled as an explicit bias variable instead.)

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

# include_bias=False: the scaler would center a constant column to zeros,
# so the intercept is modeled as an explicit bias variable below.
pipeline = Pipeline([
    ('poly', PolynomialFeatures(degree, include_bias=False)),
    ('scaler', StandardScaler())
])
x = pipeline.fit_transform(x)
y = y.reshape(-1, 1)  # match the [n_samples, 1] shape of the prediction
thetas = tf.Variable(tf.random_normal([x.shape[1], 1], dtype=data_type))
bias = tf.Variable(tf.zeros([1], dtype=data_type))
cost = tf.reduce_mean(tf.squared_difference(tf.matmul(x, thetas) + bias, y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(epoch):
        sess.run(optimizer)
    return cost.eval()
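
For reference, the rewritten function can then be driven just as before. The learning rate of 0.1 and the 1000 epochs below are illustrative assumptions that only become feasible once the features are standardized; they are not values from the original answer:

x, y = load_boston(True)
for degree in (1, 2):
    print(degree, polynomial_regression(x, y, tf.float64, degree, 0.1, 1000))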
