Gradient descent function in Python NumPy

Problem description

import numpy as np

def gradientDescent(X, y, theta, alpha, num_iters):
    print(X.shape, y.shape, theta.shape)
    m = len(y)
    for _ in range(num_iters):  # avoid shadowing the built-in `iter`
        hypothesis = np.dot(X, theta)     # predictions, shape (m, 1)
        loss = hypothesis - y             # residuals, shape (m, 1)
        print("loss {}".format(loss[0]))  # only the first example's residual
        gradient = np.dot(X.transpose(), loss) / m
        theta = theta - alpha * gradient
    return theta

I have printed the shapes of X, y, theta and the loss for clarity; the inputs are alpha = 0.01 and num_iters = 150. The results start to diverge after step 6, as shown below:

(97, 2) (97, 1) (2, 1)
loss [-17.592]
loss [-13.5419506]
loss [-12.82427147]
loss [-12.69896095]
loss [-12.67894766]
loss [-12.67764826]
loss [-12.67967143]
loss [-12.68228117]
loss [-12.68499113]
loss [-12.68771485]
loss [-12.69043697]
loss [-12.69315478]
...
loss [-13.01638377]
loss [-13.01851416]
loss [-13.0206407]
loss [-13.02276341]
loss [-13.0248823]

theta = [[-0.86287834]
 [ 0.88834569]]

theta should have been [[-3.6303]
 [ 1.1664]]

Tags: python, numpy, machine-learning, gradient-descent

Solution
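
The posted loop implements batch gradient descent correctly; the confusion is in what is being printed. `loss[0]` is only the residual of the *first* training example, h(x₁) − y₁, and that single value is not required to shrink monotonically even while the overall cost J(θ) = (1/(2m)) Σᵢ (h(xᵢ) − yᵢ)² steadily decreases. The expected answer `[[-3.6303], [1.1664]]` matches what the well-known Andrew Ng course exercise (ex1) reports after 1500 iterations of this exact update with alpha = 0.01, which suggests the likely fix is `num_iters = 1500` rather than 150. The sketch below (an assumption: the original dataset isn't available here, so synthetic data shaped like the question's `(97, 2)` design matrix is generated from a `true_theta` chosen to mirror the expected answer) tracks the actual cost per iteration instead of `loss[0]`:

```python
import numpy as np

def gradient_descent(X, y, theta, alpha, num_iters):
    """Batch gradient descent for linear regression, tracking the cost J(theta)."""
    m = len(y)
    costs = []
    for _ in range(num_iters):
        loss = X @ theta - y                          # residual vector, shape (m, 1)
        costs.append((loss.T @ loss).item() / (2 * m))  # J(theta) = (1/2m) * sum of squared residuals
        theta = theta - alpha * (X.T @ loss) / m      # same update as the question's code
    return theta, costs

# Synthetic stand-in data shaped like the question's (97, 2) matrix (hypothetical values)
rng = np.random.default_rng(0)
m = 97
x = rng.uniform(5, 12, size=(m, 1))
X = np.hstack([np.ones((m, 1)), x])                   # column of ones for the intercept term
true_theta = np.array([[-3.6303], [1.1664]])
y = X @ true_theta + rng.normal(0, 0.5, size=(m, 1))

theta0 = np.zeros((2, 1))
theta_150, costs_150 = gradient_descent(X, y, theta0, alpha=0.01, num_iters=150)
theta_1500, costs_1500 = gradient_descent(X, y, theta0, alpha=0.01, num_iters=1500)
```

With a suitable alpha, `costs` decreases monotonically even while individual residuals like `loss[0]` drift around, and `theta_1500` lands much closer to the least-squares solution than `theta_150` does; that monotone cost curve, not a single residual, is the right convergence check.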
