Implementing gradient descent in Python

Problem Description

I am trying to build a gradient descent function in Python. I am using binary cross-entropy as the loss function and sigmoid as the activation function.

import numpy as np

def sigmoid(x):
    # Logistic activation: squashes any real input into (0, 1).
    return 1/(1+np.exp(-x))

def binary_crossentropy(y_pred, y):
    # Clamp predictions away from exactly 0 and 1 so np.log never sees 0.
    epsilon = 1e-15
    y_pred_new = np.array([max(i, epsilon) for i in y_pred])
    y_pred_new = np.array([min(i, 1-epsilon) for i in y_pred_new])
    return -np.mean(y*np.log(y_pred_new) + (1-y)*np.log(1-y_pred_new))
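As an aside, a minimal vectorized sketch of the same clipping (assuming y_pred is a NumPy array; np.clip clamps every element in one call, and also handles 2-D arrays, which the row-by-row comprehensions above do not):

def binary_crossentropy(y_pred, y):
    epsilon = 1e-15
    # Clamp predictions into [epsilon, 1 - epsilon] so np.log never sees 0.
    y_pred_new = np.clip(y_pred, epsilon, 1 - epsilon)
    return -np.mean(y*np.log(y_pred_new) + (1-y)*np.log(1-y_pred_new))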

def gradient_descent(X, y, epochs=10, learning_rate=0.5):
    features = X.shape[0]              # rows of X are features
    w = np.ones(shape=(features, 1))
    bias = 0
    n = X.shape[1]                     # columns of X are samples
    for i in range(epochs):
        weighted_sum = w.T@X + bias    # shape (1, n)
        y_pred = sigmoid(weighted_sum)

        loss = binary_crossentropy(y_pred, y)

        # Gradients of the binary cross-entropy w.r.t. w and bias
        d_w = (1/n)*(X@(y_pred-y).T)
        d_bias = np.mean(y_pred-y)

        w = w - learning_rate*d_w
        bias = bias - learning_rate*d_bias

        print(f'Epoch:{i}, weights:{w}, bias:{bias}, loss:{loss}')
    return w, bias

So, as input I gave:

X = np.array([[0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.4, 0.6, 0.2, 0.4], 
              [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.9, 0.4, 0.7]])
y = 2*X[0] - 3*X[1] + 0.4

Then w, bias = gradient_descent(X, y, epochs=100) outputs w = array([[-20.95],[-29.95]]), b = -55.50000017801383, loss: 40.406546076763014. The weights keep decreasing (becoming more negative), and the bias also keeps decreasing with more epochs. The expected output is w = [[2],[-3]] and b = 0.4.

I don't know what I am doing wrong, and the loss is not converging either; it stays constant across all epochs.

Tags: python, python-3.x, machine-learning, deep-learning, neural-network

Solution

Usually, binary cross-entropy loss is used for binary classification tasks, where the targets are 0 or 1. Here your targets are continuous values (some of them negative), while sigmoid can only produce outputs in (0, 1), so the model can never match those targets and the loss cannot converge. Your task is really linear regression, so I would prefer the Mean Square Error loss function.
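For reference, with ŷ = wᵀX + b, these are the mean squared error and the gradients the code below uses:

    L = (1/n)·Σᵢ(yᵢ − ŷᵢ)²
    ∂L/∂w = −(2/n)·X(y − ŷ)ᵀ
    ∂L/∂b = −(2/n)·Σᵢ(yᵢ − ŷᵢ)

Here is my suggestion: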

def gradient_descent(X, y, epochs=1000, learning_rate=0.5):
    w = np.ones((X.shape[0], 1))   # one weight per feature (row of X)
    bias = 1
    n = X.shape[1]                 # number of samples

    for i in range(epochs):
        y_pred = w.T @ X + bias    # plain linear model: no sigmoid

        mean_square_err = (1.0 / n) * np.sum(np.power((y - y_pred), 2))

        # Gradients of the mean squared error w.r.t. w and bias
        d_w = (-2.0 / n) * (y - y_pred) @ X.T
        d_bias = (-2.0 / n) * np.sum(y - y_pred)

        w -= learning_rate * d_w.T
        bias -= learning_rate * d_bias

        print(f'Epoch:{i}, weights:{w}, bias:{bias}, loss:{mean_square_err}')

    return w, bias


X = np.array([[0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.4, 0.6, 0.2, 0.4],
              [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.9, 0.4, 0.7]])
y = 2*X[0] - 3*X[1] + 0.4

w, bias = gradient_descent(X, y, epochs=5000, learning_rate=0.5)

print(f'w = {w}')
print(f'bias = {bias}')

Output:

w = [[ 1.99999999], [-2.99999999]]
bias = 0.40000000041096756
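
As a quick sanity check, the same parameters can also be recovered in closed form with ordinary least squares via np.linalg.lstsq (a minimal sketch, reusing X and y from above):

import numpy as np

# Samples as rows, plus a column of ones so the intercept is fitted too.
A = np.column_stack([X.T, np.ones(X.shape[1])])   # shape (10, 3)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)   # approximately [ 2.  -3.   0.4]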
