Simulating sklearn logistic regression predict_proba using only coefficients and intercept

Problem description

I create some dummy data and train an sklearn logistic regression on it. I then want to reproduce the output of predict_proba by hand, using only coef_ and intercept_, but the results differ. The setup is as follows:

from sklearn import linear_model
from scipy.special import expit, softmax
import numpy as np

X = [[0,0,0], [0,1,0], [0,2,0], [1,1,1], [0,1,0], [0,2,0]]
y = [0,0,0,1,1,2]

# Fit the classifier
clf = linear_model.LogisticRegression(C=1e5, multi_class="ovr", class_weight="balanced")
clf.fit(X, y)

Then I simply use the sigmoid and softmax to compute the output myself:

softmax([
         expit(np.dot([[0,2,0]], clf.coef_[0]) + clf.intercept_[0]),
         expit(np.dot([[0,2,0]], clf.coef_[1]) + clf.intercept_[1]),
         expit(np.dot([[0,2,0]], clf.coef_[2]) + clf.intercept_[2])
])

But this returns different values than

clf.predict_proba([[0,2,0]])

# predict_proba:
array([[0.281399  , 0.15997556, 0.55862544]])

# my softmax calculation:
array([[0.29882052],
       [0.24931448],
       [0.451865  ]])

Tags: python, scikit-learn, logistic-regression, softmax, sigmoid

Solution


With multi_class="ovr", scikit-learn fits one binary classifier per class, applies the sigmoid to each class's decision value w_k·x + b_k, and then simply normalizes those scores so they sum to one, i.e. p_k = expit(w_k·x + b_k) / Σ_j expit(w_j·x + b_j); it does not pass the sigmoid outputs through a softmax, which is why the calculation in the question gives different numbers. You can replicate the predict_proba calculation from the estimated parameters as follows:

from sklearn import linear_model
from scipy.special import expit, softmax
import numpy as np

# Data
X = [[0,0,0], [0,1,0], [0,2,0], [1,1,1], [0,1,0], [0,2,0]]
y = [0,0,0,1,1,2]

# Classifier
clf = linear_model.LogisticRegression(C=1e5, multi_class="ovr", class_weight="balanced")
clf.fit(X, y)

# Predicted probabilities
print(clf.predict_proba([[0,2,0]]))
#[[0.281399 0.15997556 0.55862544]]

# Recalculated predicted probabilities without softmax
prob1 = np.array([expit(np.dot([[0,2,0]], clf.coef_[0]) + clf.intercept_[0]),
                  expit(np.dot([[0,2,0]], clf.coef_[1]) + clf.intercept_[1]),
                  expit(np.dot([[0,2,0]], clf.coef_[2]) + clf.intercept_[2])]).reshape(1, -1)

print(prob1 / np.sum(prob1))
#[[0.281399 0.15997556 0.55862544]]
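# Note: softmax(np.log(p)) is the same normalization in disguise, since
# exp(log(p_i)) / sum_j exp(log(p_j)) = p_i / sum_j p_j; the block below
# just rewrites the division above in that form.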

# Recalculated predicted probabilities with softmax
prob2 = np.log(prob1)

print(softmax(prob2))
#[[0.281399 0.15997556 0.55862544]]
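For completeness, the same check can be written more compactly with clf.decision_function, which returns the raw per-class scores x·coef_.T + intercept_. This is a minimal sketch assuming the clf fitted above is still in scope; it should reproduce predict_proba up to floating-point rounding:

# Compact recalculation via decision_function (raw score w·x + b per class)
scores = clf.decision_function([[0,2,0]])     # shape (1, n_classes)
prob3 = expit(scores)                         # per-class sigmoid (OvR)
prob3 /= prob3.sum(axis=1, keepdims=True)     # normalize rows to sum to 1

print(prob3)
# Expected to match clf.predict_proba([[0,2,0]]) above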
