Scikit Learn: Logistic Regression model coefficients: Clarification
I need to know how to return the logistic regression coefficients in such a manner that I can generate the predicted probabilities myself.

My code looks like this:

from sklearn.linear_model import LogisticRegression

lr = LogisticRegression()
lr.fit(training_data, binary_labels)

# Generate probabilities automatically (pass the feature data, not the labels)
predicted_probs = lr.predict_proba(training_data)

I had assumed the lr.coef_ values would follow typical logistic regression, so that I could return the predicted probabilities like this:

sigmoid( dot([val1, val2, offset], lr.coef_.T) )

But this is not the appropriate formulation. Does anyone have the proper format for generating predicted probabilities from Scikit Learn LogisticRegression? Thanks!

Chapa answered 24/9, 2013 at 23:38 Comment(0)

Take a look at the documentation (http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html): the offset (intercept) coefficient isn't stored in lr.coef_.

coef_ : array, shape = [n_classes-1, n_features]
    Coefficient of the features in the decision function. coef_ is a readonly property derived from raw_coef_ that follows the internal memory layout of liblinear.

intercept_ : array, shape = [n_classes-1]
    Intercept (a.k.a. bias) added to the decision function. It is only available when the parameter fit_intercept is set to True.

try:

sigmoid( dot([val1, val2], lr.coef_.T) + lr.intercept_ )
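As a runnable sketch of this formula (the synthetic dataset from make_classification and all variable names below are illustrative assumptions, not from the original post): for a binary classifier, the sigmoid of the decision function reproduces the positive-class column of predict_proba.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative synthetic binary-classification data
X, y = make_classification(n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)
lr = LogisticRegression().fit(X, y)

# Manual probability of the positive class for one sample [val1, val2]
val1, val2 = X[0]
manual = sigmoid(np.dot([val1, val2], lr.coef_.T) + lr.intercept_)

# Matches column 1 (positive class) of predict_proba
auto = lr.predict_proba([[val1, val2]])[0, 1]
print(np.allclose(manual, auto))  # True
```

Note the transpose: lr.coef_ has shape (1, n_features) for a binary problem, so dot with a plain feature vector needs lr.coef_.T.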
Aphesis answered 24/9, 2013 at 23:42 Comment(5)
#prgao, thanks, but your answer only tells me how NOT to generate the probabilities. Do you know how to compute them? Thanks.Chapa
sigmoid( dot([val1, val2], lr.coef_.T) + lr.intercept_ )Aphesis
#prgao, that did it. Damn, I figured this would have worked: sigmoid( dot([val1, val2, 1], lr.coef_.T) ), but it turns out I need to include the intercept twice, as in: sigmoid( dot([val1, val2, 1], lr.coef_.T) + lr.intercept_ ). Thanks for pointing this out.Chapa
It can be shortened to: sigmoid( dot([val1, val2, 2], lr.coef_.T) ). Note the 2 value. This effectively adds an additional intercept term.Chapa
Just in case this is useful to someone: I was trying this and getting different values between the predict_proba results and the solution proposed by prgao. It turns out I was not normalizing my probabilities (dividing by the sum).Amass
The easiest way is to read the coef_ attribute of the LogisticRegression classifier.

For the definition of coef_, check the Scikit-Learn documentation.

Example:

from sklearn.linear_model import LogisticRegression

clf = LogisticRegression()
clf.fit(x_train, y_train)

weight = clf.coef_
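A self-contained sketch of reading the fitted weights (the synthetic data and variable names are illustrative assumptions): for a binary problem, coef_ has one row of per-feature weights and intercept_ holds the single bias term.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Illustrative synthetic binary-classification data with 4 features
X, y = make_classification(n_features=4, random_state=0)
clf = LogisticRegression().fit(X, y)

weight = clf.coef_       # shape (1, n_features) for binary problems
bias = clf.intercept_    # shape (1,)
print(weight.shape, bias.shape)  # (1, 4) (1,)
```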
Retrogression answered 15/4, 2021 at 5:28 Comment(0)
