Multiple linear regression in Python

I can't seem to find any Python libraries that do multiple regression. The only ones I can find do simple regression. I need to regress my dependent variable (y) against several independent variables (x1, x2, x3, etc.).

For example, with this data:

print('y        x1      x2       x3       x4      x5     x6       x7')
for t in texts:
    print("{:>7.1f}{:>10.2f}{:>9.2f}{:>9.2f}{:>10.2f}{:>7.2f}{:>7.2f}{:>9.2f}"
          .format(t.y, t.x1, t.x2, t.x3, t.x4, t.x5, t.x6, t.x7))

(output for above:)

      y        x1       x2       x3        x4     x5     x6       x7
   -6.0     -4.95    -5.87    -0.76     14.73   4.02   0.20     0.45
   -5.0     -4.55    -4.52    -0.71     13.74   4.47   0.16     0.50
  -10.0    -10.96   -11.64    -0.98     15.49   4.18   0.19     0.53
   -5.0     -1.08    -3.36     0.75     24.72   4.96   0.16     0.60
   -8.0     -6.52    -7.45    -0.86     16.59   4.29   0.10     0.48
   -3.0     -0.81    -2.36    -0.50     22.44   4.81   0.15     0.53
   -6.0     -7.01    -7.33    -0.33     13.93   4.32   0.21     0.50
   -8.0     -4.46    -7.65    -0.94     11.40   4.43   0.16     0.49
   -8.0    -11.54   -10.03    -1.03     18.18   4.28   0.21     0.55

How would I regress these in Python to get the linear regression formula:

Y = a1x1 + a2x2 + a3x3 + a4x4 + a5x5 + a6x6 + a7x7 + c

Velvety answered 13/7, 2012 at 22:14 Comment(3)
Not an expert, but if the variables are independent, can't you just run simple regression against each and sum the result? – Caa
@HughBothwell You can't assume that the variables are independent, though. In fact, if you assume that the variables are independent, you may be modeling your data incorrectly. In other words, the responses Y may be correlated with each other, but assuming independence does not accurately model the dataset. – Trimmer
@HughBothwell Sorry if this is a dumb question, but why does it matter whether the raw feature variables x_i are independent or not? How does that affect the predictor (= model)? – Nisbet

sklearn.linear_model.LinearRegression will do it:

from sklearn import linear_model
clf = linear_model.LinearRegression()
clf.fit([[getattr(t, 'x%d' % i) for i in range(1, 8)] for t in texts],
        [t.y for t in texts])

Then clf.coef_ will have the regression coefficients.

sklearn.linear_model also has similar interfaces to do various kinds of regularizations on the regression.
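
For example, a ridge-penalized fit is almost a drop-in replacement. A minimal sketch (reusing the texts iterable from the question; alpha is the L2 penalty strength and is an illustrative choice here):

from sklearn import linear_model

X = [[getattr(t, 'x%d' % i) for i in range(1, 8)] for t in texts]  # predictors
y = [t.y for t in texts]                                           # response

ridge = linear_model.Ridge(alpha=1.0)  # larger alpha shrinks coefficients more
ridge.fit(X, y)
print(ridge.coef_, ridge.intercept_)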

Jejunum answered 13/7, 2012 at 22:41 Comment(9)
This returns an error with certain inputs. Are any other solutions available? – Velvety
@Dougal Can sklearn.linear_model.LinearRegression be used for weighted multivariate regression as well? – Manns
To fit a constant term: clf = linear_model.LinearRegression(fit_intercept=True) – Hiedihiemal
Follow-up: do you know how to get the confidence level using sklearn.linear_model.LinearRegression? Thanks. – Balustrade
@HuanianZhang What do you mean by confidence level? If you want the coefficient of determination, the score method will do it; sklearn.metrics has some other model evaluation criteria. If you want the stuff like in Akavall's answer, statsmodels has some more R-like diagnostics. – Jejunum
@Dougal Thank you very much. I want the stuff like in Akavall's answer, but I can't find sklearn functions for it. Another question: with the above data, the regression coefficients from your method are totally different from those of Akavall's method; do you know why that is? – Balustrade
@HuanianZhang I just checked that I get the same regression coefficients using this approach, Akavall's, and canary_in_the_data_mine's. They all agree. – Jejunum
@Dougal Sorry, my bad. It depends on fit_intercept=True or False; I had changed it to False, which is why I got totally different coefficients. Thank you so much. Have a nice day. – Balustrade
@Manns clf.fit accepts weights as well: fit(self, X, y, sample_weight=None) – Stocker

Here is a little workaround that I created. I checked it against R and it works correctly.

import numpy as np
import statsmodels.api as sm

y = [1,2,3,4,3,4,5,4,5,5,4,5,4,5,4,5,6,5,4,5,4,3,4]

x = [
     [4,2,3,4,5,4,5,6,7,4,8,9,8,8,6,6,5,5,5,5,5,5,5],
     [4,1,2,3,4,5,6,7,5,8,7,8,7,8,7,8,7,7,7,7,7,6,5],
     [4,1,2,5,6,7,8,9,7,8,7,8,7,7,7,7,7,7,6,6,4,4,4]
     ]

def reg_m(y, x):
    ones = np.ones(len(x[0]))
    X = sm.add_constant(np.column_stack((x[0], ones)))
    for ele in x[1:]:
        X = sm.add_constant(np.column_stack((ele, X)))
    results = sm.OLS(y, X).fit()
    return results

Result:

print(reg_m(y, x).summary())

Output:

                            OLS Regression Results                            
==============================================================================
Dep. Variable:                      y   R-squared:                       0.535
Model:                            OLS   Adj. R-squared:                  0.461
Method:                 Least Squares   F-statistic:                     7.281
Date:                Tue, 19 Feb 2013   Prob (F-statistic):            0.00191
Time:                        21:51:28   Log-Likelihood:                -26.025
No. Observations:                  23   AIC:                             60.05
Df Residuals:                      19   BIC:                             64.59
Df Model:                           3                                         
==============================================================================
                 coef    std err          t      P>|t|      [95.0% Conf. Int.]
------------------------------------------------------------------------------
x1             0.2424      0.139      1.739      0.098        -0.049     0.534
x2             0.2360      0.149      1.587      0.129        -0.075     0.547
x3            -0.0618      0.145     -0.427      0.674        -0.365     0.241
const          1.5704      0.633      2.481      0.023         0.245     2.895

==============================================================================
Omnibus:                        6.904   Durbin-Watson:                   1.905
Prob(Omnibus):                  0.032   Jarque-Bera (JB):                4.708
Skew:                          -0.849   Prob(JB):                       0.0950
Kurtosis:                       4.426   Cond. No.                         38.6

pandas provides a convenient way to run OLS as given in this answer:

Run an OLS regression with Pandas Data Frame
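
As the comments below note, the helper above can be replaced by a direct statsmodels call. A minimal equivalent sketch, reusing the same y and x lists (here the columns come out in the natural x1, x2, x3 order, with the constant first):

import numpy as np
import statsmodels.api as sm

X = np.array(x).T            # predictors as columns: x1, x2, x3
X = sm.add_constant(X)       # prepend the intercept column
results = sm.OLS(y, X).fit()
print(results.summary())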

Leptophyllous answered 20/2, 2013 at 2:53 Comment(5)
The reg_m function is unnecessarily complicated. x = np.array(x).T, x = sm.add_constant(x) and results = sm.OLS(endog=y, exog=x).fit() is enough. – Ovariotomy
This is a nice tool. Just one question: in this case the t value is outside the 95.5% confidence interval, so does that mean the fit is not accurate at all, or how do you explain this? – Balustrade
Just noticed that your x1, x2, x3 are in reverse order in the original predictor list, i.e., x = [x3, x2, x1]? – Houlihan
@Houlihan You can just add x = x[::-1] within the function definition to get them in the right order. – Geanticlinal
@HuanianZhang The "t value" is just how many standard errors the coefficient is away from zero, while the 95% CI is approximately coef +- 2 * std err (actually the Student-t distribution parameterized by the degrees of freedom of the residuals). I.e., larger absolute t values imply CIs further from zero, but they shouldn't be compared directly. The clarification is a bit late, but I hope it's useful to somebody. – Ochlophobia

Just to clarify, the example you gave is multiple linear regression, not multivariate linear regression (see the Wikipedia article on linear regression). The difference:

The very simplest case of a single scalar predictor variable x and a single scalar response variable y is known as simple linear regression. The extension to multiple and/or vector-valued predictor variables (denoted with a capital X) is known as multiple linear regression, also known as multivariable linear regression. Nearly all real-world regression models involve multiple predictors, and basic descriptions of linear regression are often phrased in terms of the multiple regression model. Note, however, that in these cases the response variable y is still a scalar. Another term multivariate linear regression refers to cases where y is a vector, i.e., the same as general linear regression. The difference between multivariate linear regression and multivariable linear regression should be emphasized as it causes much confusion and misunderstanding in the literature.

In short:

  • multiple linear regression: the response y is a scalar.
  • multivariate linear regression: the response y is a vector.

(Another source.)
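
The distinction is easy to see in code. An illustrative sketch with made-up random data (sklearn's LinearRegression accepts both response shapes):

import numpy as np
from sklearn.linear_model import LinearRegression

X = np.random.rand(100, 7)         # 7 predictors
y_scalar = np.random.rand(100)     # scalar response: multiple regression
y_vector = np.random.rand(100, 2)  # vector response: multivariate regression

LinearRegression().fit(X, y_scalar)
LinearRegression().fit(X, y_vector)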

Natka answered 28/10, 2015 at 2:2 Comment(4)
This might be useful information, but I don't see how it answers the question. – Leptophyllous
@Leptophyllous Using the correct terminology is the first step to finding an answer. – Natka
@FranckDernoncourt But the OP's Y value IS a vector? – Hibernicism
@FranckDernoncourt: "using the correct terminology is the first step to find an answer". Great, so we can both agree: this, in and of itself, is not actually an answer. Users should be able to solve their problem directly from answers without having to resort to looking up other resources. – Fluoric

You can use numpy.linalg.lstsq:

import numpy as np

y = np.array([-6, -5, -10, -5, -8, -3, -6, -8, -8])
X = np.array(
    [
        [-4.95, -4.55, -10.96, -1.08, -6.52, -0.81, -7.01, -4.46, -11.54],
        [-5.87, -4.52, -11.64, -3.36, -7.45, -2.36, -7.33, -7.65, -10.03],
        [-0.76, -0.71, -0.98, 0.75, -0.86, -0.50, -0.33, -0.94, -1.03],
        [14.73, 13.74, 15.49, 24.72, 16.59, 22.44, 13.93, 11.40, 18.18],
        [4.02, 4.47, 4.18, 4.96, 4.29, 4.81, 4.32, 4.43, 4.28],
        [0.20, 0.16, 0.19, 0.16, 0.10, 0.15, 0.21, 0.16, 0.21],
        [0.45, 0.50, 0.53, 0.60, 0.48, 0.53, 0.50, 0.49, 0.55],
    ]
)
X = X.T  # transpose so input vectors are along the rows
X = np.c_[X, np.ones(X.shape[0])]  # add bias term
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
print(beta_hat)

Result:

[ -0.49104607   0.83271938   0.0860167    0.1326091    6.85681762  22.98163883 -41.08437805 -19.08085066]

You can see the estimated output with:

print(np.dot(X,beta_hat))

Result:

[ -5.97751163,  -5.06465759, -10.16873217,  -4.96959788,  -7.96356915,  -3.06176313,  -6.01818435,  -7.90878145,  -7.86720264]
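
As a quick sanity check (continuing from the X, y, and beta_hat above), the residual sum of squares should be small:

residuals = y - X @ beta_hat
print("RSS:", np.sum(residuals ** 2))
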
Hiedihiemal answered 13/11, 2014 at 21:1 Comment(1)
May I know what the difference is between print(np.dot(X, beta_hat)) and mod_wls = sm.WLS(y, X, weights=weights); res = mod_wls.fit(); predsY = res.predict()? They all return the Y result. – Rf

Use scipy.optimize.curve_fit. It works not only for linear fits.

import numpy as np
from scipy.optimize import curve_fit

def fn(x, a, b, c):
    return a + b*x[0] + c*x[1]

# y(x0,x1) data:
#    x0=0 1 2
# ___________
# x1=0 |0 1 2
# x1=1 |1 2 3
# x1=2 |2 3 4

x = np.array([[0, 1, 2, 0, 1, 2, 0, 1, 2], [0, 0, 0, 1, 1, 1, 2, 2, 2]])
y = np.array([0, 1, 2, 1, 2, 3, 2, 3, 4])
popt, pcov = curve_fit(fn, x, y)
print(popt)
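
Since curve_fit is not limited to linear models, here is a hedged sketch fitting a nonlinear surface to synthetic data generated from known parameters (continuing from the imports above):

def fn_nonlinear(x, a, b, c):
    return a * np.exp(b * x[0]) + c * x[1]

x2 = np.array([[0, 1, 2, 0, 1, 2], [0, 0, 0, 1, 1, 1]], dtype=float)
y2 = 2.0 * np.exp(0.5 * x2[0]) + 3.0 * x2[1]  # true params: a=2.0, b=0.5, c=3.0

popt2, pcov2 = curve_fit(fn_nonlinear, x2, y2, p0=[1.0, 0.1, 1.0])
print(popt2)  # should recover approximately [2.0, 0.5, 3.0]
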
Forever answered 24/11, 2014 at 0:12 Comment(0)

Once you convert your data to a pandas DataFrame (df), you can fit the model with the statsmodels formula interface:

import statsmodels.formula.api as smf
lm = smf.ols(formula='y ~ x1 + x2 + x3 + x4 + x5 + x6 + x7', data=df).fit()
print(lm.params)

The intercept term is included by default.

See this notebook for more examples.
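
The fitted results object also supports prediction on new rows with the same column names. A brief sketch (df.head(3) here is just a stand-in for genuinely new observations):

print(lm.predict(df.head(3)))  # predictions for new data
print(lm.summary())            # full diagnostic table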

Engineer answered 11/9, 2015 at 15:55 Comment(3)
This notebook is awesome. It shows how to regress multiple independent variables (x1, x2, x3, ...) on Y with just 3 lines of code using scikit-learn. – Pettifogger
@Engineer Thanks for the notebook. How can I plot a linear regression which has multiple features? I couldn't find that in the notebook. Any pointers will be greatly appreciated. Thanks. – Elbert
Does it add the intercept because we would otherwise have to add it by passing smf.add_intercept() as a parameter to ols()? – Myself

I think this may be the easiest way to get this done:

from random import random
from pandas import DataFrame
from statsmodels.api import OLS
lr = lambda : [random() for i in range(100)]
x = DataFrame({'x1': lr(), 'x2':lr(), 'x3':lr()})
x['b'] = 1
y = x.x1 + x.x2 * 2 + x.x3 * 3 + 4

print(x.head())

         x1        x2        x3  b
0  0.433681  0.946723  0.103422  1
1  0.400423  0.527179  0.131674  1
2  0.992441  0.900678  0.360140  1
3  0.413757  0.099319  0.825181  1
4  0.796491  0.862593  0.193554  1

print(y.head())

0    6.637392
1    5.849802
2    7.874218
3    7.087938
4    7.102337
dtype: float64

model = OLS(y, x)
result = model.fit()
print(result.summary())

                            OLS Regression Results                            
==============================================================================
Dep. Variable:                      y   R-squared:                       1.000
Model:                            OLS   Adj. R-squared:                  1.000
Method:                 Least Squares   F-statistic:                 5.859e+30
Date:                Wed, 09 Dec 2015   Prob (F-statistic):               0.00
Time:                        15:17:32   Log-Likelihood:                 3224.9
No. Observations:                 100   AIC:                            -6442.
Df Residuals:                      96   BIC:                            -6431.
Df Model:                           3                                         
Covariance Type:            nonrobust                                         
==============================================================================
                 coef    std err          t      P>|t|      [95.0% Conf. Int.]
------------------------------------------------------------------------------
x1             1.0000   8.98e-16   1.11e+15      0.000         1.000     1.000
x2             2.0000   8.28e-16   2.41e+15      0.000         2.000     2.000
x3             3.0000   8.34e-16    3.6e+15      0.000         3.000     3.000
b              4.0000   8.51e-16    4.7e+15      0.000         4.000     4.000
==============================================================================
Omnibus:                        7.675   Durbin-Watson:                   1.614
Prob(Omnibus):                  0.022   Jarque-Bera (JB):                3.118
Skew:                           0.045   Prob(JB):                        0.210
Kurtosis:                       2.140   Cond. No.                         6.89
==============================================================================
Schmo answered 9/12, 2015 at 7:22 Comment(0)

Multiple Linear Regression can be handled using the sklearn library as referenced above. I'm using the Anaconda install of Python 3.6.

Create your model as follows:

from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X, y)

# display coefficients
print(regressor.coef_)
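
The fitted model also exposes the intercept and a predict method. A brief continuation, assuming X and y are your feature matrix and target as above:

print(regressor.intercept_)      # fitted constant term
print(regressor.predict(X[:3]))  # predictions for the first three samples
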
Helios answered 8/5, 2017 at 22:23 Comment(0)

You can use numpy.linalg.lstsq
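
For completeness, a minimal sketch of what that looks like, using synthetic data (the fuller lstsq answer above shows it on the question's numbers):

import numpy as np

X = np.random.rand(9, 7)           # 9 samples, 7 predictors (synthetic)
X = np.c_[X, np.ones(X.shape[0])]  # append a column of ones for the intercept
y = np.random.rand(9)

coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coeffs)  # a1..a7 followed by the constant c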

Primalia answered 14/7, 2012 at 7:25 Comment(1)
How can you use this to get the coefficients of a multivariate regression? I only see how to do a simple regression... and I don't see how to get the coefficients. – Velvety

You can use the function below and pass it a DataFrame:

def linear(x, y=None, show=True):
    """
    @param x: pd.DataFrame
    @param y: pd.DataFrame or pd.Series or None
              if None, then use last column of x as y
    @param show: if True, print the regression summary
    """
    import pandas as pd
    import statsmodels.api as sm

    xy = sm.add_constant(x if y is None else pd.concat([x, y], axis=1))
    res = sm.OLS(xy.iloc[:, -1], xy.iloc[:, :-1], missing='drop').fit()

    if show:
        print(res.summary())
    return res
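
Hypothetical usage, assuming df holds the predictors followed by y in its last column:

res = linear(df)
print(res.params)
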
Winton answered 26/8, 2015 at 9:37 Comment(0)

Scikit-learn is a machine learning library for Python which can do this job for you. Just import the sklearn.linear_model module into your script.

Here is a code template for multiple linear regression using sklearn in Python:

import numpy as np
import matplotlib.pyplot as plt #to plot visualizations
import pandas as pd

# Importing the dataset
df = pd.read_csv(<Your-dataset-path>)
# Assigning feature and target variables
X = df.iloc[:,:-1]
y = df.iloc[:,-1]

# Use label encoders if you have any categorical variable
from sklearn.preprocessing import LabelEncoder
labelencoder = LabelEncoder()
X['<column-name>'] = labelencoder.fit_transform(X['<column-name>'])

# One-hot encode categorical columns. (OneHotEncoder's categorical_features
# argument was removed in newer sklearn; ColumnTransformer is the current way.)
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
ct = ColumnTransformer(
    [('onehot', OneHotEncoder(drop='first'), ['<column-name>'])],
    remainder='passthrough')
X = ct.fit_transform(X)

# drop='first' above already avoids the dummy variable trap

# Splitting the data into train and test sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0, test_size=0.2)

# Fitting the model
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train, y_train)

# Predicting the test set results
y_pred = regressor.predict(X_test)

That's it. You can use this code as a template for implementing multiple linear regression on any dataset. For a better understanding with an example, visit: Linear Regression with an example
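
To gauge how well the fitted template generalizes, one natural addition is scoring the predictions on the held-out test set:

from sklearn.metrics import mean_squared_error, r2_score

print("R^2 on test set:", r2_score(y_test, y_pred))
print("MSE on test set:", mean_squared_error(y_test, y_pred))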

Gullet answered 27/11, 2019 at 9:33 Comment(0)

Here is an alternative and basic method:

from patsy import dmatrices
import statsmodels.api as sm

y, x = dmatrices("y_data ~ x_1 + x_2", data=my_data)
### y_data is the name of the dependent variable in your data ### 
model_fit = sm.OLS(y,x)
results = model_fit.fit()
print(results.summary())

Instead of sm.OLS you can also use sm.Logit, sm.Probit, etc.
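
For instance, a hedged sketch of the logistic variant (this assumes y_data in my_data is binary):

y, x = dmatrices("y_data ~ x_1 + x_2", data=my_data)
logit_results = sm.Logit(y, x).fit()
print(logit_results.summary())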

Jevon answered 11/2, 2019 at 21:4 Comment(0)

Finding a linear model such as this one can be handled with OpenTURNS.

In OpenTURNS this is done with the LinearModelAlgorithm class, which creates a linear model from numerical samples. To be more specific, it builds the following linear model:

Y = a0 + a1.X1 + ... + an.Xn + epsilon,

where the error epsilon is Gaussian with zero mean and unit variance. Assuming your data is in a CSV file, here is a simple script to get the regression coefficients ai:

import pandas as pd
import openturns as ot

# Assuming the data is a csv file with the given structure                          
# Y X1 X2 .. X7
df = pd.read_csv("./data.csv", sep=r"\s+")

# Build a sample from the pandas dataframe
sample = ot.Sample(df.values)

# The observation points are in the first column (dimension 1)
Y = sample[:, 0]

# The input vector (X1,..,X7) of dimension 7
X = sample[:, 1::]

# Build a Linear model approximation
result = ot.LinearModelAlgorithm(X, Y).getResult()

# Get the coefficients ai
print("coefficients of the linear regression model = ", result.getCoefficients())

You can then easily get the confidence intervals with the following call:

# Get the confidence intervals at 90% of the ai coefficients
print(
    "confidence intervals of the coefficients = ",
    ot.LinearModelAnalysis(result).getCoefficientsConfidenceInterval(0.9),
)
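
The fitted model can also be evaluated as a function on new input points. A hedged sketch, assuming the usual OpenTURNS metamodel accessor:

# Get the underlying linear function and evaluate it at the first input point
metamodel = result.getMetaModel()
print(metamodel(X[0]))
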

You may find a more detailed example in the OpenTURNS examples.

Fracture answered 11/9, 2020 at 14:36 Comment(0)

Try a generalized linear model with a Gaussian family:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.formula.api import glm

y = np.array([-6, -5, -10, -5, -8, -3, -6, -8, -8])
X = np.array([
    [-4.95, -4.55, -10.96, -1.08, -6.52, -0.81, -7.01, -4.46, -11.54],
    [-5.87, -4.52, -11.64, -3.36, -7.45, -2.36, -7.33, -7.65, -10.03],
    [-0.76, -0.71, -0.98, 0.75, -0.86, -0.50, -0.33, -0.94, -1.03],
    [14.73, 13.74, 15.49, 24.72, 16.59, 22.44, 13.93, 11.40, 18.18],
    [4.02, 4.47, 4.18, 4.96, 4.29, 4.81, 4.32, 4.43, 4.28],
    [0.20, 0.16, 0.19, 0.16, 0.10, 0.15, 0.21, 0.16, 0.21],
    [0.45, 0.50, 0.53, 0.60, 0.48, 0.53, 0.50, 0.49, 0.55],
])
X = list(zip(*reversed(X)))  # transpose so samples are rows (column order reversed)

df = pd.DataFrame({'X': X, 'y': y})
columns = 7
for i in range(columns):
    df['X' + str(i)] = df.apply(lambda row: row['X'][i], axis=1)

df = df.drop('X', axis=1)
print(df)

# model_formula = 'y ~ X0+X1+X2+X3+X4+X5+X6'
model_formula = 'y ~ X0'

model_family = sm.families.Gaussian()
model_fit = glm(formula=model_formula,
                data=df,
                family=model_family).fit()

print(model_fit.summary())

# Extract coefficients from the fitted model
intercept, slope = model_fit.params

# Print coefficients
print('Intercept =', intercept)
print('Slope =', slope)

# Extract and print confidence intervals
print(model_fit.conf_int())

# Plot predictions over a range of X0 (which, after the reversal above, holds x7)
df2 = pd.DataFrame()
df2['X0'] = np.linspace(0.50, 0.70, 50)

prediction0 = model_fit.predict(df2)

plt.plot(df2['X0'], prediction0, label='X0')
plt.ylabel("y")
plt.xlabel("X0")
plt.show()
Diathesis answered 11/2, 2021 at 22:24 Comment(0)

Linear regression is a good example to start with in artificial intelligence.

Here is a good example of a machine learning algorithm for multiple linear regression using Python:

# Predicting house prices using multiple linear regression - @Y_T_Akademi
#
# In this project we see how machine learning algorithms help us predict house
# prices. Linear regression models new, future data using the correlation found
# in the existing data. Here, machine learning helps us identify the relationship
# between the feature data and the output, so we can predict future values.

import pandas as pd

# We use the sklearn library for many machine learning calculations
from sklearn import linear_model

# We import our dataset: housepricesdataset.csv
df = pd.read_csv("housepricesdataset.csv", sep=";")

# The feature set is area, roomcount and buildingage; the output is price.
# We define a linear regression model here:
reg = linear_model.LinearRegression()
reg.fit(df[['area', 'roomcount', 'buildingage']], df['price'])

# Since our model is ready, we can make predictions now:
# let's predict a house of 230 square meters with 4 rooms and a 10-year-old building
reg.predict([[230, 4, 10]])

# Now let's predict a house of 230 square meters with 6 rooms and a brand-new building
reg.predict([[230, 6, 0]])

# Now let's predict a house of 355 square meters with 3 rooms and a 20-year-old building
reg.predict([[355, 3, 20]])

# You can make as many predictions as you want
reg.predict([[230, 4, 10], [230, 6, 0], [355, 3, 20], [275, 5, 17]])


Fabric answered 30/10, 2021 at 11:12 Comment(0)
