Variance Inflation Factor in Python
I'm trying to calculate the variance inflation factor (VIF) for each column in a simple dataset in python:

a b c d
1 2 4 4
1 2 6 3
2 3 7 4
3 2 8 5
4 1 9 4

I have already done this in R using the vif function from the usdm library which gives the following results:

library(usdm)

a <- c(1, 1, 2, 3, 4)
b <- c(2, 2, 3, 2, 1)
c <- c(4, 6, 7, 8, 9)
d <- c(4, 3, 4, 5, 4)

df <- data.frame(a, b, c, d)
vif_df <- vif(df)
print(vif_df)

Variables   VIF
   a        22.95
   b        3.00
   c        12.95
   d        3.00

However, when I do the same in python using the statsmodels vif function, my results are:

import numpy as np
from statsmodels.stats.outliers_influence import variance_inflation_factor

a = [1, 1, 2, 3, 4]
b = [2, 2, 3, 2, 1]
c = [4, 6, 7, 8, 9]
d = [4, 3, 4, 5, 4]

ck = np.column_stack([a, b, c, d])

vif = [variance_inflation_factor(ck, i) for i in range(ck.shape[1])]
print(vif)

Variables   VIF
   a        47.136986301369774
   b        28.931506849315081
   c        80.31506849315096
   d        40.438356164383549

The results are vastly different, even though the inputs are the same. In general, results from the statsmodels VIF function seem to be wrong, but I'm not sure if this is because of the way I am calling it or if it is an issue with the function itself.

I was hoping someone could help me figure out whether I am incorrectly calling the statsmodels function or explain the discrepancies in the results. If it's an issue with the function, are there any VIF alternatives in python?

Portly answered 7/3, 2017 at 21:9 Comment(0)
I believe this is due to a difference in Python's OLS. The OLS used in the python variance inflation factor calculation does not add an intercept by default, and you definitely want an intercept in there.

What you'd want to do is add one more column to your matrix, ck, filled with ones to represent a constant. This will be the intercept term of the equation. Once this is done, your values should match up properly.

Edited: replaced zeroes with ones
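For example, a minimal sketch of that fix (my illustration, reusing a, b, c and d from the question):

import numpy as np
from statsmodels.stats.outliers_influence import variance_inflation_factor

a = [1, 1, 2, 3, 4]
b = [2, 2, 3, 2, 1]
c = [4, 6, 7, 8, 9]
d = [4, 3, 4, 5, 4]

# stack the predictors and append a column of ones as the intercept term
ck = np.column_stack([a, b, c, d, np.ones(len(a))])

# VIFs for the four predictors; the constant column itself is skipped
vif = [variance_inflation_factor(ck, i) for i in range(ck.shape[1] - 1)]
print(vif)  # approximately [22.95, 3.0, 12.95, 3.0]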

Inseverable answered 20/3, 2017 at 18:56 Comment(4)
subtracting the mean from all variables would be similar.Mahdi
typo: column for constant should be filled with ones (not zeros).Mahdi
Good call on my typo. Edited my original post with the fix.Inseverable
That makes sense. Adding a column of 1s did the trick. Thanks!Portly
As mentioned by others and in this post by Josef Perktold, the function's author, variance_inflation_factor expects the presence of a constant in the matrix of explanatory variables. One can use add_constant from statsmodels to add the required constant to the dataframe before passing its values to the function.

import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

df = pd.DataFrame(
    {'a': [1, 1, 2, 3, 4],
     'b': [2, 2, 3, 2, 1],
     'c': [4, 6, 7, 8, 9],
     'd': [4, 3, 4, 5, 4]}
)

X = add_constant(df)
>>> pd.Series([variance_inflation_factor(X.values, i) 
               for i in range(X.shape[1])], 
              index=X.columns)
const    136.875
a         22.950
b          3.000
c         12.950
d          3.000
dtype: float64

Note that the VIF reported for const is not itself meaningful and can be ignored; only the VIFs of the explanatory variables are interpreted. I believe you could also add the constant to the rightmost column of the dataframe using assign:

X = df.assign(const=1)
>>> pd.Series([variance_inflation_factor(X.values, i) 
               for i in range(X.shape[1])], 
              index=X.columns)
a         22.950
b          3.000
c         12.950
d          3.000
const    136.875
dtype: float64

The source code itself is rather concise:

def variance_inflation_factor(exog, exog_idx):
    """
    exog : ndarray, (nobs, k_vars)
        design matrix with all explanatory variables, as for example used in
        regression
    exog_idx : int
        index of the exogenous variable in the columns of exog
    """
    k_vars = exog.shape[1]
    x_i = exog[:, exog_idx]
    mask = np.arange(k_vars) != exog_idx
    x_noti = exog[:, mask]
    r_squared_i = OLS(x_i, x_noti).fit().rsquared
    vif = 1. / (1. - r_squared_i)
    return vif

It is also rather simple to modify the code to return all of the VIFs as a series:

from statsmodels.regression.linear_model import OLS
from statsmodels.tools.tools import add_constant

def variance_inflation_factors(exog_df):
    '''
    Parameters
    ----------
    exog_df : dataframe, (nobs, k_vars)
        design matrix with all explanatory variables, as for example used in
        regression.

    Returns
    -------
    vif : Series
        variance inflation factors
    '''
    exog_df = add_constant(exog_df)
    vifs = pd.Series(
        [1 / (1. - OLS(exog_df[col].values, 
                       exog_df.loc[:, exog_df.columns != col].values).fit().rsquared) 
         for col in exog_df],
        index=exog_df.columns,
        name='VIF'
    )
    return vifs

>>> variance_inflation_factors(df)
const    136.875
a         22.950
b          3.000
c         12.950
d          3.000
Name: VIF, dtype: float64

Per the solution of @T_T, one can also simply do the following:

import numpy as np

vifs = pd.Series(np.linalg.inv(df.corr().to_numpy()).diagonal(), 
                 index=df.columns, 
                 name='VIF')
Chauncey answered 16/2, 2018 at 2:54 Comment(3)
I think it is safe to add X = add_constant(df.dropna()) in case of missing values.Jarrad
Thanks for this solution. I was extremely perplexed as to why I was getting such high VIF for my model's independent variables, which is how I ended up on this post. Much as I hate to do this, I'm almost tempted to complete my analysis in R.Bartlet
With some data it is not possible to create the inverse matrix, which leads to numpy.linalg.LinAlgError: Singular matrix. In this case replace inv() with pinv(), which computes the (Moore-Penrose) pseudo-inverse of a matrix: pd.Series(np.linalg.pinv(X.corr().to_numpy()).diagonal(), index=X.columns, name='VIF') gives a 22.95, b 3.00, c 12.95, d 3.00.Antheridium
For future comers to this thread (like me):

import numpy as np

a = [1, 1, 2, 3, 4]
b = [2, 2, 3, 2, 1]
c = [4, 6, 7, 8, 9]
d = [4, 3, 4, 5, 4]

ck = np.column_stack([a, b, c, d])
cc = np.corrcoef(ck, rowvar=False)
VIF = np.linalg.inv(cc)
VIF.diagonal()

This code gives

array([22.95,  3.  , 12.95,  3.  ])

[EDIT]

In response to a comment, I tried to use DataFrame as much as possible (numpy is required to invert a matrix).

import pandas as pd
import numpy as np

a = [1, 1, 2, 3, 4]
b = [2, 2, 3, 2, 1]
c = [4, 6, 7, 8, 9]
d = [4, 3, 4, 5, 4]

df = pd.DataFrame({'a':a,'b':b,'c':c,'d':d})
df_cor = df.corr()
pd.DataFrame(np.linalg.inv(df_cor.values), index=df_cor.index, columns=df_cor.columns)

The code gives

       a            b           c           d
a   22.950000   6.453681    -16.301917  -6.453681
b   6.453681    3.000000    -4.080441   -2.000000
c   -16.301917  -4.080441   12.950000   4.080441
d   -6.453681   -2.000000   4.080441    3.000000

The diagonal elements give VIF.
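For reference, the identity behind this (a standard result, independent of any particular library): if R is the correlation matrix of the predictors, the i-th diagonal element of its inverse satisfies

(R^-1)_ii = 1 / (1 - R_i^2) = VIF_i

where R_i^2 is the R-squared from regressing the i-th standardized variable on all of the others. Standardizing removes the need for an explicit intercept, which is why this matches the intercept-included statsmodels results above.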

Pliny answered 22/7, 2018 at 8:3 Comment(3)
Could you please add a solution for dataframe input instead of numpy array?Jarrad
Looks good. To just get the VIFs as a Series: vifs = pd.Series(np.linalg.inv(df.corr().values).diagonal(), index=df_cor.index)Chauncey
Is VIF given by the diagonal elements of the inverse correlation matrix? Edited: yes, check the link: documentation.statsoft.com/STATISTICAHelp.aspx?path=glossary/…Sunshine
In case you don't want to deal with variance_inflation_factor and add_constant, please consider the following two functions.

1. Use the formula API in statsmodels:

import pandas as pd
import statsmodels.formula.api as smf

def get_vif(exogs, data):
    '''Return VIF (variance inflation factor) DataFrame

    Args:
    exogs (list): list of exogenous/independent variables
    data (DataFrame): the df storing all variables

    Returns:
    VIF and Tolerance DataFrame for each exogenous variable

    Notes:
    Assume we have a list of exogenous variable [X1, X2, X3, X4].
    To calculate the VIF and Tolerance for each variable, we regress
    each of them against other exogenous variables. For instance, the
    regression model for X3 is defined as:
                        X3 ~ X1 + X2 + X4
    And then we extract the R-squared from the model to calculate:
                    VIF = 1 / (1 - R-squared)
                    Tolerance = 1 - R-squared
    The cutoff to detect multicollinearity:
                    VIF > 10 or Tolerance < 0.1
    '''

    # initialize dictionaries
    vif_dict, tolerance_dict = {}, {}

    # create formula for each exogenous variable
    for exog in exogs:
        not_exog = [i for i in exogs if i != exog]
        formula = f"{exog} ~ {' + '.join(not_exog)}"

        # extract r-squared from the fit
        r_squared = smf.ols(formula, data=data).fit().rsquared

        # calculate VIF
        vif = 1/(1 - r_squared)
        vif_dict[exog] = vif

        # calculate tolerance
        tolerance = 1 - r_squared
        tolerance_dict[exog] = tolerance

    # return VIF DataFrame
    df_vif = pd.DataFrame({'VIF': vif_dict, 'Tolerance': tolerance_dict})

    return df_vif


2. Use LinearRegression in sklearn:

# import warnings
# warnings.simplefilter(action='ignore', category=FutureWarning)
import pandas as pd
from sklearn.linear_model import LinearRegression

def sklearn_vif(exogs, data):

    # initialize dictionaries
    vif_dict, tolerance_dict = {}, {}

    # form input data for each exogenous variable
    for exog in exogs:
        not_exog = [i for i in exogs if i != exog]
        X, y = data[not_exog], data[exog]

        # extract r-squared from the fit
        r_squared = LinearRegression().fit(X, y).score(X, y)

        # calculate VIF
        vif = 1/(1 - r_squared)
        vif_dict[exog] = vif

        # calculate tolerance
        tolerance = 1 - r_squared
        tolerance_dict[exog] = tolerance

    # return VIF DataFrame
    df_vif = pd.DataFrame({'VIF': vif_dict, 'Tolerance': tolerance_dict})

    return df_vif


Example:

import seaborn as sns

df = sns.load_dataset('car_crashes')
exogs = ['alcohol', 'speeding', 'no_previous', 'not_distracted']

[In] %%timeit -n 100
get_vif(exogs=exogs, data=df)

[Out]
                      VIF   Tolerance
alcohol          3.436072   0.291030
no_previous      3.113984   0.321132
not_distracted   2.668456   0.374749
speeding         1.884340   0.530690

69.6 ms ± 8.96 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)

[In] %%timeit -n 100
sklearn_vif(exogs=exogs, data=df)

[Out]
                      VIF   Tolerance
alcohol          3.436072   0.291030
no_previous      3.113984   0.321132
not_distracted   2.668456   0.374749
speeding         1.884340   0.530690

15.7 ms ± 1.4 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
Jarrad answered 24/2, 2019 at 23:6 Comment(3)
The cutoff to detect multicollinearity: VIF > 10 or Tolerance < 0.1; you must change Tolerance < 0.2 to Tolerance < 0.1.Wessel
@SandraGuerrero indeed that's a typo.Jarrad
Appreciate the effort put in to explain this. Thank you very much !Jestude
Although it is already late, I am adding some modifications to the given answers. If we use the @Chef1075 solution to get the best set of variables after removing multicollinearity, we will lose all of the correlated variables, when we only have to remove one of each. To do this I came up with the following solution, using @steve's answer:

import pandas as pd
from sklearn.linear_model import LinearRegression

def sklearn_vif(exogs, data):
    '''
    This function calculates variance inflation function in sklearn way. 
     It is a comparatively faster process.

    '''
    # initialize dictionaries
    vif_dict, tolerance_dict = {}, {}

    # form input data for each exogenous variable
    for exog in exogs:
        not_exog = [i for i in exogs if i != exog]
        X, y = data[not_exog], data[exog]

        # extract r-squared from the fit
        r_squared = LinearRegression().fit(X, y).score(X, y)

        # calculate VIF
        vif = 1/(1 - r_squared)
        vif_dict[exog] = vif

        # calculate tolerance
        tolerance = 1 - r_squared
        tolerance_dict[exog] = tolerance

    # return VIF DataFrame
    df_vif = pd.DataFrame({'VIF': vif_dict, 'Tolerance': tolerance_dict})

    return df_vif
df = pd.DataFrame(
    {'a': [1, 1, 2, 3, 4, 1],
     'b': [2, 2, 3, 2, 1, 3],
     'c': [4, 6, 7, 8, 9, 5],
     'd': [4, 3, 4, 5, 4, 6],
     'e': [8, 8, 14, 15, 17, 20]}
)

# iteratively drop the variable with the highest VIF until every VIF is at most 5
df_vif = sklearn_vif(exogs=df.columns, data=df).sort_values(by='VIF', ascending=False)
while (df_vif.VIF > 5).any():
    red_df_vif = df_vif.drop(df_vif.index[0])
    df = df[red_df_vif.index]
    df_vif = sklearn_vif(exogs=df.columns, data=df).sort_values(by='VIF', ascending=False)

print(df)

   d  c  b
0  4  4  2
1  3  6  2
2  4  7  3
3  5  8  2
4  4  9  1
5  6  5  3
Aware answered 26/4, 2020 at 13:36 Comment(4)
Then, in this case, columns d, c and b are the ones which do not cause multicollinearity, right?Pharyngoscope
@AlvaroMartinez. RightAware
@MdAsrafulKabir Can I ask why you are doing the following red_df_vif= df_vif.drop(df_vif.index[0]) ? So you calculate the VIF, order them highest to lowest; if the highest is greater than 5 then remove it and recalculate the entire process again?Lowman
Example for Boston Data:

VIF is calculated by auxiliary regression, so it is not dependent on the actual fit.

See below:

from patsy import dmatrices
from statsmodels.stats.outliers_influence import variance_inflation_factor
import statsmodels.api as sm

# assumes `boston` is a pandas DataFrame already loaded with the Boston housing data
# Break into left and right hand side; y and X
y, X = dmatrices("medv ~ crim + zn + nox + ptratio + black + rm", data=boston, return_type="dataframe")

# For each Xi, calculate VIF
vif = [variance_inflation_factor(X.values, i) for i in range(X.shape[1])]

# Fit X to y
result = sm.OLS(y, X).fit()
From answered 18/8, 2017 at 6:22 Comment(0)
I wrote this function based on some other posts I saw on Stack and CrossValidated. It shows the features which are over the threshold and returns a new dataframe with the features removed.

import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

def calculate_vif_(df, thresh=5):
    '''
    Calculates the VIF for each feature in a pandas dataframe.
    A constant must be added to variance_inflation_factor or the results will be incorrect.

    :param df: the pandas dataframe containing only the predictor features, not the response variable
    :param thresh: the max VIF value before the feature is removed from the dataframe
    :return: dataframe with features removed
    '''
    const = add_constant(df)
    vif_df = pd.Series([variance_inflation_factor(const.values, i) 
                        for i in range(const.shape[1])], 
                       index=const.columns).to_frame()

    vif_df = vif_df.sort_values(by=0, ascending=False).rename(columns={0: 'VIF'})
    vif_df = vif_df.drop('const')
    vif_df = vif_df[vif_df['VIF'] > thresh]

    print('Features above VIF threshold:\n')
    print(vif_df)

    col_to_drop = list(vif_df.index)

    for i in col_to_drop:
        print('Dropping: {}'.format(i))
        df = df.drop(columns=i)

    return df
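A quick usage sketch with the question's toy data (my illustration; note the comments below on why dropping everything above the threshold in one pass can remove more variables than necessary):

import pandas as pd

df = pd.DataFrame({'a': [1, 1, 2, 3, 4],
                   'b': [2, 2, 3, 2, 1],
                   'c': [4, 6, 7, 8, 9],
                   'd': [4, 3, 4, 5, 4]})

# a (VIF 22.95) and c (VIF 12.95) exceed the default threshold of 5 and get dropped
reduced = calculate_vif_(df, thresh=5)
print(list(reduced.columns))  # ['b', 'd']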
Age answered 13/7, 2018 at 16:35 Comment(2)
Removing all variables with VIF higher than the thresh is INCORRECT. A correct approach is to remove the variable with the highest VIF, and then recalculate the VIF for the remaining variables, and repeat this step until no remaining variables have a VIF larger than thresh. For example, assuming x3=x2+x1, if we simply remove all variables with a high VIF, x1/x2/x3 would be removed and none of them is kept, and we might lose an important variable.Hagioscope
Yes, agree with Huanfa. @chef and others - you will be stopping more variables than you need to if you simply dropped all columns above your VIF threshold from your initial run. This needs to be done iteratively as mentioned by Huanfa.Con
Here is code using a pandas DataFrame in python:

To create data

import numpy as np

a = [1, 1, 2, 3, 4]
b = [2, 2, 3, 2, 1]
c = [4, 6, 7, 8, 9]
d = [4, 3, 4, 5, 4]

To create dataframe

import pandas as pd
data = pd.DataFrame()
data["a"] = a
data["b"] = b
data["c"] = c
data["d"] = d

Calculate VIF

cc = np.corrcoef(data, rowvar=False)
VIF = np.linalg.inv(cc)
VIF.diagonal()

Result

array([22.95, 3. , 12.95, 3. ])

Pinhead answered 24/1, 2020 at 15:23 Comment(0)
Yet another solution. The following code gives the exact same VIF results as the R car package does.

import pandas as pd
from sklearn.linear_model import LinearRegression


def calc_reg_return_vif(X, y):
    """
    Utility function to calculate the VIF. Fits a linear regression of y on X
    and returns 1 / (1 - R squared).

    Parameters
    ----------
    X : DataFrame
        Input data.
    y : Series
        Target.

    Returns
    -------
    vif : float
        Calculated VIF value.

    """
    X = X.values
    y = y.values

    if X.shape[1] == 1:
        print("Note, there is only one predictor here")
        X = X.reshape(-1, 1)
    reg = LinearRegression().fit(X, y)
    vif = 1 / (1 - reg.score(X, y))

    return vif


def calc_vif_from_scratch(df):
    """
    Calculating VIF using function from scratch

    Parameters
    ----------
    df : DataFrame
        without target variable.

    Returns
    -------
    vif : DataFrame
        giving the feature - VIF value pair.

    """

    vif = pd.DataFrame()

    vif_list = []
    for feature in list(df.columns):
        y = df[feature]
        X = df.drop(feature, axis="columns")
        vif_list.append(calc_reg_return_vif(X, y))
    vif["feature"] = df.columns
    vif["VIF"] = vif_list
    return vif
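
A minimal usage sketch with the question's toy data (my illustration; the repo linked below has a fuller example):

import pandas as pd

df = pd.DataFrame({'a': [1, 1, 2, 3, 4],
                   'b': [2, 2, 3, 2, 1],
                   'c': [4, 6, 7, 8, 9],
                   'd': [4, 3, 4, 5, 4]})

# LinearRegression fits an intercept by default, so these match the R values
print(calc_vif_from_scratch(df))
#   feature    VIF
# 0       a  22.95
# 1       b   3.00
# 2       c  12.95
# 3       d   3.00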

I've tested it on the titanic dataset. You can get the full example here: https://github.com/tulicsgabriel/Variance-Inflation-Factor-VIF-

Backwash answered 1/12, 2021 at 23:0 Comment(0)
