Logistic Regression Tuning Parameter Grid in R Caret Package?

I am trying to fit a logistic regression model in R using the caret package. I have done the following:

model <- train(dec_var ~., data=vars, method="glm", family="binomial",
                 trControl = ctrl, tuneGrid=expand.grid(C=c(0.001, 0.01, 0.1, 1,10,100, 1000)))

However, I am unsure what the tuning parameter should be for this model, and I am having a hard time finding it. I assumed it is C because C is the parameter used in sklearn. Currently, I am getting the following error:

Error: The tuning parameter grid should have columns parameter

Do you have any suggestions on how to fix this?

Beneficence answered 14/12, 2017 at 21:56 Comment(3)
Try modelLookup("glm"); see here: #43971331 (Canella)
It is also a good idea to start by specifying tuneLength and observing which parameters caret decides to vary, instead of plunging straight into specifying the grid; see the sketch after these comments. (Quarles)
The glm method has no tuning parameters (topepo.github.io/caret/train-models-by-tag.html); the one listed there is a dummy tuning parameter and does not do anything. (Alsworth)
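
A minimal sketch of the commenters' suggestions, assuming the dec_var, vars, and ctrl objects from the question: inspect a method's tunable parameters with modelLookup, and let caret choose candidate values via tuneLength rather than supplying a grid by hand.

library(caret)

# See which parameters caret can tune for a given method
modelLookup("glm")    # only a dummy 'parameter' column, nothing to tune
modelLookup("rpart")  # 'cp' is a real tuning parameter

# Let caret pick candidate values instead of passing tuneGrid
# (for glm this simply fits one model; for tunable methods it builds a grid)
model <- train(dec_var ~ ., data = vars, method = "glm", family = "binomial",
               trControl = ctrl, tuneLength = 5)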

Per Max Kuhn's web book (search for method = 'glm' here), there is no tuning parameter for glm within caret.


We can easily verify this by testing a few basic train calls. Let's start with a method (rpart) that does have a tuning parameter (cp) per the web book.

library(caret)
data(GermanCredit)

# Check tuning parameter via `modelLookup` (matches up with the web book)
modelLookup('rpart')
#  model parameter                label forReg forClass probModel
#1 rpart        cp Complexity Parameter   TRUE     TRUE      TRUE

# Observe that the `cp` parameter is tuned
set.seed(1)
model_rpart <- train(Class ~., data=GermanCredit, method='rpart')
model_rpart
#CART 

#1000 samples
#  61 predictor
#   2 classes: 'Bad', 'Good' 

#No pre-processing
#Resampling: Bootstrapped (25 reps) 
#Summary of sample sizes: 1000, 1000, 1000, 1000, 1000, 1000, ... 
#Resampling results across tuning parameters:

#  cp          Accuracy   Kappa    
#  0.01555556  0.7091276  0.2398993
#  0.03000000  0.7025574  0.1950021
#  0.04444444  0.6991700  0.1316720

#Accuracy was used to select the optimal model using  the largest value.
#The final value used for the model was cp = 0.01555556.

We see that the cp parameter was tuned. Now let's try glm.

# Check tuning parameter via `modelLookup` (shows a parameter called 'parameter')
modelLookup('glm')
#  model parameter     label forReg forClass probModel
#1   glm parameter parameter   TRUE     TRUE      TRUE

# Try out the train function to see if 'parameter' gets tuned
set.seed(1)
model_glm <- train(Class ~., data=GermanCredit, method='glm')
model_glm
#Generalized Linear Model 

#1000 samples
#  61 predictor
#   2 classes: 'Bad', 'Good' 

#No pre-processing
#Resampling: Bootstrapped (25 reps) 
#Summary of sample sizes: 1000, 1000, 1000, 1000, 1000, 1000, ... 
#Resampling results:

#  Accuracy   Kappa    
#  0.7386384  0.3478527

In this case, no parameter tuning was performed for glm. From my experience, the parameter named parameter is just a placeholder, not a real tuning parameter. As the code below demonstrates, even if we try to force it to tune parameter, it effectively uses only a single value.

set.seed(1)
model_glm2 <- train(Class ~., data=GermanCredit, method='glm',
                    tuneGrid=expand.grid(parameter=c(0.001, 0.01, 0.1, 1,10,100, 1000)))
model_glm2
#Generalized Linear Model 

#1000 samples
#  61 predictor
#   2 classes: 'Bad', 'Good' 

#No pre-processing
#Resampling: Bootstrapped (25 reps) 
#Summary of sample sizes: 1000, 1000, 1000, 1000, 1000, 1000, ... 
#Resampling results across tuning parameters:

#  Accuracy   Kappa      parameter
#  0.7386384  0.3478527  0.001    
#  0.7386384  0.3478527  0.001    
#  0.7386384  0.3478527  0.001    
#  0.7386384  0.3478527  0.001    
#  0.7386384  0.3478527  0.001    
#  0.7386384  0.3478527  0.001    
#  0.7386384  0.3478527  0.001    

#Accuracy was used to select the optimal model using  the largest value.
#The final value used for the model was parameter = 0.001.
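
Applying this to the original question (a minimal sketch, assuming the dec_var, vars, and ctrl objects from the question): simply drop the tuneGrid argument, since glm has nothing to tune.

# No tuneGrid needed: glm fits a single, untuned model
model <- train(dec_var ~ ., data = vars, method = "glm", family = "binomial",
               trControl = ctrl)

If the goal is to tune an sklearn-style regularization strength, a penalized method such as method = 'glmnet' does expose real tuning parameters (alpha and lambda, per modelLookup('glmnet')), but note that this changes the model being fit.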
Alsworth answered 12/1, 2018 at 1:25 Comment(0)
