What does KFold in python exactly do?
I am looking at this tutorial: https://www.dataquest.io/mission/74/getting-started-with-kaggle

I got to part 9, making predictions. There, some data in a dataframe called titanic is divided up into folds using:

# Generate cross-validation folds for the titanic dataset.  It returns the row indices corresponding to train and test.
# We set random_state to ensure we get the same splits every time we run this.
kf = KFold(titanic.shape[0], n_folds=3, random_state=1)

I am not sure what exactly it is doing and what kind of object kf is. I tried reading the documentation but it did not help much. Also, there are three folds (n_folds=3), so why does this line only access train and test (and how do I know they are called train and test)?

for train, test in kf:
Doloroso answered 17/3, 2016 at 14:9 Comment(0)

KFold provides train/test indices to split data into train and test sets. It splits the dataset into k consecutive folds (without shuffling by default). Each fold is then used as a validation set once, while the k - 1 remaining folds form the training set (source).

Let's say you have some data indices from 1 to 10. If you use n_folds=k, then in the i'th iteration (i <= k) you get the i'th fold as the test indices and the remaining k-1 folds together as the train indices.

An example

import numpy as np
from sklearn.cross_validation import KFold  # sklearn.model_selection in 0.20+

x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
kf = KFold(12, n_folds=3)

for i, (train_index, test_index) in enumerate(kf, 1):
    print("Fold %d: %s %s" % (i, train_index, test_index))

Output

Fold 1: [ 4 5 6 7 8 9 10 11] [0 1 2 3]

Fold 2: [ 0 1 2 3 8 9 10 11] [4 5 6 7]

Fold 3: [0 1 2 3 4 5 6 7] [ 8 9 10 11]

Important update for sklearn 0.20:

The KFold class was moved to the sklearn.model_selection module in version 0.20. To import KFold in sklearn 0.20+, use from sklearn.model_selection import KFold (KFold current documentation source).
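In sklearn 0.20+ the same example can be written with the new API; note that the constructor now takes n_splits instead of the sample count and n_folds, and the indices come from kf.split. A minimal sketch of the modern usage:

```python
import numpy as np
from sklearn.model_selection import KFold  # new location in sklearn 0.20+

x = np.arange(12)
kf = KFold(n_splits=3)  # shuffle=False by default: consecutive folds

# split() generates (train_index, test_index) pairs, instead of
# iterating over the KFold object directly as in the old API.
for train_index, test_index in kf.split(x):
    print(train_index, test_index)
```

The fold boundaries are identical to the old-API example above: the first test fold is indices 0-3, then 4-7, then 8-11.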

Granulocyte answered 17/3, 2016 at 14:22 Comment(3)
I get it. Whatever n_folds is, you still end up with just a testing and a training set. If n_folds is 2, then you just use half the data for training and the other half for testing, and then swap them. Am I understanding this correctly? Doloroso
Yes. You will get the i'th (1 <= i <= n_folds) fold as testing and the remaining folds as training. Granulocyte
This is not an accurate description; the terms validation set and test set are used interchangeably, which is not correct. Please see scikit-learn.org/stable/modules/cross_validation.html Files

Sharing some theoretical information about k-fold that I have learnt so far.

K-fold is a model validation technique; it does not use your pre-trained model. Instead, it takes the same hyper-parameters, trains a new model on k-1 folds, and tests that model on the kth fold.

The k different models are used only for validation.

It returns k different scores (e.g. accuracy percentages), one per test fold, and we generally take their average to analyse the model.

We repeat this process with all the different models that we want to analyse. Brief algorithm:

  1. Split the data into training and test parts.
  2. Train different models, say SVM, RF and LR, on this training data.
     2.a Take the whole dataset and divide it into k folds.
     2.b Create a new model with the hyper-parameters obtained in step 2.
     2.c Fit the newly created model on k-1 folds.
     2.d Test it on the kth fold.
     2.e Take the average score.
  3. Analyse the different average scores and select the best model out of SVM, RF and LR.
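The steps above can be sketched with scikit-learn's cross_val_score, which runs the fold loop (steps 2.a-2.e) internally; the toy iris dataset and these particular estimators are illustrative assumptions, not part of the original answer:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)  # illustrative toy dataset

# cross_val_score fits a fresh clone of each estimator on k-1 folds and
# scores it on the held-out fold, returning k scores per model.
for name, model in [("SVM", SVC()),
                    ("RF", RandomForestClassifier(random_state=0)),
                    ("LR", LogisticRegression(max_iter=1000))]:
    scores = cross_val_score(model, X, y, cv=3)
    print(name, scores.mean())  # average score used to compare models
```

The model with the highest average score would then be selected, as in step 3.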

The simple reason for doing this is that we generally have a data deficiency, and if we divide the whole dataset into:

  1. Training
  2. Validation
  3. Testing

we may be left with a relatively small chunk of data, which may cause our model to overfit. It is also possible that some of the data remains untouched during training, so we never analyse the model's behaviour against such data.

K-fold overcomes both of these issues.

Wrenn answered 9/5, 2019 at 4:59 Comment(1)
K-fold CV is also used for model selection. See here Cristal

The procedure has a single parameter, k, that refers to the number of groups that a given data sample is to be split into; as such, the procedure is often called k-fold cross-validation. When a specific value for k is chosen, it may be used in place of k in the reference to the model, such as k=10 becoming 10-fold cross-validation.
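As a small illustration of this parameter (using the modern sklearn.model_selection API on a hypothetical 100-sample dataset):

```python
import numpy as np
from sklearn.model_selection import KFold

kf = KFold(n_splits=10)  # k = 10 -> 10-fold cross-validation
splits = list(kf.split(np.arange(100)))

print(len(splits))        # 10 train/test pairs, one per fold
print(len(splits[0][1]))  # each held-out fold has 100 / 10 = 10 samples
```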

You can refer to this post for more information: https://medium.com/@xzz201920/stratifiedkfold-v-s-kfold-v-s-stratifiedshufflesplit-ffcae5bfdf

Ephemeron answered 29/4, 2020 at 14:47 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.