How to perform k-fold cross validation with tensorflow?

I am following the IRIS example of tensorflow.

In my case, all the data is in a single CSV file, not split into training and test sets, and I want to apply k-fold cross-validation to it.

I have

data_set = tf.contrib.learn.datasets.base.load_csv(filename="mydata.csv",
                                                   target_dtype=np.int)

How can I perform k-fold cross-validation on this dataset with a multi-layer neural network, the same as in the IRIS example?

Pharyngeal answered 28/9, 2016 at 13:15 Comment(0)

I know this question is old, but in case someone is looking to do something similar, expanding on ahmedhosny's answer:

The new tensorflow Datasets API can create dataset objects from Python generators, so along with scikit-learn's KFold, one option is to create a dataset from the KFold.split() generator:

import numpy as np
from sklearn.model_selection import KFold

import tensorflow as tf
import tensorflow.contrib.eager as tfe
tf.enable_eager_execution()

from sklearn.datasets import load_iris
data = load_iris()
X = data['data']
y = data['target']

def make_dataset(X_data, y_data, n_splits):

    def gen():
        # One yield per fold: the train and test splits for that fold
        for train_index, test_index in KFold(n_splits).split(X_data):
            X_train, X_test = X_data[train_index], X_data[test_index]
            y_train, y_test = y_data[train_index], y_data[test_index]
            yield X_train, y_train, X_test, y_test

    # Four outputs per element: X_train, y_train, X_test, y_test
    return tf.data.Dataset.from_generator(gen, (tf.float64, tf.float64, tf.float64, tf.float64))

dataset = make_dataset(X, y, 10)

Then one can iterate through the dataset either in graph-based TensorFlow or using eager execution. Using eager execution:

for X_train,y_train,X_test,y_test in tfe.Iterator(dataset):
    ....
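For the graph-based route, a rough sketch (assuming TF 1.x, with tf.enable_eager_execution() left out):

# Graph-mode sketch: pull one fold at a time through an iterator
iterator = dataset.make_one_shot_iterator()
next_fold = iterator.get_next()

with tf.Session() as sess:
    while True:
        try:
            X_train, y_train, X_test, y_test = sess.run(next_fold)
            # train and evaluate on this fold here
        except tf.errors.OutOfRangeError:
            break  # all folds consumed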
Efflux answered 10/5, 2018 at 12:59 Comment(3)
What if X and y cannot be held in memory, as this snippet assumes? I thought the whole point of using a generator was to load samples on demand rather than load the entire dataset into memory.Beccafico
@Beccafico The same technique can be used to load them on demand. For example, X could be a list of filenames, and inside the loop you load the file contents on demand.Bibliotherapy
@Bibliotherapy This does not work for a large dataset (images). It still costs a lot of memory in the loop; whether you pre-load all the data and then split, or split and then load on demand, the memory usage is the same. I tried both and the program crashed due to excessive memory usage. However, I worked it out by passing the image file paths and, in each fold, creating a dataset based on the split indices for training and validation (test). Now it works without excessive memory usage.Nadene
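A minimal sketch of the approach described in the comment above, with hypothetical image_paths, labels and load_image names (none of these come from the original post): split only the file paths with KFold and let each per-fold tf.data.Dataset read the images on demand.

import numpy as np
import tensorflow as tf
from sklearn.model_selection import KFold

# Hypothetical inputs: small arrays of file paths and labels
image_paths = np.array(["img_0.png", "img_1.png", "img_2.png", "img_3.png"])
labels = np.array([0, 1, 0, 1])

def load_image(path, label):
    # Decode one image only when the dataset element is actually consumed
    image = tf.image.decode_png(tf.read_file(path), channels=3)
    return image, label

for train_idx, test_idx in KFold(n_splits=2).split(image_paths):
    # Only the path/label arrays are split in memory; pixels are loaded lazily by map()
    train_ds = tf.data.Dataset.from_tensor_slices(
        (image_paths[train_idx], labels[train_idx])).map(load_image).batch(2)
    test_ds = tf.data.Dataset.from_tensor_slices(
        (image_paths[test_idx], labels[test_idx])).map(load_image).batch(2)
    # train on train_ds and evaluate on test_ds for this fold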

NNs are usually used with large datasets, where CV is not used because it is very expensive. In the case of IRIS (50 samples for each species), you probably do need it. Why not use scikit-learn with different random seeds to split your training and testing data?

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)

for k in kfold:

  1. split the data differently by passing a different value to "random_state" (see the sketch after this list)
  2. train the net on _train
  3. test on _test

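A minimal sketch of that loop, assuming the Iris data from scikit-learn and leaving the actual network training as a comment:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

scores = []
for k in range(5):  # 5 repeated random splits, one per "fold"
    # 1. split differently each time by changing random_state
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.33, random_state=k)
    # 2. train the network on X_train / y_train (model code omitted)
    # 3. evaluate on X_test / y_test and append the result to scores
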
If you don't like relying on random seeds and want a more structured k-fold split, you can use this, taken from here.

from sklearn.model_selection import KFold

X = ["a", "a", "b", "c", "c", "c"]
k_fold = KFold(n_splits=3)
for train_indices, test_indices in k_fold.split(X):
    print('Train: %s | test: %s' % (train_indices, test_indices))

which prints:

Train: [2 3 4 5] | test: [0 1]
Train: [0 1 4 5] | test: [2 3]
Train: [0 1 2 3] | test: [4 5]
Showers answered 20/11, 2016 at 10:42 Comment(3)
The answer is not related to the question! It should provide a TensorFlow solution.Hackamore
Since the answer offers a solution that is usable with TensorFlow, I can't see the problem.Radix
how can we make this even more randomized?Afterpiece

Modifying @ahmedhosny's answer:

from sklearn.model_selection import KFold

k_fold = KFold(n_splits=k)  # k = desired number of folds
train_ = []
test_ = []
# all_data is assumed to be a pandas DataFrame; collect the indices for each fold
for train_indices, test_indices in k_fold.split(all_data.index):
    train_.append(train_indices)
    test_.append(test_indices)
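A short usage sketch of the collected indices (all_data and the model step are assumptions, not from the original post):

# Iterate over the stored fold indices and slice the DataFrame for each fold
for fold, (train_indices, test_indices) in enumerate(zip(train_, test_)):
    train_df = all_data.iloc[train_indices]
    test_df = all_data.iloc[test_indices]
    # build, train and evaluate a model for this fold here
    print('Fold %d: %d train rows, %d test rows' % (fold, len(train_df), len(test_df)))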
Shrier answered 3/8, 2021 at 22:26 Comment(0)
