Keras + scikit-learn wrapper appears to hang when GridSearchCV is run with n_jobs > 1

UPDATE: I have had to rewrite this question because, after some investigation, I realised that this is a different problem.

Context: running Keras in a grid-search setting, using the KerasClassifier wrapper with scikit-learn. System: Ubuntu 16.04; libraries: Anaconda distribution 5.1, Keras 2.0.9, scikit-learn 0.19.1, TensorFlow 1.3.0 or Theano 0.9.0, using CPUs only.

Code: I used the code from https://machinelearningmastery.com/use-keras-deep-learning-models-scikit-learn-python/ for testing, specifically the second example, 'Grid Search Deep Learning Model Parameters'. Note line 35 of that example, which reads:

grid = GridSearchCV(estimator=model, param_grid=param_grid)

Symptoms: when the grid search uses more than one job (i.e. more than one CPU process), e.g. setting n_jobs on the line above to 2, as below:

grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=2)

the code hangs indefinitely, with either the TensorFlow or the Theano backend, and there is no CPU usage (see the attached screenshot, where five Python processes were created but none is using any CPU).
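For reference, here is a minimal sketch of the kind of setup that reproduces the symptom, condensed from the linked tutorial (the data below is a random placeholder rather than the tutorial's dataset, and the parameter grid is only illustrative):

# Minimal sketch (Keras 2.0.x, scikit-learn 0.19.x): KerasClassifier + GridSearchCV.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV

def create_model(optimizer='adam'):
    model = Sequential()
    model.add(Dense(12, input_dim=8, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
    return model

X = np.random.rand(100, 8)           # placeholder features
y = np.random.randint(2, size=100)   # placeholder labels

model = KerasClassifier(build_fn=create_model, epochs=10, batch_size=10, verbose=0)
param_grid = {'optimizer': ['adam', 'rmsprop']}

# With n_jobs=1 this runs; with n_jobs=2 it hangs with no CPU activity in the workers.
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=2)
grid.fit(X, y)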

[Screenshot: five Python processes were created, but none shows any CPU usage.]

Debugging suggests that the problem is in the following line of 'sklearn.model_selection._search':

line 648:
for parameters, (train, test) in product(candidate_params,
                                         cv.split(X, y, groups)))

(this fragment is the generator expression inside the Parallel(...) call in GridSearchCV.fit). The program hangs on this line and cannot continue.

I would really appreciate some insights as to what this means and why this could happen.

Thanks in advance

Versus answered 28/11, 2017 at 9:21 Comment(1)
Hi, I have the same problem. Did you find a solution for it? – Dorthydortmund

Are you using a GPU? If so, you can't have multiple jobs each running a variation of the params, because they won't be able to share the GPU.

Here's a full example of how to use the Keras scikit-learn wrapper in a Pipeline with GridSearchCV: Pipeline with a Keras Model

If you really want to have multiple jobs in the GridSearchCV, you can try to limit the GPU fraction used by each job (e.g. if each job only allocates 0.5 of the available GPU memory, you can run two jobs simultaneously).
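A minimal sketch of how that per-job memory cap could be set with the TF 1.x API matching the versions in the question (the 0.5 fraction is just an example value):

# Sketch (TF 1.x / Keras 2.0.x): cap the GPU memory this process may allocate,
# then register the session with Keras so models built in this process use it.
import tensorflow as tf
from keras import backend as K

config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.5  # half of the GPU per job
K.set_session(tf.Session(config=config))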

See these issues:

Legit answered 28/11, 2017 at 21:48 Comment(2)
Thanks, but I am not using a GPU, only CPUs. – Versus
If I had to suggest something, it's probably something to do with the TensorFlow session. I think you would need to create an independent session for each of the grid-search jobs. – Legit

I dealt with this problem too, and it really slowed me down not being able to run what is essentially trivially parallelizable code. The issue is indeed with the TensorFlow session: if a session is created in the parent process before GridSearchCV.fit(), it will hang!

The solution for me was to keep all session/graph creation code restricted to the KerasClassifier class and the model creation function I passed to it.

Also, what Felipe said about the memory is true: you will want to restrict the memory usage of TF in either the model creation function or a subclass of KerasClassifier.
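A minimal sketch of that idea (not the answerer's exact code): all session creation happens inside the build function handed to KerasClassifier, so each worker process creates its own session and the parent never owns one before fit():

# Sketch (Keras 2.0.x / TF 1.x): keep session/graph creation out of the parent process.
import tensorflow as tf
from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense

def create_model():
    config = tf.ConfigProto()
    config.gpu_options.per_process_gpu_memory_fraction = 0.5  # optional TF memory cap
    K.set_session(tf.Session(config=config))                  # fresh session in this process

    model = Sequential()
    model.add(Dense(12, input_dim=8, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model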

Related info:

Concerned answered 9/4, 2018 at 11:37 Comment(0)

TL;DR Answer: You can't because your Keras model can't be serialized, and serialization is needed for parallelizing in Python with joblib.
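A quick way to see the serialization problem being referred to (a sketch; joblib needs to pickle whatever it sends to worker processes, and on Keras 2.0.x with the TF 1.x backend pickling a compiled model typically raises an error):

# Sketch: check whether a compiled Keras model survives pickling.
import pickle
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([Dense(1, input_dim=8, activation='sigmoid')])
model.compile(loss='binary_crossentropy', optimizer='adam')

try:
    pickle.dumps(model)
    print('model pickled fine')
except Exception as exc:   # typically fails on older Keras / TF 1.x
    print('model could not be pickled:', exc)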

This problem is described in more detail here: https://www.neuraxle.org/stable/scikit-learn_problems_solutions.html#problem-you-can-t-parallelize-nor-save-pipelines-using-steps-that-can-t-be-serialized-as-is-by-joblib

The solution to parallelize your code is to make your Keras estimator serializable. This can be done using savers as described at the link above.

If you're lucky enough to be using TensorFlow v2's built-in Keras module, the following practical code sample should prove useful, as you would essentially just need to take the code and adapt it to your own model:

In that example, all the saving and loading code is pre-written for you using Neuraxle-TensorFlow, which makes the pipeline parallelizable if you use Neuraxle's AutoML methods (e.g. Neuraxle's grid search and Neuraxle's own parallelism utilities).

Potboiler answered 6/3, 2020 at 3:56 Comment(0)
