I'm trying to model a biochemical process, and I framed it as an optimization problem that I solve with `differential_evolution` from SciPy.
So far, so good, I'm pretty happy with the implementation of a simplified model with 15-19 parameters.
I expanded the model and now, with 32 parameters, it is taking way too long. Not totally unexpected, but still an issue, hence the question.
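For reference, a stripped-down sketch of my setup, with a toy stand-in for the real model (all names here are placeholders, not my actual code):

```python
import numpy as np
from scipy.optimize import differential_evolution

def model_error(params):
    # Toy stand-in for the real biochemical model: each step
    # feeds into the next, which is why the function itself
    # cannot be parallelized internally.
    x = 1.0
    for p in params:
        x = x * p + np.sin(x)  # toy sequential update
    return abs(x)

# 32 parameters in the expanded model, each with its own bounds
bounds = [(-1.0, 1.0)] * 32

result = differential_evolution(model_error, bounds, maxiter=50, seed=42)
print(result.x.shape, result.fun)
```

With the real model each call is far more expensive, so the total runtime scales with (population size) x (iterations) x (cost per call), and only the per-call cost is fixed.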
I've seen:
- an almost identical question for R: Parallel differential evolution
- and a github issue https://github.com/scipy/scipy/issues/4864 on the topic
but I would like to stay in Python (the model is part of a Python pipeline), and the pull request has not led to an officially accepted solution yet, although some options have been suggested.
Also, I can't parallelize the code within the function to be optimised, because it is a series of sequential calculations, each requiring the result of the previous step. The ideal option would be something that evaluates some individuals in parallel and returns them to the population.
Summing up:
- Is there any option within scipy that allows parallelization of differential_evolution that I dumbly overlooked? (Ideal solution)
- Is there a suggestion for an alternative algorithm in scipy that is either (way) faster in serial or possible to parallelize?
- Is there any other good package that offers parallelized differential evolution functions? Or other applicable optimization methods?
- Sanity check: am I overloading DE with 32 parameters, and do I need to radically change approach?
PS
I'm a biologist, formal math/statistics isn't really my strength, so any formula-to-English translation would be hugely appreciated :)
PPS
As an extreme option I could try to migrate to R, but I can't code C/C++ or other languages.