You can use scipy.optimize.minimize with jac=True, so that a single callable returns both the objective value and its gradient.
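For instance, a minimal sketch (the quadratic objective below is just a stand-in for illustration, not something from the question):

import numpy
from scipy.optimize import minimize

# Stand-in objective (illustrative assumption): f(x) = x.x with gradient 2x,
# both computed in one pass and returned together.
def f_and_grad(x):
    return numpy.dot(x, x), 2 * x

# jac=True tells minimize that the callable returns (value, gradient),
# so the shared expensive work is done only once per point.
res = minimize(f_and_grad, x0=numpy.array([3.0, -4.0]), jac=True, method="L-BFGS-B")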
If that's not an option for some reason, you can look at how scipy.optimize itself handles this situation:
import numpy

class MemoizeJac(object):
    """ Decorator that caches the value and gradient of function each time it
    is called. """
    def __init__(self, fun):
        self.fun = fun
        self.jac = None
        self.x = None

    def __call__(self, x, *args):
        self.x = numpy.asarray(x).copy()
        fg = self.fun(x, *args)
        self.jac = fg[1]
        return fg[0]

    def derivative(self, x, *args):
        if self.jac is not None and numpy.alltrue(x == self.x):
            return self.jac
        else:
            self(x, *args)
            return self.jac
This class wraps a function that returns both the function value and the gradient, keeping a one-element cache and checking it to see whether it already knows the result for the current point. Usage:
fmemo = MemoizeJac(f)   # f returns (value, gradient)
xopt = fmin_cg(fmemo, x0, fmemo.derivative)
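Spelled out with a stand-in objective (an assumption for illustration; fmin_cg is scipy.optimize.fmin_cg):

import numpy
from scipy.optimize import fmin_cg

# Stand-in for an expensive objective that yields (value, gradient) in one pass.
def f(x):
    return numpy.dot(x, x), 2 * x

fmemo = MemoizeJac(f)
x0 = numpy.array([3.0, -4.0])
xopt = fmin_cg(fmemo, x0, fmemo.derivative)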
The strange thing about this code is that it assumes f is always called before fprime (though not every f call is followed by an fprime call). I'm not sure whether scipy.optimize actually guarantees that, but the code can easily be adapted not to make that assumption. Robust version of the above (untested):
import numpy

class MemoizeJac(object):
    def __init__(self, fun):
        self.fun = fun
        self.value, self.jac = None, None
        self.x = None

    def _compute(self, x, *args):
        # Evaluate the wrapped function once and cache both outputs.
        self.x = numpy.asarray(x).copy()
        self.value, self.jac = self.fun(x, *args)

    def __call__(self, x, *args):
        if self.value is not None and numpy.alltrue(x == self.x):
            return self.value
        else:
            self._compute(x, *args)
            return self.value

    def derivative(self, x, *args):
        if self.jac is not None and numpy.alltrue(x == self.x):
            return self.jac
        else:
            self._compute(x, *args)
            return self.jac
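A quick sanity check of why this version is more robust, again with a stand-in objective: the gradient can now be requested at a point before the value ever is.

import numpy

def f(x):
    # Stand-in objective returning (value, gradient) together.
    return numpy.dot(x, x), 2 * x

fmemo = MemoizeJac(f)
x = numpy.array([1.0, 2.0])
g = fmemo.derivative(x)   # works even though fmemo(x) was never called here
v = fmemo(x)              # reuses the cached value; no recomputation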