I am trying to modify the example here:
# include <cppad/cppad.hpp>
namespace { // ---------------------------------------------------------
// define the template function JacobianCases<Vector> in empty namespace
template <typename Vector>
bool JacobianCases()
{ bool ok = true;
using CppAD::AD;
using CppAD::NearEqual;
double eps99 = 99.0 * std::numeric_limits<double>::epsilon();
using CppAD::exp;
using CppAD::sin;
using CppAD::cos;
// domain space vector
size_t n = 2;
CPPAD_TESTVECTOR(AD<double>) X(n);
X[0] = 1.;
X[1] = 2.;
// declare independent variables and starting recording
CppAD::Independent(X);
// a calculation between the domain and range values
AD<double> Square = X[0] * X[0];
// range space vector
size_t m = 3;
CPPAD_TESTVECTOR(AD<double>) Y(m);
Y[0] = Square * exp( X[1] );
Y[1] = Square * sin( X[1] );
Y[2] = Square * cos( X[1] );
// create f: X -> Y and stop tape recording
CppAD::ADFun<double> f(X, Y);
// new value for the independent variable vector
Vector x(n);
x[0] = 2.;
x[1] = 1.;
// compute the derivative at this x
Vector jac( m * n );
jac = f.Jacobian(x);
/*
F'(x) = [ 2 * x[0] * exp(x[1]) , x[0] * x[0] * exp(x[1]) ]
[ 2 * x[0] * sin(x[1]) , x[0] * x[0] * cos(x[1]) ]
[ 2 * x[0] * cos(x[1]) , -x[0] * x[0] * sin(x[1]) ]
*/
ok &= NearEqual( 2.*x[0]*exp(x[1]), jac[0*n+0], eps99, eps99);
ok &= NearEqual( 2.*x[0]*sin(x[1]), jac[1*n+0], eps99, eps99);
ok &= NearEqual( 2.*x[0]*cos(x[1]), jac[2*n+0], eps99, eps99);
ok &= NearEqual( x[0] * x[0] *exp(x[1]), jac[0*n+1], eps99, eps99);
ok &= NearEqual( x[0] * x[0] *cos(x[1]), jac[1*n+1], eps99, eps99);
ok &= NearEqual(-x[0] * x[0] *sin(x[1]), jac[2*n+1], eps99, eps99);
return ok;
}
} // End empty namespace
# include <vector>
# include <valarray>
bool Jacobian(void)
{ bool ok = true;
// Run with Vector equal to three different cases
// all of which are Simple Vectors with elements of type double.
ok &= JacobianCases< CppAD::vector <double> >();
ok &= JacobianCases< std::vector <double> >();
ok &= JacobianCases< std::valarray <double> >();
return ok;
}
I am trying to modify it in the following way. Let G be the Jacobian jac that is calculated in this example by the line

jac = f.Jacobian(x);

and, as in the example, let X be the independent variables. I would like to construct a new function H that takes the Jacobian as its input, i.e. H(G(X)) = something, such that H is itself autodifferentiable. An example would be H(X) = jacobian( jacobian(X)[0] ), i.e. the Jacobian of the first element of jacobian(X) with respect to X (a second derivative of sorts).
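To make the goal concrete, here is roughly what I have in mind, sketched with CppAD's multi-level taping (recording with base type AD<double>, as in the library's mul_level examples). This is only a sketch under my own assumptions: the function name SecondOrderSketch and the choice of keeping only element [0] of the Jacobian are mine for illustration, and I am not sure this is the idiomatic way to do it. Note that it fixes the derivative order at compile time via the nested AD type, which is exactly the limitation discussed in the EDIT below.

# include <cppad/cppad.hpp>

bool SecondOrderSketch(void)
{   bool ok = true;
    using CppAD::AD;
    using CppAD::NearEqual;
    typedef AD<double>   a1double;   // one level of taping
    typedef AD<a1double> a2double;   // two levels of taping
    double eps99 = 99.0 * std::numeric_limits<double>::epsilon();
    size_t n = 2;
    size_t m = 3;
    // outer level: the variables that H will be differentiated w.r.t.
    CPPAD_TESTVECTOR(a1double) a1x(n);
    a1x[0] = 2.;
    a1x[1] = 1.;
    CppAD::Independent(a1x);
    // inner level: the same values promoted one more AD level
    CPPAD_TESTVECTOR(a2double) a2X(n);
    for(size_t j = 0; j < n; j++)
        a2X[j] = a1x[j];
    CppAD::Independent(a2X);
    // the same calculation as in the example, one AD level up
    a2double Square = a2X[0] * a2X[0];
    CPPAD_TESTVECTOR(a2double) a2Y(m);
    a2Y[0] = Square * exp( a2X[1] );
    a2Y[1] = Square * sin( a2X[1] );
    a2Y[2] = Square * cos( a2X[1] );
    // f : X -> Y, recorded with base type AD<double>
    CppAD::ADFun<a1double> f(a2X, a2Y);
    // Jacobian of f at a1x; its elements are AD<double>, so this
    // computation is itself recorded on the outer tape
    CPPAD_TESTVECTOR(a1double) a1jac = f.Jacobian(a1x);
    // H(X) = first element of jacobian(X)
    CPPAD_TESTVECTOR(a1double) a1H(1);
    a1H[0] = a1jac[0];
    // H : X -> jac[0], an ordinary ADFun<double>; stops outer recording
    CppAD::ADFun<double> H(a1x, a1H);
    // Jacobian of H, i.e. d/dX of dY[0]/dX[0] = 2 * x[0] * exp(x[1])
    CPPAD_TESTVECTOR(double) x(n);
    x[0] = 2.;
    x[1] = 1.;
    CPPAD_TESTVECTOR(double) dH = H.Jacobian(x);
    ok &= NearEqual(2. * std::exp(x[1]),        dH[0], eps99, eps99);
    ok &= NearEqual(2. * x[0] * std::exp(x[1]), dH[1], eps99, eps99);
    return ok;
}

The key point is that f is an ADFun< AD<double> >, so f.Jacobian(a1x) yields AD<double> values that are still live on the outer tape and can be packaged into an ordinary ADFun<double> H. The obvious drawback is that each additional derivative order requires another nesting of the AD type, decided at compile time.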
The problem is that jac, as written here, is of type Vector, which is parameterized on a raw double rather than on AD<double>. To my knowledge, this means the output is not autodifferentiable.
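To spell the problem out in code (using std::vector<double> as the concrete Vector, and continuing from the example's f, n and m):

// jac holds plain doubles, so anything computed from it lives outside
// every CppAD tape and cannot be differentiated with respect to X
std::vector<double> x(n), jac(m * n);
x[0] = 2.;
x[1] = 1.;
jac = f.Jacobian(x);
double h = jac[0] * jac[0];   // just a number; CppAD cannot give dh/dX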
I am looking for advice on whether it is possible to use the Jacobian inside a larger operation and then take the Jacobian of that larger operation (much as one can with any arithmetic operator), or whether this is simply not possible.
EDIT: This question has been put up for a bounty once, but I'm putting it up again to see if there's a better solution, because I think this is important. To be a bit more clear, the elements that the "correct" answer needs are:
a) A means of calculating arbitrary-order derivatives.
b) An intelligent way of not having to specify the order of derivatives a priori. If the maximum derivative order must be known at compile time, the derivative order cannot be chosen algorithmically at run time. Further, specifying an enormously large order, as in the current answer given, will lead to memory-allocation issues and, I imagine, performance issues. (There is a Forward-mode sketch at the end of this post showing the kind of run-time order selection I mean.)
c) Abstracting the templating of the derivative order away from the end user. This is important because it can be difficult to keep track of the order of derivatives needed. This is probably something that comes "for free" if b) is solved.
If anybody can crack this, it would be an awesome contribution and an extremely useful operation.
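For reference on point b): the kind of run-time order selection I mean does already exist in CppAD's Forward mode, where the order passed to f.Forward is an ordinary size_t chosen at run time; what it does not give me is a composable, autodifferentiable H. A minimal sketch under my own assumptions (the helper name second_partial and the single-direction approach are mine, just to illustrate the interface):

# include <cppad/cppad.hpp>
# include <vector>

// d^2 F_i / d x_j^2 at x0, with the derivative order chosen at run time.
// Along the path x(t) = x0 + e_j * t, the order-2 Taylor coefficient of
// F_i(x(t)) equals (1/2) * d^2 F_i / d x_j^2.
double second_partial(CppAD::ADFun<double>& f,
                      const std::vector<double>& x0, size_t i, size_t j)
{   size_t n = f.Domain();
    std::vector<double> x1(n, 0.0), x2(n, 0.0);
    x1[j] = 1.0;                               // direction e_j
    f.Forward(0, x0);                          // order 0: function values
    f.Forward(1, x1);                          // order 1 Taylor coefficients
    std::vector<double> y2 = f.Forward(2, x2); // order 2 Taylor coefficients
    return 2.0 * y2[i];
}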