automatic-differentiation Questions

1

Solved

I am using tf.GradientTape().gradient() to compute a representer point, which can be used to compute the "influence" of a given training example on a given test example. A representer poi...
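A minimal sketch of the per-example gradient step involved here, assuming a generic Keras model and mean-squared-error loss rather than the asker's representer-point code (model, x_train, and y_train are hypothetical stand-ins):

    import tensorflow as tf

    # Hypothetical stand-ins for the asker's model and data.
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    x_train = tf.random.normal((1, 4))
    y_train = tf.constant([[1.0]])
    _ = model(x_train)  # build the variables before taping

    with tf.GradientTape() as tape:
        pred = model(x_train)
        loss = tf.keras.losses.mse(y_train, pred)

    # Gradient of the training loss w.r.t. the model parameters, one ingredient
    # in influence-style computations.
    grads = tape.gradient(loss, model.trainable_variables)
    print([g.shape for g in grads])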

1

Julia has a somewhat sprawling AD ecosystem, with perhaps by now more than a dozen different packages spanning, as far as I can tell, forward-mode (ForwardDiff.jl, ForwardDiff2.jl), reverse-mode (...
Bailiff asked 2/7, 2021 at 20:33

3

Solved

Native support for differentiable programming has been added to Swift for the Swift for TensorFlow project. Julia has something similar with Zygote. What exactly is differentiable programming? What does it en...

3

Solved

TensorFlow uses reverse-mode automatic differentiation (reverse-mode AD), as shown in https://github.com/tensorflow/tensorflow/issues/675. Reverse-mode AD needs a data structure called a Wengert list ...
Impresario asked 9/5, 2017 at 6:56
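A tiny from-scratch illustration of the Wengert-list (tape) idea the question refers to; this is a generic sketch, not TensorFlow's actual implementation:

    # Each primitive op appends an entry recording its inputs and the local
    # derivatives needed to push gradients back to them.
    tape = []

    class Var:
        def __init__(self, value):
            self.value, self.grad = value, 0.0

    def add(a, b):
        out = Var(a.value + b.value)
        tape.append((out, [(a, 1.0), (b, 1.0)]))
        return out

    def mul(a, b):
        out = Var(a.value * b.value)
        tape.append((out, [(a, b.value), (b, a.value)]))  # product rule
        return out

    def backward(result):
        result.grad = 1.0
        for out, pulls in reversed(tape):        # replay the list in reverse
            for inp, local in pulls:
                inp.grad += out.grad * local

    x, y = Var(3.0), Var(4.0)
    z = add(mul(x, x), mul(x, y))                # z = x*x + x*y
    backward(z)
    print(x.grad, y.grad)                        # 2*x + y = 10.0 and x = 3.0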

2

Solved

The Wikipedia page for backpropagation has this claim: The backpropagation algorithm for calculating a gradient has been rediscovered a number of times, and is a special case of a more general tec...

3

Solved

I just cannot seem to understand the difference. To me it looks like both just go through an expression and apply the chain rule. What am I missing?
Nardi asked 17/4, 2017 at 16:24
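For contrast with a reverse-mode tape, here is a hedged sketch of forward mode using dual numbers: the derivative is carried along with the value as the expression is evaluated, and one pass yields the derivative with respect to a single seeded input:

    class Dual:
        # Forward-mode AD: every operation propagates (value, derivative).
        def __init__(self, value, deriv=0.0):
            self.value, self.deriv = value, deriv

        def __add__(self, other):
            return Dual(self.value + other.value, self.deriv + other.deriv)

        def __mul__(self, other):
            # Product rule applied while the expression is being evaluated.
            return Dual(self.value * other.value,
                        self.deriv * other.value + self.value * other.deriv)

    def f(x, y):
        return x * x + x * y

    # Seed dx = 1 to get df/dx in one forward pass; df/dy needs a second pass.
    print(f(Dual(3.0, 1.0), Dual(4.0, 0.0)).deriv)   # 2*x + y = 10.0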

1

Solved

I am trying to understand PyTorch autograd in depth; I would like to observe the gradient of a simple tensor after going through a sigmoid function as below: import torch from torch import autogra...
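A short runnable version of the experiment described above, assuming the goal is simply to inspect gradients around a sigmoid (the asker's exact code is truncated):

    import torch

    x = torch.tensor([0.0, 1.0, 2.0], requires_grad=True)
    y = torch.sigmoid(x)
    y.retain_grad()        # keep the gradient of the intermediate tensor
    y.sum().backward()

    print(x.grad)          # sigmoid'(x) = sigmoid(x) * (1 - sigmoid(x))
    print(y.grad)          # all ones: the gradient of sum() w.r.t. y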

1

Solved

I understood the concept of automatic differentiation, but couldn't find any explanation of how TensorFlow calculates the error gradient for non-differentiable functions such as tf.where in my ...
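A hedged illustration of how gradients typically behave around tf.where: the boolean selection is treated as a constant, and each element's gradient comes only from the branch chosen for it (a generic example, not the asker's model):

    import tensorflow as tf

    x = tf.Variable([-2.0, -0.5, 0.5, 2.0])

    with tf.GradientTape() as tape:
        # Piecewise function: x**2 where x > 0, otherwise 3*x.
        y = tf.where(x > 0.0, tf.square(x), 3.0 * x)
        loss = tf.reduce_sum(y)

    # Per-element gradient: 2*x where x > 0, otherwise 3.
    print(tape.gradient(loss, x).numpy())   # [3. 3. 1. 4.]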

3

Solved

I'm investigating ways to speed up a large section of C++ code, which has automatic derivatives for computing Jacobians. This involves doing some amount of work in the actual residuals, but the maj...

1

Solved

Starting to learn PyTorch, I was trying to do something very simple: move a randomly initialized vector of size 5 to a target vector with values [1,2,3,4,5]. But my distance is not decreas...
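A minimal working version of the exercise described above, assuming plain gradient descent on the squared distance (the asker's truncated code may differ in its details):

    import torch

    target = torch.tensor([1.0, 2.0, 3.0, 4.0, 5.0])
    v = torch.randn(5, requires_grad=True)
    optimizer = torch.optim.SGD([v], lr=0.1)

    for step in range(200):
        optimizer.zero_grad()
        loss = torch.sum((v - target) ** 2)   # squared distance to the target
        loss.backward()
        optimizer.step()

    print(v.detach())   # close to [1, 2, 3, 4, 5]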

2

Solved

I'm going through the neural transfer PyTorch tutorial and am confused about the use of retain_variable (deprecated, now referred to as retain_graph). The code example shows: class ContentLoss(nn.Mo...
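Separately from the neural-transfer tutorial, a minimal sketch of what retain_graph is for: calling backward() a second time on the same graph fails unless the first call kept the graph around.

    import torch

    x = torch.tensor(2.0, requires_grad=True)
    y = x ** 3

    # First backward keeps the graph so it can be traversed again.
    y.backward(retain_graph=True)
    print(x.grad)    # 12.0

    # Second backward over the same graph; gradients accumulate into x.grad.
    y.backward()
    print(x.grad)    # 24.0 (without retain_graph above, this call would raise an error)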

1

Solved

I want to use the automatic differentiation mechanism provided by CppAD inside Eigen linear algebra. An example type is Eigen::Matrix<CppAD::AD, -1, -1>. As CppAD::AD is a custom numeric type, the Num...
Plimsoll asked 20/7, 2017 at 16:41

2

I am trying to solve a problem of finding the roots of a function using the Newton-Raphson (NR) method in the C language. The functions whose roots I would like to find are mostly polynomial...

2

I have a question about Theano's implementation. How does Theano get the gradient of every loss function through the following function (T.grad)? Thank you for your help. gparams = T.grad(cost, self.par...
Gerius asked 3/2, 2015 at 12:52
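A small self-contained example of what T.grad does, assuming an old Theano installation is still available (the project is no longer maintained): T.grad builds a new symbolic graph for the derivative by applying the chain rule to the graph of the cost, and nothing is evaluated until the function is compiled.

    import numpy as np
    import theano
    import theano.tensor as T

    x = T.vector('x')
    cost = T.sum(x ** 2)

    g = T.grad(cost, x)               # symbolic gradient graph: 2*x
    grad_fn = theano.function([x], g)

    print(grad_fn(np.array([1.0, 2.0, 3.0])))   # [2. 4. 6.]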

0

I'm trying to implement automatic differentiation using a class that behaves like a NumPy array. It does not subclass numpy.ndarray, but contains two array attributes: one for the value, and one fo...
Boyceboycey asked 30/5, 2016 at 15:56
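A hedged sketch of the value-plus-derivative container the question describes; a real implementation would hook into NumPy's dispatch machinery, but the core idea fits in a couple of operators (ADArray is a hypothetical name):

    import numpy as np

    class ADArray:
        # Holds a value array and a derivative array; not a numpy.ndarray subclass.
        def __init__(self, value, deriv=None):
            self.value = np.asarray(value, dtype=float)
            self.deriv = np.zeros_like(self.value) if deriv is None else np.asarray(deriv, dtype=float)

        def __add__(self, other):
            return ADArray(self.value + other.value, self.deriv + other.deriv)

        def __mul__(self, other):
            # Elementwise product rule.
            return ADArray(self.value * other.value,
                           self.deriv * other.value + self.value * other.deriv)

    # Differentiate f(x) = x*x + x elementwise, seeding dx = 1.
    x = ADArray([1.0, 2.0, 3.0], deriv=np.ones(3))
    y = x * x + x
    print(y.deriv)   # 2*x + 1 -> [3. 5. 7.]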

7

Solved

I've heard that one of McCarthy's original motivations for inventing Lisp was to write a system for automatic differentiation. Despite this, my Google searches haven't yielded any libraries/macros ...

4

Solved

In R, is it possible to find the Jacobian/Hessian/sparsity pattern analytically when you provide just the objective function and constraints for an optimization problem? AMPL does this, and from w...
Cassareep asked 21/2, 2014 at 5:30

1

I have a Haskell function that's causing more than 50% of all the allocations of my program and causing 60% of my run time to be taken by the GC. I run with a small stack (-K10K) so there is no st...

1

Solved

I am having a hard time optimizing a program that relies on ad's conjugateGradientDescent function for most of its work. Basically my code is a translation of an old paper's code that is writte...
Prohibitionist asked 17/6, 2015 at 10:14

1

Sooooo ... as it turns out, going from fake matrices to hmatrix datatypes is nontrivial :) Preamble for reference: {-# LANGUAGE RankNTypes #-} {-# LANGUAGE ParallelListComp #-} {-# LA...
Supervisor asked 6/5, 2015 at 9:5

1

Solved

Given a very simple Matrix definition based on Vector: import Numeric.AD import qualified Data.Vector as V newtype Mat a = Mat { unMat :: V.Vector a } scale' f = Mat . V.map (*f) . unMat add' a ...
Kancler asked 1/4, 2015 at 12:41

1

Solved

I am currently developing a differential operator for sympy that can be placed in matrix form. In this case, the order of the args list when creating a Mul object is very important to guarantee t...

4

Solved

We need two matrices of differential operators [B] and [C] such as: B = sympy.Matrix([[ D(x), D(y) ], [ D(y), D(x) ]]) C = sympy.Matrix([[ D(x), D(y) ]]) ans = B * sympy.Matrix([[x*y**2], [x**...
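A hedged sketch of one way to get the "matrix of operators times vector of expressions" product with plain sympy, representing each operator as a function rather than the question's custom D class (Dx, Dy, and the operand vector are illustrative):

    import sympy as sp

    x, y = sp.symbols('x y')

    # Represent each differential operator as a function expr -> expr.
    Dx = lambda e: sp.diff(e, x)
    Dy = lambda e: sp.diff(e, y)

    B = [[Dx, Dy],
         [Dy, Dx]]
    v = [x*y**2, x**2*y]   # hypothetical operand vector

    # Apply each operator, then sum along each row, mimicking B * v.
    ans = sp.Matrix([sum(op(e) for op, e in zip(row, v)) for row in B])
    print(ans)   # Matrix([[x**2 + y**2], [4*x*y]])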

3

Solved

This is in the context of Automatic Differentiation - what would such a system do with a function like map, or filter - or even one of the SKI Combinators? Example: I have the following function: ...

2

Solved

I'm trying to work with Numeric.AD and a custom Expr type. I wish to calculate the symbolic gradient of a user-inputted expression. The first trial with a constant expression works nicely: calcGrad0...
Pharyngoscope asked 9/5, 2011 at 14:31
