autograd Questions

3

Solved

I keep running into this error: RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first...
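This error typically means backward() was called twice through the same graph. A minimal sketch reproducing it and the two usual fixes (the question's own code is truncated above, so this is illustrative only):

    import torch

    x = torch.tensor(2.0, requires_grad=True)
    y = x ** 2
    y.backward()                  # first pass frees the graph's buffers
    # y.backward()                # calling again raises the RuntimeError above

    # Fix 1: retain the graph if a second pass is really needed
    z = x ** 3
    z.backward(retain_graph=True)
    z.backward()                  # allowed now; gradients accumulate in x.grad

    # Fix 2 (more common): recompute the forward pass before each backward()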

1

I want to implement non-negative matrix factorization using PyTorch. Here is my initial implementation: def nmf(X, k, lr, epochs): # X: input matrix of size (m, n) # k: number of latent factors # lr:...
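The question's code is cut off; a hedged sketch of gradient-based NMF with the signature it shows, projecting the factors back onto non-negative values after each step:

    import torch

    def nmf(X, k, lr, epochs):
        # X: input matrix of size (m, n); k: number of latent factors; lr: learning rate
        m, n = X.shape
        W = torch.rand(m, k, requires_grad=True)
        H = torch.rand(k, n, requires_grad=True)
        opt = torch.optim.Adam([W, H], lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            loss = ((X - W @ H) ** 2).sum()   # squared Frobenius reconstruction error
            loss.backward()
            opt.step()
            with torch.no_grad():             # project onto the non-negative orthant
                W.clamp_(min=0)
                H.clamp_(min=0)
        return W.detach(), H.detach()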

1

I am having trouble understanding the usage of the inputs keyword in the .backward() call. The documentation says the following: inputs (sequence of Tensor) – Inputs w.r.t. which the gradient will...
Busby asked 21/12, 2022 at 19:57
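For reference, inputs restricts which leaf tensors get their .grad populated; a small illustration:

    import torch

    a = torch.tensor(1.0, requires_grad=True)
    b = torch.tensor(2.0, requires_grad=True)
    out = (a * b) ** 2

    out.backward(inputs=[a])   # accumulate gradients only w.r.t. a
    print(a.grad)              # tensor(8.) = 2*a*b^2
    print(b.grad)              # None: b was not listed in inputs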

2

Solved

I am new to PyTorch, trying it out after using a different toolkit for a while. I would like to understand how to program custom layers and functions. As a simple test, I wrote this: class Testm...
Annikaanniken asked 7/6, 2017 at 7:59
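A minimal custom torch.autograd.Function (a ReLU clone) as a template for this kind of test layer:

    import torch
    from torch.autograd import Function

    class MyReLU(Function):
        @staticmethod
        def forward(ctx, x):
            ctx.save_for_backward(x)
            return x.clamp(min=0)

        @staticmethod
        def backward(ctx, grad_output):
            (x,) = ctx.saved_tensors
            return grad_output * (x > 0).to(grad_output.dtype)

    x = torch.randn(5, requires_grad=True)
    MyReLU.apply(x).sum().backward()
    print(x.grad)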

1

I am an intermediate learner in PyTorch, and in some recent cases I have seen people use torch.inference_mode() instead of the more familiar torch.no_grad() when validating a trained agent in rein...
Engird asked 25/10, 2022 at 8:21
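In short, inference_mode() is a stricter, slightly faster no_grad(): it also disables view and version-counter tracking, so its outputs can never re-enter autograd. A sketch:

    import torch

    model = torch.nn.Linear(4, 2)
    x = torch.randn(1, 4)

    with torch.no_grad():
        y1 = model(x)          # no graph built, but y1 is an ordinary tensor

    with torch.inference_mode():
        y2 = model(x)          # stricter: y2 cannot be used in autograd later

    print(y1.requires_grad, y2.requires_grad)   # False False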

2

Solved

I want to compute the gradient between two tensors in a net. The input X tensor (batch size x m) is sent through a set of convolutional layers which give me back an output Y tensor (batch size x n)...
Hesson asked 18/2, 2019 at 19:32
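torch.autograd.grad handles exactly this; a hedged sketch with a made-up conv stack standing in for the question's network:

    import torch

    net = torch.nn.Sequential(        # hypothetical stand-in for the conv layers
        torch.nn.Conv2d(1, 4, 3, padding=1),
        torch.nn.ReLU(),
        torch.nn.Flatten(),
    )

    X = torch.randn(8, 1, 5, 5, requires_grad=True)
    Y = net(X)

    # vector-Jacobian product of dY/dX with an all-ones vector
    (dX,) = torch.autograd.grad(Y, X, grad_outputs=torch.ones_like(Y))
    print(dX.shape)                   # same shape as X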

2

Solved

Suppose I have my custom loss function and I want to fit the solution of some differential equation with help of my neural network. So in each forward pass, I am calculating the output of my neural...
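The usual pattern is to differentiate the network output w.r.t. its input with create_graph=True and penalize the residual. A PINN-style sketch for a toy ODE u' = -u with u(0) = 1 (the actual equation is cut off above, so this is an assumed stand-in):

    import torch

    net = torch.nn.Sequential(
        torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
    )

    t = torch.linspace(0, 1, 100).unsqueeze(1).requires_grad_(True)
    u = net(t)

    # du/dt; create_graph=True keeps the derivative differentiable for training
    (du_dt,) = torch.autograd.grad(u, t, grad_outputs=torch.ones_like(u),
                                   create_graph=True)

    residual = du_dt + u                       # residual of u' = -u
    ic = (net(torch.zeros(1, 1)) - 1.0) ** 2   # initial condition u(0) = 1
    loss = (residual ** 2).mean() + ic.mean()
    loss.backward()                            # gradients reach the weights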

3

Solved

When I want to evaluate the performance of my model on the validation set, is it preferable to use with torch.no_grad(): or model.eval()?
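They do different things and are usually combined: model.eval() switches layer behaviour (dropout, batch norm), while torch.no_grad() stops graph construction. A sketch:

    import torch

    model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.Dropout(0.5))

    model.eval()              # dropout off, batch norm uses running stats
    with torch.no_grad():     # no graph is built: saves memory and time
        out = model(torch.randn(2, 4))
    model.train()             # restore training behaviour afterwards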

1

After a recent upgrade, when running my PyTorch loop, I now get the warning "Using a non-full backward hook when the forward contains multiple autograd Nodes". The training still runs and co...
Witherspoon asked 7/4, 2021 at 21:59
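The warning comes from register_backward_hook, which is unreliable on modules whose forward creates several autograd nodes; register_full_backward_hook is the replacement. A sketch:

    import torch

    model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.ReLU())

    def hook(module, grad_input, grad_output):
        print(type(module).__name__, grad_output[0].shape)

    for m in model:
        m.register_full_backward_hook(hook)   # not register_backward_hook

    model(torch.randn(2, 4, requires_grad=True)).sum().backward()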

1

Solved

I am having trouble understanding the conceptual meaning of the grad_outputs option in torch.autograd.grad. The documentation says: grad_outputs should be a sequence of length matching output cont...
Deflagrate asked 13/8, 2021 at 21:13
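grad_outputs is the vector v in the vector-Jacobian product v^T J that autograd actually computes; a small illustration:

    import torch

    x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
    y = x ** 2                          # non-scalar output, so v must be supplied

    v = torch.tensor([1.0, 0.0, 0.0])
    (g,) = torch.autograd.grad(y, x, grad_outputs=v)
    print(g)                            # tensor([2., 0., 0.]): one row of dy/dx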

1

In my previous question I found out how to use PyTorch's autograd to differentiate, and it worked: #autograd import torch from torch.autograd import grad import torch.nn as nn import torch.optim as opt...
Unpracticed asked 29/4, 2021 at 15:45

0

I'm trying to improve a CNN I made by implementing a weighted loss method described in this paper. To do this, I looked into this notebook which implements the pseudo-code of the method described i...
Brainbrainard asked 7/4, 2021 at 23:9

0

This is a follow-up question to this question. I tried using index_put_ as suggested in the answer; however, I'm getting the following error: RuntimeError: the derivative for 'indices' is not implement...
Exotoxin asked 6/4, 2021 at 17:24

1

Solved

I'm trying to understand better the role of in-place operations in PyTorch autograd. My understanding is that they are likely to cause problems since they may overwrite values needed during the bac...
Bruton asked 31/3, 2020 at 11:38
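A minimal reproduction of the failure mode: sigmoid saves its output for the backward pass, and editing that output in place invalidates it:

    import torch

    x = torch.randn(3, requires_grad=True)
    y = x.sigmoid()        # sigmoid's backward needs y itself
    y.add_(1.0)            # in-place edit bumps y's version counter
    try:
        y.sum().backward()
    except RuntimeError as e:
        print(e)           # "... modified by an inplace operation"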

3

I was wondering how to deal with in-place operations in PyTorch. As I remember, using in-place operations with autograd has always been problematic. And actually I’m surprised that this code below w...
Beaulieu asked 13/8, 2018 at 8:30
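Whether in-place code works depends on what the preceding op saved for backward; multiplication by a constant saves nothing that the edit destroys, which is likely why such code runs:

    import torch

    x = torch.randn(3, requires_grad=True)
    y = x * 2              # mul's backward needs only the constant, not y
    y.add_(1.0)            # safe: nothing saved for backward is overwritten
    y.sum().backward()
    print(x.grad)          # tensor([2., 2., 2.])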

3

Solved

Q1. I'm trying to write a custom autograd function with PyTorch, but I have a problem deriving the analytical backpropagation for y = x / sum(x, dim=0), where the size of tensor x is (Height, Width) (x ...
Dreg asked 7/2, 2021 at 14:58
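For y_ij = x_ij / s_j with s_j = sum_i x_ij, the Jacobian is dy_ij/dx_kj = (delta_ik - y_ij) / s_j, which gives the vector-Jacobian product below; a hedged sketch, verified with gradcheck:

    import torch
    from torch.autograd import Function

    class NormalizeCols(Function):
        @staticmethod
        def forward(ctx, x):
            s = x.sum(dim=0, keepdim=True)
            y = x / s
            ctx.save_for_backward(y, s)
            return y

        @staticmethod
        def backward(ctx, g):
            y, s = ctx.saved_tensors
            # VJP of dy_ij/dx_kj = (delta_ik - y_ij) / s_j
            return (g - (g * y).sum(dim=0, keepdim=True)) / s

    x = torch.rand(4, 3, dtype=torch.double, requires_grad=True)
    print(torch.autograd.gradcheck(NormalizeCols.apply, (x,)))   # True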

1

Solved

When testing a network in PyTorch one can use with torch.no_grad():. What is the Libtorch (C++) equivalent? Thanks!
Oestriol asked 27/1, 2021 at 14:0

3

I know about two ways to exclude elements of a computation from the gradient calculation in backward. Method 1: using with torch.no_grad(): with torch.no_grad(): y = reward + gamma * torch.max(net.fo...
Quenchless asked 29/6, 2019 at 8:47
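Side by side, with a hypothetical stand-in net for the Q-learning snippet: a no_grad block records nothing, while detach() cuts an already-computed tensor out of the graph:

    import torch

    net = torch.nn.Linear(4, 2)        # hypothetical stand-in network
    state = torch.randn(1, 4)
    reward, gamma = 1.0, 0.99

    with torch.no_grad():              # method 1: nothing is recorded
        target1 = reward + gamma * torch.max(net(state))

    target2 = (reward + gamma * torch.max(net(state))).detach()   # method 2

    print(target1.requires_grad, target2.requires_grad)   # False False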

1

Solved

I have a layer, layer, in an nn.Module and use it two or more times during a single forward step. The output of this layer is later fed into the same layer. Can PyTorch's autograd compute the grad...
Goshorn asked 8/3, 2020 at 5:0
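Yes: autograd records each application separately and sums the weight gradients; a quick check:

    import torch

    layer = torch.nn.Linear(4, 4)
    x = torch.randn(1, 4)

    layer(layer(x)).sum().backward()   # same module applied twice
    print(layer.weight.grad.norm())    # contributions from both uses, summed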

1

Solved

I'm still working on my understanding of the PyTorch autograd system. One thing I'm struggling with is understanding why .clamp(min=0) and nn.functional.relu() seem to have different backward passes....
Thinker asked 10/3, 2020 at 13:10
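The difference shows up only at exactly zero: relu's subgradient there is 0, while clamp(min=0) lets the gradient through at the boundary (it tests x >= min rather than x > 0):

    import torch

    a = torch.tensor(0.0, requires_grad=True)
    torch.relu(a).backward()
    print(a.grad)                  # tensor(0.)

    b = torch.tensor(0.0, requires_grad=True)
    b.clamp(min=0).backward()
    print(b.grad)                  # tensor(1.)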

2

Solved

(Note: this is not a question about back-propagation.) I am trying to solve a non-linear PDE on a GPU, using PyTorch tensors in place of NumPy arrays. I want to calculate the partial derivatives of...
Vernitavernoleninsk asked 29/7, 2019 at 20:54
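First and second partials of a field w.r.t. the grid can be taken with torch.autograd.grad; a sketch with a stand-in function for the unknown field:

    import torch

    device = "cuda" if torch.cuda.is_available() else "cpu"
    x = torch.linspace(0, 1, 50, device=device).requires_grad_(True)
    u = torch.sin(2 * torch.pi * x)       # stand-in for the PDE field

    (u_x,) = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u),
                                 create_graph=True)
    (u_xx,) = torch.autograd.grad(u_x, x, grad_outputs=torch.ones_like(u_x))
    print(u_x[:3], u_xx[:3])              # ~ 2*pi*cos(...), ~ -(2*pi)^2*sin(...)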

1

Solved

I have the following function: def msfe(ys, ts): ys=ys.detach().numpy() #output from the network ts=ts.detach().numpy() #Target (true labels) pred_class = (ys>=0.5) n_0 = sum(ts==0) #Numbe...
Ritualism asked 25/10, 2019 at 14:17
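The .detach().numpy() calls are the problem: they sever the tensors from the graph, so nothing upstream gets a gradient. A hypothetical all-tensor rewrite (the exact loss formula is cut off above, so the combination below is an assumption):

    import torch

    def msfe(ys, ts):
        # stay in tensor land so autograd can track every step
        neg = (ts == 0).float()                           # negative-class mask
        pos = (ts == 1).float()                           # positive-class mask
        err = (ys - ts) ** 2
        fpe = (err * neg).sum() / neg.sum().clamp(min=1)  # mean error, negatives
        fne = (err * pos).sum() / pos.sum().clamp(min=1)  # mean error, positives
        return fpe ** 2 + fne ** 2                        # assumed combination

    ys = torch.rand(8, requires_grad=True)
    ts = torch.randint(0, 2, (8,)).float()
    msfe(ys, ts).backward()                               # gradients now flow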

1

Solved

I am trying to understand PyTorch autograd in depth; I would like to observe the gradient of a simple tensor after going through a sigmoid function, as below: import torch from torch import autogra...
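For reference, the observed gradient should match the analytical one, sigmoid'(x) = s(x)(1 - s(x)):

    import torch

    x = torch.tensor([0.0, 1.0], requires_grad=True)
    y = torch.sigmoid(x)
    y.sum().backward()

    print(x.grad)        # tensor([0.2500, 0.1966])
    print(y * (1 - y))   # analytical derivative, same values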

1

Solved

The documentation does not include any example use case of gradcheck. Where would it be useful?
Brianabriand asked 23/8, 2019 at 13:30
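gradcheck compares an analytical backward against finite differences, so its main use is validating hand-written autograd Functions; a tiny example:

    import torch

    def f(x):
        return (x ** 3).sum()

    x = torch.rand(5, dtype=torch.double, requires_grad=True)  # double is required
    print(torch.autograd.gradcheck(f, (x,)))                   # True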

1

Solved

Here's a simple neural network, where I’m trying to penalize the norm of activation gradients: class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(3, 32...
Kone asked 16/2, 2019 at 19:57
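The key is create_graph=True, which keeps the gradient itself differentiable so a penalty on its norm can be backpropagated; a sketch using the input gradient as a stand-in for an activation gradient:

    import torch

    net = torch.nn.Sequential(
        torch.nn.Linear(4, 8), torch.nn.ReLU(), torch.nn.Linear(8, 1)
    )
    x = torch.randn(16, 4, requires_grad=True)

    loss = net(x).pow(2).mean()
    (gx,) = torch.autograd.grad(loss, x, create_graph=True)

    total = loss + 0.1 * gx.norm(2)   # penalize the gradient norm
    total.backward()                  # differentiates through the gradient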
