autograd Questions
3
Solved
I keep running into this error:
RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first...
Wetmore asked 16/1, 2018 at 5:57
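A minimal sketch reproducing the error and the documented flag (the better fix is often to rebuild the graph each iteration rather than retain it):

import torch

x = torch.tensor(1.0, requires_grad=True)
y = x * x
y.backward(retain_graph=True)  # keep the saved buffers alive for a second pass
y.backward()                   # works; without retain_graph=True this raises the error
print(x.grad)                  # tensor(4.) -- the two passes accumulate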
1
I want to implement non-negative matrix factorization using PyTorch. Here is my initial implementation:
def nmf(X, k, lr, epochs):
    # X: input matrix of size (m, n)
    # k: number of latent factors
    # lr:...
Albion asked 15/3, 2023 at 9:14
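A minimal sketch of how the body might continue, assuming a plain gradient-descent formulation with a non-negativity projection (the optimizer choice and the projection step are my assumptions, not the asker's code):

import torch

def nmf(X, k, lr, epochs):
    # X: input matrix of size (m, n); k: number of latent factors
    m, n = X.shape
    W = torch.rand(m, k, requires_grad=True)
    H = torch.rand(k, n, requires_grad=True)
    opt = torch.optim.Adam([W, H], lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = torch.norm(X - W @ H)  # Frobenius reconstruction error
        loss.backward()
        opt.step()
        with torch.no_grad():         # project back onto the non-negative orthant
            W.clamp_(min=0)
            H.clamp_(min=0)
    return W.detach(), H.detach()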
1
I am having trouble understanding the usage of the inputs keyword in the .backward() call.
The documentation says the following:
inputs (sequence of Tensor) – Inputs w.r.t. which the gradient will...
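Illustrating the documented behaviour: with inputs=, backward populates .grad only for the listed tensors (a and b are made-up names):

import torch

a = torch.tensor(2.0, requires_grad=True)
b = torch.tensor(3.0, requires_grad=True)
loss = a * b
loss.backward(inputs=[a])  # accumulate gradients only into a
print(a.grad)              # tensor(3.)
print(b.grad)              # None -- b was not in inputs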
2
Solved
I am new to PyTorch, trying it out after using a different toolkit for a while.
I would like to understand how to program custom layers and functions. As a simple test, I wrote this:
class Testm...
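A sketch of the usual starting point (a hypothetical layer, not the asker's Testm... class): a custom layer is just an nn.Module built from differentiable ops, so autograd derives the backward pass automatically:

import torch
import torch.nn as nn

class ScaledShift(nn.Module):  # hypothetical example layer
    def __init__(self, n):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(n))
        self.bias = nn.Parameter(torch.zeros(n))

    def forward(self, x):
        # composed of differentiable ops only, so no manual backward is needed
        return x * self.weight + self.bias

layer = ScaledShift(4)
layer(torch.randn(2, 4)).sum().backward()
print(layer.weight.grad.shape)  # torch.Size([4])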
1
I am an intermediate learner in PyTorch, and in some recent cases I have seen people use torch.inference_mode() instead of the well-known torch.no_grad() when validating a trained agent in rein...
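Roughly, torch.inference_mode() is a stricter, faster torch.no_grad(): tensors created under it can never re-enter autograd later. A small sketch:

import torch

x = torch.ones(3, requires_grad=True)

with torch.no_grad():
    y = x * 2            # ordinary tensor, just recorded nowhere

with torch.inference_mode():
    z = x * 2            # "inference tensor": cannot be used in autograd later

print(y.requires_grad, z.requires_grad)  # False False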
2
Solved
I want to compute the gradient between two tensors in a net. The input X tensor (batch size x m) is sent through a set of convolutional layers which give me back an output Y tensor (batch size x n)...
Hesson asked 18/2, 2019 at 19:32
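A reduced sketch of the pattern (the tiny conv stack here stands in for the asker's layers); since Y is not a scalar, torch.autograd.grad needs a grad_outputs seed:

import torch
import torch.nn as nn

net = nn.Sequential(nn.Conv2d(1, 4, 3, padding=1), nn.Flatten(), nn.Linear(4 * 8 * 8, 5))
X = torch.randn(2, 1, 8, 8, requires_grad=True)
Y = net(X)  # (batch, n) with n = 5 here

# vector-Jacobian product: dY/dX seeded with ones
gX, = torch.autograd.grad(Y, X, grad_outputs=torch.ones_like(Y))
print(gX.shape)  # torch.Size([2, 1, 8, 8]) -- one gradient per input element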
2
Solved
Suppose I have my custom loss function and I want to fit the solution of some differential equation with the help of my neural network. So in each forward pass, I am calculating the output of my neural...
Tenner asked 12/9, 2021 at 5:8
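The usual ingredient, sketched here on an invented toy equation y' = y: take the derivative of the network output with respect to its input using create_graph=True, so the residual loss is itself differentiable:

import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))
t = torch.linspace(0, 1, 50).unsqueeze(1).requires_grad_(True)

y = net(t)
dy_dt, = torch.autograd.grad(y, t, grad_outputs=torch.ones_like(y), create_graph=True)
loss = ((dy_dt - y) ** 2).mean()  # residual of the toy equation y' = y
loss.backward()                   # works because create_graph=True kept the graph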
3
Solved
When I want to evaluate the performance of my model on the validation set, is it preferable to use with torch.no_grad(): or model.eval()?
Lura asked 11/4, 2019 at 8:16
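The short answer is that they do different things and are usually combined; a sketch of a validation step on a toy model:

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 2), nn.Dropout(0.5))
val_x, val_y = torch.randn(8, 4), torch.randint(0, 2, (8,))

model.eval()              # switches layer behaviour (dropout off, BN uses running stats)
with torch.no_grad():     # stops graph recording, saving memory and compute
    loss = nn.functional.cross_entropy(model(val_x), val_y)
model.train()             # restore training behaviour afterwards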
1
After a recent upgrade, when running my PyTorch loop, I now get the warning
"Using a non-full backward hook when the forward contains multiple autograd Nodes".
The training still runs and co...
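The warning refers to the deprecated Module.register_backward_hook; a sketch of switching to the full-hook API:

import torch
import torch.nn as nn

def hook(module, grad_input, grad_output):
    print(module.__class__.__name__, grad_output[0].shape)

model = nn.Sequential(nn.Linear(4, 4), nn.ReLU(), nn.Linear(4, 1))
model[0].register_full_backward_hook(hook)  # replaces register_backward_hook
model(torch.randn(2, 4, requires_grad=True)).sum().backward()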
1
Solved
I am having trouble understanding the conceptual meaning of the grad_outputs option in torch.autograd.grad.
The documentation says:
grad_outputs should be a sequence of length matching output cont...
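One way to read it: grad_outputs is the vector v in the vector-Jacobian product v^T J, i.e. one weight per output element. A tiny sketch:

import torch

x = torch.tensor([1.0, 2.0], requires_grad=True)
y = x ** 2                     # Jacobian is diag(2x) = diag([2., 4.])

v = torch.tensor([1.0, 10.0])  # one weight per element of y
g, = torch.autograd.grad(y, x, grad_outputs=v)
print(g)                       # tensor([ 2., 40.]) == v * 2x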
1
In my previous question I found out how to use PyTorch's autograd to differentiate, and it worked:
#autograd
import torch
from torch.autograd import grad
import torch.nn as nn
import torch.optim as opt...
0
I'm trying to improve a CNN I made by implementing a weighted loss method described in this paper. To do this, I looked into this notebook which implements the pseudo-code of the method described i...
Brainbrainard asked 7/4, 2021 at 23:9
0
This is a follow-up question to this question. I tried using index_put_ as suggested in the answer; however, I'm getting the following error:
RuntimeError: the derivative for 'indices' is not implement...
1
Solved
I'm trying to understand better the role of in-place operations in PyTorch autograd.
My understanding is that they are likely to cause problems since they may overwrite values needed during the bac...
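A minimal reproduction of that failure mode: sigmoid saves its own output for backward, so mutating the output in place invalidates the saved buffer:

import torch

x = torch.tensor([1.0, 2.0], requires_grad=True)
y = torch.sigmoid(x)  # sigmoid saves its output y for the backward formula
y.add_(1)             # in-place edit bumps y's version counter
y.sum().backward()    # RuntimeError: ...modified by an inplace operation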
3
I was wondering how to deal with in-place operations in PyTorch. As I remember, using in-place operations with autograd has always been problematic.
And actually I’m surprised that this code below w...
Beaulieu asked 13/8, 2018 at 8:30
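Whether such code works depends on whether the overwritten value was saved for backward; e.g. multiplication by a Python scalar saves no tensor, so this contrived sketch runs fine:

import torch

x = torch.ones(3, requires_grad=True)
y = x * 2      # backward needs only the constant 2, not y's value
y.mul_(3)      # in-place, but no saved tensor is invalidated
y.sum().backward()
print(x.grad)  # tensor([6., 6., 6.])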
3
Solved
Q1.
I'm trying to write a custom autograd function with PyTorch,
but I had a problem deriving the analytical backpropagation for y = x / sum(x, dim=0),
where the size of tensor x is (Height, Width) (x ...
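For reference, with S = sum(x, dim=0) the per-column Jacobian is dy_i/dx_j = (delta_ij - y_i) / S, so the backward reduces to (g - sum_i g_i y_i) / S; a sketch:

import torch

class NormalizeColumns(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        s = x.sum(dim=0, keepdim=True)
        y = x / s
        ctx.save_for_backward(y, s)
        return y

    @staticmethod
    def backward(ctx, g):
        y, s = ctx.saved_tensors
        # per column: dy_i/dx_j = (delta_ij - y_i) / s
        return (g - (g * y).sum(dim=0, keepdim=True)) / s

x = torch.rand(5, 3, dtype=torch.double, requires_grad=True)
print(torch.autograd.gradcheck(NormalizeColumns.apply, x))  # True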
1
Solved
I know about two ways to exclude elements of a computation from the gradient calculation in backward.
Method 1: using with torch.no_grad()
with torch.no_grad():
    y = reward + gamma * torch.max(net.fo...
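Method 2 is presumably .detach(); both yield a value the graph does not flow through. A reduced sketch with made-up reward values:

import torch

reward, gamma = 1.0, 0.99
q_next = torch.tensor([2.0, 3.0], requires_grad=True)

with torch.no_grad():                             # method 1: nothing inside is recorded
    y1 = reward + gamma * torch.max(q_next)

y2 = reward + gamma * torch.max(q_next).detach()  # method 2: cut the graph at one tensor

print(y1.requires_grad, y2.requires_grad)         # False False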
1
Solved
I have a layer layer in an nn.Module and use it two or more times during a single forward step. The output of this layer is later fed back into the same layer. Can PyTorch's autograd compute the grad...
Goshorn asked 8/3, 2020 at 5:0
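Yes: autograd unrolls repeated uses of the same module, and each parameter's .grad accumulates a contribution from every application. A sketch:

import torch
import torch.nn as nn

layer = nn.Linear(4, 4)
out = layer(layer(torch.randn(1, 4)))  # same weights applied twice in one forward
out.sum().backward()
print(layer.weight.grad.shape)  # torch.Size([4, 4]); sums both uses' contributions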
1
Solved
I'm still working on my understanding of the PyTorch autograd system. One thing I'm struggling with is understanding why .clamp(min=0) and nn.functional.relu() seem to have different backward passes....
Thinker asked 10/3, 2020 at 13:10
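The visible difference is the subgradient chosen at exactly zero; if I read the backward kernels right, clamp passes gradient through at the boundary while relu does not:

import torch

x = torch.tensor([-1.0, 0.0, 1.0], requires_grad=True)
torch.relu(x).sum().backward()
print(x.grad)  # tensor([0., 0., 1.]) -- relu: zero gradient at 0

y = torch.tensor([-1.0, 0.0, 1.0], requires_grad=True)
y.clamp(min=0).sum().backward()
print(y.grad)  # tensor([0., 1., 1.]) -- clamp: gradient passes at the boundary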
2
Solved
(Note: this is not a question about back-propagation.)
I am trying to solve a non-linear PDE on a GPU using PyTorch tensors in place of NumPy arrays. I want to calculate the partial derivatives of...
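A sketch of taking partials with autograd, using an invented scalar field u = x^2 * y in place of the PDE solution; create_graph=True keeps the graph for higher-order derivatives:

import torch

pts = torch.rand(100, 2, requires_grad=True)  # columns hold the x and y coordinates
u = pts[:, 0] ** 2 * pts[:, 1]                # toy field u = x^2 * y at each point

grads, = torch.autograd.grad(u.sum(), pts, create_graph=True)
u_x, u_y = grads[:, 0], grads[:, 1]           # first partials at every point

u_xx, = torch.autograd.grad(u_x.sum(), pts)   # second pass through the retained graph
print(u_xx[:3, 0])                            # d2u/dx2 = 2y at the first three points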
1
Solved
I have the following function:
def msfe(ys, ts):
    ys = ys.detach().numpy()  # output from the network
    ts = ts.detach().numpy()  # target (true labels)
    pred_class = (ys >= 0.5)
    n_0 = sum(ts == 0)  # Numbe...
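The .detach().numpy() calls are what break training: they leave the autograd graph, so the resulting loss has no grad_fn. A sketch of keeping the same quantities as tensors (assuming MSFE means the squared per-class mean errors):

import torch

def msfe(ys, ts):
    # stay in tensor ops so the graph survives to backward()
    neg, pos = ts == 0, ts == 1
    fpe = ((ys[neg] - ts[neg]) ** 2).mean()  # mean squared error on the negative class
    fne = ((ys[pos] - ts[pos]) ** 2).mean()  # mean squared error on the positive class
    return fpe ** 2 + fne ** 2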
1
Solved
I am trying to understand PyTorch autograd in depth; I would like to observe the gradient of a simple tensor after it goes through a sigmoid function, as below:
import torch
from torch import autogra...
Cassicassia asked 22/10, 2019 at 18:21
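Since the sigmoid output is a non-leaf tensor, its .grad is not kept by default; retain_grad() (or a hook) is the usual way to observe it. A sketch:

import torch

x = torch.tensor([0.5, -1.0], requires_grad=True)
y = torch.sigmoid(x)
y.retain_grad()    # keep the gradient on this intermediate tensor
y.sum().backward()

print(y.grad)      # tensor([1., 1.]) -- d(sum)/dy
print(x.grad)      # equals y.grad * y * (1 - y), the sigmoid derivative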
1
Solved
The documentation does not include any example use case of gradcheck; where would it be useful?
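Its main use case is validating a hand-written backward (e.g. in a custom autograd.Function) against numerical differentiation; double-precision inputs are expected. A sketch:

import torch

def f(x):
    return (x ** 3).sum()

x = torch.rand(4, dtype=torch.double, requires_grad=True)
# compares autograd's analytic gradient against finite differences
print(torch.autograd.gradcheck(f, (x,)))  # True if they agree within tolerance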
1
Solved
Here's a simple neural network, where I’m trying to penalize the norm of activation gradients:
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 32...
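The key ingredient for such a penalty is create_graph=True, so the gradient norm is itself differentiable; a reduced sketch on a toy model (a gradient with respect to the input stands in for an activation gradient):

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
x = torch.randn(4, 8, requires_grad=True)

out = model(x).sum()
g, = torch.autograd.grad(out, x, create_graph=True)  # graph kept for a second backward
loss = out + 0.1 * g.norm()                          # penalize the gradient norm
loss.backward()                                      # backprops through the gradient too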