RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation?

I am using PyTorch 1.5 to run a GAN experiment. My code is a very simple GAN that just fits the sin(x) function:

import torch
import torch.nn as nn
import numpy as np
import matplotlib.pyplot as plt


# Hyper Parameters
BATCH_SIZE = 64
LR_G = 0.0001
LR_D = 0.0001
N_IDEAS = 5           # dimension of the random noise vector fed to G
ART_COMPONENTS = 15   # number of points in each "painting"
PAINT_POINTS = np.vstack([np.linspace(-1, 1, ART_COMPONENTS) for _ in range(BATCH_SIZE)])


def artist_works():  # painting from the famous artist (real target)
    r = 0.02 * np.random.randn(1, ART_COMPONENTS)
    paintings = np.sin(PAINT_POINTS * np.pi) + r
    paintings = torch.from_numpy(paintings).float()
    return paintings


G = nn.Sequential(  # Generator
    nn.Linear(N_IDEAS, 128),  # random ideas (could come from a normal distribution)
    nn.ReLU(),
    nn.Linear(128, ART_COMPONENTS),  # making a painting from these random ideas
)

D = nn.Sequential(  # Discriminator
    nn.Linear(ART_COMPONENTS, 128),  # receives artwork either from the famous artist or from a newbie like G
    nn.ReLU(),
    nn.Linear(128, 1),
    nn.Sigmoid(),  # outputs the probability that the artwork was made by the artist
)

opt_D = torch.optim.Adam(D.parameters(), lr=LR_D)
opt_G = torch.optim.Adam(G.parameters(), lr=LR_G)


for step in range(10000):
    artist_paintings = artist_works()  # real painting from artist
    G_ideas = torch.randn(BATCH_SIZE, N_IDEAS)  # random ideas
    G_paintings = G(G_ideas)  # fake painting from G (random ideas)

    prob_artist0 = D(artist_paintings)  # D tries to increase this prob
    prob_artist1 = D(G_paintings)  # D tries to reduce this prob

    D_loss = - torch.mean(torch.log(prob_artist0) + torch.log(1. - prob_artist1))
    G_loss = torch.mean(torch.log(1. - prob_artist1))

    opt_D.zero_grad()
    D_loss.backward(retain_graph=True)  # retain the graph: G_loss.backward() below reuses part of it
    opt_D.step()

    opt_G.zero_grad()
    G_loss.backward()
    opt_G.step()

But when I run it I get this error:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [128, 1]], which is output 0 of TBackward, is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!

Is there something wrong with my code?

Lubbock asked 28/5, 2020 at 9:47. Comments (4):
FYI, your code works fine for me on PyTorch 1.4 – Buster
I have the same error when running on PyTorch 1.4 or 1.5 @Buster – Eeg
Yes, it works for me too when I change the PyTorch version to 1.4. Is there something wrong with my code, or is it a bug in PyTorch 1.5? – Lubbock
Can you help me with a 3D GAN that has the same issues? https://mcmap.net/q/1018181/-one-of-the-variables-needed-for-gradient-computation-has-been-modified-by-an-inplace-operation-torch-cuda-floattensor-1-64-3-3-3/15257624 – Tunnage

This happens because opt_D.step() modifies the parameters of your discriminator in place. But those same parameters are still required to compute the gradient for the generator, since G_loss.backward() runs afterwards and backpropagates through D. You can fix this by changing your code to:

for step in range(10000):
    artist_paintings = artist_works()  # real painting from artist
    G_ideas = torch.randn(BATCH_SIZE, N_IDEAS)  # random ideas
    G_paintings = G(G_ideas)  # fake painting from G (random ideas)

    prob_artist1 = D(G_paintings)  # G tries to fool D

    G_loss = torch.mean(torch.log(1. - prob_artist1))
    opt_G.zero_grad()
    G_loss.backward()
    opt_G.step()

    prob_artist0 = D(artist_paintings)  # D tries to increase this prob
    # detach here so the discriminator update does not backprop into G, whose parameters were already updated
    prob_artist1 = D(G_paintings.detach())  # D tries to reduce this prob

    D_loss = - torch.mean(torch.log(prob_artist0) + torch.log(1. - prob_artist1))
    opt_D.zero_grad()
    D_loss.backward()  # retain_graph is no longer needed: this graph is not reused
    opt_D.step()
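
To see the check that fails here in isolation, this is a minimal sketch of the same error outside the GAN (the tensor w is made up for illustration; in PyTorch 1.5+, optimizer.step() performs exactly this kind of in-place update on the parameters):

import torch

w = torch.ones(3, requires_grad=True)
loss = (w ** 2).sum()  # the backward of ** saves w, so it needs w's forward-time value
with torch.no_grad():
    w.add_(1.0)        # in-place update, just like optimizer.step()
loss.backward()        # RuntimeError: ... modified by an inplace operation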

You can find more about this issue here: https://github.com/pytorch/pytorch/issues/39141
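
As a side note, the other common GAN ordering avoids the error too: update D first on detached fake samples, then run a fresh forward pass through the already-updated D for the generator step. A sketch, reusing the G, D, and optimizers defined in the question:

for step in range(10000):
    artist_paintings = artist_works()                  # real paintings
    G_paintings = G(torch.randn(BATCH_SIZE, N_IDEAS))  # fake paintings

    # 1) update D on real data and detached fakes, so no graph reaches into G
    prob_artist0 = D(artist_paintings)
    prob_artist1 = D(G_paintings.detach())
    D_loss = - torch.mean(torch.log(prob_artist0) + torch.log(1. - prob_artist1))
    opt_D.zero_grad()
    D_loss.backward()
    opt_D.step()

    # 2) update G with a fresh forward pass through the updated D
    G_loss = torch.mean(torch.log(1. - D(G_paintings)))
    opt_G.zero_grad()
    G_loss.backward()
    opt_G.step()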

Hemotherapy answered 2/6, 2020 at 8:15

The general reason it works on 1.4 but raises an error on 1.5 is that the in-place checks were only fixed in 1.5: 'Before 1.5, these tests were not working properly for the optimizers. That’s why you didn’t see any error. But the computed gradients were not correct.'

You can check this link for more discussion about how the PyTorch version affects this: https://discuss.pytorch.org/t/solved-pytorch1-5-runtimeerror-one-of-the-variables-needed-for-gradient-computation-has-been-modified-by-an-inplace-operation/90256/4
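
If you hit this error in a larger model and it is not obvious which in-place operation is at fault, anomaly detection makes the error also print the forward-pass traceback of the operation whose saved tensor was modified. A small self-contained debugging sketch (the toy tensor w is made up for illustration; enable this only while debugging, it adds overhead):

import torch

torch.autograd.set_detect_anomaly(True)  # debugging only: slows training

w = torch.ones(3, requires_grad=True)
loss = (w ** 2).sum()
with torch.no_grad():
    w.add_(1.0)      # the offending in-place update
loss.backward()      # the RuntimeError now also points at the pow() recorded above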

Idioglossia answered 9/3, 2021 at 21:45

I had a similar issue and solved it by downgrading my torch version to 1.4.
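
For reference, in a pip environment the downgrade is typically something like (the exact command depends on your platform and CUDA build):

pip install torch==1.4.0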

Ianthe answered 7/2, 2023 at 12:6
