Is it true that `inplace=True` activations in PyTorch make sense only for inference mode?
According to the discussions on the PyTorch forum:

The purpose of inplace=True is to modify the input in place, without allocating memory for an additional tensor holding the result of the operation.

This makes memory usage more efficient, but it can break the backward pass, at least when the operation destroys information needed for gradients, since backpropagation requires the intermediate activations to be saved in order to compute the weight updates.
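
For instance (a small sketch with made-up values), an in-place ReLU writes its result into the input tensor's own storage instead of allocating a new tensor:

    import torch
    import torch.nn as nn

    x = torch.tensor([-1.0, 0.5, -2.0, 3.0])
    out = nn.ReLU(inplace=True)(x)

    print(out)                             # tensor([0.0000, 0.5000, 0.0000, 3.0000])
    print(x)                               # x itself now holds the same values
    print(out.data_ptr() == x.data_ptr())  # True: no additional tensor was allocated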

Can one say that this mode should be turned on in layers only if the model is already trained and one doesn't want to modify it anymore?

Rangoon answered 10/11, 2021 at 13:4

nn.ReLU(inplace=True) saves memory during both training and testing.

However, there are problems we may face when using nn.ReLU(inplace=True) while calculating gradients. Sometimes the original values are needed to compute the gradients; because an in-place operation destroys some of those values, certain usages are problematic:

def forward(self, x):
    skip = x
    x = self.relu(x)  # in-place ReLU: autograd saves this output for backward
    x += skip         # in-place addition overwrites that saved output
    # Error at backward()!
    return x

The two consecutive in-place operations above will produce an error: the in-place ReLU saves its output for the backward pass, and the in-place addition then modifies that saved tensor, so autograd raises a RuntimeError when backward() is called.
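
A minimal, self-contained sketch (the module name, layer size, and input here are made up for illustration) that reproduces the failure at backward():

    import torch
    import torch.nn as nn

    class BadBlock(nn.Module):
        def __init__(self):
            super().__init__()
            self.linear = nn.Linear(4, 4)
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            x = self.linear(x)
            skip = x
            x = self.relu(x)  # in-place ReLU: its output is saved for backward
            x += skip         # second in-place op modifies that saved output
            return x

    x = torch.randn(2, 4)
    loss = BadBlock()(x).sum()
    loss.backward()  # RuntimeError: one of the variables needed for gradient
                     # computation has been modified by an inplace operation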

However, it is fine to do the addition first and then apply the activation function with inplace=True:

def forward(self, x):
    skip = x
    x += skip         # in-place addition: add's backward needs no saved tensors
    x = self.relu(x)  # in-place ReLU comes last, so its saved output is never modified
    # No error!
    return x
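
This add-then-activate ordering matches the residual-block pattern used in torchvision's ResNet implementation, where the skip connection is added to the branch output first and nn.ReLU(inplace=True) is applied afterwards, which is why inplace=True is safe there.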
Sofiasofie answered 11/11, 2021 at 6:50
