According to the discussions on the PyTorch forum:
- What’s the difference between nn.ReLU() and nn.ReLU(inplace=True)?
- Guidelines for when and why one should set inplace = True?
The purpose of inplace=True is to modify the input in place, without allocating memory for an additional tensor that holds the result of the operation.
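As a minimal illustration, nn.ReLU(inplace=True) returns the very same tensor object it received, while the default variant allocates a new one:

```python
import torch
import torch.nn as nn

x = torch.randn(4)
out = nn.ReLU()(x)               # default: the result lives in a new tensor
print(out is x)                  # False, extra memory was allocated

x = torch.randn(4)
out = nn.ReLU(inplace=True)(x)   # x itself is overwritten with the result
print(out is x)                  # True, no additional tensor was created
```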
This makes memory usage more efficient, but it can break the backward pass, at least when the operation discards information: the backpropagation algorithm needs the intermediate activations saved during the forward pass in order to update the weights, and an in-place operation overwrites them. When autograd detects that a tensor it saved for the backward pass was modified in place, it raises a RuntimeError, as the sketch below shows.
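A minimal sketch of this failure mode, using torch.exp (chosen for illustration because its backward pass relies on the saved output):

```python
import torch

x = torch.randn(3, requires_grad=True)
y = torch.exp(x)      # autograd saves y: the backward of exp is grad * y
torch.relu_(y)        # in-place ReLU overwrites the saved activation

try:
    y.sum().backward()
except RuntimeError as e:
    # "... one of the variables needed for gradient computation
    #  has been modified by an inplace operation ..."
    print(e)
```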
Can one say that this mode should be turned on in layers only if the model is already trained and one doesn't want to modify it anymore?