I want to do cross-validation in my PyTorch project, but I couldn't find any method that PyTorch provides to delete the current model and free the GPU memory. Could you tell me how to do it?
PyTorch: delete model from GPU
I need to do the same thing because I want to train several models (one after another) in the same process. Did you find the answer? – Kizer
Freeing memory in PyTorch works as it does with the normal Python garbage collector: once all references to a Python object are gone, it will be deleted.
You can delete references by using the del operator:
del model
You have to make sure, though, that no reference to the respective object is left; otherwise the memory won't be freed.
So once you've deleted all references to your model, it should be deleted and the memory freed.
If you want to learn more about memory management you can take a look here: https://pytorch.org/docs/stable/notes/cuda.html#cuda-memory-management
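As a minimal sketch of that workflow (the toy layer size and the torch.cuda.memory_allocated() checks are illustrative additions, and a CUDA device is assumed):

import gc

import torch
import torch.nn as nn

model = nn.Linear(4096, 4096).cuda()   # ~64 MB of parameters on the GPU
print(torch.cuda.memory_allocated())   # non-zero: the weights live on the GPU

del model                              # drop the only reference to the model
gc.collect()                           # make sure the Python object is collected
print(torch.cuda.memory_allocated())   # back to 0: the tensors were freed

torch.cuda.empty_cache()               # optionally hand cached blocks back to the driver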
Does moving models/tensors/losses to the CPU with e.g. model.cpu() remove them from the GPU? – Neoimpressionism
@Neoimpressionism Not directly, but when you delete all references to the GPU model, it should be freed automatically. This works as with all other Python variables. – Seavir
Hmm, OK, I find that a little odd. I guess torch wouldn't have to re-allocate its memory on the GPU if you send it back, but if you want to keep its values (e.g. on the CPU, or to pass them to another GPU) and don't need it on the original GPU any more, you'd want a way to free that memory. – Neoimpressionism
@Neoimpressionism You can take a look at the docs for Tensor.cpu(): "Returns a copy of this object in CPU memory" (if not already on the CPU). As said above: if you want to free the memory on the GPU, you need to get rid of all references pointing to the GPU object; then it will be freed automatically. So assuming model is on the GPU, model = model.cpu() will free the GPU memory if you don't keep any other references to model, but model_cpu = model.cpu() will keep your GPU model. – Seavir
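One caveat: the quoted "returns a copy" behaviour is that of Tensor.cpu(); for an nn.Module, .cpu() moves the parameters in place and returns the same module object, so the rebind-vs-copy distinction only holds for tensors. A minimal sketch with plain tensors (sizes and memory checks are illustrative, and a CUDA device is assumed):

import torch

t = torch.empty(1024, 1024, device="cuda")   # ~4 MB on the GPU
print(torch.cuda.memory_allocated())         # non-zero

t = t.cpu()                                  # rebind: the GPU copy loses its last reference
print(torch.cuda.memory_allocated())         # 0: the GPU memory was freed

t_gpu = torch.empty(1024, 1024, device="cuda")
t_cpu = t_gpu.cpu()                          # copy to CPU; t_gpu still references the GPU tensor
print(torch.cuda.memory_allocated())         # still non-zero: the GPU copy is kept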
Ah, yes, makes sense. Might be handy to mention that you can free GPU memory with model.cpu() if that's the only reference to its GPU data. Likely others will have a use case similar to mine (i.e. wanting to keep the data but get it off the GPU). Incidentally, would backprop info retain a reference? – Neoimpressionism
Neoimpressionism I prefer to follow the below steps rather than just doing del model_object
import gc      # Python's garbage-collection interface
import torch

model_object = My_Network().cuda()   # My_Network is your own model class

del model_object   # delete the reference to the model
# The freed blocks stay in PyTorch's cache until other objects take their
# place, so also execute the lines below:
gc.collect()                 # collect any lingering Python references
torch.cuda.empty_cache()     # release cached blocks back to the GPU
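Applied to the cross-validation setting from the question, this recipe might look like the sketch below; My_Network is a placeholder for your own model class, and train_one_fold is a hypothetical training routine, not an existing API:

import gc

import torch
import torch.nn as nn

class My_Network(nn.Module):   # placeholder; substitute your real model
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(128, 2)

    def forward(self, x):
        return self.fc(x)

def run_cross_validation(folds):
    scores = []
    for fold_data in folds:
        model = My_Network().cuda()                       # fresh model for this fold
        scores.append(train_one_fold(model, fold_data))   # hypothetical training routine
        del model                                         # drop this fold's model
        gc.collect()                                      # collect it before the next allocation
        torch.cuda.empty_cache()                          # return cached blocks to the GPU
    return scores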