How to remove a transformers model from GPU memory
from transformers import CTRLTokenizer, TFCTRLLMHeadModel, pipeline

tokenizer_ctrl = CTRLTokenizer.from_pretrained('ctrl', cache_dir='./cache', local_files_only=True)
model_ctrl = TFCTRLLMHeadModel.from_pretrained('ctrl', cache_dir='./cache', local_files_only=True)
print(tokenizer_ctrl)
gen_nlp = pipeline("text-generation", model=model_ctrl, tokenizer=tokenizer_ctrl, device=1, return_full_text=False)

Hello, my code loads a transformers model (CTRL in this example) into GPU memory. How can I remove it from the GPU after use, to free up GPU memory?

Should I use torch.cuda.empty_cache()?

Thanks.

Analog answered 28/9, 2021 at 7:59 Comment(0)

You can simply del model_ctrl (the object that actually holds the weights on the GPU; del tokenizer_ctrl too if you no longer need it) and then use torch.cuda.empty_cache().

See this PyTorch forum thread discussing it.
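
A minimal sketch of that approach, assuming the variable names from the question and a PyTorch model on the GPU (torch.cuda.empty_cache() only affects PyTorch's CUDA allocator, not TensorFlow's, so the PyTorch CTRLLMHeadModel class is used here for illustration):

import torch
from transformers import CTRLLMHeadModel  # PyTorch variant, assumed for illustration

# load the model onto the second GPU, as in the question
model_ctrl = CTRLLMHeadModel.from_pretrained('ctrl', cache_dir='./cache').to('cuda:1')
# ... run generation ...

del model_ctrl            # drop the last Python reference to the weights
torch.cuda.empty_cache()  # release PyTorch's cached, now-unused GPU blocks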

Layer answered 28/9, 2021 at 10:00 Comment(1)
It didn't work for me; I think we need some extra step for models. It works correctly for tensors, but it doesn't work for models. – Luo

@pierlj's solution doesn't seem to work for transformer models; however, this solution from the thread they linked works for me:

import gc
import torch

del model                 # drop the last reference to the model
gc.collect()              # force Python to collect it (it may sit in a reference cycle)
torch.cuda.empty_cache()  # hand PyTorch's cached GPU memory back to the driver
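
As a further hedged variant, if CUDA memory still appears occupied, moving the model off the GPU before deleting it guarantees no parameter tensors survive on the device (model here is assumed to be a PyTorch model, as in the snippet above):

import gc
import torch

model.cpu()               # move all parameters off the GPU first (assumes a PyTorch model)
del model                 # then drop the reference
gc.collect()              # collect any reference cycles still holding tensors
torch.cuda.empty_cache()  # return cached GPU blocks to the driver

# optional sanity check: allocated bytes should drop to (near) zero
print(torch.cuda.memory_allocated())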
Snuck answered 12/3 at 20:27 Comment(0)
