How to load a saved tokenizer from a pretrained model
I fine-tuned a pretrained BERT model in PyTorch using the Hugging Face Transformers library. All the training/validation was done on a GPU in the cloud.

At the end of training, I save the model and tokenizer as follows:

best_model.save_pretrained('./saved_model/')
tokenizer.save_pretrained('./saved_model/')

This creates the following files in the saved_model directory:

config.json
added_tokens.json
special_tokens_map.json
tokenizer_config.json
vocab.txt
pytorch_model.bin

Now I download the saved_model directory to my computer and want to load the model and tokenizer. I can load the model like this:

model = torch.load('./saved_model/pytorch_model.bin',map_location=torch.device('cpu'))

But how do I load the tokenizer? I am new to PyTorch and unsure how, because there are multiple files. Or perhaps I am not saving the model the right way?

Huxham answered 16/10, 2019 at 15:57 Comment(3)
tokenizer = BertTokenizer.from_pretrained('./saved_model/') – Faust
I am sure that it must work. Let me know if it works, and I shall post it as an answer. – Faust
@AshwinGeetD'Sa yes, post it as an answer and I will mark it. – Huxham
If you look at the syntax, it is the directory of the pretrained model that you are supposed to pass. Hence, the correct way to load the tokenizer is:

tokenizer = BertTokenizer.from_pretrained(<Path to the directory containing pretrained model/tokenizer>)

In your case:

tokenizer = BertTokenizer.from_pretrained('./saved_model/')

./saved_model here is the directory where you saved your pretrained model and tokenizer.
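
For completeness, the model itself is best loaded the same way, via from_pretrained on the directory, rather than torch.load on the .bin file. A minimal sketch, assuming the fine-tuned model is a BERT sequence classifier (swap in whatever model class you actually trained):

from transformers import BertForSequenceClassification, BertTokenizer

# Both calls read the files produced by save_pretrained()
model = BertForSequenceClassification.from_pretrained('./saved_model/')
tokenizer = BertTokenizer.from_pretrained('./saved_model/')
model.eval()  # switch to inference mode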

Acetal answered 17/10, 2019 at 8:0 Comment(6)
How do I make this reference to a local model work in Docker? I'm putting my model and tokenizer in a folder called "./saved" and I get the following error. It looks like Docker is still looking for the config, model, and tokenizer files on Hugging Face. – Deboradeborah
404 Client Error: Not Found for url: huggingface.co/saved/resolve/main/config.json Traceback (most recent call last): File "/usr/local/lib/python3.7/site-packages/transformers/configuration_utils.py", line 505, in get_config_dict user_agent=user_agent, File "/usr/local/lib/python3.7/site-packages/transformers/file_utils.py", line 1337, – Deboradeborah
Any idea how we can do the same thing in a Scala Spark NLP implementation? I am trying to load my own tokenizer into the pipeline but keep running into compatibility issues. – Bose
I have never used Scala or Spark. Sorry about that :( – Faust
What if it's not BERT? Is there a way to autodetect? – Melosa
Instead of 'BertTokenizer', use 'AutoTokenizer'. – Faust

If you are loading the model from local files, don't forget local_files_only=True:

from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("your_local_address", local_files_only=True)
model = AutoModel.from_pretrained("your_local_address", local_files_only=True)
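
As a quick sanity check after loading, you can tokenize a sentence and run a forward pass. The input text here is arbitrary, and the output shape assumes a BERT-base-sized encoder loaded with AutoModel as above:

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, num_tokens, hidden_size), e.g. hidden_size=768 for BERT base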
Condorcet answered 28/11, 2023 at 7:13 Comment(0)
