I am trying to run the code from this Hugging Face blog. At first I had no access to the model, which caused this error: OSError: meta-llama/Llama-2-7b-chat-hf is not a local folder. That is now solved: I created an access token from Hugging Face and it works. Now I'm facing a different error when running the following code:
from transformers import AutoTokenizer
import transformers
import torch

model = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model, use_auth_token=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
sequences = pipeline(
    'I liked "Breaking Bad" and "Band of Brothers". Do you have any recommendations of other shows I might like?\n',
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=200,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
Error:
ValueError: Tokenizer class LlamaTokenizer does not exist or is not currently imported.
This error is not the same as this one: ImportError: cannot import name 'LLaMATokenizer' from 'transformers', because now it is a ValueError. To make sure I'm using the right version, I ran:
pip install git+https://github.com/huggingface/transformers
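To double-check the install, I also ran something like the following snippet, which just prints the installed transformers version and whether the LlamaTokenizer class is exposed at all (the Llama classes were only added in transformers 4.28, so this seemed like a reasonable sanity check):

```python
import importlib.util

# Sanity check: is transformers installed, and does it expose
# the LlamaTokenizer class (added in transformers 4.28)?
spec = importlib.util.find_spec("transformers")
if spec is None:
    print("transformers is not installed")
else:
    import transformers
    print("transformers version:", transformers.__version__)
    print("LlamaTokenizer available:", hasattr(transformers, "LlamaTokenizer"))
```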
After that I checked this issue: ValueError: Tokenizer class LLaMATokenizer does not exist or is not currently imported. #22222. It suggests:

Change the LLaMATokenizer in tokenizer_config.json into lowercase LlamaTokenizer and it works like a charm.
So I checked the files to see whether they use LLamaTokenizer instead of LlamaTokenizer, for example here (this is the class in the file):
class LlamaTokenizer(PreTrainedTokenizer):
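For reference, this is roughly how I'm inspecting the cached model files to see which tokenizer class each tokenizer_config.json actually declares. It assumes the default ~/.cache/huggingface cache location (adjust if HF_HOME is set), and find_tokenizer_classes is just a helper name I made up:

```python
import json
from pathlib import Path

def find_tokenizer_classes(root):
    """Map each tokenizer_config.json found under root to its declared tokenizer_class."""
    results = {}
    for cfg in Path(root).rglob("tokenizer_config.json"):
        data = json.loads(cfg.read_text())
        results[str(cfg)] = data.get("tokenizer_class")
    return results

# Default Hugging Face cache location; adjust if HF_HOME is set.
for path, cls in find_tokenizer_classes(Path.home() / ".cache" / "huggingface").items():
    print(path, "->", cls)
```

If any of these printed LLaMATokenizer, editing it to LlamaTokenizer as the issue suggests would presumably fix the ValueError, but in my case I didn't find that spelling.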
Does anyone know how to fix this error?