huggingface Questions

2

Solved

I'm trying to replicate the code from this Hugging Face blog. First I installed transformers and created a token to log in to the Hugging Face Hub: pip install transformers huggingface-cli login A...
Shellback asked 30/8, 2023 at 9:34
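
A minimal sketch of that setup, assuming authentication through huggingface_hub (the token value is a placeholder):

    # pip install transformers huggingface_hub
    from huggingface_hub import login

    login(token="hf_...")  # or run `huggingface-cli login` in a terminal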

2

Solved

I am currently using the diffusers StableDiffusionPipeline (from Hugging Face) to generate AI images with a Discord bot that I use with my friends. I was wondering if it was possible to get a prev...
Bighead asked 9/11, 2022 at 1:45
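
A hedged sketch of getting intermediate previews, assuming a diffusers version whose pipeline call still accepts callback/callback_steps (newer releases use callback_on_step_end instead); 0.18215 is the Stable Diffusion VAE scaling factor:

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    def preview(step, timestep, latents):
        # Decode the intermediate latents into a PIL image and save it.
        with torch.no_grad():
            image = pipe.vae.decode(latents / 0.18215).sample
            image = (image / 2 + 0.5).clamp(0, 1)
            image = image.cpu().permute(0, 2, 3, 1).float().numpy()
            pipe.numpy_to_pil(image)[0].save(f"preview_step_{step}.png")

    pipe("a castle at sunset", callback=preview, callback_steps=10)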

1

First, I want to say that I don't have much experience with PyTorch, ML, NLP, and other related topics, so I may confuse some concepts. Sorry. I downloaded a few models from Hugging Face, organized the...
Haggai asked 23/4, 2023 at 10:2
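
A minimal sketch, assuming the question is about loading those downloaded models from a local directory (the path is hypothetical and must contain the config, weight, and tokenizer files from the Hub):

    from transformers import AutoModel, AutoTokenizer

    local_dir = "./models/bert-base-uncased"   # hypothetical local folder

    tokenizer = AutoTokenizer.from_pretrained(local_dir)
    model = AutoModel.from_pretrained(local_dir)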

10

I ran into this problem when trying to import the following libraries; it gives the error "ImportError: cannot import name 'VectorStoreIndex' from 'llama_index' (unknown location)...
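
A hedged sketch, assuming the cause is the llama-index 0.10 package split, after which the top-level classes moved to llama_index.core:

    # pip install -U llama-index
    from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

    documents = SimpleDirectoryReader("data").load_data()  # "data" is a hypothetical folder
    index = VectorStoreIndex.from_documents(documents)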

4

Given a transformer model on Hugging Face, how do I find the maximum input sequence length? For example, here I want to truncate to the max_length of the model: tokenizer(examples["text"],...
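
A minimal sketch: tokenizer.model_max_length usually reports the limit (some tokenizers return a very large sentinel value, in which case model.config.max_position_embeddings is the fallback to check):

    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    print(tokenizer.model_max_length)  # 512 for BERT

    texts = ["some example text", "another example"]
    encoded = tokenizer(texts, truncation=True, max_length=tokenizer.model_max_length)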

2

I have fine-tuned the Llama-2 model following the llama-recipes repository's tutorial. Currently, I have the pretrained model and fine-tuned adapter stored in two separate directories as follows: P...
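
A hedged sketch of combining the two directories with PEFT (the paths are placeholders):

    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base_dir = "path/to/llama-2-7b-base"        # pretrained model directory
    adapter_dir = "path/to/finetuned-adapter"   # fine-tuned adapter directory

    base = AutoModelForCausalLM.from_pretrained(base_dir)
    model = PeftModel.from_pretrained(base, adapter_dir)  # attach the adapter
    model = model.merge_and_unload()  # optionally fold the adapter into the base weights

    tokenizer = AutoTokenizer.from_pretrained(base_dir)
    model.save_pretrained("path/to/merged-model")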

3

Solved

Take a simple example from this website, https://huggingface.co/datasets/Dahoas/rm-static: if I want to load this dataset online, I just directly use: from datasets import load_dataset dataset = load...
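
A minimal sketch of both directions, assuming the local copy is a clone of the dataset repo with parquet files under data/:

    from datasets import load_dataset

    # Online: resolved from the Hub by repo id.
    dataset = load_dataset("Dahoas/rm-static")

    # Local: point load_dataset at the data files directly.
    local = load_dataset("parquet", data_files={"train": "rm-static/data/train-*.parquet"})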

2

I'm confused about the technical difference between the two Hugging Face pipelines TextGeneration and Text2TextGeneration. For TextGeneration it is stated that: Language generation pipeline usin...
Amortization asked 24/7, 2023 at 22:7
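
In short, text-generation wraps decoder-only (causal) models that continue the prompt, while text2text-generation wraps encoder-decoder models (T5, BART, ...) that produce a new sequence conditioned on the input. A minimal sketch of both:

    from transformers import pipeline

    # Decoder-only / causal LM: the output continues the prompt.
    generator = pipeline("text-generation", model="gpt2")
    print(generator("Once upon a time", max_new_tokens=20)[0]["generated_text"])

    # Encoder-decoder / seq2seq LM: the output is generated from scratch, conditioned on the input.
    seq2seq = pipeline("text2text-generation", model="google/flan-t5-small")
    print(seq2seq("Translate to German: Good morning")[0]["generated_text"])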

2

Solved

I was using this AutoTrain Colab, and when I labelled my images, put them into the images folder, and tried to run it, it gives this error. How do I solve it? To reproduce: click the link of the ipynb, make...

4

Solved

I'm using the transformers library in Google Colab, and when I use TrainingArguments from the transformers library I get an ImportError with this code: from transformers import TrainingArgumen...

4

Microsoft presented its new library Semantic Kernel for building your own chat programs like ChatGPT with .NET. Their documentation says that you can either use OpenAI's LLM, Azure LLM, or Hu...
Seguidilla asked 15/9, 2023 at 7:57

5

Solved

tldr: what I really want to know is the official way to set the pad token for fine-tuning when it wasn't set during the original training, so that the model doesn't fail to learn to predict EOS. colab: https:...
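
A sketch of one commonly recommended pattern (whether it counts as "official" depends on the model): add a dedicated pad token instead of reusing EOS, so EOS is never masked out as padding during training:

    from transformers import AutoTokenizer, AutoModelForCausalLM

    model_name = "meta-llama/Llama-2-7b-hf"  # example of a model shipped without a pad token
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    tokenizer.add_special_tokens({"pad_token": "<pad>"})
    model.resize_token_embeddings(len(tokenizer))       # make room for the new token
    model.config.pad_token_id = tokenizer.pad_token_id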

2

I'm trying to run through the 🤗 LoRA tutorial. I've pulled the dataset down, trained on it, and have checkpoints on disk (in the form of several subdirectories and .safetensors files). The last...
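
Assuming this is the diffusers text-to-image LoRA tutorial, a hedged sketch of the usual last step, attaching the trained weights to a pipeline (the checkpoint path is a placeholder):

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    pipe.load_lora_weights("path/to/lora-output")  # directory produced by the training script

    image = pipe("a photo in the fine-tuned style").images[0]
    image.save("lora_sample.png")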

1

I'm trying to use the accelerate module to parallelize my model training, but I have trouble using it when training models with fp16. If I load the model with torch_dtype=torch.float16, I get ValueEr...
Blastosphere asked 21/3, 2023 at 15:2
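
A hedged sketch of the usual pattern: keep the weights in full precision and let Accelerate handle the fp16 casting, rather than loading with torch_dtype=torch.float16:

    import torch
    from accelerate import Accelerator
    from transformers import AutoModelForCausalLM

    accelerator = Accelerator(mixed_precision="fp16")  # or set this via `accelerate config`

    model = AutoModelForCausalLM.from_pretrained("gpt2")  # fp32 weights
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

    model, optimizer = accelerator.prepare(model, optimizer)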

5

I always get the "Potential NSFW content was detected in one or more images. A black image will be returned instead. Try again with a different prompt and/or seed." error when using stabl...
Heliotropin asked 23/9, 2022 at 12:59
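
One commonly shown workaround (the checker does produce false positives) is to construct the pipeline without the safety checker; a hedged sketch, to be used responsibly:

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,
        safety_checker=None,            # disables the NSFW filter
        requires_safety_checker=False,  # silences the related warning on recent versions
    ).to("cuda")

    image = pipe("a landscape painting").images[0]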

1

I'd like to fine-tune StarCoder (https://huggingface.co/bigcode/starcoder) on my dataset on a GCP VM instance. The documentation says that for training the model they used 512 Tesla A10...

3

Solved

I am looking at a few different examples of using PEFT on different models. The LoraConfig object contains a target_modules array. In some examples, the target modules are ["query_key_value"...
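
target_modules names the submodules of that particular architecture into which LoRA layers are injected, which is why it differs between model families (a fused "query_key_value" for Falcon/BLOOM-style attention vs. split "q_proj"/"v_proj" for LLaMA/OPT-style). A minimal sketch on OPT:

    from peft import LoraConfig, get_peft_model
    from transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

    config = LoraConfig(
        r=8,
        lora_alpha=16,
        target_modules=["q_proj", "v_proj"],  # module names as they appear in this model
        lora_dropout=0.05,
        bias="none",
        task_type="CAUSAL_LM",
    )
    peft_model = get_peft_model(model, config)
    peft_model.print_trainable_parameters()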

2

I'm very new to generative AI. I have 64 GB of RAM and a 20 GB GPU. I used an open-source model from Hugging Face and used Python to simply prompt the out-of-the-box model and display the result. I dow...

6

I am facing the issue below while loading a pretrained BERT model from Hugging Face, due to an SSL certificate error. Error: SSLError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries ex...
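
If the error comes from a proxy or corporate CA intercepting TLS, a hedged workaround is to point the underlying requests stack at the right CA bundle (the path is hypothetical) instead of disabling verification:

    import os

    os.environ["REQUESTS_CA_BUNDLE"] = "/path/to/company-ca-bundle.crt"
    os.environ["CURL_CA_BUNDLE"] = "/path/to/company-ca-bundle.crt"

    from transformers import AutoModel
    model = AutoModel.from_pretrained("bert-base-uncased")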

1

I have 10 code repositories in JavaScript (VueJS), where each repository corresponds to one theme. I want to train an LLM on these 10 code repositories to generate new themes from prompts. The LLM m...
Ovolo asked 14/6, 2023 at 7:54

3

Say I have the following model (from this script): from transformers import AutoTokenizer, GPT2LMHeadModel, AutoConfig config = AutoConfig.from_pretrained( "gpt2", vocab_size=len(token...
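
A hedged completion of the quoted snippet (the extra config values are illustrative): the point is that passing the config to the class constructor builds a freshly initialised model with that architecture, unlike from_pretrained, which would load trained weights:

    from transformers import AutoConfig, AutoTokenizer, GPT2LMHeadModel

    tokenizer = AutoTokenizer.from_pretrained("gpt2")

    config = AutoConfig.from_pretrained(
        "gpt2",
        vocab_size=len(tokenizer),
        bos_token_id=tokenizer.bos_token_id,
        eos_token_id=tokenizer.eos_token_id,
    )

    model = GPT2LMHeadModel(config)  # random init; not the pretrained GPT-2 weights
    print(f"{model.num_parameters() / 1e6:.1f}M parameters")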

4

Is there any way of getting sentence embeddings from meta-llama/Llama-2-13b-chat-hf on Hugging Face? Model link: https://huggingface.co/meta-llama/Llama-2-13b-chat-hf I tried using transformers.Auto...
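
Llama-2 is not trained as an embedding model, but a common workaround is to load it with AutoModel and mean-pool the last hidden states over non-padding tokens; a hedged sketch (gated model, so an access token is required):

    import torch
    from transformers import AutoModel, AutoTokenizer

    name = "meta-llama/Llama-2-13b-chat-hf"
    tokenizer = AutoTokenizer.from_pretrained(name)
    tokenizer.pad_token = tokenizer.eos_token  # Llama-2 ships without a pad token
    model = AutoModel.from_pretrained(name, torch_dtype=torch.float16, device_map="auto")

    sentences = ["The cat sat on the mat.", "A dog slept in the sun."]
    batch = tokenizer(sentences, padding=True, return_tensors="pt").to(model.device)

    with torch.no_grad():
        hidden = model(**batch).last_hidden_state   # (batch, seq_len, hidden_dim)

    mask = batch["attention_mask"].unsqueeze(-1)    # zero out padding positions
    embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # mean pooling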

1

I am trying to run the code from this Hugging Face blog. At first I had no access to the model, so I got this error: OSError: meta-llama/Llama-2-7b-chat-hf is not a local folder. That is now solved, and I crea...
Hexapody asked 30/8, 2023 at 11:11

1

Solved

I'm trying to load the (peoples speech) dataset, but it's way too big; is there a way to download only a part of it? from datasets import load_dataset train = load...
Deferred asked 17/2, 2023 at 7:8
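
Two hedged options, assuming the Hub id MLCommons/peoples_speech (the dataset also has named configurations, which may need to be passed as a second argument):

    from datasets import load_dataset

    # Option 1: split slicing gives a small Dataset, though the files for that
    # split are still downloaded.
    train_small = load_dataset("MLCommons/peoples_speech", split="train[:1%]")

    # Option 2: streaming downloads nothing up front; records arrive lazily.
    stream = load_dataset("MLCommons/peoples_speech", split="train", streaming=True)
    first_100 = list(stream.take(100))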

3

I was going through the Falcon QLoRA tutorial and I saw this: def get_model_tokenizer_qlora_falcon7b(model_name: str = "ybelkada/falcon-7b-sharded-bf16", config: wand.Config, # todo lor...
Countersubject asked 7/7, 2023 at 1:0
