huggingface-transformers Questions
2
Solved
I'm trying to replicate the code from this Hugging Face blog. First I installed transformers and created a token to log in to the Hugging Face Hub:
pip install transformers
huggingface-cli login
A...
Shellback asked 30/8, 2023 at 9:34
4
I want to fine-tune LaBSE for question answering using the SQuAD dataset, and I got this error:
ValueError: The model did not return a loss from the inputs, only the following keys: last_hidden_state,p...
Barbital asked 9/8, 2022 at 10:43
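A likely cause, sketched below: the bare LaBSE encoder has no task head, so it returns only hidden states and no loss for the Trainer to use. Loading the checkpoint through a question-answering head class gives the model start/end logits and a loss (a hedged sketch; "setu4993/LaBSE" is one public LaBSE checkpoint, substitute your own):
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
# The QA head is newly initialized and must be fine-tuned; the base encoder
# alone returns only last_hidden_state and pooler_output.
tokenizer = AutoTokenizer.from_pretrained("setu4993/LaBSE")
model = AutoModelForQuestionAnswering.from_pretrained("setu4993/LaBSE")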
1
First I want to say that I don't have much experience with PyTorch, ML, NLP, and other related topics, so I may confuse some concepts. Sorry.
I downloaded a few models from Hugging Face, organized the...
Haggai asked 23/4, 2023 at 10:2
3
Solved
I am trying to fine-tune the BERT language model on my own data. I've gone through their docs, but the tasks they cover don't seem to be quite what I need, since my end goal is embedding text. Here's my code:...
Reunite asked 17/2, 2022 at 23:45
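If the end goal is text embeddings, a common recipe (a sketch, not necessarily the asker's setup) is to load the bare encoder and mean-pool the last hidden state over non-padding tokens:
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    # Mean-pool token vectors, ignoring padding positions via the attention mask.
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc)
    mask = enc["attention_mask"].unsqueeze(-1).float()
    return (out.last_hidden_state * mask).sum(1) / mask.sum(1)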
6
Solved
Context: I am trying to query Llama-2 7B, taken from HuggingFace (meta-llama/Llama-2-7b-hf). I give it a question and context (I would guess anywhere from 200-1000 tokens), and ask it to answer the...
Hillis asked 26/7, 2023 at 14:46
4
Given a transformer model on huggingface, how do I find the maximum input sequence length?
For example, here I want to truncate to the max_length of the model: tokenizer(examples["text"],...
Norvan asked 24/6, 2023 at 18:45
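Two places to look, assuming a standard checkpoint like bert-base-uncased: the tokenizer's model_max_length and the config's max_position_embeddings. They usually agree, but tokenizers for some checkpoints report a huge sentinel value, so checking both is safer:
from transformers import AutoConfig, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
print(tokenizer.model_max_length)        # 512 for this checkpoint

config = AutoConfig.from_pretrained("bert-base-uncased")
print(config.max_position_embeddings)    # 512

# Then truncate to it, as in the question:
# tokenizer(examples["text"], truncation=True, max_length=tokenizer.model_max_length)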
8
Solved
The disk holding the default cache directory is short on capacity, so I need to change where the default cache directory points.
Trishtrisha asked 8/8, 2020 at 7:28
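Two common ways to move the cache, sketched here with a hypothetical /mnt/big_disk path: set the HF_HOME environment variable before transformers is imported, or pass cache_dir per call:
import os
os.environ["HF_HOME"] = "/mnt/big_disk/hf_cache"   # must be set before importing transformers

from transformers import AutoModel
model = AutoModel.from_pretrained("bert-base-uncased", cache_dir="/mnt/big_disk/hf_cache")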
2
I have fine-tuned the Llama-2 model following the llama-recipes repository's tutorial. Currently, I have the pretrained model and fine-tuned adapter stored in two separate directories as follows:
P...
Ecotype asked 23/9, 2023 at 21:20
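One way to combine the two directories, assuming the adapter was trained with PEFT (paths below are placeholders): load the base model, attach the adapter with PeftModel, and merge the LoRA weights back into the base:
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("/path/to/base-model")
model = PeftModel.from_pretrained(base, "/path/to/adapter")
merged = model.merge_and_unload()          # folds the LoRA deltas into the base weights
merged.save_pretrained("/path/to/merged")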
3
Solved
I am trying to test Hugging Face's prithivida/parrot_paraphraser_on_T5 model but am getting a token-not-found error.
from parrot import Parrot
import torch
import warnings
warnings.filterwarnings("...
Reduplicate asked 27/11, 2022 at 20:36
2
I'm confused about the technical difference between the two Hugging Face pipelines, TextGeneration and Text2TextGeneration.
The TextGeneration docs state:
Language generation pipeline usin...
Amortization asked 24/7, 2023 at 22:7
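The short version: text-generation wraps decoder-only (causal) models that continue the prompt, while text2text-generation wraps encoder-decoder models that emit a fresh output sequence conditioned on the input. A sketch contrasting the two (model choices are just illustrative):
from transformers import pipeline

gen = pipeline("text-generation", model="gpt2")            # causal: continues the prompt
print(gen("The difference is", max_new_tokens=20))

t2t = pipeline("text2text-generation", model="google/flan-t5-small")  # seq2seq: new output
print(t2t("Translate English to German: Hello, world"))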
2
Solved
I would like to fine-tune an already fine-tuned BertForSequenceClassification model with a new dataset containing just one additional label that the model hasn't seen before.
By that, I would like to...
Unclinch asked 19/4, 2021 at 8:32
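One approach, sketched under the assumption that the old head had three labels: reload the checkpoint with num_labels increased by one and ignore_mismatched_sizes=True, which discards the old classification head and initializes a fresh one that must be re-trained on all labels:
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained(
    "path/to/finetuned-checkpoint",   # placeholder path
    num_labels=4,                     # assumption: 3 old labels + 1 new one
    ignore_mismatched_sizes=True,     # drop the old 3-way head, init a fresh 4-way head
)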
4
Solved
I've been looking to use Hugging Face's Pipelines for NER (named entity recognition). However, it is returning the entity labels in inside-outside-beginning (IOB) format but without the IOB labels....
Dahl asked 30/3, 2020 at 18:58
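Recent transformers versions can merge the word pieces and B-/I- tags into whole entity spans via aggregation_strategy (older releases called this grouped_entities=True); a minimal sketch:
from transformers import pipeline

ner = pipeline("ner", aggregation_strategy="simple")  # merges IOB pieces into spans
print(ner("Hugging Face is based in New York City."))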
4
Solved
I'm using the transformers library in Google Colab, and
when I use TrainingArguments from the transformers library I get an ImportError with this code:
from transformers import TrainingArgumen...
Clomp asked 10/6, 2023 at 21:51
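If this is the common Colab failure where Trainer/TrainingArguments demands a newer accelerate than the preinstalled one (an assumption; the truncated error hides the details), upgrading and then restarting the runtime usually resolves it:
pip install -U accelerate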
13
Solved
For example, I want to download bert-base-uncased on https://huggingface.co/models, but can't find a 'Download' link. Or is it not downloadable?
Swank asked 19/5, 2021 at 0:34
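There is no single "Download" button, but the Hub client library can fetch a whole repository; a minimal sketch:
from huggingface_hub import snapshot_download

# Downloads every file in the repo and returns the local directory path.
local_dir = snapshot_download("bert-base-uncased")
print(local_dir)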
4
Is there a method for converting Hugging Face Transformer embeddings back to text?
Suppose that I have text embeddings created using Hugging Face's ClipTextModel using the following method:
import ...
Examination asked 6/11, 2022 at 11:45
11
Solved
This is my first post and I am new to coding, so please let me know if you need more information. I have been running some AI to generate artwork and it has been working, but when I reloaded it the...
Opulence asked 6/2, 2022 at 21:54
3
My code works, but I am getting this warning; how can I avoid it?
All model checkpoint layers were used when initializing TFRobertaForSequenceClassification.
All the layers of TFRobertaForSe...
Nebiim asked 3/8, 2022 at 11:54
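That message is informational (the checkpoint matched the architecture exactly). If you just want it gone, lowering the transformers log level before loading the model silences it; a minimal sketch:
from transformers import logging
logging.set_verbosity_error()   # hide INFO/WARNING messages from transformers
# ...then load TFRobertaForSequenceClassification as before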
4
I'm trying to fine-tune llama2-13b-chat-hf with an open-source dataset.
I've always used this template, but now I'm getting this error:
ImportError: Using bitsandbytes 8-bit quantization requires Acce...
Sulfaguanidine asked 22/2 at 12:37
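The error message itself names the fix: 8-bit loading via bitsandbytes needs the Accelerate package (and a current bitsandbytes) installed in the same environment:
pip install -U accelerate bitsandbytes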
5
Solved
**tl;dr: what I really want to know is the official way to set the pad token for fine-tuning when it wasn't set during the original training, so that the model doesn't fail to learn to predict EOS.**
colab: https:...
Hernandes asked 7/7, 2023 at 1:11
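One commonly used approach (a sketch, not necessarily the single official way the asker is after): add a dedicated pad token rather than aliasing it to EOS, so EOS remains a real prediction target, then resize the embeddings to match:
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

tok.add_special_tokens({"pad_token": "<pad>"})   # distinct from EOS
model.resize_token_embeddings(len(tok))          # make room for the new token
model.config.pad_token_id = tok.pad_token_id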
2
I'm trying to run through the 🤗 LoRA tutorial. I've gotten the dataset pulled down, trained it and have checkpoints on disk (in the form of several subdirectories and .safetensors files).
The last...
Shaw asked 25/2 at 15:26
4
I'm trying to evaluate several transformers models sequentially with the same dataset to check which one performs best.
The list of models is this one:
MODELS = [
('xlm-mlm-enfr-1024' ,"XLMM...
Ep asked 31/12, 2021 at 16:39
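A loop over (checkpoint, label) pairs handles this; the sketch below keeps only the one entry visible in the question and leaves the evaluation step as a hypothetical helper:
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODELS = [
    ("xlm-mlm-enfr-1024", "XLM"),
    # ... remaining (checkpoint, label) pairs
]

for checkpoint, label in MODELS:
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
    # evaluate(model, tokenizer, dataset)  # hypothetical evaluation helper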
3
Solved
I'm using the symanto/sn-xlm-roberta-base-snli-mnli-anli-xnli pretrained model from Hugging Face. My task requires using it on pretty large texts, so it's essential to know the maximum input length.
The fo...
Chronometer asked 31/3, 2022 at 10:49
3
Solved
I am looking at a few different examples of using PEFT on different models. The LoraConfig object contains a target_modules array. In some examples, the target modules are ["query_key_value"...
Flit asked 26/7, 2023 at 5:23
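target_modules lists the sub-module names inside the base model that receive LoRA adapters, and those names are architecture-specific: models with fused attention (GPT-NeoX, Falcon) expose "query_key_value", while LLaMA-style models expose "q_proj", "k_proj", "v_proj", and so on. A sketch for a LLaMA-style model:
from peft import LoraConfig

config = LoraConfig(
    r=8,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # names taken from the model's printed module tree
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)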
8
I'm trying to load a model with quantization like this:
from transformers import LlamaForCausalLM
from transformers import BitsAndBytesConfig
model = '/model/'
model = LlamaForCausalLM.from_pretrained(model, quantiz...
Alagez asked 17/8, 2023 at 18:32
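For reference, a sketch of 8-bit loading that works once accelerate and bitsandbytes are installed (the "/model/" path is the asker's placeholder):
from transformers import BitsAndBytesConfig, LlamaForCausalLM

quant_config = BitsAndBytesConfig(load_in_8bit=True)
model = LlamaForCausalLM.from_pretrained(
    "/model/",
    quantization_config=quant_config,
    device_map="auto",   # lets accelerate place layers across devices
)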
3
Solved
In HuggingFace, every time I call a pipeline() object, I get a warning:
`"Setting `pad_token_id` to `eos_token_id`:{eos_token_id} for open-end generation."
How do I suppress this warning...
Adolfoadolph asked 17/10, 2021 at 23:40
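Passing pad_token_id explicitly to the call makes the warning go away, since the pipeline no longer has to pick a default for you; a minimal sketch:
from transformers import pipeline

pipe = pipeline("text-generation", model="gpt2")
out = pipe("Hello", pad_token_id=pipe.tokenizer.eos_token_id)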