How to disable TOKENIZERS_PARALLELISM=(true | false) warning?
I use PyTorch to train a huggingface-transformers model, but every epoch it outputs this warning:

The current process just got forked. Disabling parallelism to avoid deadlocks... To disable this warning, please explicitly set TOKENIZERS_PARALLELISM=(true | false)

How can I disable this warning?

Inconceivable answered 2/7, 2020 at 7:35 Comment(0)
Set the environment variable to the string "false"

either by

TOKENIZERS_PARALLELISM=false

in your shell

or by:

import os
# set this before the tokenizer is created or used
os.environ["TOKENIZERS_PARALLELISM"] = "false"

in the Python script

Talie answered 2/7, 2020 at 19:37 Comment(6)
Worked for me. Still, it is worth having a look at this answer, which points out that using Fast Tokenizers may be the source of this, and that you may need to be wary of any consequences of using them. – Flexure
Also see my note below on the implications: #62691779 – Windsucking
I had to do export TOKENIZERS_PARALLELISM=false – Adrianeadrianna
The warning also disappears if it is set to "true". The important thing is only to set it to something. – Noontime
Worked for me, but mind that for spreading tensors across the GPUs, and perhaps also for other tasks, you sometimes have to run export ... in the shell instead of the in-Python os.environ[] code, right at the beginning of the cell or Python code you want to run, since os.environ[] may interfere or not work; see for example How to Solve "CUDA: Invalid Device Ordinal" Error in PyTorch Single-GPU Inference on a Model Trained with DataParallel? and How to set env variable in Jupyter notebook. – Ingraham
export TOKENIZERS_PARALLELISM=false – Brunildabruning
If you have explicitly selected fast (Rust-based) tokenisers, you may have done so for a reason. When dealing with large datasets, Rust-based tokenisers process data much faster, and they can be explicitly invoked by setting the "use_fast" option during tokeniser creation. Almost all HF models nowadays come with this option. Though not obvious from the warning message, TOKENIZERS_PARALLELISM is an env variable and not a tokeniser hyper-parameter. Setting it to false does make the problem go away, but as some of the comments above show, there is confusion about the impact of this change. For example, does this affect parallelism at the model level? Let us look at the Rust code to see what the default behaviour is and what may happen if we turn it OFF to solve the problem.
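
For illustration, a minimal sketch of explicitly requesting the fast tokeniser; the model name here is just an example, not from the original question:

from transformers import AutoTokenizer

# use_fast=True explicitly requests the Rust-backed tokenizer
# (it is also the default in recent transformers versions)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=True)
print(tokenizer.is_fast)  # True when the Rust implementation was loaded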

https://docs.rs/tokenizers/latest/src/tokenizers/utils/parallelism.rs.html

In most cases, we (the end users) would not have explicitly set TOKENIZERS_PARALLELISM to true or false. For all such cases, the tokeniser code assumes it to be true. We can explicitly disable it by setting it to false, but you can see that in that case the code makes the iterator serial. Even if you don't set this env variable to false, the executing code itself disables parallelism if it encounters Python forks later (and this is what causes the warning to be displayed in the first place). Can we avoid this?

Let us take a step back and look at the warning itself.

"The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...To disable this warning, you can either: - Avoid using tokenizers before the fork if possible - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)"

This happens only with HF's FastTokenizers, as these do parallel processing in Rust. When we then fork processes via multiprocessing in Python, there is a conflict. Forking happens because we have started looping over the data loader (with num_workers > 0) in the train() method. This combination is deemed unsafe, and if it is encountered, the tokeniser switches OFF its own parallelism to avoid deadlocks. When we talk of parallelism here we strictly refer to the tokeniser code and NOT anything else. In other words, only the parts of the code where we convert the input textual data into tokens (say using tokenizer.encode_plus or any of the other functions) are impacted. So this should not impact the use of parallel worker processes via num_workers, such as the data loader.

How can we tell? Well, we can just add a 5-second delay in the dataset's __getitem__ method along with a print statement and then see for ourselves by looping over the data loader for different values of num_workers. When num_workers = 0, the main process does the heavy lifting and there is a gap of 5 seconds between fetches. When num_workers = 1, a fork happens, we get the above warning on parallelism, and since the main process does not participate in the data loading, we still get a 5-second gap between fetches. From num_workers = 2 onwards, there are multiple fetches within each 5-second interval, depending on num_workers.
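
A rough sketch of that experiment (the dataset, sizes, and timings are illustrative, not from the original post):

import time
import torch
from torch.utils.data import Dataset, DataLoader

class SlowDataset(Dataset):
    """Toy dataset that simulates a 5-second tokenisation cost per item."""

    def __len__(self):
        return 6

    def __getitem__(self, idx):
        time.sleep(5)  # stand-in for expensive tokenisation / preprocessing
        print(f"fetched item {idx} at {time.strftime('%H:%M:%S')}")
        return torch.tensor(idx)

if __name__ == "__main__":
    for workers in (0, 1, 2, 4):
        print(f"--- num_workers = {workers} ---")
        loader = DataLoader(SlowDataset(), batch_size=1, num_workers=workers)
        for _ in loader:
            pass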

In fact this leads to the conclusion that a simple option to fix the above warning might be to simply set num_workers = 0 in the data-loader definition. If num_workers is 0, then there is no Python fork and the main process itself does all the data loading. This works, and we are able to leverage the power of fast tokenisers to the hilt, but at the cost of eliminating parallel processing on the Python side. Considering that data loaders work best in parallel mode, prefetching batches on the host (CPU) in parallel with execution on the GPU, this is usually NOT a good option.

What happens if we set TOKENIZERS_PARALLELISM=true? On recent versions of PyTorch, transformers, tokenizers etc., if you do this and then try training with num_workers > 0 in the data loader, your training will freeze without any error or even a warning message. In fact, this issue motivated me to post this answer, as I couldn't find a solution to that training-freeze problem anywhere. The root cause is actually the data loader, which fails in this situation due to the above conflict (it refuses to "fork" for fear of deadlocks).

So, going back to our core issue, it seems that the Rust-based parallelism is in conflict with the forks that we do in Python. However, this may be easily solved by simply removing all use of the tokenisers prior to the training call (i.e. prior to the data loader being used). Many times we use the tokenisers just to see what the tokenised output looks like, by doing a my_dataset_name[0]. Just remove all such tokeniser calls and let the train() loop be the first place where the tokenisers get accessed. This simple fix makes the Rust parallelization happen after the Python fork, and it should work, as in the sketch below.
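
A minimal sketch of that ordering, assuming a dataset that tokenises lazily in __getitem__ (the model name and texts are illustrative):

from torch.utils.data import Dataset, DataLoader
from transformers import AutoTokenizer

class TextDataset(Dataset):
    def __init__(self, texts):
        self.texts = texts
        # creating the tokenizer is fine; the point is not to *call* it before the fork
        self.tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=True)

    def __len__(self):
        return len(self.texts)

    def __getitem__(self, idx):
        # the first real tokenisation happens here, inside a DataLoader worker, after the fork
        return self.tokenizer(self.texts[idx], truncation=True)

if __name__ == "__main__":
    dataset = TextDataset(["hello world", "goodbye world"] * 16)

    # Avoid inspecting tokenised samples in the parent process before training, e.g.
    #   print(dataset[0])   # runs the tokeniser before the fork, which is what triggers the warning

    loader = DataLoader(dataset, batch_size=4, num_workers=2, collate_fn=list)
    for batch in loader:  # batches are plain lists here; padding/collation details are omitted
        pass              # training step would go here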

Alternatively, convert your data to tokens beforehand and store them in a dict. Then your dataset does not use the tokenizer at all; at runtime it simply looks up dict[key], where key is the index. This way you avoid the conflict. The warning still appears, but you simply no longer use the tokeniser during training (note: for such scenarios, to save space, avoid padding during tokenisation and add it later with a collate_fn), as sketched below.
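
A rough sketch of that pre-tokenisation approach (model name, texts, and padding strategy are illustrative):

from torch.utils.data import Dataset, DataLoader
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=True)
texts = ["a short sentence", "a somewhat longer example sentence"] * 8

# tokenise everything up front, without padding, and keep the results in a dict
encoded = {i: tokenizer(t, truncation=True) for i, t in enumerate(texts)}

class PreTokenizedDataset(Dataset):
    def __len__(self):
        return len(encoded)

    def __getitem__(self, idx):
        return encoded[idx]  # no tokeniser call at runtime, just a dict lookup

def collate(batch):
    # pad each batch to its own longest sequence only when it is actually needed
    return tokenizer.pad(batch, padding="longest", return_tensors="pt")

loader = DataLoader(PreTokenizedDataset(), batch_size=4, num_workers=2, collate_fn=collate)
# iterate over `loader` inside your training loop as usual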

Having said all that, the Rust tokeniser implementation is so insanely fast that it usually does not matter even if the serialised option is invoked inside the tokenizer i.e. if parallelism is automatically disabled in the tokenizer. It still beats the conventional tokeniser.

So in most cases, one can just ignore the warning and let the tokeniser parallelization be disabled during execution... or explicitly set the TOKENIZERS_PARALLELISM to False right from the beginning. In rare cases, where speed is of utmost importance, one of the above suggested options can be explored.

Windsucking answered 10/7, 2022 at 8:2 Comment(2)
Confirmed. Just ignoring the warning is fine. – Timmerman
I would be skeptical of and unsatisfied with simple advice like "Just ignore the warning"; I need to understand why. Thank you for the detailed explanation. – Daynadays
I'm going to leave this here to help anyone wondering whether it is possible to keep the parallelism, and also because this is the first Stack Overflow page that comes up when searching for the error directly on Google.

According to this comment on GitHub, the FastTokenizers seem to be the issue. Also, according to another comment on gitmemory, you shouldn't use the tokenizer before forking the process (which basically means before iterating through your dataloader).

So the solution is to not use FastTokenizers before training/fine-tuning or use the normal Tokenizers.

Check the huggingface documentation to find out if you really need the FastTokenizer.
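
As an illustration (the model name is just an example), the slower, "normal" Python tokeniser can be requested explicitly:

from transformers import AutoTokenizer

# use_fast=False selects the pure-Python ("normal") tokenizer, which does not use Rust parallelism
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=False)
print(tokenizer.is_fast)  # False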

Churlish answered 25/4, 2021 at 15:9 Comment(5)
So does this warning message mean that the training/fine-tuning is not happening in a parallel manner? – Fibster
Not according to my experience. I ran two experiments: (a) one with this warning message, (b) another without it. I just saved my dataloader from (a) and loaded it using torch.save() and torch.load(). Both experiments finished in approximately the same time (1 hour per epoch, for 3 epochs). – Fibster
Could you give an example of how to use the FastTokenizers after training, and an example using a "normal" Tokenizer? – Galloot
Why do you want to use FastTokenizers after training? You should use them during training/inference. The docs tell you how to use "normal" Tokenizers. – Churlish
No, it does NOT affect anything else. Please see my note below. – Windsucking
I solved this problem by downgrading huggingface's transformers library from version 3.0.0 to 2.11.0, and the tokenizers library from version 0.8.0rc4 to 0.7.0.

It seems to be a problem with huggingface's tokenizers library version 0.8.0rc4. Currently, there seems to be no solution via setting TOKENIZERS_PARALLELISM=(true | false) as the error message says.

reference : https://github.com/ThilinaRajapakse/simpletransformers/issues/515

Interrupter answered 2/7, 2020 at 12:40 Comment(0)
Set the environment variable to the string "false":

  1. On Bash:
export TOKENIZERS_PARALLELISM=false
  2. On Python:
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false"
Correy answered 26/7, 2023 at 14:52 Comment(0)
