huggingface transformers longformer optimizer warning AdamW

I get the warning below when I try to run the code from this page.

/usr/local/lib/python3.7/dist-packages/transformers/optimization.py:309: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
  FutureWarning,

I am confused because the code doesn't seem to set the optimizer at all. The most likely places where the optimizer could be set are below, but I don't know how to change the optimizer there.

import torch
from transformers import Trainer, TrainingArguments

# define the training arguments
training_args = TrainingArguments(
    output_dir = '/media/data_files/github/website_tutorials/results',
    num_train_epochs = 5,
    per_device_train_batch_size = 8,
    gradient_accumulation_steps = 8,    
    per_device_eval_batch_size= 16,
    evaluation_strategy = "epoch",
    disable_tqdm = False, 
    load_best_model_at_end=True,
    warmup_steps=200,
    weight_decay=0.01,
    logging_steps = 4,
    fp16 = True,
    logging_dir='/media/data_files/github/website_tutorials/logs',
    dataloader_num_workers = 0,
    run_name = 'longformer-classification-updated-rtx3090_paper_replication_2_warm'
)

# instantiate the trainer class and check for available devices
trainer = Trainer(
    model=model,
    args=training_args,
    compute_metrics=compute_metrics,
    train_dataset=train_data,
    eval_dataset=test_data
)
device = 'cuda' if torch.cuda.is_available() else 'cpu'
device

I tried another model, distilbert-base-uncased, with identical code, and it runs without any warnings.

  1. Is this warning more specific to longformer?
  2. How should I change the optimizer?
Asked 14/2, 2022 at 14:19
Comment: this answers it: discuss.huggingface.co/t/…

You need to add optim='adamw_torch'; the default is optim='adamw_hf'.

Refer here

Can you try the following:

# define the training arguments
training_args = TrainingArguments(
    optim='adamw_torch',
    # your other training arguments
    ...
)
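
Alternatively, you can bypass the optim flag and hand Trainer a torch.optim.AdamW instance directly through its optimizers argument. A minimal sketch, reusing the objects from the question; the learning rate and step count here are placeholders, not values from the original post:

import torch
from transformers import Trainer, get_linear_schedule_with_warmup

# build the optimizer with the PyTorch implementation instead of the deprecated one
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5, weight_decay=0.01)

# placeholder step count; in practice derive it from the dataloader length,
# num_train_epochs and gradient_accumulation_steps
num_training_steps = 1000
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=200,
    num_training_steps=num_training_steps,
)

trainer = Trainer(
    model=model,
    args=training_args,
    compute_metrics=compute_metrics,
    train_dataset=train_data,
    eval_dataset=test_data,
    optimizers=(optimizer, scheduler),  # overrides the default adamw_hf optimizer
)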
Answered 2/11, 2022 at 8:23
Comment: I confirm it works. Thanks!
import torch_optimizer as optim

# params: the model parameters, e.g. model.parameters()
# opt:    an external hyperparameter config object (learning rate, betas, epsilon, weight decay)
optim.AdamW(
    params,
    opt.learning_rate,
    (opt.optim_alpha, opt.optim_beta),
    opt.optim_epsilon,
    weight_decay=opt.weight_decay,
)

It can be used this way.
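
For context, torch_optimizer is a third-party package, and opt above is an external config object. A rough equivalent with the built-in torch.optim.AdamW, using illustrative values rather than anything from this answer, would look like:

import torch

# hyperparameter values below are illustrative, not taken from the answer above
optimizer = torch.optim.AdamW(
    model.parameters(),   # params
    lr=5e-5,              # opt.learning_rate
    betas=(0.9, 0.999),   # (opt.optim_alpha, opt.optim_beta)
    eps=1e-8,             # opt.optim_epsilon
    weight_decay=0.01,    # opt.weight_decay
)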

Answered 11/8, 2022 at 7:17
