Pytorch: RuntimeError: expected dtype Float but got dtype Long

I encountered this weird error while building a simple NN in PyTorch. I don't understand why this error involves the Long and Float dtypes in the backward function. Has anyone encountered this before? Thanks for any help.

Traceback (most recent call last):
  File "test.py", line 30, in <module>
    loss.backward()
  File "/home/liuyun/anaconda3/envs/torch/lib/python3.7/site-packages/torch/tensor.py", line 198, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/liuyun/anaconda3/envs/torch/lib/python3.7/site-packages/torch/autograd/__init__.py", line 100, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: expected dtype Float but got dtype Long (validate_dtype at /opt/conda/conda-bld/pytorch_1587428398394/work/aten/src/ATen/native/TensorIterator.cpp:143)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x4e (0x7f5856661b5e in /home/liuyun/anaconda3/envs/torch/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #1: at::TensorIterator::compute_types() + 0xce3 (0x7f587e3dc793 in /home/liuyun/anaconda3/envs/torch/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #2: at::TensorIterator::build() + 0x44 (0x7f587e3df174 in /home/liuyun/anaconda3/envs/torch/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #3: at::native::smooth_l1_loss_backward_out(at::Tensor&, at::Tensor const&, at::Tensor const&, at::Tensor const&, long) + 0x193 (0x7f587e22cf73 in /home/liuyun/anaconda3/envs/torch/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #4: <unknown function> + 0xe080b7 (0x7f58576960b7 in /home/liuyun/anaconda3/envs/torch/lib/python3.7/site-packages/torch/lib/libtorch_cuda.so)
frame #5: at::native::smooth_l1_loss_backward(at::Tensor const&, at::Tensor const&, at::Tensor const&, long) + 0x16e (0x7f587e23569e in /home/liuyun/anaconda3/envs/torch/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #6: <unknown function> + 0xed98af (0x7f587e71c8af in /home/liuyun/anaconda3/envs/torch/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)
frame #7: <unknown function> + 0xe22286 (0x7f587e665286 in /home/liuyun/anaconda3/envs/torch/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so)

Here is the source code:

import torch
import torch.nn as nn
import numpy as np
import torchvision
from torchvision import models
from UTKLoss import MultiLoss
from ipdb import set_trace

# out features [13, 2, 5]
model_ft = models.resnet18(pretrained=True)
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, 20)
model_ft.cuda()

criterion = MultiLoss()
optimizer = torch.optim.Adam(model_ft.parameters(), lr = 1e-3)

image = torch.randn((1, 3, 128, 128)).cuda()
age = torch.randint(110, (1,)).cuda()
gender = torch.randint(2, (1,)).cuda()
race = torch.randint(5, (1,)).cuda()
optimizer.zero_grad()
output = model_ft(image)
age_loss, gender_loss, race_loss = criterion(output, age, gender, race)
loss = age_loss + gender_loss + race_loss
loss.backward()
optimizer.step()

Here is how I define my loss function:

import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiLoss(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, output, age, gender, race):
        age_pred = output[:, :13]
        age_pred = torch.sum(age_pred, 1)
        gender_pred = output[:, 13: 15]
        race_pred = output[:, 15:]
        age_loss = F.smooth_l1_loss(age_pred.view(-1, 1), age.cuda())
        gender_loss = F.cross_entropy(gender_pred, torch.flatten(gender).cuda(), reduction='sum')
        race_loss = F.cross_entropy(race_pred, torch.flatten(race).cuda(), reduction='sum')
        return age_loss, gender_loss, race_loss
Berardo answered 4/7, 2020 at 8:6

Change the criterion call to:

age_loss, gender_loss, race_loss = criterion(output, age.float(), gender, race)

If you look at your error, you can trace it to:

frame #3: at::native::smooth_l1_loss_backward_out

In the MultiLoss class, smooth_l1_loss operates on age, so I cast it to float (the expected dtype) when passing it to the criterion. You can confirm that age is torch.int64 (i.e. torch.long) by printing age.dtype.

I no longer get the error after this change. Hope it helps.
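A minimal standalone sketch of the same cast (not the asker's full model; the tensors here are illustrative):

```python
import torch
import torch.nn.functional as F

# torch.randint produces torch.int64 (Long) tensors by default,
# which is why the backward pass of smooth_l1_loss complained.
age = torch.randint(110, (1,))
print(age.dtype)  # torch.int64

pred = torch.randn(1, requires_grad=True)

# Casting the target to float resolves the Long/Float dtype mismatch
loss = F.smooth_l1_loss(pred, age.float())
loss.backward()  # succeeds; pred.grad is now populated
```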

Strawser answered 4/7, 2020 at 15:15

Check the data types of output, age, gender, and race. There may be a mismatch such as:

torch.float32
torch.float64

Cast them all to the same type and the error will go away.
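For example, a quick way to inspect and align the dtypes (hypothetical tensors, not the asker's actual data):

```python
import torch

output = torch.randn(4, 20)                        # torch.float32
age = torch.randint(110, (4,), dtype=torch.int64)  # torch.long

print(output.dtype, age.dtype)  # torch.float32 torch.int64

# Cast to the same floating-point type before computing the loss
age = age.to(output.dtype)
assert output.dtype == age.dtype  # both torch.float32
```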

Padang answered 17/2, 2022 at 20:9

This error may not be directly related to dtypes. I was training a Hugging Face text-classification flow that worked perfectly with a larger dataset, but once I tested it with a tiny version of that dataset I ran into this error.

For me, the issue was my tiny dataset only had 1 row of data. Once I passed in a larger dataset (4 rows in train, 2 rows in test) the issue disappeared.

Not a very complete solution, as I didn't bother debugging further, but I figured it might help others. I imagine somewhere in Hugging Face's Trainer code there is a dtype conversion that produces a Long tensor when there is only one data row.

Effector answered 21/2 at 1:35

The solution is simple: convert the variables to float32 (any floating-point dtype works, but float32 matches the model's weights):

tensor = tensor.to(device, dtype=torch.float32)
Polak answered 3/8 at 8:38
