RuntimeError: 0D or 1D target tensor expected, multi-target not supported
I was training a deep learning model when I ran into this error.
My training code:
def train(model, criterion, optimizer, iters):
    epochs = iters
    train_loss = []
    validation_loss = []
    train_acc = []
    validation_acc = []
    states = ['Train', 'Valid']
    for epoch in range(epochs):
        print("epoch : {}/{}".format(epoch + 1, epochs))
        for phase in states:
            if phase == 'Train':
                model.train()  # put the model in training mode if the phase is 'Train'
                dataload = train_data_loader
            else:
                model.eval()
                dataload = valid_data_loader

            run_loss, run_acc = 0, 0  # running loss and accuracy for this phase
            for data in dataload:
                inputs, labels = data
                inputs = inputs.to(device)
                labels = labels.to(device)

                labels = labels.byte()
                optimizer.zero_grad()  # reset the gradients

                with torch.set_grad_enabled(phase == 'Train'):
                    outputs = model(inputs)
                    loss = criterion(outputs, labels.unsqueeze(1).float())

                    predict = outputs >= 0.5
                    if phase == 'Train':
                        loss.backward()  # backward propagation
                        optimizer.step()

                    acc = torch.sum(predict == labels.unsqueeze(1))
                run_loss += loss.item()
                run_acc += acc.item() / len(labels)
            if phase == 'Train':  # calculating train loss and accuracy
                epoch_loss = run_loss / len(train_data_loader)
                train_loss.append(epoch_loss)
                epoch_acc = run_acc / len(train_data_loader)
                train_acc.append(epoch_acc)
            else:  # calculating validation loss and accuracy
                epoch_loss = run_loss / len(valid_data_loader)
                validation_loss.append(epoch_loss)
                epoch_acc = run_acc / len(valid_data_loader)
                validation_acc.append(epoch_acc)

            print("{}, loss :{}, accuracy:{}".format(phase, epoch_loss, epoch_acc))

    history = {'Train_loss': train_loss, 'Train_accuracy': train_acc,
               'Validation_loss': validation_loss, 'Validation_Accuracy': validation_acc}
    return model, history

I am getting the error "0D or 1D target tensor expected, multi-target not supported". Could you please help me fix the code above? I have read the related articles but was unable to get the desired result. Which parts of the code do I have to change so that my model runs successfully? Any suggestions are welcome. Thanks in advance.

Compare answered 8/3, 2022 at 18:25 Comment(10)
You can look at this: https://mcmap.net/q/1323911/-how-to-solve-this-pytorch-runtimeerror-1d-target-tensor-expected-multi-target-not-supported – Belvabelvedere
This may help as well: https://mcmap.net/q/1323911/-how-to-solve-this-pytorch-runtimeerror-1d-target-tensor-expected-multi-target-not-supported – Belvabelvedere
I looked at those articles before, but I am unable to apply the changes to my code. – Compare
Could you please print out the entire error message from your notebook as it is, so we can see on which line the error popped up? – Belvabelvedere
Also print out the shapes of labels and predict, please. – Belvabelvedere
I sent you a request to see the Google Colab file. – Belvabelvedere
In your Colab, the data is stored on your Google Drive, which I cannot access, so I cannot run the code. – Belvabelvedere
I can help you without access to your data. Just print the shapes of outputs and labels first, like this: print(outputs.shape) and print(labels.shape). – Belvabelvedere
@Phoenix The shape of inputs is torch.Size([32, 3, 224, 224]) and the shape of labels is torch.Size([32]). – Compare
@Phoenix The output shape is torch.Size([32, 3]). – Compare

Your problem is that labels already have the correct shape to calculate the loss. When you add .unsqueeze(1) to labels, you give them the shape [32, 1], which does not match what the loss function requires.

To fix the problem, you only need to remove .unsqueeze(1) from labels.

If you read the documentation of CrossEntropyLoss, the arguments should be:

  • Input in (N, C) shape, which is outputs in your case, i.e. [32, 3].
  • Target in (N) shape, which is labels in your case, i.e. [32]. The loss function therefore expects a 1D target, not a multi-target.
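A minimal sketch of that fix, using standalone tensors with the shapes reported in the comments. Dropping the .byte()/.float() casts and switching accuracy to argmax are my assumptions on top of the answer, since nn.CrossEntropyLoss expects integer class indices and a 0.5 threshold no longer matches a (32, 3) output:

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

outputs = torch.randn(32, 3)          # model output, shape (N, C) = (32, 3)
labels = torch.randint(0, 3, (32,))   # class indices, shape (N,) = (32,), dtype long

# loss = criterion(outputs, labels.unsqueeze(1).float())  # (32, 1) target -> RuntimeError
loss = criterion(outputs, labels)                          # 1D class-index target works

# likely follow-up change (not stated in the answer): pick the predicted class per row
predict = outputs.argmax(dim=1)
acc = torch.sum(predict == labels)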
Belvabelvedere answered 9/3, 2022 at 9:57 Comment(0)

This issue can also be caused by the loss function. Try an alternative loss function that can deal with a multi-target tensor. I used nn.MSELoss() and the error went away.
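A hedged sketch of that workaround. The one-hot/float conversion of the labels is my assumption (nn.MSELoss only requires the target to have the same shape and dtype as the output); nn.CrossEntropyLoss is usually the better fit for classification, so treat this as a fallback:

import torch
import torch.nn as nn
import torch.nn.functional as F

criterion = nn.MSELoss()

outputs = torch.randn(32, 3)                         # (N, C) model output
labels = torch.randint(0, 3, (32,))                  # (N,) class indices
targets = F.one_hot(labels, num_classes=3).float()   # (N, C), same shape as outputs

loss = criterion(outputs, targets)   # no shape complaint, since input and target match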

Bendicta answered 11/2, 2023 at 1:14 Comment(0)

For me, the issue was that one_hot was not being used with the num_classes parameter. Without it, chances are that a batch does not contain elements of every class, which makes the width of the one_hot encoding differ from batch to batch. Use the num_classes parameter of one_hot to fix the issue.

# for example, if I have 4 classes with labels [0, 1, 2, 3], I expect one_hot to return an N x 4 tensor

# case where size (N, 4) is correct because the last class is part of the batch
F.one_hot(torch.tensor([3]))
tensor([[0, 0, 0, 1]])

# case where size (N, 3) is incorrect because the last class is not part of the batch
F.one_hot(torch.tensor([0, 1, 2]))
tensor([[1, 0, 0],
        [0, 1, 0],
        [0, 0, 1]])

# specify the num_classes parameter to get the correctly sized tensor
F.one_hot(torch.tensor([0, 1, 2]), num_classes=4)
tensor([[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 1, 0]])

There is a higher chance of this happening in the last batch of the dataloader, because with drop_last=False the final batch can be smaller than the configured batch_size.
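A small sketch of that last point, reusing the hypothetical 4-class example from above: with num_classes fixed, every batch produces one-hot tensors of the same width, including a short final batch that happens to miss a class:

import torch
import torch.nn.functional as F

num_classes = 4  # hypothetical 4-class setup from the example above
batches = [torch.tensor([0, 1, 2, 3]),   # full batch, all classes present
           torch.tensor([0, 1, 2])]      # short final batch, class 3 missing

for labels in batches:
    # without num_classes, the width would be inferred as labels.max() + 1 and vary per batch
    targets = F.one_hot(labels, num_classes=num_classes)
    print(targets.shape)   # torch.Size([4, 4]) then torch.Size([3, 4]); the width stays 4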

Rodriquez answered 19/4, 2023 at 5:51 Comment(1)
Your answer could be improved with additional supporting information. Please edit to add further details, such as citations or documentation, so that others can confirm that your answer is correct. You can find more information on how to write good answers in the help center. – Livia
