PyTorch model accuracy test

I'm using PyTorch to classify a series of images. The network is defined as follows:

from collections import OrderedDict

import torch
from torch import nn, optim
from torchvision import models

model = models.vgg16(pretrained=True)
model.cuda()

# Freeze the pretrained feature extractor.
for param in model.parameters():
    param.requires_grad = False

# Replace the classifier with a new, trainable head.
classifier = nn.Sequential(OrderedDict([
                           ('fc1', nn.Linear(25088, 4096)),
                           ('relu', nn.ReLU()),
                           ('fc2', nn.Linear(4096, 102)),
                           ('output', nn.LogSoftmax(dim=1))
                           ]))

model.classifier = classifier

The criterion and optimizer are as follows:

criterion = nn.NLLLoss()
optimizer = optim.Adam(model.classifier.parameters(), lr=0.001)

My validation function is as follows:

def validation(model, testloader, criterion):
    test_loss = 0
    accuracy = 0
    for images, labels in testloader:

        images.resize_(images.shape[0], 784)

        output = model.forward(images)
        test_loss += criterion(output, labels).item()

        ps = torch.exp(output)
        equality = (labels.data == ps.max(dim=1)[1])
        accuracy += equality.type(torch.FloatTensor).mean()

    return test_loss, accuracy

This is the piece of code that is throwing the following error:

RuntimeError: input has less dimensions than expected

epochs = 3
print_every = 40
steps = 0
running_loss = 0
testloader = dataloaders['test']

# change to cuda
model.to('cuda')

for e in range(epochs):
    running_loss = 0
    for ii, (inputs, labels) in enumerate(dataloaders['train']):
        steps += 1

        inputs, labels = inputs.to('cuda'), labels.to('cuda')

        optimizer.zero_grad()

        # Forward and backward passes
        outputs = model.forward(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()

        if steps % print_every == 0:
            model.eval()
            with torch.no_grad():
                test_loss, accuracy = validation(model, testloader, criterion)

            print("Epoch: {}/{}.. ".format(e+1, epochs),
                  "Training Loss: {:.3f}.. ".format(running_loss/print_every),
                  "Test Loss: {:.3f}.. ".format(test_loss/len(testloader)),
                  "Test Accuracy: {:.3f}".format(accuracy/len(testloader)))

            running_loss = 0

Any help?

Devolve answered 5/9, 2018 at 2:16 Comment(2)
Well, I don't see your testloader definition. It seems like you're just not passing input of the right shape (it should have a batch dimension first, for one thing). – Redmund
As @Redmund said, there is some problem with the input dimension. Everything else looks fine to me; just check your input dimension. – Anastatius

Just in case it helps someone.

If you don't have a GPU system (say you are developing on a laptop and will eventually test on a server with a GPU), you can do the same using:

if torch.cuda.is_available():
    inputs = inputs.to('cuda')
else:
    inputs = inputs.to('cpu')

Also, if you are wondering why there is a LogSoftmax instead of a Softmax, that is because he is using NLLLoss as his loss function. You can read more about softmax here.
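
To make the relationship concrete, here is a small sketch (the scores and labels below are made-up values, not taken from the question's model) showing that LogSoftmax followed by NLLLoss matches CrossEntropyLoss applied to the raw scores:

import torch
import torch.nn as nn

# Made-up scores for a batch of 2 samples over 5 classes.
logits = torch.randn(2, 5)
labels = torch.tensor([1, 3])

# LogSoftmax + NLLLoss ...
log_probs = nn.LogSoftmax(dim=1)(logits)
loss_a = nn.NLLLoss()(log_probs, labels)

# ... matches CrossEntropyLoss on the raw scores.
loss_b = nn.CrossEntropyLoss()(logits, labels)

print(torch.isclose(loss_a, loss_b))  # tensor(True)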

Josi answered 3/2, 2019 at 23:15 Comment(0)

Below is another validation method that may help if someone wants to build models using the GPU. First we need to create a device so that we can use either the GPU or the CPU. Start by importing the torch modules.

import torch
import torch.nn as nn

from torch.utils.data import DataLoader

Then create the device:

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

We will use this device for our data. We can calculate the accuracy of our model with the method below.

def check_accuracy(test_loader: DataLoader, model: nn.Module, device):
    num_correct = 0
    total = 0
    model.eval()

    with torch.no_grad():
        for data, labels in test_loader:
            data = data.to(device=device)
            labels = labels.to(device=device)

            # Take the class with the highest score as the prediction.
            predictions = model(data).argmax(dim=1)
            num_correct += (predictions == labels).sum().item()
            total += labels.size(0)

    print(f"Test Accuracy of the model: {float(num_correct)/float(total)*100:.2f}")
Mydriatic answered 14/11, 2020 at 21:14 Comment(0)

I needed to change the validation function as follows:

def validation(model, testloader, criterion):
    test_loss = 0
    accuracy = 0

    for inputs, classes in testloader:
        inputs = inputs.to('cuda')
        output = model.forward(inputs)
        test_loss += criterion(output, labels).item()

        ps = torch.exp(output)
        equality = (labels.data == ps.max(dim=1)[1])
        accuracy += equality.type(torch.FloatTensor).mean()

    return test_loss, accuracy

The inputs need to be moved to the GPU: inputs.to('cuda').
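
For reference, a minimal sketch of this loop with the loop variable named consistently and the labels also moved to the GPU (assuming the loader yields (inputs, labels) batches) could look like:

def validation(model, testloader, criterion):
    # Sketch only: assumes the model is already on the GPU and the
    # loader yields (inputs, labels) batches.
    test_loss = 0
    accuracy = 0

    for inputs, labels in testloader:
        inputs, labels = inputs.to('cuda'), labels.to('cuda')

        output = model(inputs)
        test_loss += criterion(output, labels).item()

        # Convert log-probabilities to probabilities and take the argmax.
        ps = torch.exp(output)
        equality = (labels == ps.max(dim=1)[1])
        accuracy += equality.float().mean().item()

    return test_loss, accuracy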

Devolve answered 5/9, 2018 at 6:40 Comment(1)
I assume labels is meant to be classes? – Courageous
