PyTorch Lightning: Multiple scalars (e.g. train and valid loss) in same Tensorboard graph
With PyTorch Tensorboard I can log my train and valid loss in a single Tensorboard graph like this:

from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter()

# Both calls share the main tag 'loss', so TensorBoard plots the
# 'train' and 'valid' series in a single chart.
for i in range(1, 100):
    writer.add_scalars('loss', {'train': 1 / i}, i)

for i in range(1, 100):
    writer.add_scalars('loss', {'valid': 2 / i}, i)

writer.close()

[Screenshot: a single TensorBoard chart tagged 'loss' showing both the train and valid curves]

How can I achieve the same with PyTorch Lightning's default TensorBoard logger?

from typing import Tuple

from torch import Tensor

# Methods of a pytorch_lightning.LightningModule subclass:

def training_step(self, batch: Tuple[Tensor, Tensor], _batch_idx: int) -> Tensor:
    inputs_batch, labels_batch = batch

    outputs_batch = self(inputs_batch)
    loss = self.criterion(outputs_batch, labels_batch)

    self.log('loss/train', loss.item())  # creates a separate graph

    return loss

def validation_step(self, batch: Tuple[Tensor, Tensor], _batch_idx: int) -> None:
    inputs_batch, labels_batch = batch

    outputs_batch = self(inputs_batch)
    loss = self.criterion(outputs_batch, labels_batch)

    self.log('loss/valid', loss.item(), on_step=True)  # creates a separate graph
Catechin answered 20/2, 2021 at 1:27

The docs describe the pattern as self.logger.experiment.some_tensorboard_function(), where some_tensorboard_function is any of the methods the TensorBoard SummaryWriter provides. So for your question you want to use

self.logger.experiment.add_scalars() 

The TensorBoard logger documentation for PyTorch Lightning can be found here.
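For context, a minimal sketch of where that experiment object comes from (the save_dir and name values below are illustrative, not from the question):

from pytorch_lightning import Trainer
from pytorch_lightning.loggers import TensorBoardLogger

# With the (default) TensorBoardLogger, self.logger.experiment inside a
# LightningModule is the underlying torch.utils.tensorboard.SummaryWriter,
# so any SummaryWriter method (add_scalars, add_histogram, ...) is available.
logger = TensorBoardLogger(save_dir='lightning_logs', name='my_model')
trainer = Trainer(logger=logger)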

Isochromatic answered 21/2, 2021 at 20:41
Yeah, I already feared that I could not use self.log(). Was hoping I had missed something, as logging train/valid loss together seems like a pretty basic use case to me. – Catechin
I agree, this seems like a very common use case that self.log does not cover. Hope someone fills this gap soon. – Bataan

To clarify the above, the corresponding code in PyTorch Lightning would be:

def training_step(self, batch: Tuple[Tensor, Tensor], _batch_idx: int) -> Tensor:
    inputs_batch, labels_batch = batch

    outputs_batch = self(inputs_batch)
    loss = self.criterion(outputs_batch, labels_batch)

    # Log through the underlying SummaryWriter so both series share the
    # main tag 'loss' and land in the same chart.
    self.logger.experiment.add_scalars('loss', {'train': loss}, self.global_step)

    return loss

def validation_step(self, batch: Tuple[Tensor, Tensor], _batch_idx: int) -> None:
    inputs_batch, labels_batch = batch

    outputs_batch = self(inputs_batch)
    loss = self.criterion(outputs_batch, labels_batch)

    self.logger.experiment.add_scalars('loss', {'valid': loss}, self.global_step)
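One caveat: self.global_step does not advance during validation, so every validation batch in an epoch lands at the same x position. A common workaround is to aggregate per epoch and emit a single point, sketched below (the val_losses buffer is an illustrative attribute, not a Lightning API, and epoch-end hook names have varied across Lightning versions):

import torch

def __init__(self):
    super().__init__()
    self.val_losses = []  # illustrative buffer for per-batch losses

def validation_step(self, batch, _batch_idx):
    inputs_batch, labels_batch = batch
    loss = self.criterion(self(inputs_batch), labels_batch)
    self.val_losses.append(loss.detach())

def on_validation_epoch_end(self):
    # Emit one averaged point per epoch at the current global step.
    mean_loss = torch.stack(self.val_losses).mean()
    self.logger.experiment.add_scalars('loss', {'valid': mean_loss.item()}, self.global_step)
    self.val_losses.clear()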
Kitts answered 2/3, 2022 at 3:14
