Image Generator for 3D volumes in keras with data augmentation

Since Keras' ImageDataGenerator is not suitable for 3D volumes, I started to write my own generator for Keras (semantic segmentation, not classification!).

1) If there is anybody out there that has adapted the ImageDataGenerator code to work with 3D volumes, please share it! This guy has done it for videos.

2) Following this tutorial, I wrote a custom generator.


import glob
import os

import keras
import numpy as np
import skimage.io
from imgaug import augmenters as iaa


class DataGenerator(keras.utils.Sequence):
    """Generates data for Keras"""
    """This structure guarantees that the network will only train once on each sample per epoch"""

    def __init__(self, list_IDs, im_path, label_path, batch_size=4, dim=(128, 128, 128),
                 n_classes=4, shuffle=True, augment=False):
        'Initialization'
        self.dim = dim
        self.batch_size = batch_size
        self.list_IDs = list_IDs
        self.im_path = im_path
        self.label_path = label_path
        self.n_classes = n_classes
        self.shuffle = shuffle
        self.augment = augment
        self.on_epoch_end()

    def __len__(self):
        'Denotes the number of batches per epoch'
        return int(np.floor(len(self.list_IDs) / self.batch_size))

    def __getitem__(self, index):
        'Generate one batch of data'
        # Generate indexes of the batch
        indexes = self.indexes[index * self.batch_size:(index + 1) * self.batch_size]

        # Find list of IDs
        list_IDs_temp = [self.list_IDs[k] for k in indexes]

        # Generate data
        X, y = self.__data_generation(list_IDs_temp)

        return X, y

    def on_epoch_end(self):
        'Updates indexes after each epoch'
        self.indexes = np.arange(len(self.list_IDs))
        if self.shuffle:
            np.random.shuffle(self.indexes)

    def __data_generation(self, list_IDs_temp):

        if self.augment:
            # Augmentation should be applied here -- this is the open question below.
            pass

        if not self.augment:

            X = np.empty([self.batch_size, *self.dim])
            Y = np.empty([self.batch_size, *self.dim, self.n_classes])

            # Generate data
            for i, ID in enumerate(list_IDs_temp):
                img_X = skimage.io.imread(os.path.join(self.im_path, ID))
                X[i,] = img_X

                img_Y = skimage.io.imread(os.path.join(self.label_path, ID))
                Y[i,] = keras.utils.to_categorical(img_Y, num_classes=self.n_classes)

            X = X.reshape(self.batch_size, *self.dim, 1)
            return X, Y

params = {'dim': (128, 128, 128),
          'batch_size': 4,
          'im_path': "some/path/for/the/images/",
          'label_path': "some/path/for/the/label_images",
          'n_classes': 4,
          'shuffle': True,
          'augment': True}



partition = {}
im_path = "some/path/for/the/images/"
label_path = "some/path/for/the/label_images/"

images = glob.glob(os.path.join(im_path, "*.tif"))
images_IDs = [os.path.basename(name) for name in images]

partition['train'] = images_IDs

training_generator = DataGenerator(partition['train'], **params)
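
For completeness, the generator then gets plugged into training the same way as in the tutorial; a minimal sketch, where `model` stands for my compiled 3D segmentation network (not shown here):

model.fit_generator(generator=training_generator,
                    epochs=50,
                    use_multiprocessing=True,
                    workers=4)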

My images have the size (128, 128, 128), and when I load them I get a 5D tensor of shape (batch_size, depth, height, width, number_of_channels), e.g. (4, 128, 128, 128, 1). For the label_images (which have the same dimensions and are single-channel coded: value 1 = label 1, value 2 = label 2, value 3 = label 3, and value 0 = label 4 or background), I get a binary representation of the labels with the to_categorical() function from Keras and end up with a 5D tensor, e.g. (4, 128, 128, 128, 4). The images and label_images have the same names and are located in different folders.
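
As a quick sanity check of the label shapes, here is a toy snippet with random values (not part of the generator itself):

import numpy as np
from keras.utils import to_categorical

label_volume = np.random.randint(0, 4, size=(128, 128, 128))  # class values 0..3
one_hot = to_categorical(label_volume, num_classes=4)
print(one_hot.shape)  # (128, 128, 128, 4)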

As I only have very few images, I would like to increase the total number of images through image augmentation. How would I do that with this generator? I have successfully tested the imgaug package, but instead of adding images to my set I only transform the existing ones (e.g. flip them horizontally).

Edit: I had a misconception about data augmentation. See this article about image augmentation: images are passed to the network with random transformations applied on the fly, rather than new images being added to the dataset on disk. Now I just have to gather enough data and set the parameters with imgaug. I will update this soon.
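
In the meantime, here is a minimal sketch of how on-the-fly augmentation could be wired into __data_generation. It uses plain NumPy flips so that image and label volume stay aligned; the helper name __augment_pair is only illustrative and could later be replaced by imgaug calls:

# additional method on DataGenerator (the name __augment_pair is illustrative):
def __augment_pair(self, img, lab):
    """Apply the same random flips to an image volume and its label volume."""
    for axis in (0, 1, 2):  # depth, height, width
        if np.random.rand() < 0.5:
            img = np.flip(img, axis=axis)
            lab = np.flip(lab, axis=axis)
    return img.copy(), lab.copy()  # copies avoid negative-stride views

# inside __data_generation, right after loading img_X and img_Y:
# if self.augment:
#     img_X, img_Y = self.__augment_pair(img_X, img_Y)

With this in place, the two if branches of __data_generation collapse into a single loading loop that applies the flips only when self.augment is True.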

Pane answered 31/7, 2019 at 10:31 Comment(3)
Hi Johannes: Can you please let us know if your issue is resolved. Thanks!Naldo
Hi Tensorflow Support! I switched to PyTorch as I discovered Elektronn3. Based on this I could write efficient code for my custom 3D dataset. Might try out Keras again soon to compare both frameworks.Pane
I’m voting to close this question because I can't seem to find your question, if you have one.Detumescence

I found an implementation of a Keras customDataGenerator for 3D volumes. Here is a GitHub link. The implementation can easily be extended with new augmentation techniques. Below is a minimal working example that I use in my project (3D volume semantic segmentation), based on the implementation linked above:

from generator import customImageDataGenerator

def generator(images, groundtruth, batch):
    """Load a batch of augmented images"""
  
    gen = customImageDataGenerator(mirroring=True, 
                                   rotation_90=True,
                                   transpose_axes=True
                                   )

    for b in gen.flow(x=images, y=groundtruth, batch_size=batch):
        yield (b[0], (b[1]).astype(float))

# images.shape      == (123, 48, 48, 48, 1)
# groundtruth.shape == (123, 48, 48, 48, 1)
history = model.fit(
    x=generator(images, groundtruth, batchSize),
    validation_data=(imagesTest, groundtruthTest),
    steps_per_epoch=len(images) // batchSize,
    epochs=epochs,
    callbacks=[callbacks],
    )
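Note that generator() above is a plain (infinite) Python generator rather than a keras.utils.Sequence, which is why steps_per_epoch has to be given explicitly; without it, fit would not know when an epoch ends.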
Bum answered 22/12, 2022 at 15:59 Comment(0)
