I am training an image classification CNN using Keras. Using the ImageDataGenerator class, I apply some random transformations to the training images (e.g. rotation, shearing, zooming). My understanding is that these transformations are applied randomly to each image before it is passed to the model. But some things are not clear to me:
1) How can I make sure that specific rotations of an image (e.g. 90°, 180°, 270°) are ALL included during training?
2) The steps_per_epoch parameter of model.fit_generator should be set to the number of unique samples in the dataset divided by the batch size defined in the flow_from_directory method. Does this still apply when using the above-mentioned image augmentation methods, since they increase the number of training images?
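For reference, here is a minimal sketch of the kind of setup I mean (the model architecture, directory path, and augmentation values below are just placeholders, not my actual configuration):

```python
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from keras.preprocessing.image import ImageDataGenerator

# Placeholder model, only so the example is complete
model = Sequential([
    Conv2D(16, (3, 3), activation='relu', input_shape=(150, 150, 3)),
    MaxPooling2D(),
    Flatten(),
    Dense(10, activation='softmax')  # placeholder number of classes
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# Random augmentations applied on the fly to each training image
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    rotation_range=40,   # random rotation (degrees), value is a placeholder
    shear_range=0.2,     # random shearing
    zoom_range=0.2       # random zooming
)

train_generator = train_datagen.flow_from_directory(
    'data/train',        # placeholder path
    target_size=(150, 150),
    batch_size=32,
    class_mode='categorical'
)

# steps_per_epoch = number of unique samples // batch size
model.fit_generator(
    train_generator,
    steps_per_epoch=train_generator.samples // train_generator.batch_size,
    epochs=50
)
```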
Thanks, Mario