I want to train a model on several GPUs using TensorFlow 2.0. In the TensorFlow tutorial on distributed training (https://www.tensorflow.org/guide/distributed_training), a tf.data dataset is converted into a distributed dataset as follows:
    dist_dataset = mirrored_strategy.experimental_distribute_dataset(dataset)
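For context, the mirrored_strategy and dataset in that line come from a setup roughly like this (the dataset contents and shapes below are just placeholders):

    import tensorflow as tf

    mirrored_strategy = tf.distribute.MirroredStrategy()

    # A plain tf.data pipeline, as in the guide (placeholder contents/shapes)
    dataset = tf.data.Dataset.from_tensor_slices(
        (tf.random.uniform([1000, 32]), tf.random.uniform([1000, 1]))
    ).batch(64)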
However, I want to use my own custom data generator instead (for example, a keras.utils.Sequence together with keras.utils.data_utils.OrderedEnqueuer for asynchronous batch generation; see the sketch below). The problem is that the mirrored_strategy.experimental_distribute_dataset method only accepts a tf.data dataset. How can I use my Keras data generator with the distribution strategy instead?
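For concreteness, this is the kind of generator setup I mean; the data, shapes, and names below are just placeholders:

    import numpy as np
    import tensorflow as tf

    class MySequence(tf.keras.utils.Sequence):
        """Toy Sequence; real loading/preprocessing would go in __getitem__."""

        def __init__(self, x, y, batch_size=32):
            self.x, self.y = x, y
            self.batch_size = batch_size

        def __len__(self):
            # Number of batches per epoch
            return int(np.ceil(len(self.x) / self.batch_size))

        def __getitem__(self, idx):
            lo = idx * self.batch_size
            return self.x[lo:lo + self.batch_size], self.y[lo:lo + self.batch_size]

    seq = MySequence(np.random.rand(1000, 32), np.random.rand(1000, 1))

    # Asynchronous batch generation; tf.keras.utils.OrderedEnqueuer is the
    # tf.keras counterpart of keras.utils.data_utils.OrderedEnqueuer
    enqueuer = tf.keras.utils.OrderedEnqueuer(seq, use_multiprocessing=True)
    enqueuer.start(workers=4, max_queue_size=10)
    batches = enqueuer.get()  # generator yielding batches in order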
Thank you!