How can you load all batch data into GPU memory in Keras (Theano backend)?
Keras loads data onto the GPU batch-by-batch (as noted by the author here).

For small datasets, this is very inefficient. Is there a way to modify Keras, or to call Theano functions directly after defining the model in Keras, so that all batches are moved to the GPU up front and training runs on the batches already in GPU memory?
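
For concreteness, here is a minimal sketch of the pure-Theano pattern being alluded to, using a made-up logistic-regression model and made-up shapes: the whole dataset sits in `theano.shared` variables (placed in GPU memory when Theano runs on a GPU device), and each training call only passes a batch index.

```python
# Illustrative only: whole dataset pushed to the device once, training steps
# receive nothing but a batch index. Model, shapes and hyperparameters are
# made up for the sketch.
import numpy as np
import theano
import theano.tensor as T

n_samples, n_features, batch_size = 1024, 20, 32
rng = np.random.RandomState(0)

# Entire dataset stored in shared variables (on the GPU if one is used).
X_shared = theano.shared(rng.randn(n_samples, n_features).astype(theano.config.floatX))
y_shared = theano.shared(rng.randint(0, 2, n_samples).astype('int32'))

# Tiny logistic-regression parameters.
W = theano.shared(np.zeros(n_features, dtype=theano.config.floatX))
b = theano.shared(np.asarray(0.0, dtype=theano.config.floatX))

x = T.matrix('x')
y = T.ivector('y')
p = T.nnet.sigmoid(T.dot(x, W) + b)
cost = T.nnet.binary_crossentropy(p, y).mean()
grads = T.grad(cost, [W, b])
updates = [(param, param - 0.1 * g) for param, g in zip([W, b], grads)]

index = T.lscalar('index')  # the only value transferred per training step
train_step = theano.function(
    [index], cost, updates=updates,
    givens={x: X_shared[index * batch_size:(index + 1) * batch_size],
            y: y_shared[index * batch_size:(index + 1) * batch_size]})

for epoch in range(5):
    for i in range(n_samples // batch_size):
        train_step(i)
```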

(Someone asked the same question on the Keras list a few weeks ago, but it has received no replies so far.)

Extinctive answered 14/8, 2016 at 16:23
Comment from Sall: Did you find a good answer to your question?
Just hard-wire your data into the model as a non-trainable embedding matrix (an Embedding layer with a custom initializer). Then, instead of the training data, you pass a batch of sample indices to model.fit.
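
A minimal sketch of this trick, assuming a dataset X_train of shape (n_samples, n_features) with labels y_train; the Dense layers and hyperparameters are illustrative, and the `weights` argument is used here in place of a custom initializer to load the data into the layer:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Embedding, Flatten, Dense

X_train = np.random.rand(1000, 20).astype('float32')   # stand-in data
y_train = np.random.randint(0, 2, size=(1000, 1))       # stand-in labels

n_samples, n_features = X_train.shape

model = Sequential()
# The whole dataset becomes a frozen embedding matrix, so it sits on the GPU
# together with the model weights instead of being copied batch-by-batch.
model.add(Embedding(input_dim=n_samples, output_dim=n_features,
                    weights=[X_train], input_length=1, trainable=False))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')

# Train on row indices instead of the raw data; each index selects one sample
# from the embedding matrix already resident on the device.
indices = np.arange(n_samples).reshape(-1, 1)
model.fit(indices, y_train, batch_size=32, epochs=5)
```

model.fit still shuffles the index array each epoch, so training behaves as usual; the only constraint is that the whole dataset must fit in device memory alongside the model, which is exactly the small-dataset case the question is about.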

Distinguishing answered 20/7, 2017 at 12:27
