I am pretty new to TensorFlow; I used to use Theano for deep learning development. I notice a difference between the two in where input data can be stored.
In Theano, a shared variable can store input data in GPU memory, reducing data transfer between CPU and GPU.
In TensorFlow, we need to feed data into a placeholder, and the data can come from CPU memory or from files.
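For reference, this is the feeding pattern I mean. A minimal sketch, assuming TensorFlow 2.x with the v1 compatibility API to reproduce the classic placeholder/feed_dict workflow (on a 1.x install, drop the compat prefix):

```python
import numpy as np
import tensorflow as tf

# v1 compatibility layer reproduces the graph-mode placeholder pattern
tf1 = tf.compat.v1
tf1.disable_eager_execution()

x = tf1.placeholder(tf.float32, shape=(None, 4))  # input declared, no data yet
y = tf.reduce_sum(x)

with tf1.Session() as sess:
    data = np.ones((8, 4), dtype=np.float32)       # data starts in host (CPU) memory
    result = sess.run(y, feed_dict={x: data})      # copied to the device on every run
print(result)  # 32.0
```

Every `sess.run` call transfers the fed array from host memory to the device, which is exactly the overhead Theano's shared variables avoid.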
My question is: is it possible to store input data in GPU memory in TensorFlow? Or does it already do this in some magic way?
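To make the question concrete, here is a sketch of the Theano-style approach translated to TensorFlow: keep the whole input set in a variable (device-resident when a GPU is present) and slice minibatches on-device instead of feeding each step. The explicit device string and the GPU-availability check are my assumptions, not part of the original question:

```python
import numpy as np
import tensorflow as tf

data = np.arange(20, dtype=np.float32).reshape(5, 4)

# Pin the variable to the GPU if one is available; analogous to a Theano shared variable
device = "/GPU:0" if tf.config.list_physical_devices("GPU") else "/CPU:0"
with tf.device(device):
    dataset_var = tf.Variable(data, trainable=False)

# Select a minibatch (rows 0 and 2) on the device, avoiding a host round-trip
batch = tf.gather(dataset_var, [0, 2])
print(batch.numpy())
```

This avoids the per-step host-to-device copy, at the cost of the whole dataset having to fit in device memory.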
Thanks.
log_device_placement in the first example you link to shows that the queueing operations generated by tf.train.slice_producer reside on the CPU. Queueing slices on the CPU would seem to negate the advantage of storing the data on the GPU, since the slices would be transferred to the CPU and back. Am I missing something? – Forepeak
tf.data.Dataset.from_tensor_slices and some of the Iterator functionality don't currently have GPU kernels either. That's how I ended up here. – Forepeak