Why was Eigen chosen for TensorFlow? [closed]
The TensorFlow white paper mentions that Eigen is used. Are there public explanations of how Eigen was chosen, and of the motivation for using Eigen in TensorFlow's C++ op kernels?

Memoried answered 7/1, 2017 at 6:3 Comment(9)
Armadillo is also header-only.Seaman
TensorFlow uses the Tensor module of Eigen (which is mostly maintained by the main author of TensorFlow). I don't have any experience with Armadillo, nor do I know why he chose Eigen. I do know that he once asked whether it was possible to integrate TensorFlow as a module of Eigen as well (which we rejected, since that goes well beyond the scope of Eigen).Seethrough
Hi dani, I've treated Armadillo as header-only in the past, and used it that way for many years. It was fine, but there was no matrix inverse without installing BLAS, OpenBLAS, etc. For a project in 2014 I added Eigen just to do a header-only matrix inverse, an odd situation. Recent Armadillo versions seem to have moved away from mentioning header-only use and simply go with a library install, with OpenBLAS etc.Memoried
Hi chtz, that definitely makes sense to me, and fits with the brief comment in the white paper. I've integrated Eigen into my current project, side by side with Armadillo, and will report back here with impressions. As an old-school BLAS/LAPACK guy, Eigen with TensorFlow has the feel of the future. In other words, I suspect my project will be using Eigen within TensorFlow ops; I'll update this discussion soon.Memoried
TensorFlow uses CUDA since it's faster, and the same TF Eigen op implementation can run on both CPU and GPU. From the docs it looks like Armadillo is OpenCL-only.Silvie
Hi Yaroslav, interesting. Let's see if I understand: CUDA is associated with NVIDIA, and they're driving the hardware side of things. So for C++ TF op kernel development, the fact that Eigen supports CUDA is another plus. More evidence that Eigen could be important for my project.Memoried
@NoahSmith Instead of "hi Name" please use @Name. This way a notification is sent to the user.Seethrough
@Seethrough Ah, thanks, understood, will do.Memoried
What is the paper mentioned by the OP?Clearing

I think one of the key features that drove the use of Eigen in the first place is that Eigen ships its own highly optimized matrix-product kernels, whereas all of its competitors have to be linked against some BLAS library. Moreover, Eigen's product-kernel code is C++ with easy access to the low-level internal kernels, so it was 'easy' for them to tweak and extend it to match their needs. This way Google was able to develop the Tensor module with high CPU performance in a purely header-only fashion. Support for CUDA, and now OpenCL via SYCL, came later; those are not intrinsic features of Eigen that drove the initial choice.

Cashbook answered 10/1, 2017 at 8:15 Comment(1)
Thanks, this fits with what I'm seeing as I look further into the Eigen code. The matrix manipulations are highly visible, not hidden away in BLAS. Really interesting, and motivation to dig deeper.Memoried
