C++ Eigen: dynamic tensor
I would like to implement a C++ class that has a vector of tensors as member. The dimensions of the tensors are not predefined but will take values according to some input data. Moreover, the ranks of the tensors can be different. Something like this:

std::vector< TensorXd > myTensors;

In Eigen, however, there is no such TensorXd type for dynamic tensors.

For constructing each tensor, I will read a vector of data std::vector<double> values that represents a tensor of dimension n x n x ... x n (r times). Something like this:

Tensor<double, r> tensor = TensorMap<Tensor<double, r>>(values.data(), std::vector<size_t>(r, n));
myTensors.push_back(tensor);

Is it possible to do that?

Thank you very much in advance for your help!

Update:

As Yaroslav Bulatov pointed out, Eigen does not support dynamic rank and thus the supported ranks have to be written out explicitly. In my code:

#include <iostream>
#include <vector>
#include <Eigen/Dense>
#include <unsupported/Eigen/CXX11/Tensor>

typedef Eigen::Tensor< double , 3 > Tensor3d;
typedef Eigen::Tensor< double , 4 > Tensor4d;
typedef Eigen::Tensor< double , 5 > Tensor5d;
typedef Eigen::Tensor< double , 6 > Tensor6d;
typedef Eigen::Tensor< double , 7 > Tensor7d;
typedef Eigen::Tensor< double , 8 > Tensor8d;
typedef Eigen::Tensor< double , 9 > Tensor9d;
typedef Eigen::Tensor< double , 10 > Tensor10d;

class MyClass
{
private:                          
    Eigen::MatrixXd Potentials_1;            
    std::vector<Eigen::MatrixXd> Potentials_2;  
    std::vector< Tensor3d > Potentials_3;
    std::vector< Tensor4d > Potentials_4;
    std::vector< Tensor5d > Potentials_5;
    std::vector< Tensor6d > Potentials_6;
    std::vector< Tensor7d > Potentials_7;
    std::vector< Tensor8d > Potentials_8;
    std::vector< Tensor9d > Potentials_9;
    std::vector< Tensor10d > Potentials_10;

public:
    MyClass();
    void setPotentials_1(const Eigen::MatrixXd &_Potentials_1){ Potentials_1 = _Potentials_1; }
    void setPotentials_2(const std::vector<Eigen::MatrixXd> &_Potentials_2){ Potentials_2 = _Potentials_2; }
    void setPotentials_3(const std::vector<Tensor3d> &_Potentials_3){ Potentials_3 = _Potentials_3; }
    void setPotentials_4(const std::vector<Tensor4d> &_Potentials_4){ Potentials_4 = _Potentials_4; }
    void setPotentials_5(const std::vector<Tensor5d> &_Potentials_5){ Potentials_5 = _Potentials_5; }
    void setPotentials_6(const std::vector<Tensor6d> &_Potentials_6){ Potentials_6 = _Potentials_6; }
    void setPotentials_7(const std::vector<Tensor7d> &_Potentials_7){ Potentials_7 = _Potentials_7; }
    void setPotentials_8(const std::vector<Tensor8d> &_Potentials_8){ Potentials_8 = _Potentials_8; }
    void setPotentials_9(const std::vector<Tensor9d> &_Potentials_9){ Potentials_9 = _Potentials_9; }
    void setPotentials_10(const std::vector<Tensor10d> &_Potentials_10){ Potentials_10 = _Potentials_10; }
};

Yaroslav also suggested that using macros can help avoid code duplication. I'm not familiar with C++ macros, so any help would be much appreciated.

Thanks for your help!

Demean answered 26/2, 2017 at 22:50
Eigen doesn't support dynamic ranks, so every supported rank has to be written out explicitly, using macros to save on code duplication. See github.com/tensorflow/tensorflow/commit/eaf96c45 for an example of adding support for a couple of extra ranks to ops. — Pest
@YaroslavBulatov Thanks. I'm not familiar with C++ macros. Could you please read the update and tell me how to use macros in my case? Thank you very much! — Demean
In my implementation I overcame the problem by using a new C++17 utility, std::variant, which I used as a kind of templated union.

typedef std::variant<Eigen::Tensor<double, 2>, Eigen::Tensor<double, 3>, /* ... */> TensorOptions;

I defined the alias above for easier reading.

// rank is the order of the tensor
TensorOptions makeTensor(std::size_t rank, const std::initializer_list<Eigen::Index>& dimensions)
{
    std::vector<Eigen::Index> dims(dimensions);
    switch (rank) {

        case 2: {
            Eigen::Tensor<double, 2> T2;
            T2.resize(dims[0], dims[1]);
            return T2;
        }
        case 3: {
            Eigen::Tensor<double, 3> T3;
            T3.resize(dims[0], dims[1], dims[2]);
            return T3;
        }
        /* ... */
    }
    throw std::invalid_argument("unsupported tensor rank");
}
                
int main() {
    auto myTensor{makeTensor(2, {4, 5})};  // 2-D tensor, 4x5
}

I want to point out that to access the tensor's methods it is necessary to use std::visit. Here are a couple of examples:

// Rank
auto rnk = std::visit([](const auto &tensor) { return tensor.rank(); }, myTensor);



// Unfolding
// idxResh and idxSh are dynamic arrays

Eigen::Tensor<double, 2> unfolded = std::visit([&idxResh, &idxSh](auto& tensor) {
    // Shuffle
    auto tensSh = tensor.shuffle(idxSh);
    // Reshape
    Eigen::Tensor<double, 2> tensResh = tensSh.reshape(idxResh);
    return tensResh;
}, myTensor);
Powerless answered 16/9, 2020 at 12:46
Nice. I won't be able to check this any time soon, but it looks neat. +1 — Demean
You can check out the xtensor C++ template library, which supports both dynamic and static dimensionality.

http://xtensor.readthedocs.io/en/latest/

xtensor has an API that is very similar to that of numpy, including vectorization, broadcasting, and universal functions. There is a numpy-to-xtensor cheat sheet here: http://xtensor.readthedocs.io/en/latest/numpy.html

Finally, you can try it live in a C++ Jupyter notebook by clicking on the binder badge at the top of https://github.com/QuantStack/xtensor/

xtensor also comes with bindings for the main languages of scientific computing (R, Julia, Python).

Stuyvesant answered 22/9, 2017 at 20:0
Thanks, nice one! But how does xtensor compare with Eigen tensor in terms of performance? Especially on tensor contraction (which is a key operation in my project). — Demean
There is no systematic benchmark yet. SIMD acceleration was only recently plugged into xtensor. — Stuyvesant
I've checked the docs and it appears that xtensor does not support tensor contraction yet. More specifically, I need to perform the (inner) product of a D-dimensional tensor with (D-1) vectors (so the result is a vector). Is it possible to do that efficiently in xtensor? I guess I should post a separate question. — Demean
There is an xtensor-blas project which exposes some BLAS-level features, but not general tensor contraction yet. At the moment, xtensor is only a few months old, and new features are being added as we speak. — Stuyvesant