I am using the PyTorch C++ frontend and want to create a tensor whose entries are filled with values I specify. To achieve this, one can allocate memory, set the values by hand, and then use torch::from_blob to build a tensor on top of that memory block, but this does not seem clean enough to me.
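For reference, the from_blob route would look roughly like the sketch below; the helper name build_from_blob is just for illustration, and I am assuming calc_tensor_data (the same per-element function used further down) returns a float.

#include <torch/torch.h>
#include <vector>

torch::Tensor build_from_blob()
{
    // Fill a flat buffer by hand, then wrap it in a tensor.
    std::vector<float> buffer(1000 * 1000);
    for (int i = 0; i < 1000; i++)
        for (int j = 0; j < 1000; j++)
            buffer[i * 1000 + j] = calc_tensor_data(i, j);

    // from_blob does not take ownership of the buffer, so clone() to get a
    // tensor that owns its memory before the vector goes out of scope.
    return torch::from_blob(buffer.data(), {1000, 1000}).clone();
}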
At the very bottom of this document I found that I can use the subscript operator to access and modify the data directly. However, this approach has a large runtime overhead, most likely because subscript access wraps each element in a 0-d tensor. The following code takes more than 2 seconds on my machine (at the -O3 optimization level), which is unreasonably slow for a modern CPU.
torch::Tensor tensor = torch::empty({1000, 1000});
for (int i = 0; i < 1000; i++)
{
    for (int j = 0; j < 1000; j++)
    {
        tensor[i][j] = calc_tensor_data(i, j);
    }
}
Is there a clean and fast way to achieve this goal?