When saving a tensor, Torch saves not only the data but also, as you can see, several other pieces of information needed for later deserialisation.
If you need CSV serialisation, you will have to implement it yourself.
Fortunately, this is very straightforward.
Here is a quick example:
require 'torch'

matrix = torch.Tensor(5,3) -- a 5x3 matrix
matrix:random(1,10)        -- matrix initialized with random numbers in [1,10]
print(matrix)              -- let's see the matrix content

-- Create a view on rows 1 to 3, taking columns 2 to 3.
-- The view is a 3x2 matrix; note that its values are bound to the original tensor.
subtensor = matrix[{{1,3}, {2,3}}]

local out = assert(io.open("./dump.csv", "w")) -- open a file for serialization
splitter = ","
for i = 1, subtensor:size(1) do
  for j = 1, subtensor:size(2) do
    out:write(subtensor[i][j])
    if j == subtensor:size(2) then
      out:write("\n")
    else
      out:write(splitter)
    end
  end
end
out:close()
The output on my computer for the matrix is:
10 10 6
4 8 3
3 8 5
5 5 5
1 6 8
[torch.DoubleTensor of size 5x3]
and the dumped file contains:
10,6
8,3
8,5
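Going the other way is just as easy. Here is a minimal sketch of a reader (the `loadCSV` helper is my own name, not part of torch) that parses such a dump back into a Lua table of tables, which `torch.Tensor` accepts directly:

```lua
-- Parse a CSV file of plain numbers (as written by the loop above)
-- into a Lua table of row tables.
local function loadCSV(path, splitter)
  splitter = splitter or ","
  local rows = {}
  for line in io.lines(path) do
    local row = {}
    -- split the line on the separator and convert each field to a number
    for value in string.gmatch(line, "([^" .. splitter .. "]+)") do
      table.insert(row, tonumber(value))
    end
    table.insert(rows, row)
  end
  return rows
end

-- Usage: torch.Tensor accepts a table of tables, so the round trip is simply
--   local rows = loadCSV("./dump.csv")
--   local restored = torch.Tensor(rows)
```

Note this simple splitter assumes purely numeric fields with no quoting or embedded separators, which is exactly what the writer above produces.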
HTH