PyTorch RuntimeError: [enforce fail at CPUAllocator.cpp:56] posix_memalign(&data, gAlignment, nbytes) == 0. 12 vs 0
I'm building a simple content-based recommendation system. To compute the cosine similarity in a GPU-accelerated way, I'm using PyTorch.

When creating the TF-IDF vocabulary tensor from a csr_matrix, it prompts the following RuntimeError:

RuntimeError: [enforce fail at CPUAllocator.cpp:56] posix_memalign(&data, gAlignment, nbytes) == 0. 12 vs 0

I'm doing it this way:

import numpy as np
import torch

coo = tfidf_matrix.tocoo()
values = coo.data
indices = np.vstack((coo.row, coo.col))
i = torch.LongTensor(indices)
v = torch.FloatTensor(values)
tfidf_matrix_tensor = torch.sparse.FloatTensor(i, v, torch.Size(coo.shape)).to_dense()
# Prompts the error

I tried with a small test dataset (TF-IDF matrix size = 10,296) and it works. The TF-IDF matrix size of the real dataset is (27639, 226957).

Forgive answered 23/5, 2019 at 10:13 Comment(1)
This is likely a bug in PyTorch and is better resolved by asking on GitHub issues. – Detonate

I tried the same piece of code that was throwing this error with an older version of PyTorch. It said that I needed more RAM, so it's not a PyTorch bug. The only solution is to reduce the matrix size somehow.
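A quick back-of-the-envelope sketch (my own illustration, assuming float32 elements and the matrix shape reported in the question) shows why the dense conversion runs out of memory:

rows, cols = 27639, 226957
bytes_needed = rows * cols * 4                # 4 bytes per float32 element
print(f"{bytes_needed / 1024**3:.1f} GiB")    # ~23.4 GiB for the dense tensor alone

The small test matrix fits easily, but calling .to_dense() on the real one asks for far more memory than most machines have.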

Ineslta answered 29/5, 2019 at 12:4 Comment(0)

I was having the same issue when converting small NumPy matrices, and the fix was using torch.tensor instead of torch.Tensor. I'd imagine that once you do that, you can cast to the specific type of tensor you want.
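A minimal sketch of what this answer describes (the array here is just an example, not from the original post):

import numpy as np
import torch

arr = np.array([[0.1, 0.2], [0.3, 0.4]])   # example NumPy data

t = torch.tensor(arr)   # torch.tensor infers the dtype from the array (float64 here)
t = t.float()           # cast afterwards if a float32 FloatTensor is needed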

Jamille answered 11/8, 2021 at 20:28 Comment(0)

A bit tangential, but in my case I ran into this issue while running the DGL implementation of GraphSAGE. I was working on a Twitter network graph of around 10 million nodes and was using the raw Twitter user ID as my node ID. I realised that the CPU was running out of memory when trying to map these to the long dtype, so I remapped my IDs to a contiguous range starting from 0 (a sketch is below), and then this issue was resolved.
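A minimal sketch of the remapping described here (the IDs are made up, and np.unique is just one way to build a contiguous mapping):

import numpy as np

# Hypothetical raw Twitter user IDs used as node IDs (large, sparse 64-bit values)
raw_ids = np.array([1349871234987, 23, 998877665544, 23], dtype=np.int64)

# Map each distinct raw ID to a contiguous index 0..N-1
unique_ids, node_ids = np.unique(raw_ids, return_inverse=True)
print(node_ids)   # [2 0 1 0]; unique_ids keeps the reverse mapping if needed later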

Hg answered 27/3, 2022 at 20:58 Comment(0)
