TypeError: object of type 'numpy.int64' has no len()

I am making a DataLoader from a Dataset in PyTorch.

I start by loading the DataFrame, with every column's dtype set to np.float64:

import pandas as pd

result = pd.read_csv('dummy.csv', header=0, dtype=DTYPE_CLEANED_DF)

Here is my dataset class.

import torch
from torch.utils.data import Dataset, DataLoader

class MyDataset(Dataset):
    def __init__(self, result):
        # Use every column except 'classes' as features
        headers = list(result)
        headers.remove('classes')

        self.x_data = result[headers]
        self.y_data = result['classes']
        self.len = self.x_data.shape[0]

    def __getitem__(self, index):
        x = torch.tensor(self.x_data.iloc[index].values, dtype=torch.float)
        y = torch.tensor(self.y_data.iloc[index], dtype=torch.float)
        return (x, y)

    def __len__(self):
        return self.len

Prepare the train_loader and test_loader:

full_dataset = MyDataset(result)

train_size = int(0.5 * len(full_dataset))
test_size = len(full_dataset) - train_size
train_dataset, test_dataset = torch.utils.data.random_split(full_dataset, [train_size, test_size])

train_loader = DataLoader(dataset=train_dataset, batch_size=16, shuffle=True, num_workers=1)
test_loader = DataLoader(dataset=test_dataset)

When I try to iterate over the train_loader, it raises this error:

for i, (data, target) in enumerate(train_loader):
    print(i)

TypeError                                 Traceback (most recent call last)
<ipython-input-32-0b4921c3fe8c> in <module>
----> 1 for i , (data, target) in enumerate(train_loader):
      2     print(i)

/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py in __next__(self)
    635                 self.reorder_dict[idx] = batch
    636                 continue
--> 637             return self._process_next_batch(batch)
    638 
    639     next = __next__  # Python 2 compatibility

/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py in _process_next_batch(self, batch)
    656         self._put_indices()
    657         if isinstance(batch, ExceptionWrapper):
--> 658             raise batch.exc_type(batch.exc_msg)
    659         return batch
    660 

TypeError: Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 138, in _worker_loop
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 138, in <listcomp>
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataset.py", line 103, in __getitem__
    return self.dataset[self.indices[idx]]
  File "<ipython-input-27-107e03bc3c6a>", line 12, in __getitem__
    x = torch.tensor(self.x_data.iloc[index].values, dtype=torch.float)
  File "/opt/conda/lib/python3.6/site-packages/pandas/core/indexing.py", line 1478, in __getitem__
    return self._getitem_axis(maybe_callable, axis=axis)
  File "/opt/conda/lib/python3.6/site-packages/pandas/core/indexing.py", line 2091, in _getitem_axis
    return self._get_list_axis(key, axis=axis)
  File "/opt/conda/lib/python3.6/site-packages/pandas/core/indexing.py", line 2070, in _get_list_axis
    return self.obj._take(key, axis=axis)
  File "/opt/conda/lib/python3.6/site-packages/pandas/core/generic.py", line 2789, in _take
    verify=True)
  File "/opt/conda/lib/python3.6/site-packages/pandas/core/internals.py", line 4537, in take
    new_labels = self.axes[axis].take(indexer)
  File "/opt/conda/lib/python3.6/site-packages/pandas/core/indexes/base.py", line 2195, in take
    return self._shallow_copy(taken)
  File "/opt/conda/lib/python3.6/site-packages/pandas/core/indexes/range.py", line 267, in _shallow_copy
    return self._int64index._shallow_copy(values, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/pandas/core/indexes/numeric.py", line 68, in _shallow_copy
    return self._shallow_copy_with_infer(values=values, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/pandas/core/indexes/base.py", line 538, in _shallow_copy_with_infer
    if not len(values) and 'dtype' not in kwargs:
TypeError: object of type 'numpy.int64' has no len()

Related issues:
https://github.com/pytorch/pytorch/issues/10165
https://github.com/pytorch/pytorch/pull/9237
https://github.com/pandas-dev/pandas/issues/21946

Question:
How can I work around the pandas issue here?

Cantabrigian answered 24/12, 2018 at 18:13

Comments:
Try looking at the shape of train_loader using train_loader.shape. Most probably, there is some issue with the number of entries. – Domino
@Bazingaa ['_DataLoader__initialized', 'batch_sampler', 'batch_size', 'collate_fn', 'dataset', 'drop_last', 'num_workers', 'pin_memory', 'sampler', 'timeout', 'worker_init_fn'] It does not have a shape attribute. – Cantabrigian
Your problem is caused by this line: x = torch.tensor(self.x_data.iloc[index].values, dtype=torch.float). I guess more precisely it is caused by calling .values, but I'm no expert in pandas, so this doesn't seem to have anything to do with PyTorch itself. I added the pandas tag to your question; I guess someone there will be able to tell you exactly what the problem is. – Nerta
@blue-phoenox same error – Cantabrigian

Reference:
https://github.com/pytorch/pytorch/issues/9211

Just add .tolist() to the indices line of random_split:

from torch import randperm
from torch._utils import _accumulate
from torch.utils.data import Subset

def random_split(dataset, lengths):
    """
    Randomly split a dataset into non-overlapping new datasets of given lengths.
    Arguments:
        dataset (Dataset): Dataset to be split
        lengths (sequence): lengths of splits to be produced
    """
    if sum(lengths) != len(dataset):
        raise ValueError("Sum of input lengths does not equal the length of the input dataset!")

    # .tolist() turns the torch.Tensor of indices into plain Python ints,
    # which pandas .iloc can handle
    indices = randperm(sum(lengths)).tolist()
    return [Subset(dataset, indices[offset - length:offset])
            for offset, length in zip(_accumulate(lengths), lengths)]
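
With this patched random_split in scope, the split from the question works unchanged; a minimal sketch, assuming full_dataset is the MyDataset instance defined in the question:

train_size = int(0.5 * len(full_dataset))
test_size = len(full_dataset) - train_size
train_dataset, test_dataset = random_split(full_dataset, [train_size, test_size])  # patched version above

# pandas .iloc now receives plain Python ints, so iteration works
for i, (data, target) in enumerate(DataLoader(train_dataset, batch_size=16)):
    print(i)
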
Cantabrigian answered 25/12, 2018 at 11:04

I think the issue is that after using random_split, index is now a torch.Tensor rather than an int. I found that adding a quick type check to __getitem__ and then using .item() on the tensor works for me:

def __getitem__(self, index):
    # random_split can hand __getitem__ a 0-dim torch.Tensor instead of an int;
    # .item() extracts the plain Python integer that pandas .iloc expects
    if isinstance(index, torch.Tensor):
        index = index.item()

    x = torch.tensor(self.x_data.iloc[index].values, dtype=torch.float)
    y = torch.tensor(self.y_data.iloc[index], dtype=torch.float)
    return (x, y)

Source: https://discuss.pytorch.org/t/issues-with-torch-utils-data-random-split/22298/8

Hoxha answered 4/1, 2019 at 8:10

Why not simply try:

self.len = len(self.x_data)

len() works fine on a pandas DataFrame without converting it to an array or tensor.

Cottonmouth answered 26/12, 2018 at 20:26

I solved the issue by upgrading PyTorch to version 1.3.

https://pytorch.org/get-started/locally/
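
To check which version you are running before upgrading (a quick sketch; recent releases already build the split indices with .tolist(), as in the patched random_split above):

import torch
print(torch.__version__)  # this answer reports that 1.3 no longer shows the error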

Claustral answered 21/10, 2019 at 3:23

I have 2298 images in total. So if I compute the split lengths this way:

[int(len(data)*0.8), int(len(data)*0.2)]

it throws the error mentioned in the question, because both casts truncate:

int(len(data)*0.8) + int(len(data)*0.2) = 2297

So what I do is use the floor and ceil functions:

[int(np.floor(len(data)*0.8)), int(np.ceil(len(data)*0.2))]

The lengths now sum to 2298 and the error is gone.
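
A quick check of the arithmetic (a standalone sketch; n stands in for len(data)):

import numpy as np

n = 2298
print(int(n * 0.8) + int(n * 0.2))                     # 2297: both casts truncate
print(int(np.floor(n * 0.8)) + int(np.ceil(n * 0.2)))  # 2298: matches the dataset size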

Ministration answered 14/7, 2020 at 7:15

In my script, I first create a TensorDataset with dataset = TensorDataset(data_x, data_y) and then use train_dataset, test_dataset = torch.utils.data.random_split(dataset, [train_size, test_size]). This does not cause a problem in later training iterations.
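
A minimal sketch of this approach, assuming result is the DataFrame from the question; since TensorDataset indexes tensors directly, pandas never sees the indices produced by random_split:

import torch
from torch.utils.data import TensorDataset, DataLoader

# Convert the DataFrame columns to tensors up front
data_x = torch.tensor(result.drop(columns=['classes']).values, dtype=torch.float)
data_y = torch.tensor(result['classes'].values, dtype=torch.float)

dataset = TensorDataset(data_x, data_y)
train_size = int(0.5 * len(dataset))
test_size = len(dataset) - train_size
train_dataset, test_dataset = torch.utils.data.random_split(dataset, [train_size, test_size])
train_loader = DataLoader(dataset=train_dataset, batch_size=16, shuffle=True)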

Uticas answered 6/3, 2021 at 4:30

What I like to do is split the data into two DataFrames like this:

from sklearn.model_selection import train_test_split

train, test = train_test_split(full_dataset, test_size=0.2)

Then create loaders from the two datasets like this:

train_loader = DataLoader(dataset=train, batch_size=16, shuffle=True, num_workers=1)
test_loader = DataLoader(dataset=test)

I think it is the cleanest way to do this.
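
One caveat: train and test here are still plain DataFrames. To use them with the DataLoader, you would wrap each split in a Dataset first; a sketch using the MyDataset class from the question:

# train and test are DataFrames; wrap them so __getitem__ returns tensors
train_loader = DataLoader(dataset=MyDataset(train), batch_size=16, shuffle=True, num_workers=1)
test_loader = DataLoader(dataset=MyDataset(test))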

Vandusen answered 14/9, 2021 at 13:20