OverflowError: Python int too large to convert to C long in torchtext.datasets.text_classification.DATASETS['AG_NEWS']()
I have a 64-bit Windows 10 OS. I have installed Python 3.6.8 and installed torch and torchtext using pip; the torch version is 1.2.0.

I am trying to load the AG_NEWS dataset using the code below:

import torch
import torchtext
from torchtext.datasets import text_classification
NGRAMS = 2
import os
if not os.path.isdir('./.data'):
    os.mkdir('./.data')
train_dataset, test_dataset = text_classification.DATASETS['AG_NEWS'](root='./.data', ngrams=NGRAMS, vocab=None)

On the last statement of the above code, I get the error below:

---------------------------------------------------------------------------
OverflowError                             Traceback (most recent call last)
<ipython-input-1-7e8544fdaaf6> in <module>
      6 if not os.path.isdir('./.data'):
      7     os.mkdir('./.data')
----> 8 train_dataset, test_dataset = text_classification.DATASETS['AG_NEWS'](root='./.data', ngrams=NGRAMS, vocab=None)
      9 # BATCH_SIZE = 16
     10 # device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

c:\users\pramodp\appdata\local\programs\python\python36\lib\site-packages\torchtext\datasets\text_classification.py in AG_NEWS(*args, **kwargs)
    168     """
    169 
--> 170     return _setup_datasets(*(("AG_NEWS",) + args), **kwargs)
    171 
    172 

c:\users\pramodp\appdata\local\programs\python\python36\lib\site-packages\torchtext\datasets\text_classification.py in _setup_datasets(dataset_name, root, ngrams, vocab, include_unk)
    126     if vocab is None:
    127         logging.info('Building Vocab based on {}'.format(train_csv_path))
--> 128         vocab = build_vocab_from_iterator(_csv_iterator(train_csv_path, ngrams))
    129     else:
    130         if not isinstance(vocab, Vocab):

c:\users\pramodp\appdata\local\programs\python\python36\lib\site-packages\torchtext\vocab.py in build_vocab_from_iterator(iterator)
    555     counter = Counter()
    556     with tqdm(unit_scale=0, unit='lines') as t:
--> 557         for tokens in iterator:
    558             counter.update(tokens)
    559             t.update(1)

c:\users\pramodp\appdata\local\programs\python\python36\lib\site-packages\torchtext\datasets\text_classification.py in _csv_iterator(data_path, ngrams, yield_cls)
     33     with io.open(data_path, encoding="utf8") as f:
     34         reader = unicode_csv_reader(f)
---> 35         for row in reader:
     36             tokens = ' '.join(row[1:])
     37             tokens = tokenizer(tokens)

c:\users\pramodp\appdata\local\programs\python\python36\lib\site-packages\torchtext\utils.py in unicode_csv_reader(unicode_csv_data, **kwargs)
    128             maxInt = int(maxInt / 10)
    129 
--> 130     csv.field_size_limit(sys.maxsize)
    131 
    132     if six.PY2:

OverflowError: Python int too large to convert to C long

I think the issue is with either Windows or torchtext, because I get the same error for the code below as well:

pos = data.TabularDataset(
    path='data/pos/pos_wsj_train.tsv', format='tsv',
    fields=[('text', data.Field()), ('labels', data.Field())])

Can somebody please help? Notably, the file does not contain any large numerical values.
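For context, the traceback ends in csv.field_size_limit(sys.maxsize). On a 64-bit Python build, sys.maxsize is 2**63 - 1, but on Windows a C long is only 32 bits even on 64-bit systems, so that call overflows regardless of the file's contents. A minimal sketch reproducing the platform difference, independent of torchtext:

```python
import csv
import ctypes
import sys

# On Windows a C long is 32 bits even under 64-bit Python, so
# sys.maxsize (2**63 - 1) does not fit and csv.field_size_limit()
# raises "OverflowError: Python int too large to convert to C long".
print("C long width:", ctypes.sizeof(ctypes.c_long) * 8, "bits")
print("sys.maxsize: ", sys.maxsize)

try:
    csv.field_size_limit(sys.maxsize)
    print("no overflow on this platform")
except OverflowError as exc:
    print("overflow:", exc)
```

On most Linux builds the call succeeds; on Windows it raises, which is why the error has nothing to do with the data itself.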

Toowoomba answered 18/9, 2019 at 9:0 Comment(0)

I encountered a similar problem. I changed one line in my torchtext\utils.py file, and the error disappeared.

Changed this:

csv.field_size_limit(sys.maxsize)

To this:

csv.field_size_limit(maxInt)
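For reference, this works because unicode_csv_reader already computes maxInt with a back-off loop (visible in the traceback above) but then passes sys.maxsize instead of the computed value. The pattern the fix relies on can be sketched standalone; the function name here is illustrative, not part of torchtext:

```python
import csv
import sys

def safe_csv_field_size_limit():
    """Raise the csv field size limit to the largest value the
    platform's C long accepts, backing off on OverflowError."""
    max_int = sys.maxsize
    while True:
        try:
            csv.field_size_limit(max_int)
            return max_int
        except OverflowError:
            # Too big for a C long on this platform; try a smaller value.
            max_int //= 10

limit = safe_csv_field_size_limit()
print("csv field size limit set to", limit)
```

Passing the backed-off maxInt instead of sys.maxsize is exactly what the one-line change above does, which is why it fixes Windows without affecting Linux.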
Kg answered 23/9, 2019 at 23:4 Comment(1)
Worked! I just had to restart. I have Python 3.7.3, Windows 10, 64 bit. – Icken
