import csv with different number of columns per row using Pandas
Asked Answered
M

7

38

What is the best approach for importing a CSV that has a different number of columns for each row, using Pandas or the CSV module, into a Pandas DataFrame?

"H","BBB","D","Ajxxx Dxxxs"
"R","1","QH","DTR"," "," ","spxxt rixxls, raxxxd","1"

Using this code:

import pandas as pd
data = pd.read_csv("smallsample.txt",header = None)

the following error is generated

Error tokenizing data. C error: Expected 4 fields in line 2, saw 8
Macbeth answered 19/11, 2014 at 15:10 Comment(2)
There may be a pandas way to get around this, but you could also write a quick script to append some ,"" for each column that a line is lacking. By lacking, I mean compared to the row with the most columns.Magen
@NicholasFlees: using range(n) in pd.read_csv() function, where n is the number of columns required does the job, without needing to append ,"" empty strings equivalent to number of columns requiredPitfall
K
39

Supplying a list of column names to read_csv() should do the trick.

ex: names=['a', 'b', 'c', 'd', 'e']

https://github.com/pydata/pandas/issues/2981

Edit: if you don't want to supply column names, then do what Nicholas suggested.
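Putting that together, a minimal sketch using the question's sample data (the column names `a` through `h` are arbitrary; any list at least as long as the widest row works):

```python
import io
import pandas as pd

# The question's ragged sample, inlined here for a self-contained example
csv_data = (
    '"H","BBB","D","Ajxxx Dxxxs"\n'
    '"R","1","QH","DTR"," "," ","spxxt rixxls, raxxxd","1"\n'
)

# Supplying enough names up front lets the parser accept ragged rows;
# short rows are padded with NaN on the right
names = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h']
df = pd.read_csv(io.StringIO(csv_data), header=None, names=names)
print(df.shape)  # (2, 8)
```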

Kielce answered 19/11, 2014 at 15:43 Comment(0)
P
27

You can dynamically generate column names as simple counters (0, 1, 2, etc).

Dynamically generate column names

import pandas

# Input
data_file = "smallsample.txt"

# Delimiter
data_file_delimiter = ','

# The max column count a line in the file could have
largest_column_count = 0

# Loop the data lines
with open(data_file, 'r') as temp_f:
    # Read the lines
    lines = temp_f.readlines()

    for l in lines:
        # Count the columns in the current line
        column_count = len(l.split(data_file_delimiter))

        # Track the largest column count seen so far
        largest_column_count = max(largest_column_count, column_count)

# Generate column names (will be 0, 1, 2, ..., largest_column_count - 1)
column_names = list(range(largest_column_count))

# Read csv
df = pandas.read_csv(data_file, header=None, delimiter=data_file_delimiter, names=column_names)
# print(df)

Missing values will be assigned to the columns which your CSV lines don't have a value for.

Palaeozoic answered 19/10, 2018 at 9:59 Comment(1)
Just minor issue. Do you really have to do temp_f.close()? The with statement should automatically close it for you.Bezoar
C
16

A polished version of the answer above follows. It works, but remember that it inserts a lot of missing values into the DataFrame.

import pandas as pd

### Loop the data lines
with open("smallsample.txt", 'r') as temp_f:
    # get the number of columns in each line
    col_count = [ len(l.split(",")) for l in temp_f.readlines() ]

### Generate column names (names will be 0, 1, 2, ..., maximum columns - 1)
column_names = [i for i in range(0, max(col_count))]

### Read csv
df = pd.read_csv("smallsample.txt", header=None, delimiter=",", names=column_names)
Cohbert answered 6/9, 2019 at 14:42 Comment(0)
M
14

If you want something really concise without explicitly giving column names, you could do this:

  • Make a one column DataFrame with each row being a line in the .csv file
  • Split each row on commas and expand the DataFrame
  • Reassign the expanded result back to df
df = pd.read_fwf('<filename>.csv', header=None)
df = df[0].str.split(',', expand=True)
Mindi answered 3/10, 2019 at 22:3 Comment(2)
Nailed it, thanks. Here's the doc link for anyone else who wasn't aware of this function. pandas.pydata.org/pandas-docs/stable/reference/api/…Jannette
Be warned that this only works for very basic CSV files without commas inside quoted strings. So a,b,"c,d,e",f would be interpreted as a b "c d e" f rather than a b c,d,e fHappen
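As the comment above notes, a plain split breaks quoted fields containing commas. A sketch that avoids this by letting the stdlib csv module do the tokenizing, since it understands quoting (the sample string here is hypothetical):

```python
import csv
import io
import pandas as pd

# Hypothetical sample with a quoted field that contains commas
raw = 'a,b,"c,d,e",f\ng,h\n'

# csv.reader respects quoting, unlike str.split(',')
rows = list(csv.reader(io.StringIO(raw)))

# Ragged rows are padded with missing values on the right
df = pd.DataFrame(rows)
print(df.shape)  # (2, 4)
```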
P
14

Error tokenizing data. C error: Expected 4 fields in line 2, saw 8

The error gives a clue to the solution: "Expected 4 fields in line 2, saw 8" means the second row has 8 fields while the first row has only 4.

import pandas as pd
# inside range set the maximum value you can see in "Expected 4 fields in line 2, saw 8"
# here will be 8 
data = pd.read_csv("smallsample.txt",header = None,names=range(8))

Use range instead of manually listing names, which becomes cumbersome when you have many columns.

You can use shantanu pathak's method to find longest row length in your data.

Additionally you can fill up the NaN values with 0, if you need to use even data length. Eg. for clustering (k-means)

new_data = data.fillna(0)
Pontificals answered 16/2, 2020 at 9:46 Comment(0)
G
3

We could even use the pd.read_table() method to read the CSV file into a single-column DataFrame, whose rows can then be split on ','.
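A sketch of that approach, assuming the data contains no tab characters (read_table's default delimiter), so each line lands in a single field:

```python
import io
import pandas as pd

# Hypothetical ragged data; no tabs, so read_table keeps each line as one field
raw = "H,BBB,D,Dxxxs\nR,1,QH,DTR,x,y,sp,1\n"

# Single-column DataFrame: one row per input line
df = pd.read_table(io.StringIO(raw), header=None)

# Split each line on commas and expand the pieces into columns;
# shorter rows are padded with missing values on the right
df = df[0].str.split(",", expand=True)
print(df.shape)  # (2, 8)
```

The same caveat as with read_fwf applies: this naive split mishandles commas inside quoted fields.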

Glottic answered 11/9, 2017 at 6:10 Comment(0)
H
2

Manipulate your CSV so that the first row is the one with the most elements; all subsequent rows then have fewer elements. Pandas will create as many columns as the first row has.

Hospitalization answered 3/11, 2021 at 15:52 Comment(2)
This isn't a pragmatic thing to do, neither does this answer even affirm to programming - it's just a hack.Pitfall
agree. it is just a hack.Hospitalization
