I'm trying to analyse the weather records for New York, using the daily data taken from here: http://cdiac.ornl.gov/epubs/ndp/ushcn/daily_doc.html
I'm loading the data with:
import io, pandas
tf = pandas.read_fwf(io.open('state30_NY.txt'), widths=widths, names=names, na_values=['-9999'])
Where:
>>> widths
[6, 4, 2, 4, 5, 1, 1, 1, 5, 1, 1, 1, 5, 1, 1, 1, 5, 1, 1, 1, 5, 1, 1, 1, 5, 1, 1, 1, 5, 1, 1, 1, 5, 1, 1, 1, 5, 1, 1, 1, 5, 1, 1, 1, 5, 1, 1, 1, 5, 1, 1, 1, 5, 1, 1, 1, 5, 1, 1, 1, 5, 1, 1, 1, 5, 1, 1, 1, 5, 1, 1, 1, 5, 1, 1, 1, 5, 1, 1, 1, 5, 1, 1, 1, 5, 1, 1, 1, 5, 1, 1, 1, 5, 1, 1, 1, 5, 1, 1, 1, 5, 1, 1, 1, 5, 1, 1, 1, 5, 1, 1, 1, 5, 1, 1, 1, 5, 1, 1, 1, 5, 1, 1, 1, 5, 1, 1, 1]
>>> names
['COOP', 'YEAR', 'MONTH', 'ELEMENT', 'VALUE1', 'MFLAG1', 'QFLAG1', 'SFLAG1', 'VALUE2', 'MFLAG2', 'QFLAG2', 'SFLAG2', 'VALUE3', 'MFLAG3', 'QFLAG3', 'SFLAG3', 'VALUE4', 'MFLAG4', 'QFLAG4', 'SFLAG4', 'VALUE5', 'MFLAG5', 'QFLAG5', 'SFLAG5', 'VALUE6', 'MFLAG6', 'QFLAG6', 'SFLAG6', 'VALUE7', 'MFLAG7', 'QFLAG7', 'SFLAG7', 'VALUE8', 'MFLAG8', 'QFLAG8', 'SFLAG8', 'VALUE9', 'MFLAG9', 'QFLAG9', 'SFLAG9', 'VALUE10', 'MFLAG10', 'QFLAG10', 'SFLAG10', 'VALUE11', 'MFLAG11', 'QFLAG11', 'SFLAG11', 'VALUE12', 'MFLAG12', 'QFLAG12', 'SFLAG12', 'VALUE13', 'MFLAG13', 'QFLAG13', 'SFLAG13', 'VALUE14', 'MFLAG14', 'QFLAG14', 'SFLAG14', 'VALUE15', 'MFLAG15', 'QFLAG15', 'SFLAG15', 'VALUE16', 'MFLAG16', 'QFLAG16', 'SFLAG16', 'VALUE17', 'MFLAG17', 'QFLAG17', 'SFLAG17', 'VALUE18', 'MFLAG18', 'QFLAG18', 'SFLAG18', 'VALUE19', 'MFLAG19', 'QFLAG19', 'SFLAG19', 'VALUE20', 'MFLAG20', 'QFLAG20', 'SFLAG20', 'VALUE21', 'MFLAG21', 'QFLAG21', 'SFLAG21', 'VALUE22', 'MFLAG22', 'QFLAG22', 'SFLAG22', 'VALUE23', 'MFLAG23', 'QFLAG23', 'SFLAG23', 'VALUE24', 'MFLAG24', 'QFLAG24', 'SFLAG24', 'VALUE25', 'MFLAG25', 'QFLAG25', 'SFLAG25', 'VALUE26', 'MFLAG26', 'QFLAG26', 'SFLAG26', 'VALUE27', 'MFLAG27', 'QFLAG27', 'SFLAG27', 'VALUE28', 'MFLAG28', 'QFLAG28', 'SFLAG28', 'VALUE29', 'MFLAG29', 'QFLAG29', 'SFLAG29', 'VALUE30', 'MFLAG30', 'QFLAG30', 'SFLAG30', 'VALUE31', 'MFLAG31', 'QFLAG31', 'SFLAG31']
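(For reference, since the widths and names just repeat the same value/flag group 31 times after the four header fields, they can also be built programmatically. A sketch equivalent to the literals above:)

# Equivalent to the literals above: 4 header fields, then 31 value/flag groups
widths = [6, 4, 2, 4] + [5, 1, 1, 1] * 31
names = ['COOP', 'YEAR', 'MONTH', 'ELEMENT'] + [
    '{}{}'.format(kind, day)
    for day in range(1, 32)
    for kind in ('VALUE', 'MFLAG', 'QFLAG', 'SFLAG')
]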
Now, the issue I have is that when reading in the data, there seem to be a lot of inf values, and those shouldn't be in the source data (the nearest thing in the data are -9999 values, which represent invalid data).

Normally, if I were using lists or the like, I would print out the whole thing to find alignment errors, work out which rows are affected, and then look at the source file to see what's happening. I'd like to know how to do the equivalent in pandas so I can figure out where these inf values are coming from.
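The sort of check I have in mind is something like this (a rough sketch, assuming numpy is available and that only the numeric columns can hold inf):

import numpy as np

# Mask of inf entries across the numeric columns only
numeric = tf.select_dtypes(include=[np.number])
inf_mask = np.isinf(numeric)

print(inf_mask.any())              # which columns contain any inf
print(tf[inf_mask.any(axis=1)])    # the rows where an inf appears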
Here's the code which shows me the infs:
>>> tf[tf['ELEMENT']=='TMIN'].min()
COOP 300023
YEAR 1876
MONTH 1
ELEMENT TMIN
VALUE1 -38
MFLAG1 inf
QFLAG1 inf
SFLAG1 inf
VALUE2 -34
MFLAG2 inf
QFLAG2 inf
SFLAG2 inf
VALUE3 -38
MFLAG3 inf
QFLAG3 inf
...
MFLAG28 inf
QFLAG28 inf
SFLAG28 inf
VALUE29 -46
MFLAG29 inf
QFLAG29 inf
SFLAG29 inf
VALUE30 -57
MFLAG30 inf
QFLAG30 inf
SFLAG30 inf
VALUE31 -40
MFLAG31 inf
QFLAG31 inf
SFLAG31 inf
Length: 128, dtype: object
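To narrow this down, I could also look at how those flag columns were actually parsed, e.g. (a sketch, not output from my session):

print(tf.dtypes.value_counts())        # how many columns came out as float vs. object
print(tf.filter(like='MFLAG').head())  # peek at a few of the flag columns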
Edit: Corrected column widths. Problem still remains.

sum(widths) adds up to the right number of columns. However, the problem still persists even with that change. – Elburt
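(For reference, the check mentioned in that comment, using the widths listed above:)

>>> sum(widths)   # 6+4+2+4 header fields plus 31 groups of (5+1+1+1)
264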