Understanding pandas.read_csv() float parsing
I am having problems reading probabilities from a CSV file using pandas.read_csv; some of the values are read as floats strictly greater than 1.0.

Specifically, I am confused about the following behavior:

>>> pandas.read_csv(io.StringIO("column\n0.99999999999999998"))["column"][0]
1.0
>>> pandas.read_csv(io.StringIO("column\n0.99999999999999999"))["column"][0]
1.0000000000000002
>>> pandas.read_csv(io.StringIO("column\n1.00000000000000000"))["column"][0]
1.0
>>> pandas.read_csv(io.StringIO("column\n1.00000000000000001"))["column"][0]
1.0
>>> pandas.read_csv(io.StringIO("column\n1.00000000000000008"))["column"][0]
1.0
>>> pandas.read_csv(io.StringIO("column\n1.00000000000000009"))["column"][0]
1.0000000000000002

The default float-parsing behavior seems to be non-monotonic; in particular, some values starting with 0.9... are converted to floats strictly greater than 1.0, which causes problems, e.g. when feeding them into sklearn.metrics.

The documentation states that read_csv has a parameter float_precision that can be used to select “which converter the C engine should use for floating-point values”, and setting this to 'high' indeed solves my problem.
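
For reference, a minimal demonstration of the fix (the expected outputs follow from the behavior shown above):

>>> import io, pandas
>>> s = "column\n0.99999999999999999"
>>> pandas.read_csv(io.StringIO(s))["column"][0]  # default converter
1.0000000000000002
>>> pandas.read_csv(io.StringIO(s), float_precision='high')["column"][0]
1.0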

However, I would like to understand the default behavior:

  1. Where can I find the source code of the default float converter?
  2. Where can I find documentation on the intended behavior of the default float converter and the other possible choices?
  3. Why does a single-digit change in the least significant position skip over a value?
  4. Why does this behave non-monotonically at all?

Edit regarding “duplicate question”: This is not a duplicate. I am aware of the limitations of floating-point math. I was specifically asking about the default parsing mechanism in Pandas, since the built-in float does not show this behavior:

>>> float("0.99999999999999999")
1.0

...and I could not find documentation.

Kendry asked 29/5, 2017 at 17:7 Comment(3)
Possible duplicate of Is floating point math broken? (Cubage)
@pvg, this is not a dupe. OP has carefully demonstrated non-monotonic behavior, and is asking for some illumination so that they can work with it productively. (Laryngoscope)
@StephenRauch it's either a dupe or there is a related dupe. Look at the desired precision and the results: the differences are all smaller than the epsilon of an IEEE double. It's not really a sensible place to operate. (Cubage)
@MaxU already showed the source code for the parser and the relevant tokenizer xstrtod, so I'll focus on the "why" part:

The code for xstrtod is roughly like this (translated to pure Python):

def xstrtod(p):
    """Rough pure-Python translation of pandas' default xstrtod converter."""
    number = 0.
    idx = 0
    ndecimals = 0

    # accumulate the integer part, digit by digit, in a float
    while idx < len(p) and p[idx].isdigit():
        number = number * 10. + int(p[idx])
        idx += 1

    # skip the decimal point
    idx += 1

    # keep accumulating the fractional digits into the same float
    while idx < len(p) and p[idx].isdigit():
        number = number * 10. + int(p[idx])
        idx += 1
        ndecimals += 1

    # a single division at the end shifts the decimal point back
    return number / 10**ndecimals

This reproduces the "problem" you saw:

print(xstrtod('0.99999999999999997'))  # 1.0
print(xstrtod('0.99999999999999998'))  # 1.0
print(xstrtod('0.99999999999999999'))  # 1.0000000000000002
print(xstrtod('1.00000000000000000'))  # 1.0
print(xstrtod('1.00000000000000001'))  # 1.0
print(xstrtod('1.00000000000000002'))  # 1.0
print(xstrtod('1.00000000000000003'))  # 1.0
print(xstrtod('1.00000000000000004'))  # 1.0
print(xstrtod('1.00000000000000005'))  # 1.0
print(xstrtod('1.00000000000000006'))  # 1.0
print(xstrtod('1.00000000000000007'))  # 1.0
print(xstrtod('1.00000000000000008'))  # 1.0
print(xstrtod('1.00000000000000009'))  # 1.0000000000000002
print(xstrtod('1.00000000000000019'))  # 1.0000000000000002

The problem is the 9 in the last place. Because all digits are accumulated into a single float before the final division, the result hinges on how that large intermediate integer rounds:

>>> float('100000000000000008')
1e+17
>>> float('100000000000000009')
1.0000000000000002e+17

A trailing 9 pushes the accumulated integer up to the next representable double, and that rounding error survives the division, which is what skews the parsed results.
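
You can watch the rounded intermediate build up by replaying the accumulation loop for the digits of 0.99999999999999999 (a minimal sketch of the effect; expected outputs in the comments):

# replay xstrtod's digit-by-digit accumulation in a float
acc = 0.0
for digit in "99999999999999999":
    acc = acc * 10.0 + int(digit)

print(acc)                         # 1.0000000000000002e+17 (accumulated, rounded up)
print(float("99999999999999999"))  # 1e+17 (correctly rounded parse)
print(acc / 10**17)                # 1.0000000000000002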


If you want high precision, you can define your own converters or use ones provided by Python, e.g. decimal.Decimal for arbitrary precision:

>>> import io
>>> import decimal
>>> import pandas as pd
>>> converter = {0: decimal.Decimal}  # parse column 0 as Decimal
>>> def parse(string):
...     return '{:.30f}'.format(pd.read_csv(io.StringIO(string), converters=converter)["column"][0])
>>> print(parse("column\n0.99999999999999998"))
>>> print(parse("column\n0.99999999999999999"))
>>> print(parse("column\n1.00000000000000000"))
>>> print(parse("column\n1.00000000000000001"))
>>> print(parse("column\n1.00000000000000008"))
>>> print(parse("column\n1.00000000000000009"))

which prints:

0.999999999999999980000000000000
0.999999999999999990000000000000
1.000000000000000000000000000000
1.000000000000000010000000000000
1.000000000000000080000000000000
1.000000000000000090000000000000

Exactly representing the input!
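
One caveat (my addition, worth checking in your pandas version): with a Decimal converter the column comes back with dtype object, so vectorized float operations no longer apply:

>>> df = pd.read_csv(io.StringIO("column\n0.99999999999999999"), converters=converter)
>>> df["column"].dtype
dtype('O')
>>> type(df["column"][0])
<class 'decimal.Decimal'>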

Axilla answered 29/5, 2017 at 18:0 Comment(3)
I had a look at the xstrtod source (thanks @MaxU); the problem is that there is much potential for inaccuracies in the intermediate results, especially number, which leads to some strings being parsed in a way that they don't result in the closest representable float. (Also, the exponent is not checked for overflow.) (Kendry)
In my opinion, "high" precision should be used by default; this behavior leads to numeric errors that are higher than I would expect and difficult to detect. Most of all, this behavior should be documented. (Or has anybody here found the relevant documentation?) (Kendry)
No, I haven't found any documentation of that behaviour. Fortunately, in most cases numbers don't have more than 10 decimals, and for those it works "well enough". That's optimization for you :/ (Axilla)
If you want to understand how it works, look at the source code: file "_libs/parsers.pyx", lines 492-499, for Pandas 0.20.1:

    self.parser.double_converter_nogil = xstrtod  # <------- default converter 
    self.parser.double_converter_withgil = NULL
    if float_precision == 'high':
        self.parser.double_converter_nogil = precise_xstrtod # <------- 'high' converter
        self.parser.double_converter_withgil = NULL
    elif float_precision == 'round_trip':  # avoid gh-15140
        self.parser.double_converter_nogil = NULL
        self.parser.double_converter_withgil = round_trip

Source code for xstrtod

Source code for precise_xstrtod
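
From the user's side these converters are selected with the float_precision keyword of read_csv; a quick sketch comparing the three settings (the expected values in the comments are based on the discussion above):

import io
import pandas as pd

s = "column\n0.99999999999999999"
for prec in (None, 'high', 'round_trip'):  # None selects the default xstrtod
    print(prec, repr(pd.read_csv(io.StringIO(s), float_precision=prec)["column"][0]))
# Expected output:
# None 1.0000000000000002   (xstrtod)
# high 1.0                  (precise_xstrtod)
# round_trip 1.0            (round_trip)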

Bordello answered 29/5, 2017 at 17:35 Comment(0)
