Should I use numpy.polyfit or numpy.polynomial.polyfit or numpy.polynomial.polynomial.Polynomial?

What is the difference between

https://docs.scipy.org/doc/numpy/reference/generated/numpy.polyfit.html

and

https://docs.scipy.org/doc/numpy/reference/generated/numpy.polynomial.polynomial.polyfit.html

and which one should I use when?

I checked the code: both use numpy.linalg.lstsq under the hood, but they are otherwise different.

The documentation of numpy.polyfit also suggests using

https://docs.scipy.org/doc/numpy/reference/generated/numpy.polynomial.polynomial.Polynomial.fit.html

What is the right choice?

(Bonus: How would I use the class when the first thing I want to do is to fit to my data?)

Waldner answered 31/10, 2019 at 20:52

From what I can tell, there's a lot of legacy baggage here: we should not be using numpy.polyfit, and should prefer numpy.polynomial.polynomial.Polynomial.fit instead.

Consider the comments on this GitHub issue from 2016:

While the documentation is reasonably clear in noting that coefficients are returned [from numpy.polyfit] with highest-order first, this is fairly easy to miss, and is inconsistent with, e.g., numpy.polynomial.polynomial.polyfit().

And a bit later:

Having the zero-degree coefficient first, as done in numpy.polynomial.polynomial.polyfit is definitely more logical. I was under the impression that the only reason numpy.polyfit deviates from this is historical accident, which of course is nearly impossible to correct now since many programmes may depend on this behaviour. Maybe the easiest solution would be to point people to the "preferred" solution in numpy.polyfit?

From an earlier comment it's evident that the "historical accident" is the behaviour of MATLAB's polyfit, which takes the highest order first. Early numpy kept this confusing convention (which it may even have inherited from a predecessor of the project), but numpy.polynomial.polynomial.polyfit was later implemented to Do It Right™. The crucial difference is that (unlike MATLAB) Python uses 0-based indices, in which case it's perfectly natural to have the zeroth order first. With this convention there's the beautiful property that item k corresponds to the term x**k.
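
To make the ordering difference concrete, here is a minimal sketch (my own illustration, not from the issue thread):

    import numpy as np

    x = np.array([0.0, 1.0, 2.0, 3.0])
    y = 1.0 + 2.0 * x + 3.0 * x**2   # coefficients [1, 2, 3] in increasing degree

    # numpy.polyfit follows the MATLAB convention: highest degree first
    print(np.polyfit(x, y, 2))                        # ~ [3. 2. 1.]

    # numpy.polynomial.polynomial.polyfit: index k holds the coefficient of x**k
    print(np.polynomial.polynomial.polyfit(x, y, 2))  # ~ [1. 2. 3.]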

Then there's a newer account in another issue from this year that tries to give a more coherent picture. Quoting the historical recollection from the issue:

History

(not necessarily in chronological order)

  1. A certain JVM-based linear algebra package had a function, polyfit, for fitting polynomials which made some weird design choices, like returning the coefficients highest-degree first.
  2. numpy, in an attempt to support fugitives from said environment, created the function numpy.polyfit which aped that design choice
  3. numpy implemented numpy.ma.polyfit for masked arrays, using numpy.polyfit
  4. In an attempt to fix the mistakes of history, numpy created the function numpy.polynomial.polynomial.polyfit with almost exactly the same signature, but with a more sensible coefficient ordering, and quietly preferred that people use that instead
  5. People were confused by these two very similar functions (#7478); also the new function could not return a covariance matrix and it did not have a masked array counterpart
  6. Powering on towards both API nirvana and worn-out keyboards, numpy introduced the numpy.polynomial.polynomial.Polynomial class, and documented in numpy.polyfit that that was the preferred way of fitting polynomials, although that also had no masked implementation and also did not return a covariance matrix

The responses from the devs on the two issues make it clear that numpy.polyfit is technical debt and, as its documentation says, new code should use the Polynomial class. The documentation has improved a lot since 2016, in that there are now pointers from numpy.polyfit to Polynomial, but there's still a lot of ambiguity. Ideally, both polyfit functions should explain their relationship to each other and point users to the Polynomial class as the one obvious way to write new code.
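
As for the bonus question, a minimal sketch of fitting with the class might look like this (the data, degree, and seed are made-up choices of mine):

    import numpy as np
    from numpy.polynomial import Polynomial

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 10.0, 50)
    y = 4.0 - 3.0 * x + 0.5 * x**2 + rng.normal(0.0, 0.1, x.size)

    # Fit a degree-2 polynomial; internally the x data is mapped to a
    # numerically well-behaved domain ([-1, 1] by default)
    p = Polynomial.fit(x, y, deg=2)

    # convert() maps back to the original variable, so coef[k] is the
    # coefficient of x**k
    print(p.convert().coef)   # ~ [4. -3. 0.5]
    print(p(5.0))             # evaluate the fitted polynomial at x = 5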

Campground answered 1/11, 2019 at 19:12
Also a very strange difference: the weight in np.polyfit is only applied to the y-values (the right-hand side of the least-squares problem), while in np.polynomial.polynomial.polyfit it is applied to the entire point (x_i, y_i) (both the left- and right-hand sides of the least-squares problem). I do not understand the point of np.polyfit's approach... Can you comment? – Waldner
@Waldner are you sure that's an actual difference and not just a difference in phrasing? The documentation of np.polynomial.polynomial.polyfit says "Ideally the weights are chosen so that the errors of the products w[i]*y[i] all have the same variance.", and its algorithm notes say "This problem is solved by setting up the (typically) over-determined matrix equation: V(x) * c = w * y", so the weights multiply the y components. I would think the only meaningful thing to do is to weight points, which happens to be implemented by multiplying the y components. Is there a functional difference in some example? – Harlene
I checked the code, so yes, I am more or less sure. As I wrote: in np.polynomial.polynomial.polyfit we multiply the weights into both sides (including y). My point is that in np.polyfit we only multiply the y-side. "I would think the only meaningful thing to do is to weight points" – I agree; that is why I think the behaviour of np.polyfit is strange. Please check the code to confirm (or refute). – Waldner
@Waldner I generated random polynomials and fitted them with both methods, with weights. Here are some test results. The results are the same (not exactly, but the two are known to use different algorithms). This suggests that they implement the same fitting; it's just that one is more numerically stable than the other. If you remove the w keyword for a given set of input points, you'll see that both results change and stay equal to each other, so the weights do affect the results. This is conclusive enough for me. – Harlene
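
For reference, a minimal sketch of the kind of comparison described in the last comment (my own reconstruction; the data, degree, and weights are arbitrary):

    import numpy as np

    rng = np.random.default_rng(42)
    x = np.linspace(0.0, 1.0, 20)
    y = 1.0 + 2.0 * x + rng.normal(0.0, 0.1, x.size)
    w = rng.uniform(0.5, 2.0, x.size)

    # np.polyfit returns highest degree first; reverse it for comparison
    old = np.polyfit(x, y, 1, w=w)[::-1]
    new = np.polynomial.polynomial.polyfit(x, y, 1, w=w)

    # The two fits agree up to numerical noise, which suggests the weights
    # have the same meaning in both functions
    print(old, new)
    print(np.allclose(old, new))   # True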
