Strategies for debugging numerical stability issues?
I'm trying to write an implementation of Wilson's spectral density factorization algorithm [1] in Python. The algorithm iteratively factorizes a Q×Q matrix function into its square root (it is, in a sense, an extension of the Newton-Raphson square-root iteration to spectral density matrices).

The problem is that my implementation only converges for matrices of size 45×45 and smaller: after 20 iterations, the summed squared difference between successive matrices is about 2.45e-13. For a 46×46 input, it does not converge until around the 100th iteration. For 47×47 or larger, the iteration never converges; the error fluctuates between 100 and 1000 for about 100 iterations and then starts to grow very quickly.
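To make this concrete: the sketch below is not Wilson's algorithm, just the plain Newton iteration for a matrix square root (X_{k+1} = (X_k + X_k^{-1} A) / 2) with NumPy, which the question says Wilson's method generalizes. The point is the diagnostic pattern: log the summed squared difference at every iteration so you can see *where* divergence sets in, not just that it happens.

```python
import numpy as np

def newton_sqrtm(A, iters=20):
    # Newton iteration for the matrix square root, X_0 = A.
    # Records the summed squared difference between successive iterates,
    # the same convergence measure described in the question.
    X = A.astype(float).copy()
    errors = []
    for _ in range(iters):
        X_next = 0.5 * (X + np.linalg.solve(X, A))  # X^{-1} A via a solve
        errors.append(np.sum((X_next - X) ** 2))
        X = X_next
    return X, errors

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5.0 * np.eye(5)   # well-conditioned symmetric positive definite
X, errors = newton_sqrtm(A)
# For a well-conditioned input the errors shrink rapidly; a plateau or
# rebound in this log pinpoints the iteration where trouble starts.
```

Plotting (or just printing) `errors` against the iteration count, for a sweep of matrix sizes, is often the fastest way to see whether the failure is gradual error accumulation or a sudden blow-up at one step.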

How would you go about debugging something like this? There doesn't appear to be any specific point at which it goes wrong, and the matrices are too large for me to attempt the calculation by hand. Does anyone have tips / tutorials / heuristics for finding bizarre numerical bugs like this?

I've never dealt with anything like this before but I'm hoping some of you have...

Thanks, - Dan

[1] G. T. Wilson. "The Factorization of Matricial Spectral Densities". SIAM J. Appl. Math (Vol 23, No. 4, Dec. 1972)

Slating answered 21/11, 2009 at 19:4 Comment(2)
What do you mean by "only converges for matrices of size 45x45"? Do matrices smaller than 45x45 fail as well? – Scanlan
No, sorry; I will edit the post. It converges successfully for sizes 45x45 and smaller. – Slating
I would recommend asking this question on the scipy-user mailing list, perhaps with an example of your code. The people on the list are generally highly experienced with numerical computation and really helpful; just following the list is an education in itself.

Otherwise, I'm afraid I don't have any ideas. If you think it is a numerical precision / floating-point rounding issue, the first thing you could try is bumping all the dtypes up to float128 and seeing if it makes any difference.
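As a quick illustration of that test (assuming NumPy; note `np.longdouble` is true float128 only on some platforms, 80-bit extended or plain float64 on others), compare the accumulated rounding error of the same computation at different precisions:

```python
import numpy as np

def accumulation_error(dtype, n=100_000):
    # Sum 0.1 n times in the given dtype; the gap from the exact value
    # is pure floating-point representation and rounding error.
    x = dtype(0.1)
    total = dtype(0.0)
    for _ in range(n):
        total += x
    return abs(float(total) - n * 0.1)

e32 = accumulation_error(np.float32)
e64 = accumulation_error(np.float64)
e_long = accumulation_error(np.longdouble)  # float128 where available
# If bumping the dtype shrinks the discrepancy dramatically, rounding
# error is a plausible culprit; if the failure pattern is unchanged,
# the problem is probably algorithmic rather than precision-related.
```

The same casting works on whole arrays: `A.astype(np.longdouble)` before running the iteration, then compare the per-iteration error logs between precisions.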

Chirrupy answered 21/11, 2009 at 19:14 Comment(1)
Yes, I'll try both of those. I'm not certain it's a numerical precision issue, but the matrix dimensionality is definitely not used as an input anywhere in the algorithm, and the fact that it works for small matrices suggests it probably is. – Slating
Interval arithmetic can help, but I'm not sure the performance will be sufficient for meaningful debugging at the matrix sizes you're interested in. Figure on a couple orders of magnitude of slowdown, between replacing hardware-accelerated "scalar" floating-point operations with software-heavy "interval" ones, and adding checks for which intervals are growing too wide, when, where, and why.
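To show the idea without any library (NumPy/SciPy have no interval support, so the commenter's concern below is fair), here is a toy interval type. It only propagates the initial uncertainty through arithmetic; a real interval package, such as mpmath's `iv` context, would additionally round the lower bound down and the upper bound up at every operation.

```python
class Interval:
    """Toy interval: carries a [lo, hi] range through arithmetic.
    Sketch only -- no directed rounding, so it tracks input uncertainty
    but not the rounding error a true interval library would capture."""
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, lo if hi is None else hi
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
        return Interval(min(p), max(p))
    @property
    def width(self):
        return self.hi - self.lo

# Repeatedly square a value known only to +/- 0.001: the width explodes
# from 0.002 to order 1 in ten steps. Watching for exactly this kind of
# growth is how interval arithmetic localizes where an iteration loses
# accuracy.
x = Interval(0.999, 1.001)
widths = [x.width]
for _ in range(10):
    x = x * x
    widths.append(x.width)
```

The debugging payoff is the `widths` log: the step at which intervals start widening sharply is the step to scrutinize in the real algorithm.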

Swarth answered 21/11, 2009 at 19:26 Comment(1)
So... you mean plug matrices of intervals into SciPy? I'm not even sure I can do that without rewriting LINPACK in interval arithmetic, can I? – Slating
