The code below reproduces the problem I have encountered in the algorithm I'm currently implementing:
import numpy.random as rand
import time

x = rand.normal(size=(300, 50000))
y = rand.normal(size=(300, 50000))

for i in range(1000):
    t0 = time.time()
    y *= x  # elementwise multiply, in place
    print "%.4f" % (time.time() - t0)
    y /= y.max()  # rescale to prevent overflows
The problem is that after some number of iterations the loop starts slowing down gradually, until a single iteration takes several times longer than it did at the start.
[Plot: time per iteration over the course of the run, showing the gradual slowdown]
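For reference, here is a minimal sketch of how the per-iteration timings behind such a plot could be collected and drawn (matplotlib is assumed; it is not part of the original code):

import time

import numpy.random as rand
import matplotlib.pyplot as plt

x = rand.normal(size=(300, 50000))
y = rand.normal(size=(300, 50000))

# Record the wall-clock time of each in-place multiply.
times = []
for i in range(1000):
    t0 = time.time()
    y *= x
    times.append(time.time() - t0)
    y /= y.max()  # rescale to prevent overflows, as in the original loop

plt.plot(times)
plt.xlabel("iteration")
plt.ylabel("seconds per y *= x")
plt.show()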
CPU usage by the Python process stays steady at around 17-18% for the whole run.
I'm using:
- Python 2.7.4 32-bit version;
- Numpy 1.7.1 with MKL;
- Windows 8.
I ran

print numpy.amin(numpy.abs(y[y != 0]))

and got 4.9406564584124654e-324, so I think denormal numbers are your answer. I don't know how to flush denormals to zero from within Python other than by creating a C extension, though. – Emergency
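Since NumPy exposes no flush-to-zero switch, one pure-NumPy workaround is possible: the sketch below (not from the original thread; flush_subnormals is a hypothetical helper) zeroes every entry whose magnitude falls below the smallest normal double, emulating flush-to-zero in software.

import numpy as np

# Smallest positive *normal* float64 (about 2.225e-308); any nonzero
# value smaller in magnitude is subnormal and hits the slow FP path.
TINY = np.finfo(np.float64).tiny

def flush_subnormals(a):
    # Zero out the subnormal entries of `a` in place (software FTZ).
    a[np.abs(a) < TINY] = 0.0
    return a

Calling flush_subnormals(y) right after y *= x keeps subnormals out of the array at the cost of one extra pass over the data per iteration; the hardware FTZ/DAZ bits (in the x86 MXCSR register) can only be set from native code, e.g. the C extension mentioned in the comment above.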