Some supplemental information I found through experimentation.
This can be circumvented. Timings are on an Intel CPU with Intel MKL for BLAS. I'm also using Fortran-ordered arrays to keep everything equivalent, since scipy.linalg.blas wraps the Fortran BLAS.
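You can check which BLAS your numpy build is linked against; the exact output depends on the install, but here it reports MKL:

import numpy as np
# Prints the BLAS/LAPACK libraries numpy was built with.
np.show_config()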
Let's take the following example:
import numpy as np
from scipy.linalg.blas import sgemm
from scipy.linalg.blas import dgemm

arr_int64 = np.random.randint(-500, 500, (6000, 2000))
arr_int32 = arr_int64.astype(np.int32)
arr_float64 = arr_int64.astype(np.float64) + np.random.rand(6000, 2000)
arr_float32 = arr_int64.astype(np.float32)
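The listing above leaves the arrays C-ordered (numpy's default); a minimal sketch of the Fortran-ordering step mentioned earlier, using np.asfortranarray (assumed here, since the listing omits it):

# Fortran-ordered copies so np.dot and the scipy gemm wrappers see the same
# memory layout (assumption: this is how the conversion was done).
arr_float64 = np.asfortranarray(arr_float64)
arr_float32 = np.asfortranarray(arr_float32)
arr_int64 = np.asfortranarray(arr_int64)
arr_int32 = np.asfortranarray(arr_int32)
arr_float64.flags['F_CONTIGUOUS']  # True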
First, let's look at the np.dot calls:
%timeit np.dot(arr_float64.T,arr_float64) #400% CPU threaded BLAS
1 loops, best of 3: 969 ms per loop
%timeit np.dot(arr_float32.T,arr_float32) #400% CPU threaded BLAS
1 loops, best of 3: 513 ms per loop
%timeit np.dot(arr_int64.T,arr_int64) #100% CPU?
1 loops, best of 3: 24.7 s per loop
%timeit np.dot(arr_int32.T,arr_int32) #100% CPU?
1 loops, best of 3: 21.3 s per loop
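This is where the "circumvented" part comes in: casting the integer arrays to float64 up front routes the product through the threaded DGEMM instead of numpy's slow integer loop. A minimal sketch; for these inputs every product and partial sum is exactly representable in float64, so casting back is lossless:

# Hedged workaround: do the integer product in double precision via BLAS,
# then cast back to int64.
tmp = arr_int64.astype(np.float64)
res_int64 = np.dot(tmp.T, tmp).astype(np.int64)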
Calling DGEMM/SGEMM directly:
%timeit dgemm(alpha=1, a=arr_float64, b=arr_float64, trans_a=True)
1 loops, best of 3: 1.13 s per loop
%timeit dgemm(alpha=1, a=arr_int64, b=arr_int64, trans_a=True)
1 loops, best of 3: 869 ms per loop
%timeit sgemm(alpha=1, a=arr_float32, b=arr_float32, trans_a=True)
1 loops, best of 3: 657 ms per loop
%timeit sgemm(alpha=1, a=arr_int32, b=arr_int32, trans_a=True)
1 loops, best of 3: 432 ms per loop
np.allclose(np.dot(arr_int32.T, arr_int32),
            sgemm(alpha=1, a=arr_int32, b=arr_int32, trans_a=True))
#True
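If you want this as a reusable shortcut, a hypothetical wrapper (int_gram is my name, not a library function) can hide the cast around the dgemm call:

# Hypothetical helper: Gram matrix a.T @ a for integer input via DGEMM.
def int_gram(a):
    af = np.asfortranarray(a, dtype=np.float64)
    return dgemm(alpha=1, a=af, b=af, trans_a=True).astype(a.dtype)

np.array_equal(int_gram(arr_int64), np.dot(arr_int64.T, arr_int64))
# True for these inputs (values stay inside float64's exact-integer range)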
It looks like something strange is going on in the np.dot call for the integer arrays; the speed is closer to that of a naive algorithm:
%timeit np.einsum('ij,jk',arr_int32.T,arr_int32)
1 loops, best of 3: 14.1 s per loop
%timeit np.einsum('ij,jk',arr_int64.T,arr_int64)
1 loops, best of 3: 26 s per loop
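Both the integer np.dot and the einsum contraction do the arithmetic exactly in int64, so apart from the speed they should agree bit-for-bit; a quick sanity check:

np.array_equal(np.einsum('ij,jk', arr_int64.T, arr_int64),
               np.dot(arr_int64.T, arr_int64))
# True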