Performance: Matlab vs Python

I recently switched from Matlab to Python. While converting one of my lengthy codes, I was surprised to find Python very slow. I profiled and traced the problem to one function hogging time. This function is called from various places in my code (as part of other functions which are recursively called). The profiler suggests that 300 calls are made to this function in both Matlab and Python.

In short, the following code summarizes the issue at hand:

MATLAB

The class containing the function:

classdef ExampleKernel1 < handle
    methods (Static)
        function [kernel] = kernel_2D(M, x, N, y)
            kernel = zeros(M, N);
            for i = 1 : M
                for j = 1 : N
                    % Define the custom kernel function here
                    kernel(i, j) = sqrt((x(i, 1) - y(j, 1)) .^ 2 + ...
                                        (x(i, 2) - y(j, 2)) .^ 2);
                end
            end
        end
    end
end

and the script to call test.m:

xVec=[   
49.7030   78.9590
42.6730   11.1390
23.2790   89.6720
75.6050   25.5890
81.5820   53.2920
44.9680    2.7770
38.7890   78.9050
39.1570   33.6790
33.2640   54.7200
4.8060   44.3660
49.7030   78.9590
42.6730   11.1390
23.2790   89.6720
75.6050   25.5890
81.5820   53.2920
44.9680    2.7770
38.7890   78.9050
39.1570   33.6790
33.2640   54.7200
4.8060   44.3660
];
N=size(xVec,1);
kex1=ExampleKernel1;
tic
for i=1:300
    K=kex1.kernel_2D(N,xVec,N,xVec);
end
toc

Gives the output

>> clear all
>> test
Elapsed time is 0.022426 seconds.
>> test
Elapsed time is 0.009852 seconds.

PYTHON 3.4

Class containing the function CustomKernels.py:

from numpy import zeros
from math import sqrt
class CustomKernels:
    """Class for defining the custom kernel functions"""
    @staticmethod
    def exampleKernelA(M, x, N, y):
        """Example kernel function A"""
        kernel = zeros([M, N])
        for i in range(0, M):
            for j in range(0, N):
                # Define the custom kernel function here
                kernel[i, j] = sqrt((x[i, 0] - y[j, 0]) ** 2 + (x[i, 1] - y[j, 1]) ** 2)
        return kernel

and the script to call test.py:

import numpy as np
from CustomKernels import CustomKernels
from time import perf_counter

xVec = np.array([
    [49.7030,  78.9590],
    [42.6730,  11.1390],
    [23.2790,  89.6720],
    [75.6050,  25.5890],
    [81.5820,  53.2920],
    [44.9680,   2.7770],
    [38.7890,  78.9050],
    [39.1570,  33.6790],
    [33.2640,  54.7200],
    [4.8060 ,  44.3660],
    [49.7030,  78.9590],
    [42.6730,  11.1390],
    [23.2790,  89.6720],
    [75.6050,  25.5890],
    [81.5820,  53.2920],
    [44.9680,   2.7770],
    [38.7890,  78.9050],
    [39.1570,  33.6790],
    [33.2640,  54.7200],
    [4.8060 ,  44.3660]
    ])
N = xVec.shape[0]
kex1 = CustomKernels.exampleKernelA
start = perf_counter()
for i in range(300):
    K = kex1(N, xVec, N, xVec)
print(' %f secs' % (perf_counter() - start))

Gives the output

%run test.py
 0.940515 secs
%run test.py
 0.884418 secs
%run test.py
 0.940239 secs

RESULTS

Comparing the results, it seems Matlab is about 42 times faster after a "clear all" is called, and about 100 times faster if the script is run multiple times without calling "clear all". That is at least one order of magnitude, if not two. This is a very surprising result for me; I was expecting it to be the other way around.

Can someone please shed some light on this?

Can someone suggest a faster way to perform this?

SIDE NOTE

I have also tried numpy.sqrt, which makes the performance worse; therefore I am using math.sqrt in Python.
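
For what it's worth, the scalar-call overhead mentioned above can be timed directly. This is a minimal sketch (the numbers printed will vary by machine and NumPy build); the point is that np.sqrt, being a ufunc, pays per-call dispatch overhead on plain Python floats that math.sqrt does not:

```python
import math
import timeit

import numpy as np

# np.sqrt is a ufunc: each call on a plain Python float pays ufunc
# dispatch overhead, while math.sqrt is a thin wrapper around the C
# library sqrt. On arrays the situation reverses and np.sqrt wins.
t_math = timeit.timeit(lambda: math.sqrt(2.5), number=100_000)
t_np = timeit.timeit(lambda: np.sqrt(2.5), number=100_000)
print(f"math.sqrt: {t_math:.4f}s  np.sqrt: {t_np:.4f}s")

# Both give the same value; only the per-call cost differs.
assert math.isclose(math.sqrt(2.5), float(np.sqrt(2.5)))
```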

EDIT

The for loops for calling the functions are purely fictitious. They are there just to "simulate" 300 calls to the function. As I described earlier, the kernel functions (kernel_2D in Matlab and kex1 in Python) are called from various different places in the program. To make the problem shorter, I "simulate" the 300 calls using the for loop. The for loops inside the kernel functions are essential and unavoidable because of the structure of the kernel matrix.

EDIT 2

Here is the larger problem: https://github.com/drfahdsiddiqui/bbfmm2d-python

Giulio answered 28/9, 2017 at 17:32 Comment(16)
Generally, don't try to loop over an array in Python. Call the operations on the entire array(s) using NumPy so the actual per-element calculation is done inside the library. – Protanopia
The power of NumPy is the ability to get rid of those for loops. – Unifilar
I see what you are saying; this is true for Matlab as well. But the structure of the kernel matrix makes a for loop unavoidable in this case. At any rate, why is function calling so expensive in Python and less so in Matlab? – Giulio
If you have an example of the kernel matrix you want to create, some wizard here can probably find a way to vectorize it. And if not, you can always @jit it. – Unifilar
To me, it looks like you can just use SciPy cdist - docs.scipy.org/doc/scipy/reference/generated/… – Contempt
I think that's a placeholder function, @Divakar. If not, you're absolutely right. – Unifilar
@DanielF What's a placeholder function? – Contempt
exampleKernelA – Unifilar
If the problem is the loop by which you call the exampleKernelA function 300 times, you should probably consider numba's @jit. In general, looping in Python is slow compared to just-in-time (or ahead-of-time, of course) compiled languages like modern MATLAB distributions. – Timberwork
@Contempt Very good suggestion, I didn't know about that. Alas, I have 12 much more complicated kernel functions which are not a mere distance. Moreover, this is just a simple example and not a real kernel function that I would use. – Giulio
@Timberwork The 300 calls are fictitious. The function is called from various different places 300 times (part of a series of recursive functions which depend on if conditions). I understand that moving the for loop into the function body will make it faster, but it doesn't solve my larger problem. – Giulio
@FahdSiddiqui I would go on a case-by-case basis for each of the kernels. Doesn't sound tidy, but I guess getting performance demands it in most scenarios. – Contempt
In older MATLABs avoiding loops was also important. But newer versions do some just-in-time compiling that speeds up many loops. – Copyholder
Given that you already have access to C++ code (as per your EDIT 2), I would consider generating bindings of that code to Python rather than translating it, unless you are doing this translation for specific reasons other than having the algorithm available in Python. – Timberwork
It is a bit ironic that Python has become popular for scientific computing because, as you've found, pure Python is (by some subjective standards) slow. Hence, tools and libraries such as numpy, cython (derived from pyrex), weave (now defunct), numba, and pypy; plus an assortment of tools that attempt to make it easier to call compiled languages from Python: ctypes (std lib), cffi, swig, f2py, boost.python, xdress, pybind11, ... – Villasenor
@WarrenWeckesser My initial impressions agree with you. @Timberwork Yes, I can bind to C++, but that has no learning for me. Purely an academic exercise (hence the specific reason, i.e. learning Python). In fact, I can code everything in C++, but I want a quicker language (Matlab is excellent for that) to prototype my ideas before putting them in C++. I want to move away from proprietary (Matlab) to free (Python etc.), so I am trying a few languages before I settle. Thanks, everyone! – Giulio

You want to get rid of those for loops. Try this:

import numpy as np

def exampleKernelA(M, x, N, y):
    """Example kernel function A"""
    i, j = np.indices((M, N))
    # Define the custom kernel function here
    kernel = np.sqrt((x[i, 0] - y[j, 0]) ** 2 + (x[i, 1] - y[j, 1]) ** 2)
    return kernel

You can also do it with broadcasting, which may be even faster, but a little less intuitive coming from MATLAB.
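
For reference, a broadcasting version might look like the following sketch (the function name is mine; it builds the same (M, N) distance matrix without materializing index arrays):

```python
import numpy as np

def exampleKernelA_bcast(M, x, N, y):
    """Broadcast an (M, 1) column against a (1, N) row to get (M, N)."""
    # Inserting a length-1 axis with None makes the coordinate vectors
    # broadcast against each other instead of requiring index arrays.
    dx = x[:, 0, None] - y[None, :, 0]  # shape (M, N)
    dy = x[:, 1, None] - y[None, :, 1]  # shape (M, N)
    return np.sqrt(dx ** 2 + dy ** 2)

# Small check against a hand-computed distance: (3, 4) is 5 from origin.
x = np.array([[0.0, 0.0], [3.0, 4.0]])
y = np.array([[0.0, 0.0]])
K = exampleKernelA_bcast(2, x, 1, y)
print(K)  # column vector of distances from each point in x to y
```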

Unifilar answered 28/9, 2017 at 17:38 Comment(19)
Thanks. The suggested form does not work: "line 41, in exampleKernelA kernel[i, j] = sqrt((x[i, 0] - y[j, 0]) ** 2 + (x[i, 1] - y[j, 1]) ** 2) TypeError: only length-1 arrays can be converted to Python scalars" – Giulio
@FahdSiddiqui You need to use np.sqrt(), not math.sqrt(). And more importantly, if you want to have any chance of porting your more complicated kernels, you need to understand how NumPy works. Spend half a day on reading the manual; it will pay off. – Roderica
Why is broadcasting a little less intuitive coming from Matlab? Matlab has had broadcasting (with a different name) since 2007, and it takes place implicitly since 2017. – Christcrossrow
Sorry, my last MATLAB experience is... a while ago. Man, I feel old now. – Unifilar
@LuisMendo You don't really suggest that bsxfun is intuitive now, do you? And technically that's not broadcasting but an element-wise op. – Bordelaise
@Bordelaise It is broadcasting. The b and s in bsxfun refer to singleton expansion, which is Matlab's name for broadcasting. Whether bsxfun is intuitive or not is subjective. – Christcrossrow
@LuisMendo You are forgetting the (b)inary, which is key here. You wouldn't have shape expansions with bsxfun, since shape-mismatching ops lead to an error. Broadcasting is not limited to an expansion of a binary op but is more general, which is what Matlab tried to mimic. Proper broadcasting was introduced in Octave in 2012, in NumPy quite some years ago, etc. It's just keeping up. Op expansion to the whole array is not the story here. – Bordelaise
@Bordelaise I don't follow. Can you give an example in Octave or NumPy where the broadcasting is not for a binary (i.e. two-input) operator? – Christcrossrow
@Bordelaise For a reasonable discussion of this matter you'd have to first define broadcasting, because I have to agree with Luis that I don't understand your distinction. Also, I don't believe broadcasting is intuitive if you don't understand how bsxfun behaves. – Moonscape
@DanielF Much better performance with your suggestion. Should have thought of that! The improvement is significant, from ~0.94 seconds to 0.068 seconds. However, Matlab is still 3 to 6 times faster than NumPy. I will accept your answer. Thanks. – Giulio
@AndrasDeak As you wish. Just look at how many bsxfun-related Matlab questions are asked here which wouldn't be asked had people understood bsxfun already. That should give an idea of its intuitiveness. I really don't need to argue about Matlab and its API. – Bordelaise
@SvenMarnach Thank you for your suggestion. Made it work! God forbid I should read a manual :P Just starting to learn Python for scientific computations; long way to go, much to learn. I want to know what all this hoopla about Python is. This community is great! Thank you, everyone; I learned a lot in 24 hours! – Giulio
@Bordelaise No one here needs to argue. It's just for the purpose of improving our knowledge. Pity that you don't want to share yours with us. – Christcrossrow
@LuisMendo Argue as in discuss; no hard feelings. It's just not me: take it from Loren's blog blogs.mathworks.com/loren/2016/10/24/… or Cleve Moler's blog or elsewhere. Calling it intuitive is quite a stretch from actual reality. There is absolutely nothing intuitive about Matlab's design, and people have been complaining about it for about two decades, as long as I've been (ab)using it. – Bordelaise
@Bordelaise As I said, the intuitiveness of bsxfun is subjective. I was more interested in your distinction between Matlab's broadcasting and other (?) types of non-binary (?) broadcasting. – Christcrossrow
@LuisMendo The generic programming constructs. bsxfun takes a binary op and supplies arguments coming from both arrays, so in a sense it is shorthand for a double loop. The broadcasting concept in general is an object property. Take NumPy's or Eigen's examples: all have more flexible operations where you can design your own @func. Reduce is only one of the properties of broadcasting. – Bordelaise
@LuisMendo See for example my own question where I am actually trying to stop broadcasting: #40694880 – Bordelaise
@Bordelaise Thanks for the links. I'd say NumPy's broadcast is the same as Matlab's: "Two dimensions are compatible when they are equal, or one of them is 1". About Eigen I cannot really say; I never used it. – Christcrossrow
Upon further investigation I have found that using indices as indicated in the answer is still slower. The solution: use meshgrid (code posted as a separate answer). Result: very fast, 10 times faster than the indices approach. – Giulio

Upon further investigation I have found that using indices as indicated in the answer is still slower.

Solution: Use meshgrid

from numpy import meshgrid, sqrt

def exampleKernelA(M, x, N, y):
    """Example kernel function A"""
    # Euclidean norm function implemented using the meshgrid idea.
    # Fastest so far.
    x0, y0 = meshgrid(y[:, 0], x[:, 0])
    x1, y1 = meshgrid(y[:, 1], x[:, 1])
    # Define custom kernel here
    kernel = sqrt((x0 - y0) ** 2 + (x1 - y1) ** 2)
    return kernel

Result: Very fast, 10 times faster than the indices approach. I am getting times closer to C.

However: Using meshgrid in Matlab beats C and NumPy by being 10 times faster than both.

Still wondering why!

Giulio answered 3/10, 2017 at 16:49 Comment(0)

Matlab ships with Intel's commercial MKL library. If you use a free Python distribution, check whether NumPy is built against MKL or another high-performance BLAS library, or against the default one, which can be much slower.
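
As a quick check (a sketch; the exact output format varies across NumPy versions), you can print which BLAS/LAPACK libraries your NumPy build links against:

```python
import numpy as np

# Prints the build configuration, including the BLAS/LAPACK libraries
# NumPy was linked against; look for "mkl" or "openblas" in the output.
np.show_config()
```

Note that BLAS only affects linear-algebra routines; it has no bearing on the elementwise loops in the question.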

Zecchino answered 15/6, 2018 at 16:35 Comment(1)
MKL is relevant only if BLAS routines are called, which isn't the case in this example. It's only the JIT compiler that matters here. – Messieurs

Comparing JIT Compilers

It has been mentioned that Matlab uses an internal JIT compiler to get good performance on such tasks. Let's compare Matlab's JIT compiler with a Python JIT compiler (Numba).

Code

import numba as nb
import numpy as np
import math
import time

# If the arrays are somewhat larger, it also makes sense to parallelize this problem
# cache=True may also make sense
@nb.njit(fastmath=True)
def exampleKernelA(M, x, N, y):
    """Example kernel function A (Numba, fastmath)"""
    # Explicitly declaring the size of the second dim also improves performance a bit
    assert x.shape[1] == 2
    assert y.shape[1] == 2

    # Works with all dtypes; zeroing isn't necessary
    kernel = np.empty((M, N), dtype=x.dtype)
    for i in range(M):
        for j in range(N):
            # Define the custom kernel function here
            kernel[i, j] = np.sqrt((x[i, 0] - y[j, 0]) ** 2 + (x[i, 1] - y[j, 1]) ** 2)
    return kernel


def exampleKernelB(M, x, N, y):
    """Example kernel function B (meshgrid version)"""
    # Euclidean norm function implemented using the meshgrid idea
    x0, y0 = np.meshgrid(y[:, 0], x[:, 0])
    x1, y1 = np.meshgrid(y[:, 1], x[:, 1])
    # Define custom kernel here
    kernel = np.sqrt((x0 - y0) ** 2 + (x1 - y1) ** 2)
    return kernel


@nb.njit()
def exampleKernelC(M, x, N, y):
    """Example kernel function C (same as A, but without fastmath)"""
    assert x.shape[1] == 2
    assert y.shape[1] == 2

    kernel = np.empty((M, N), dtype=x.dtype)
    for i in range(M):
        for j in range(N):
            kernel[i, j] = np.sqrt((x[i, 0] - y[j, 0]) ** 2 + (x[i, 1] - y[j, 1]) ** 2)
    return kernel


#Your test data
xVec = np.array([
    [49.7030,  78.9590],
    [42.6730,  11.1390],
    [23.2790,  89.6720],
    [75.6050,  25.5890],
    [81.5820,  53.2920],
    [44.9680,   2.7770],
    [38.7890,  78.9050],
    [39.1570,  33.6790],
    [33.2640,  54.7200],
    [4.8060 ,  44.3660],
    [49.7030,  78.9590],
    [42.6730,  11.1390],
    [23.2790,  89.6720],
    [75.6050,  25.5890],
    [81.5820,  53.2920],
    [44.9680,   2.7770],
    [38.7890,  78.9050],
    [39.1570,  33.6790],
    [33.2640,  54.7200],
    [4.8060 ,  44.3660]
    ])

# Compilation happens on the first call
# (can be avoided with cache=True)
res=exampleKernelA(xVec.shape[0], xVec, xVec.shape[0], xVec)
res=exampleKernelC(xVec.shape[0], xVec, xVec.shape[0], xVec)

t1=time.time()
for i in range(10_000):
  res=exampleKernelA(xVec.shape[0], xVec, xVec.shape[0], xVec)

print(time.time()-t1)

t1=time.time()
for i in range(10_000):
  res=exampleKernelC(xVec.shape[0], xVec, xVec.shape[0], xVec)

print(time.time()-t1)

t1=time.time()
for i in range(10_000):
  res=exampleKernelB(xVec.shape[0], xVec, xVec.shape[0], xVec)

print(time.time()-t1)

Performance

exampleKernelA: 0.03s
exampleKernelC: 0.03s
exampleKernelB: 1.02s
Matlab_2016b (your code, but 10,000 rep., after a few runs): 0.165s
Messieurs answered 16/6, 2018 at 17:33 Comment(3)
Switch the OP’s loops around and the MATLAB code will be significantly faster. Also, fastmath should not feature in this comparison. – Cockcroft
@Cris Luengo I already tried to switch the loops, without effect (maybe because of the small array size). I will try it without fastmath and add the results. For a really fair comparison the newest Matlab version should be used, too... Add your results. – Messieurs
Yes, you’re right, it’s a small array; it probably fits in the cache. Never mind. :) – Cockcroft

I got ~5x speed improvement over the meshgrid solution using only broadcasting:

import numpy as np

def exampleKernelD(M, x, N, y):
    return np.sqrt((x[:, 1:] - y[:, 1:].T) ** 2 + (x[:, :1] - y[:, :1].T) ** 2)
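
As a quick sanity check (a sketch; the reference loop below is mine, mirroring the question's kernel), the one-liner agrees with the double-loop version:

```python
import numpy as np

def exampleKernelD(M, x, N, y):
    # (M, 1) column minus (1, N) row broadcasts to the full (M, N) matrix.
    return np.sqrt((x[:, 1:] - y[:, 1:].T) ** 2 + (x[:, :1] - y[:, :1].T) ** 2)

def loop_kernel(M, x, N, y):
    # Reference implementation from the question, for comparison only.
    kernel = np.zeros((M, N))
    for i in range(M):
        for j in range(N):
            kernel[i, j] = np.sqrt((x[i, 0] - y[j, 0]) ** 2 + (x[i, 1] - y[j, 1]) ** 2)
    return kernel

rng = np.random.default_rng(0)
x = rng.random((5, 2))
y = rng.random((7, 2))
assert np.allclose(exampleKernelD(5, x, 7, y), loop_kernel(5, x, 7, y))
```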
Blanketing answered 27/2, 2019 at 21:18 Comment(0)
