I'm programming in Python on Windows and would like to accurately measure the time it takes for a function to run. I have written a function "time_it" that takes another function, runs it, and returns the time it took to run.
import time

def time_it(f, *args):
    # time.clock() gives wall-clock time on Windows (it was removed in Python 3.8;
    # time.perf_counter() is the modern equivalent)
    start = time.clock()
    f(*args)
    return (time.clock() - start) * 1000  # convert seconds to milliseconds
I call this 1000 times and average the result. (The multiplication by 1000 at the end converts the result to milliseconds.)
This function seems to work, but I have a nagging feeling that I'm doing something wrong, and that this approach measures more time than the function actually spends running.
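Roughly, the averaging looks like this (average_time and my_func are just illustrative names, not my actual code):

def average_time(f, *args, runs=1000):
    # call time_it `runs` times and average the per-call milliseconds
    total = 0.0
    for _ in range(runs):
        total += time_it(f, *args)
    return total / runs

# e.g. avg_ms = average_time(my_func), where my_func is whatever I'm measuring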
Is there a more standard or accepted way to do this?
When I changed my test function to call print so that it takes longer, my time_it function returns an average of 2.5 ms, while cProfile.run('f()') reports an average of 7.0 ms. I figured my function would overestimate the time if anything, so what is going on here?
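A minimal sketch of the comparison, assuming the time_it function above and a placeholder test function:

import cProfile

def f():
    print("hello")  # test function that calls print, as described above

# average of 1000 runs through my own timer
avg_ms = sum(time_it(f) for _ in range(1000)) / 1000
print("time_it average: %.2f ms" % avg_ms)

# the same call under the profiler
cProfile.run('f()')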
One additional note: it is the relative time of functions compared to each other that I care about, not the absolute time, since that will obviously vary depending on hardware and other factors.
time-it (pip install time-it) allows you to add a decorator to any function to time it. It also accepts a logger name as a param so you can log the time.
disclaimer: I wrote the module – Caliche
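For illustration only, a generic version of this decorator approach might look like the sketch below. This is not the actual time-it API, just a hypothetical timed decorator built from the standard library:

import functools
import logging
import time

def timed(logger_name=None):
    """Decorator factory: reports how long the wrapped function takes, in ms."""
    logger = logging.getLogger(logger_name) if logger_name else None

    def decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = f(*args, **kwargs)
            elapsed_ms = (time.perf_counter() - start) * 1000
            message = "%s took %.3f ms" % (f.__name__, elapsed_ms)
            if logger:
                logger.info(message)   # log the timing if a logger name was given
            else:
                print(message)         # otherwise just print it
            return result
        return wrapper
    return decorator

@timed(logger_name="perf")
def my_func():
    sum(range(100000))  # placeholder workload

my_func()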