I want to compare execution time of two snippets and see which one is faster. So, I want an accurate method to measure execution time of my python snippets.
I already tried time.time(), time.process_time(), and time.perf_counter_ns(), as well as timeit.timeit(), but I am facing the same issue with all of them: when I measure the execution time of the same snippet, each run returns a different value. The variation is significant enough that I cannot reliably use these methods to compare the execution times of two snippets.
As an example, I am running the following code in Google Colab:
import time
t1 = time.perf_counter()
sample_list = []
for i in range(1000000):
    sample_list.append(i)
t2 = time.perf_counter()
print(t2 - t1)
I ran the above code 10 times and the variation in my results is about 50% (min = 0.14 s, max = 0.28 s).
Any alternatives?
timeit runs the snippet many times (1 million by default). If it's a CPU-bound bit of code, the result should be similar across runs; I notice drift of less than 10%. If you're doing I/O, that's a different story. Do you have an example we could try? – History
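A minimal sketch of the approach the comment describes: let timeit run each snippet many times and compare the minimum over several repeats. The minimum is the least noisy estimate, since every slower run just includes extra interpreter or OS overhead. The snippet strings and the number/repeat values here are illustrative choices, not anything from the question.

```python
import timeit

# The two snippets to compare, as statement strings for timeit.
append_stmt = """
sample_list = []
for i in range(100000):
    sample_list.append(i)
"""
comprehension_stmt = "sample_list = [i for i in range(100000)]"

# number=10 loop executions per timing, repeat=5 independent timings;
# take the minimum of each, as the timeit docs recommend.
t_append = min(timeit.repeat(append_stmt, number=10, repeat=5))
t_comp = min(timeit.repeat(comprehension_stmt, number=10, repeat=5))

print(f"append loop:   {t_append:.4f} s")
print(f"comprehension: {t_comp:.4f} s")
```

Taking the minimum rather than the mean is what timeit's own documentation suggests: the fastest run is the closest you can get to the snippet's intrinsic cost, while averages are inflated by whatever else the machine was doing.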