I'm using Python 3.3.1 64-bit on Windows and this code snippet:
len([None for n in range(1, 1000000) if n % 3 == 1])
executes in 136ms, compared to this one:
sum(1 for n in range(1, 1000000) if n % 3 == 1)
which executes in 146ms. Shouldn't a generator expression be faster or the same speed as the list comprehension in this case?
I quote from Guido van Rossum's From List Comprehensions to Generator Expressions:
...both list comprehensions and generator expressions in Python 3 are actually faster than they were in Python 2! (And there is no longer a speed difference between the two.)
EDIT:
I measured the time with timeit. I know it is not very accurate, but I only care about relative speeds here, and I consistently get a shorter time for the list-comprehension version when I test with different numbers of iterations.
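A minimal sketch of such a measurement with the timeit module (the iteration counts below are illustrative, not necessarily the ones used for the numbers quoted above):

```python
import timeit

list_expr = "len([None for n in range(1, 1000000) if n % 3 == 1])"
gen_expr = "sum(1 for n in range(1, 1000000) if n % 3 == 1)"

# repeat() runs each measurement several times; taking the minimum gives
# the least noisy estimate of the snippet's true cost
t_list = min(timeit.repeat(list_expr, number=5, repeat=3)) / 5
t_gen = min(timeit.repeat(gen_expr, number=5, repeat=3)) / 5
print(f"list comprehension: {t_list * 1000:.1f} ms per run")
print(f"generator + sum:    {t_gen * 1000:.1f} ms per run")
```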
(Measuring with time or clock instead of timeit for something that takes only 1/8th of a second can easily have an error much, much larger than 7%.) – Goon

Why do you compare len with sum? Counting elements is a lot faster than adding their contents. – Beagle

I measured with timeit - edited the question. – Joinville

Adding 1 each time is probably the same as visiting each list element and incrementing a counter. If the list is a regular structure then counting is much faster, but what about the overhead of creating the list? – Joinville

I used timeit and I tried a larger number of iterations with the same result. – Joinville

Maybe sum() is smart enough to figure out that all it ever needs to add in this special case is just 1s... – Beagle

sum does a PyNumber_InPlaceAdd for each element returned by PyIter_Next, so there's no way it can optimize the case of always adding 1. – Goon
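For reference, the two expressions do compute the same count; a quick sanity check (333333 is the length of the progression 1, 4, ..., 999997):

```python
count_list = len([None for n in range(1, 1000000) if n % 3 == 1])
count_gen = sum(1 for n in range(1, 1000000) if n % 3 == 1)

# Both count the n in [1, 1000000) with n % 3 == 1: 1, 4, ..., 999997
assert count_list == count_gen == 333333
print(count_list)
```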