First of all, there is no point in timing the creation of a generator expression. Creating a generator doesn't iterate over the contents, so it's very fast. Spot the differences between creating a generator expression over one element vs. over 10 million:
>>> from timeit import timeit
>>> print(timeit('(y for y in range(1))', number=100000))
0.060932624037377536
>>> print(timeit('(y for y in range(10000000))', number=100000))
0.06168231705669314
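To see that nothing is iterated at creation time, here's a minimal sketch; the noisy() helper is just for illustration, showing that the expression's body only runs when you pull values out:
>>> def noisy(y):
...     print('producing', y)
...     return y
...
>>> gen = (noisy(y) for y in range(3))  # nothing is printed yet
>>> next(gen)  # only now does the body run, one element at a time
producing 0
0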
Generators take more time to iterate over than, say, a list object:
>>> from collections import deque
>>> def drain_iterable(it, _deque=deque):
...     _deque(it, maxlen=0)  # a deque with maxlen=0 consumes an iterable, discarding every item
...
>>> def produce_generator():
...     return (y for y in range(100))
...
>>> print(timeit('drain_iterable(next(generators))',
...              'from __main__ import drain_iterable, produce_generator;'
...              'generators=iter([produce_generator() for _ in range(100000)])',
...              number=100000))
0.5204695729771629
>>> print(timeit('[y for y in range(100)]', number=100000))
0.3088444779859856
Here I tested iteration over the generator expression by just discarding all the elements as fast as possible. A pool of 100000 ready-made generators is built in the setup, so each timed run only has to fetch one with next() and drain it; creating the generators is not part of the measurement.
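As a side note, the 0.31 second figure above also includes the cost of building and storing the full list; a fairer apples-to-apples check would push the list comprehension through the same drain_iterable() helper. This is just a sketch, with the timing assigned rather than printed, since the exact number will vary per machine:
>>> list_drain = timeit('drain_iterable([y for y in range(100)])',
...                     'from __main__ import drain_iterable',
...                     number=100000)  # typically still well below draining the generator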
That's because a generator is essentially a function that executes until it yields a value, is then paused, is reactivated for the next value, and is paused again, once per element produced. See What does the "yield" keyword do? for a good overview. The bookkeeping involved in suspending and resuming the generator frame takes time. A list comprehension, in contrast, doesn't have to spend this time; it does all its looping without reactivating and deactivating a function for every value produced.
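You can make that suspend/resume cycle visible with a small sketch; the print calls exist purely to show when the generator's frame is actually running:
>>> def gen():
...     print('started')   # runs on the first next() call, not when gen() is called
...     yield 1            # frame pauses here after handing 1 to the caller
...     print('resumed')   # runs only when the caller asks for the next value
...     yield 2
...
>>> g = gen()  # creates the generator object; no code inside has run yet
>>> next(g)
started
1
>>> next(g)
resumed
2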
Generators are memory efficient, not execution efficient. They can sometimes save execution time too, but usually only because you avoid allocating and deallocating larger blocks of memory.
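To make the memory side concrete, here is a minimal sketch using sys.getsizeof; exact byte counts vary across Python versions, but the shape of the result does not:
>>> import sys
>>> gen_size = sys.getsizeof((y for y in range(1000000)))   # small and constant: just the paused frame
>>> list_size = sys.getsizeof([y for y in range(1000000)])  # megabytes: one slot per element, all at once
>>> gen_size < list_size
True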