I'm aware that CPython threads can only execute bytecode one at a time (because of the GIL), so why does the threading library provide locks? I assumed race conditions couldn't occur if only one thread executes at a time.
The library provides locks, conditions, and semaphores. Is their only purpose to synchronize execution?
Update:
I performed a small experiment:
from threading import Thread
from multiprocessing import Process

num = 0

def f():
    global num
    num += 1

def thread(func):
    # return Process(target=func)
    return Thread(target=func)

if __name__ == '__main__':
    t_list = []
    for i in xrange(1, 100000):
        t = thread(f)
        t.start()
        t_list.append(t)
    for t in t_list:
        t.join()
    print num
Basically this should have started 99,999 threads (xrange(1, 100000)), each incrementing num by 1. The printed result was 99993.
a) How can the result not be 99999 if the GIL is synchronizing execution and preventing race conditions? b) Is it even possible to start ~100k OS threads?
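Regarding a), one way to see why the GIL doesn't make num += 1 safe is to disassemble the function: the increment compiles to several bytecodes, and a thread can be preempted between them. A quick check (shown here in Python 3 syntax; the exact opcodes vary by interpreter version):

```python
import dis

num = 0

def f():
    global num
    num += 1  # a read-modify-write: load num, add 1, store num

# The GIL runs one bytecode at a time, but a thread switch can happen
# between the LOAD and the STORE, so another thread's increment is lost.
dis.dis(f)
```

The disassembly shows separate load, add, and store instructions rather than a single atomic operation, which is exactly the window where updates get lost.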
Update 2, after seeing answers:
If the GIL doesn't make even a simple operation like incrementing atomic, what is its purpose? It doesn't help with the nasty concurrency issues, so why was it put in place? I've heard it matters for C extensions; can someone exemplify this?
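For reference, the experiment above becomes deterministic once the read-modify-write is guarded by a threading.Lock. A minimal sketch in Python 3 syntax (thread count reduced to 1,000 to keep it fast; the structure otherwise mirrors the experiment):

```python
from threading import Thread, Lock

num = 0
lock = Lock()

def f():
    global num
    # The lock makes the load/add/store sequence atomic with respect
    # to the other threads, so no increment can be lost.
    with lock:
        num += 1

threads = [Thread(target=f) for _ in range(1000)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(num)  # 1000 every time
```

This is what the locks in the threading library are for: the GIL serializes individual bytecodes, while a Lock serializes whole critical sections spanning multiple bytecodes.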