I have a relatively simple (no classes) Python 2.7 program. The first thing it does is read an SQLite database into a dictionary. The database is large but not huge, around 90 MB on disk, and takes about 20 seconds to read in. After reading in the database I initialize a few variables, e.g.
localMax = 0
localMin = 0
firstTime = True
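For context, the database load mentioned above is essentially the following (a minimal sketch with a hypothetical table and column names; the real schema is different, but the shape is the same):

import sqlite3

def load_db(path):
    # Read every row into a plain dict keyed on the first column
    conn = sqlite3.connect(path)
    data = {}
    for key, value in conn.execute("SELECT key, value FROM mytable"):
        data[key] = value
    conn.close()
    return data

bigDict = load_db("mydata.db")   # ~90 MB on disk, ~20 seconds to load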
When I debug this program in Eclipse 3.7.0/PyDev, each single-step in the debugger, even over these simple lines, eats up 100% of a core and takes between 5 and 10 seconds; I can see the python process sit at 100% CPU the whole time. Single-step... wait 10 seconds... single-step... wait 10 seconds... If I debug at the command line using plain pdb, there are no problems. And if I'm not debugging at all, the program runs at "normal" speed, nothing strange like in Eclipse.
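By "just using pdb" I mean running the script under the stock debugger, roughly like this (the script name is illustrative), where each step returns immediately:

python -m pdb myprogram.py
(Pdb) n    # returns instantly, even after the dictionary is loaded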
I've reproduced this on a dual-core Win7 PC with 4 GB of memory, my 8-core Ubuntu box with 8 GB of memory, and even my MacBook Air. How's that for multi-platform development! I kept thinking it would work somewhere. I'm never even close to running out of memory at any point.
Why, on each Eclipse single-step, does the python process jump to 100% CPU and take 10 seconds?