It's REALLY hard to say.
I worked on improving the compile time of our project at work, and found that ONE file took 15 minutes when compiled with `-O2` (but about 15 seconds with `-O0`), and it gets compiled twice, so out of a total compile time of about 60-70 minutes this was roughly half. Turning off ONE optimisation feature brought that file down to about 20 seconds instead of 15 minutes... The file contained a single machine-generated function a few tens of thousands of lines long, which caused the compiler to do some long magic stuff (presumably some O(N^2) algorithm).
This can also happen if you have a small function that calls lots of other small functions in turn, and that, through layers of inlining, eventually expands into one large function.
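To track down a file like that, time it in isolation at each optimisation level, then start switching off individual features. A rough sketch of what I mean (the file name is made up, and `-fno-inline` is just one example of a feature you can switch off - substitute whichever pass turns out to be the culprit):

```
# Time one suspect file on its own at different optimisation levels
time g++ -O0 -c generated.cpp -o /dev/null
time g++ -O2 -c generated.cpp -o /dev/null

# If -O2 is the slow one, bisect by switching off individual features;
# -fno-inline is only an example, try others from the -O2 pass list
time g++ -O2 -fno-inline -c generated.cpp -o /dev/null
```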
At other times, I've found that reducing the number of files and putting more code in one file works better.
In general, my experience (both with my own compiler project, and other people's/companies' compilers) is that it's NOT the parsing and reading of files that takes the time, but the various optimisation and code-generation passes. You can try that out by compiling all files using `-fsyntax-only` (or whatever it is called for your compiler), which will JUST read the source and check that it's syntactically correct. Try also compiling with `-O0` if you aren't already. Often a specific optimisation pass is the problem, and some passes are worse than others, so it's useful to check what individual optimisation passes a particular `-O` option enables - in gcc that can be listed with `-Q -O2 --help=optimizers` [in this case for `-O2`].
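For gcc that looks something like this (clang takes `-fsyntax-only` as well; other compilers have their own spellings):

```
# Parse and type-check only - no optimisation, no code generation
g++ -fsyntax-only myfile.cpp

# List the individual optimisation passes that -O2 enables
g++ -Q -O2 --help=optimizers
```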
You really need to figure out what the compiler is spending its time on. There's no point in changing the code around if the problem is that most of the time goes into optimising it, and no point in cutting down on optimisers if the time is spent in parsing and optimisation adds no extra time. Without actually building YOUR project, it's very hard to say for sure.
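On gcc (and clang) the flag for that is `-ftime-report`, which prints a per-pass timing summary after compiling a file (the file name here is made up):

```
# Prints how long each compiler pass took on this translation unit
g++ -O2 -ftime-report -c slow_file.cpp -o /dev/null
```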
Another tip is to check `top` to see if your compile processes each use 100% CPU - if not, your build machine probably doesn't have enough memory. I have a build option for my work project which "kills" my desktop machine by running so far out of memory that the whole system grinds to a halt - even switching from one tab to another in the web browser takes 15-30 seconds. The only solution is to run with a lower `-j` [but of course, I usually forget, and by the time I notice it's too late - so if I don't want to interrupt the build, I go for lunch, a long coffee break or some such until it finishes, because the machine is just unusable]. This only happens with debug builds, because putting together the debug info for the large codebase takes up a lot of memory [apparently!]
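Roughly what I do (the `-j` value is just an example - pick one that fits your RAM):

```
# Watch the build: compile jobs should each sit near 100% CPU,
# and swap usage should stay at (or near) zero
top

# If the machine starts swapping, restart with fewer parallel jobs
make -j4
```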