Are there any drawbacks to using -O3 in GCC?

I've been a Software Engineer for 13 years in various languages, though I'm just now making my way into C and later C++. As I'm learning C, I'm using the GCC compiler to compile my programs, and I'm wondering if there are any gotchas to using -O3 or other optimization flags. Is there a chance that my software will break in ways I won't be able to catch without testing the compiled code, or that, during cross-compilation, I might inadvertently break something for a different platform?

Before I blindly turn those options on, I'd like to know what I can expect. Also, since -Ofast turns on non-standards-compliant flags, I'm leaning towards not using it. Am I correct in my assumption that -Ofast will most likely have "side effects"?

I've glanced over https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html before I posted this question.

Deepset answered 13/12, 2015 at 1:16 Comment(8)
I think it's a case of -O2 having a lot more mileage than -O3. I would recommend you make sure you test the binaries...Machos
you should test all your binaries anyway...Machos
@dwelch You're right. After I wrote that about the tests, I realized that I couldn't test it before it's compiled. I guess I'm used to scripting-language tests, where I can evade the public interface.Deepset
If you exploit undefined behaviour, you might well get some surprises. For compliant code, the code must not behave differently. However, your question cannot be answered without a code review (and that is off-topic on SO).Juliajulian
If you still want to debug your program, you should use -Og. However, if your code breaks with optimisation on, you might very well also get misbehaviour with the next version of gcc, when using a different architecture or a different compiler, or when you add another line of code. Relying on UB is always an invitation to disaster.Juliajulian
And for -Ofast: do you actually have run-time problems? If not, I strongly recommend sticking with the standard. If you do, think about local optimisations or enabling this option locally.Juliajulian
Although there may be some odd cases where -O3 does something odd, in general it shouldn't. The one area where using higher optimization levels is likely to break your program is if you have undefined behavior, which you don't want anyway. See this interesting case for an extreme example.Unaccountable
The main reason I rarely use -O# is that most of the time is spent debugging (especially on large projects), and the optimization parameter almost always results in the compiler changing the order of statements (for instance, when hoisting fixed calculations out of a for() loop). That makes using a debugger difficult: the source code display will jump around, or even stay on the same statement for several 'next' steps of the debugger, and some statements may be optimized into oblivion. My suggestion: when debugging, do not optimize; then optimize and completely retest the code.Prey

The only drawback of -O3 should be the inability to follow the code in a debugger.

The use of -Ofast can affect some of your floating-point operations and cause rounding errors, but unless you are running especially long chains of floating-point calculations, you are unlikely to ever notice.
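
To make that concrete, here is a minimal sketch (my own example; the file name is hypothetical, and the exact behaviour depends on your GCC version and target). -Ofast enables -ffast-math, which includes -ffinite-math-only, so the compiler is allowed to assume NaNs never occur and may fold the isnan() check below to always-false:

```c
/* fast_math_demo.c -- hypothetical file name.
 * Compile both ways and compare the output:
 *   gcc -O3    fast_math_demo.c -o strict -lm && ./strict
 *   gcc -Ofast fast_math_demo.c -o fast   -lm && ./fast
 */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double zero = 0.0;
    double nan_value = zero / zero;   /* NaN under IEEE 754 semantics */

    /* -Ofast implies -ffinite-math-only, so GCC may fold this check
     * to "false" and print the second message. */
    if (isnan(nan_value))
        printf("NaN detected (IEEE-compliant build)\n");
    else
        printf("NaN check elided (fast-math build)\n");

    return 0;
}
```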

Broken code (code with bad pointers or statements with undefined behavior) is likely to behave differently at different levels of optimization. The first reaction of many programmers is to blame the compiler; enabling all warnings and fixing them usually helps.
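
As an illustration of the "broken code" point, here is a small sketch (my example, not from the answer) where signed integer overflow, which is undefined in C, can make -O0 and -O3 builds behave differently; -fsanitize=undefined is a good way to catch this class of bug at run time:

```c
/* ub_demo.c -- hypothetical file name.
 *   gcc -O0 -Wall -Wextra ub_demo.c -o o0 && ./o0            # usually terminates
 *   gcc -O3 -Wall -Wextra ub_demo.c -o o3 && ./o3            # may loop forever
 *   gcc -O1 -fsanitize=undefined ub_demo.c && ./a.out        # reports the overflow
 */
#include <stdio.h>

int main(void)
{
    /* Undefined behaviour: the loop relies on signed overflow making
     * `i` negative. At -O3 GCC may assume overflow never happens,
     * deduce that `i > 0` always holds, and never exit the loop. */
    for (int i = 1; i > 0; i += i)
        printf("%d\n", i);

    return 0;
}
```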

Waterfowl answered 13/12, 2015 at 1:36 Comment(7)
Warnings and errors aren't enabled by default? That sucks. I take it that I use `-Wall` and `-Eall` for that? I just found How to turn on (literally) ALL of GCC's warnings?.Deepset
Contrary to what you might believe, -Wall does not enable all warnings but only the most useful subset -- there are more warnings, which you can check hereWaterfowl
@RickMacGillis -Wall -Wextra -pedantic is handy, and also remember to manually specify a language standard (-std=c11 or -std=c++14), or it will instead give you GNU-mode with a bunch of (useful, but nonstandard) extensions.Scutt
@Leushenko: The extensions normally should not interfere with compliant code.Juliajulian
-Wconversion is also recommended. Note that it might raise quite a few warnings if your code is not carefully written, but the warnings are great for pointing you at potential conversion problems.Juliajulian
Another drawback is increased compilation times so you might not want to compile with all optimizations enabled during your normal code, build, test, debug cycle. The -Og level was specifically designed for this case. But of course, testing with the final optimizations enabled is also important.Dorrisdorry
@5gon12eder: Actually -Og did quite a good job for an embedded project. Not sure if I even need -O3.Juliajulian

It's important to remember that almost all compiler optimizations are heuristics. In other words, an "optimization" is only an attempt to make your program more "optimal", but it could very well have the opposite effect. Just because -O3 is supposed to be better than -O2, and -O2 should be better than -O1—that doesn't mean that is in fact always how it works.

Sometimes an "optimization" enabled by -O3 might actually slow your program down when compared with the version generated with -O2 or -O1. You should experiment with different levels of optimization to see what works best for your specific code. You can even turn individual optimizations on and off if you really want to fine-tune it.
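
A minimal timing harness along those lines (the file name, loop body, and iteration count are all placeholders; measure your real hot path instead) might look like this:

```c
/* bench.c -- hypothetical benchmark skeleton.
 *   gcc -O2 bench.c -o bench_o2 && ./bench_o2
 *   gcc -O3 bench.c -o bench_o3 && ./bench_o3
 *   gcc -O3 -fno-tree-vectorize bench.c -o bench_novec && ./bench_novec
 * `gcc -Q --help=optimizers -O3` prints which passes a level enables.
 */
#include <stdio.h>
#include <time.h>

#define N 10000000

static double data[N];

int main(void)
{
    for (int i = 0; i < N; i++)
        data[i] = i * 0.5;

    clock_t start = clock();

    double sum = 0.0;
    for (int i = 0; i < N; i++)          /* the hot loop under test */
        sum += data[i] * data[i];

    clock_t end = clock();

    /* Print the result so the loop cannot be optimised away entirely. */
    printf("sum = %g, cpu seconds = %f\n",
           sum, (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}
```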

So in short—yes, there can be downsides to using -O3. I have personally observed that a lot of stuff I've written works better with -O2 than with -O3—but that's really program-specific.


FYI, here's another SO question that's asking why -O2 gives better results than -O3 for a specific program: gcc optimization flag -O3 makes code slower than -O2

Jink answered 13/12, 2015 at 1:54 Comment(0)

"I'd like to know what I can expect"

I've been using C++ (mostly GCC on vxWorks) in embedded systems for more than 2 decades. I have tremendous respect for the compiler writers.


Trust your compiler: IMHO, -O3 has never broken any code ... but has, on occasion, revealed interesting coding errors.


Choose: Teams must choose whether or not to "ship what you test, and test what you ship", regardless of -O1 or -O3 choice. The teams I have worked with have always committed to ship and test with -O3.


Single Step can be uncooperative: On a personal practice level, when using -O3 code, I typically 'abandon' gdb single step. I make much more use of breakpoints, and there are slight variations in coding choices to make auto variables (stack data) and class data more 'visible'. (You can make the gdb command 'p' your inconvenient friend).
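
One such coding choice (my sketch, not the author's code) is to mark a variable you need to inspect as volatile during a debug session; the optimiser must then keep it in memory, so gdb should show a live value instead of <optimized out>:

```c
#include <stdio.h>

static int accumulate(const int *values, int count)
{
    /* volatile forces the value to live in memory even at -O3, so a
     * breakpoint in the loop can still `p total`. Drop the qualifier
     * for release builds: it blocks optimisation of this variable. */
    volatile int total = 0;

    for (int i = 0; i < count; i++)
        total += values[i];

    return total;
}

int main(void)
{
    int values[] = {1, 2, 3, 4};
    printf("%d\n", accumulate(values, 4));
    return 0;
}
```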


Single Step is Necessary: Please note that even though we tested and shipped using -O3, we debugged using -O1 code almost exclusively.


Debug is Necessary: The trade-off between debugging at -O1 yet testing and shipping at -O3 is the extra re-compiles needed to switch between the two executables. The time saved at -O1 to explore, identify, and fix the code bug(s) has to make up for the 2 rebuilds (to -O1 and back to -O3).


Regression Test Automation: I want to say that system test (aka integration test or regression test) of -O3 has to step it up a notch, but I can't really describe it ... perhaps I should just recommend that the level of test automation be higher (everyone regression tests!). But I'm not sure. The automation level of regression test probably correlates more with team size than with performance level.


A 'successful' embedded system does 2 things. It satisfies the requirements. And, I think more importantly, in all human-visible behaviour it acts like a lightly loaded desktop. For any action (button press, cable disconnect, test-equipment-induced error, or even a lowly status-light change), the results have no human-perceptible delay. -O3 helps. A successful system can be done ... I have seen it.

Bequest answered 13/12, 2015 at 4:14 Comment(0)

Since -O3 starts moving your code around to optimise it, in some cases you may see that your results are different or your code breaks.

If you test your code for correctness with -O3 and find an issue that you can't debug, it is recommended to switch to -O0 to see if you get the same behaviour.
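
A concrete sketch of that (my example; the answer itself includes no code) is a strict-aliasing violation, which the alias analysis enabled at -O2/-O3 is allowed to exploit while -O0 typically is not:

```c
/* alias_demo.c -- hypothetical file name.
 *   gcc -O0 alias_demo.c -o a0 && ./a0   # typically prints 0
 *   gcc -O3 alias_demo.c -o a3 && ./a3   # may print 1
 * -fno-strict-aliasing papers over the bug; -Wall enables
 * -Wstrict-aliasing, which can flag some (not all) such casts.
 */
#include <stdio.h>

/* Undefined behaviour: the caller passes the same address for both
 * parameters, but the compiler may assume an int* and a float* never
 * alias, and return the cached value 1 instead of re-reading memory. */
static int alias_write(float *f, int *i)
{
    *i = 1;
    *f = 0.0f;
    return *i;
}

int main(void)
{
    int storage = 0;
    /* The cast through void* hides the aliasing from the diagnostic,
     * not from the optimiser. */
    printf("%d\n", alias_write((float *)(void *)&storage, &storage));
    return 0;
}
```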

Doubler answered 13/12, 2015 at 1:23 Comment(4)
Compliant code must not behave differently than required by the standard. And changing optimisation for debugging does not get you further, except by sheer luck. (This has nothing to do with the problems of debugging optimised code; in general, one should use -g when debugging - assuming gcc.)Juliajulian
True, but not all code is standard and/or well written. I have seen this behaviour especially in scientific codes with a huge number of lines.Doubler
That's why I started with "Compliant code ...". The question cannot be answered without a code-review.Juliajulian
Correction: I meant -Og for optimisations which do not interfere with debugging.Juliajulian
