I often hear that when compiling C and C++ programs I should "always enable compiler warnings". Why is this necessary? How do I do that?
Sometimes I also hear that I should "treat warnings as errors". Should I? How do I do that?
C and C++ compilers are notoriously bad at reporting some common programmer mistakes by default, such as:
- forgetting to `return` a value from a function
- arguments of the `printf` and `scanf` families not matching the format string

These can be detected and reported, just usually not by default; this feature must be explicitly requested via compiler options.
This depends on your compiler.
Microsoft C and C++ compilers understand switches like `/W1`, `/W2`, `/W3`, `/W4` and `/Wall`. Use at least `/W3`. `/W4` and `/Wall` may emit spurious warnings for system header files, but if your project compiles cleanly with one of these options, go for it. These options are mutually exclusive.
Most other compilers understand options like `-Wall`, `-Wpedantic` and `-Wextra`. `-Wall` is essential and all the rest are recommended (note that, despite its name, `-Wall` only enables the most important warnings, not all of them). These options can be used separately or all together.
Your IDE may have a way to enable these from the user interface.
A compiler warning signals a potentially serious problem in your code. The problems listed above are almost always fatal; others may or may not be, but every warning deserves attention, even one that turns out to be a false alarm. Investigate each warning, find the root cause, and fix it. In the case of a false alarm, work around it, that is, use a different language feature or construct so that the warning is no longer triggered. If this proves to be very hard, disable that particular warning on a case-by-case basis.
You don't want to just leave warnings as warnings, even if all of them are false alarms. That could be OK for very small projects where the total number of warnings emitted stays in the single digits. Anything more, and it's easy for a new warning to get lost in a flood of old familiar ones. Don't allow that; just make your whole project compile cleanly.
Note this applies to program development. If you are releasing your project to the world in source form, then it might be a good idea not to supply `-Werror` or its equivalent in your released build script. People might try to build your project with a different version of the compiler, or with a different compiler altogether, which may have a different set of warnings enabled. You may want their build to succeed. It is still a good idea to keep the warnings enabled, so that people who see warning messages can send you bug reports or patches.
This is again done with compiler switches. `/WX` is for Microsoft; most others use `-Werror`. In either case, the compilation will fail if there are any warnings produced.
Probably not! As you crank up your optimisation level, the compiler starts looking at the code more and more closely, and this closer scrutiny may reveal more mistakes. Thus, do not be content with the warning switches by themselves; always use them when compiling with optimisations enabled (`-O2` or `-O3`, or `/O2` if using MSVC).
`char*` to access that type somewhere, and if it had padding but code wasn't assuming it, it could mean a problem. – Sodamide

`-Werror` and cousins make a lot of sense in CI, but not so much during daily development; for example, it's reasonable to warn about unused parameters in general, but I don't need that to stop compilation when I'm adding a bunch of new functions and just sketching out the general structure. – Immobility

`-Wall` every so often, but I personally wouldn't recommend combining it with `-WX`. It enables some warnings that can catch subtle little bugs like "Hey, this class has a virtual function, but you forgot to make the dtor virtual!", but also info-message "warnings" like "Hey, I optimised this out since you never called it!" or "Hey, I didn't inline this!" – Antidisestablishmentarianism

`-Werror=return-type -Werror=implicit-function-declaration`. This may be good even for released versions of your code. – Snooperscope

`-Weverything -Werror` gets you the Hard Mode achievement. – Snake

`-Weverything`, so I asked a question, did some research, then answered my own question, writing about it here: https://mcmap.net/q/93965/-what-does-the-clang-compiler-39-s-weverything-option-include-and-where-is-it-documented. My final recommendation is `-Wall -Wextra -Werror`, as even clang recommends that we do not use `-Weverything` in general. – Taxable

C is, famously, a rather low-level language as HLLs go. C++, though it might seem to be a considerably higher-level language than C, still shares a number of its traits. And one of those traits is that the languages were designed by programmers, for programmers — and, specifically, programmers who knew what they were doing.
(For the rest of this answer I'm going to focus on C. Most of what I'll say also applies to C++, though perhaps not as strongly. Although as Bjarne Stroustrup has famously said, "C makes it easy to shoot yourself in the foot; C++ makes it harder, but when you do it blows your whole leg off.".)
If you know what you are doing — really know what you are doing — sometimes you may have to "break the rules". But most of the time, most of us will agree that well-intentioned rules keep us all out of trouble, and that wantonly breaking those rules all the time is a bad idea.
But in C and C++, there are surprisingly large numbers of things you can do that are "bad ideas", but which aren't formally "against the rules". Sometimes they're a bad idea some of the time (but might be defensible other times); sometimes they're a bad idea virtually all of the time. But the tradition has always been not to warn about these things — because, again, the assumption is that programmers know what they are doing, they wouldn't be doing these things without a good reason, and they'd be annoyed by a bunch of unnecessary warnings.
But of course not all programmers really know what they're doing. And, in particular, every C programmer (no matter how experienced) goes through a phase of being a beginning C programmer. And even experienced C programmers can get careless and make mistakes.
Finally, experience has shown not only that programmers do make mistakes, but that these mistakes can have real, serious consequences. If you make a mistake, and the compiler doesn't warn you about it, and somehow the program doesn't immediately crash or do something obviously wrong because of it, the mistake can lurk there, hidden, sometimes for years, until it causes a really big problem.
So it turns out that, most of the time, warnings are a good idea, after all. Even the experienced programmers have learned that (actually, it's "especially the experienced programmers have learned that"), on balance, the warnings tend to do more good than harm. For every time you did something wrong deliberately and the warning was a nuisance, there are probably at least ten times you did something wrong by accident and the warning saved you from further trouble. And most warnings can be disabled or worked around for those few times when you really want to do the "wrong" thing.
(A classic example of such a "mistake" is the test `if(a = b)`. Most of the time, this is truly a mistake, so most compilers these days warn about it — some even by default. But if you really want to both assign `b` to `a` and test the result, you can disable the warning by typing `if((a = b))`.)
The second question is, why would you want to ask the compiler to treat warnings as errors? I'd say it's because of human nature, specifically, the all-too-easy reaction of saying "Oh, that's just a warning, that's not so important, I'll clean that up later." But if you're a procrastinator (and I don't know about you, but I'm a world-class procrastinator) it's easy to put off the necessary cleanup for basically ever — and if you get into the habit of ignoring warnings, it gets easier and easier to miss an important warning message that's sitting there, unnoticed, in the midst of all the ones you're relentlessly ignoring.
So asking the compiler to treat warnings as errors is a little trick you can play on yourself to get around this human foible, to force yourself to fix the warnings today, because otherwise your program won't compile.
Personally, I'm not as insistent about treating warnings as errors — in fact, if I'm honest, I can say that I don't tend to enable that option in my "personal" programming. But you can be sure I've got that option enabled at work, where our style guide (which I wrote) mandates its use. And I would say — I suspect most professional programmers would say — that any shop that doesn't treat warnings as errors in C is behaving irresponsibly, is not adhering to commonly-accepted industry best practices.
`if(a = b)`, therefore we don't need to warn about it." (Then someone produces a list of 10 critical bugs in 10 released products that result from this particular error.) "Okay, no experienced C programmer would ever write that..." – Vicegerent

`if (returnCodeFromFoo = foo(bar))` and mean it, to capture and test the code in one place (assume the only purpose of `foo` is to have side effects!). The fact that a really, really experienced programmer may know this is not a good coding style is beside the point ;) – Homograph

`if (returnCodeFromFoo = foo(bar))`, then they put a comment in and turn off the warning (so that when the maintenance programmer looks at it 4 years later, he or she will realize that the code is intentional). That said, I worked with someone who (in Microsoft C++ land) insisted that combining /Wall with treating warnings as errors was the way to go. Uh, it isn't (unless you want to put in a lot of suppression comments). – Chemise

`for (int i=0;i<20,i++)` See it? That's a comma, not a semicolon. Comma is an "operator" that returns the value of the right-hand expression, and ++ won't return a 0 for a very long time. Perfectly legit compilable code. – Chromophore

`lint`. Philosophical, separation-of-functionality issues aside, there was of course also a very practical motivation for having what amounted to a seven-pass compiler (`cpp`, `c0`, `c1`, `c2`, `as`, `ld`, `lint`): the 64k address space of the PDP-11. Eventually the `gcc` folks declared that having `lint` as a separate program was Wrong, that they intended to build full, `lint`-like capability into the compiler itself -- and they mostly did, except you have to ask for most of those warnings; they're not on by default. Thus this question. – Vicegerent

Warnings consist of the best advice some of the most skilled C++ developers could bake into an application. They're worth keeping around.
C++, being a Turing complete language, has plenty of cases where the compiler must simply trust that you knew what you are doing. However, there are many cases where the compiler can realize that you probably did not intend to write what you wrote. A classic example is printf() codes which don't match the arguments, or std::strings passed to printf (not that that ever happens to me!). In these cases, the code you wrote is not an error. It is a valid C++ expression with a valid interpretation for the compiler to act on. But the compiler has a strong hunch that you simply overlooked something which is easy for a modern compiler to detect. These are warnings. They are things that are obvious to a compiler, using all the strict rules of C++ at its disposal, that you might have overlooked.
Turning warnings off, or ignoring them, is like choosing to ignore free advice from those more skilled than you. It’s a lesson in hubris that ends either when you fly too close to the sun and your wings melt, or a memory corruption error occurs. Between the two, I'll take falling from the sky any day!
"Treat warnings as errors" is the extreme version of this philosophy. The idea here is that you resolve every warning the compiler gives you -- you listen to every bit of free advice and act on it. Whether this is a good model for development for you depends on the team and what kind of product you are working on. It's the ascetic approach that a monk might have. For some, it works great. For others, it does not.
On many of my applications we do not treat warnings as errors. We do this because these particular applications need to compile on several platforms with several compilers of varying ages. Sometimes we find it is actually impossible to fix a warning on one side without it turning into a warning on another platform. So we are merely careful. We respect warnings, but we don't bend over backwards for them.
`equals` / `hashCode`), and it's a quality-of-implementation issue which of those are reported. – Brannen

`constexpr unsigned v = std::integral_constant<unsigned, -1 + 1u>::value;` (or, more usefully, even just `constexpr auto AllSet = -1u;`) on MSVC, with `-W2`. – Antidisestablishmentarianism

`printf("%d", someString)`, which many compilers give a warning for, while `printf(makePercentD(), someString)` may be too difficult for static analysis to demonstrate that `makePercentD` always returns `"%d"`. Turing completeness of templates is just a devil in that you can't prove a file is even compilable, much less compiles to something that does what you want it to do. – Manizales

Not only does handling the warnings make better code, it makes you a better programmer. Warnings will tell you about things that may seem little to you today, but one day that bad habit will come back and bite your head off.
Use the correct type, return that value, evaluate that return value. Take time and reflect "Is this really the correct type in this context?" "Do I need to return this?" And the biggie; "Is this code going to be portable for the next 10 years?"
Get into the habit of writing warning-free code in the first place.
Debugging a segmentation fault, for instance, requires the programmer to trace the root cause of the fault, which is usually located earlier in the code than the line where the segmentation fault finally occurs.
It's very typical that the cause is a line for which the compiler had issued a warning that you ignored, while the line that caused the segmentation fault is merely where the error eventually surfaced.
Fixing the warning leads to fixing the problem... A classic!
A demonstration of the above... Consider the following code:
```c
#include <stdio.h>

int main(void) {
    char* str = "Hello, World!!";
    int idx;

    // Colossal amount of code here, irrelevant to 'idx'

    printf("%c\n", str[idx]);
    return 0;
}
```
which, when compiled with the `-Wextra` flag passed to GCC, gives:
```
main.c: In function 'main':
main.c:9:21: warning: 'idx' is used uninitialized in this function [-Wuninitialized]
    9 |     printf("%c\n", str[idx]);
      |                        ^
```
which I could ignore and execute the code anyway... And then I would witness a "grand" segmentation fault, as my IP Epicurus professor used to say:
```
Segmentation fault
```
In order to debug this in a real-world scenario, one would start from the line that causes the segmentation fault and attempt to trace the root of the cause... They would have to search for what has happened to `idx` and `str` inside that colossal amount of code over there...

Until, one day, they find themselves in the situation where they discover that `idx` is used uninitialized, thus holding a garbage value, which results in indexing the string (way) out of its bounds, which leads to a segmentation fault.

If only they hadn't ignored the warning, they would have found the bug immediately!
`str[idx]`" isn't "Okay, where are `str` and `idx` defined?" – Antidisestablishmentarianism

`idx` may happen to be the value you expected in your test (not too unlikely if the expected value is 0), and actually happen to point to some sensitive data that should never be printed when deployed. – Nanceynanchang

The other answers are excellent and I don't want to repeat what they have said.
One other aspect to "why enable warnings" that hasn't properly been touched on is that they help enormously with code maintenance. When you write a program of significant size, it becomes impossible to keep the whole thing in your head at once. You typically have a function or three that you're actively writing and thinking about, and perhaps a file or three on your screen that you can refer to, but the bulk of the program exists in the background somewhere and you have to trust that it keeps working.
Having warnings on, and having them as energetic and in your face as possible, helps to alert you if something you change makes trouble for something that you can't see.
Take, for example, the Clang warning `-Wswitch-enum`. That triggers a warning if you use a switch on an enum and miss out one of the possible enum values. It's something you might think would be an unlikely mistake to make: you probably at least looked at the list of enum values when you wrote the switch statement. You might even have an IDE that generated the switch options for you, leaving no room for human error.
This warning really comes into its own when, six months later you add another possible entry to the enum. Again, if you're thinking about the code in question you'll probably be fine. But if this enum is used for multiple different purposes and it's for one of those that you need the extra option, it's very easy to forget to update a switch in a file you haven't touched for six months.
You can think of warnings in the same way as you'd think of automated test cases: they help you make sure that the code is sensible and doing what you need when you first write it, but they help even more to make sure that it keeps doing what you need while you prod at it. The difference is that test cases work very narrowly to the requirements of your code and you have to write them, while warnings work broadly to sensible standards for almost all code, and they're very generously supplied by the boffins who make the compilers.
Treating warnings as errors is just a means of self-discipline: you were compiling a program to test that shiny new feature, but you can't until you fix the sloppy parts. There is no additional information `-Werror` provides. It just sets priorities very clearly:
Don't add new code until you fix problems in the existing code
It's really the mindset that's important, not the tools. Compiler diagnostics output is a tool. MISRA C (for embedded C) is another tool. It doesn't matter which one you use, but arguably compiler warnings are the easiest tool you can get (it's just one flag to set) and the signal-to-noise ratio is very high. So there's no reason not to use them.
No tool is infallible. If you write `const float pi = 3.14;`, most tools won't tell you that you defined π with a bad precision, which may lead to problems down the road. Most tools won't raise an eyebrow at `if(tmp < 42)`, even though it's commonly known that giving variables meaningless names and using magic numbers is a way to disaster in big projects. You have to understand that any "quick test" code you write is just that: a test, and you have to get it right before you move on to other tasks, while you still see its shortcomings. If you leave that code as is, debugging it after you spend two months adding new features will be significantly harder.
Once you get into the right mindset, there is no point in using `-Werror`. Having warnings as warnings will allow you to take an informed decision whether it still makes sense to run that debug session you were about to start, or to abort it and fix the warnings first.
`clippy`, the linting tool for Rust, will actually warn about the constant "3.14". It's actually an example in the docs. But as you might guess from the name, `clippy` takes pride in being aggressively helpful. – Amperage

As someone who works with legacy embedded C code, enabling compiler warnings has helped show a lot of weaknesses and areas to investigate when proposing fixes. In GCC, using `-Wall` and `-Wextra` and even `-Wshadow` have become vital. I'm not going to go over every single hazard, but I'll list a few that have popped up that helped show code issues.
This one can easily point to unfinished work and areas that might not be using all of the passed variables which could be an issue. Let's look at a simple function that may trigger this:
```c
int foo(int a, int b)
{
    int c = 0;

    if (a > 0)
    {
        return a;
    }
    return 0;
}
```
Just compiling this without `-Wall` or `-Wextra` returns no issues. `-Wall` will tell you, though, that `c` is never used:
```
foo.c: In function ‘foo’:
foo.c:9:20: warning: unused variable ‘c’ [-Wunused-variable]
```
`-Wextra` will also tell you that your parameter `b` doesn't do anything:
```
foo.c: In function ‘foo’:
foo.c:9:20: warning: unused variable ‘c’ [-Wunused-variable]
foo.c:7:20: warning: unused parameter ‘b’ [-Wunused-parameter]
 int foo(int a, int b)
```
This one bit hard and did not show up until `-Wshadow` was used. Let's modify the example above to just add the two parameters, but there just happens to be a global with the same name as a local, which causes a lot of confusion when trying to use both.
```c
int c = 7;

int foo(int a, int b)
{
    int c = a + b;
    return c;
}
```
When `-Wshadow` is turned on, it's easy to spot this issue:
```
foo.c:11:9: warning: declaration of ‘c’ shadows a global declaration [-Wshadow]
foo.c:1:5: note: shadowed declaration is here
```
This doesn't require any extra flags in GCC, but it has still been the source of problems in the past. A simple function trying to print data, but with a formatting error, could look like this:
```c
#include <stdio.h>

void foo(const char * str)
{
    printf("str = %d\n", str);
}
```
This doesn't print the string since the formatting flag is wrong and GCC will happily tell you this is probably not what you wanted:
foo.c: In function ‘foo’:
foo.c:10:12: warning: format ‘%d’ expects argument of type ‘int’, but argument 2 has type ‘const char *’ [-Wformat=]
These are just three of the many things the compiler can double check for you. There are a lot of others like using an uninitialized variable that others have pointed out.
"`possible loss of precision`" and "`comparison between signed and unsigned`" warnings. I find it difficult to grasp how many "programmers" ignore these (in fact, I am not really sure why they are not errors). – Edelstein

`sizeof` is unsigned, but the default integer type is signed. The `sizeof` result type, `size_t`, is typically used for anything related to type size, such as, e.g., alignment or array/container element count, while integers in general are intended to be used as "`int` unless otherwise required". Considering just how many people are thus taught to use `int` to iterate over their containers (comparing `int` to `size_t`), making it an error would break roughly everything. ;P – Antidisestablishmentarianism

This is a specific answer to C, and why this is far more important to C than to anything else.
```c
#include <stdio.h>

int main()
{
    FILE *fp = "some string";
}
```
This code compiles with a warning. What are and should be errors in just about every other language on the planet (barring assembly language) are warnings in C. Warnings in C are almost always errors in disguise. Warnings should be fixed, not suppressed.
With GCC, we do this as `gcc -Wall -Werror`.
This was also the reason for the high rantiness about some Microsoft non-secure API warnings. Most people programming C have learned the hard way to treat warnings as errors, and then this stuff appeared that just wasn't the same kind of thing, and it demanded non-portable fixes.
Compiler warnings are your friend
I work on legacy Fortran 77 systems. The compiler tells me valuable things: argument data type mismatches on a subroutine call, using a local variable before a value has been set into it, and having a variable or subroutine argument that is never used. These are almost always errors.

When my code compiles cleanly, 97% of the time it works. The other guy I work with compiles with all warnings off, spends hours or days in the debugger, and then asks me to help. I just compile his code with the warnings on and tell him what to fix.
You should always enable compiler warnings, because the compiler can often tell you what's wrong with your code. To do this, you pass `-Wall -Wextra` to the compiler.

You should usually treat warnings as errors, because the warnings usually signify that there's something wrong with your code. However, it's often very easy to ignore these errors. Therefore, treating them as errors will cause the build to fail, so you can't ignore the errors. To treat warnings as errors, pass `-Werror` to the compiler.
I once worked for a large (Fortune 50) company that manufactured electronic testing equipment.
The core product of my group was an MFC program that, over the years, came to generate literally hundreds of warnings. Which were ignored in almost all cases.
This is a frigging nightmare when bugs occur.
After that position, I was lucky enough to be hired as the first developer in a new startup.
I encouraged a 'no warning' policy for all builds, with compiler warning levels set to be pretty noisy.
Our practice was to use `#pragma warning` push/disable/pop for code that the developer was sure was really fine, along with a log statement at the debug level, just in case.
This practice worked well for us.
`#pragma warning` doesn't just suppress warnings; it serves the dual purposes of quickly communicating to other programmers that something is intentional and not accidental, and acting as a search tag for quickly locating potentially problematic areas when something breaks but fixing the errors/warnings doesn't fix it. – Antidisestablishmentarianism

Compiler warnings in C++ are very useful, for several reasons.

They let you see where you may have made a mistake which can impact the final result of your operations, for example, if you didn't initialize a variable, or if you used "=" instead of "==" (these are just examples).

They also show you where your code does not conform to the C++ standard. That's useful, because if the code conforms to the current standard, it will be easy to move it to another platform, for example.

In general, warnings are very useful for showing you where you have mistakes in your code which can affect the result of your algorithm, or for preventing errors when the user uses your program.
A warning is an error waiting to happen. So you must enable compiler warnings and tidy up your code to remove any warning.
Ignoring warnings means you left sloppy code that not only could cause problems in the future for someone else, but it will also make important compile messages less noticed by you.
The more compiler output there is, the less anyone will notice or bother. The cleaner, the better. It also means you know what you are doing. Leaving warnings in is very unprofessional, careless, and risky.
There's only one problem with treating warnings as errors: When you're using code coming from other sources (e.g., Microsoft libraries, open source projects), they didn't do their job right, and compiling their code generates tons of warnings.
I always write my code so it doesn't generate any warnings or errors, and clean it up until it compiles without generating any extraneous noise. The garbage I have to work with appalls me, and I'm astounded when I have to build a big project and watch a stream of warnings go by where the compilation should only be announcing which files it processed.
I also document my code, because I know the real lifetime cost of software comes mostly from maintenance, not from writing it initially, but that's a different story...
`-Wall` and you use `-Wall -Wextra`. – Nanceynanchang

Some warnings may indicate a possible semantic error in the code or possible UB, e.g. `;` after `if()`, an unused variable, a global variable masked by a local, or a comparison of signed and unsigned. Many warnings are related to the static code analyzer in the compiler, or to breaches of the ISO standard detectable at compile time, which "require diagnostics". While those occurrences may be legal in one particular case, they would be the result of design issues most of the time.
Some compilers, e.g., GCC, have a command-line option to activate "warnings as errors" mode. It's a nice, if cruel, tool to educate novice coders.
The fact that C++ compilers accept compiling code that obviously results in undefined behavior at all is a major flaw in the compilers. The reason they don't fix this is because doing so would probably break some usable builds.
Most of the warnings should be fatal errors that prevent the build from completing. The default of just displaying warnings and doing the build anyway is wrong, and if you don't override it to treat warnings as errors, and you leave some warnings in, you will likely end up with your program crashing and doing random things.
`int i; if (fun1()) i=2; if (fun2()) i=3; char s="abcde"[i];` This code exhibits undefined behaviour if and only if both `fun1()` and `fun2()` can return `false` on the same function execution. Which may or may not be true, but how is the compiler to tell? – Nanceynanchang

All percentages deviate from reality and are not meant to be taken seriously.
99% of warnings are completely useless for correctness. However, the remaining 1% makes your code not work (often only in rare cases). That is the important part other answers miss.

The warnings come from compiler developers. There is a C standard and conformance, but the warnings are a sign from the compiler developers about problems you are giving them, i.e., things the compiler writers know lead to inefficient or erroneous constructs. It is like ignoring a plumber who says you cannot put a toilet there and telling them to do it anyway.

The next person who enables warnings will think you are incompetent because you didn't enable warnings. They have no idea that 99% of the code is correct and will think that only 50% is.

Another issue often caught by warnings is dead code, i.e., code that can never do anything. This is likely a reason people hate inheriting code with warnings: 75% of what they are looking at is probably useless.

Warning-free code gives other people confidence that the code is portable and adaptable to tooling, code updates and general bit rot. Warning-free code gives other developers confidence that the code they are looking at is not crazy spaghetti or subtle baloney. They might also just catch an error or two.
You should definitely enable compiler warnings, as some compilers are bad at reporting some common programming mistakes by default. These mistakes can be detected and reported, just usually not by default; the feature must be explicitly requested via compiler options.
Take it easy: you don't have to; it is not necessary. `-Wall` and `-Werror` were designed by code-refactoring maniacs for themselves: they were invented by compiler developers to avoid breaking existing builds after compiler or programming language updates on the user side. The feature is nothing but the decision to break or not to break the build.
It is totally up to your preference to use it or not. I use it all the time because it helps me to fix my mistakes.
"-Wall and -Werror was designed by code-refactoring maniacs for themselves." [citation needed] – Newborn

`-Wall` and `-Werror`, it's just asking if it's a good idea. Which, from your last sentence, it sounds like you're saying it is. – Moa