To summarize the previous observations, there are exactly two possible causes of undefined behavior relating to your list of operations:
- division by zero
- overflow (whether negative or positive) – your example of negating `std::numeric_limits<int64_t>::min()`, whether by division or otherwise, is just an example of that
Only the arithmetic operators (the first five in your list) are affected by either of these issues; all the others have well-defined behavior for all inputs.
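As a minimal sketch of what those two cases look like in code (the helper name `checked_divide` and the use of `std::optional` are my own illustrative choices, not part of your code):

```cpp
#include <cstdint>
#include <limits>
#include <optional>

// Returns std::nullopt for exactly the two inputs whose division would be
// undefined behavior: a zero divisor, and INT64_MIN / -1 (whose mathematical
// result, 2^63, does not fit in int64_t).
std::optional<int64_t> checked_divide(int64_t a, int64_t b)
{
    if (b == 0)
        return std::nullopt;  // division by zero
    if (a == std::numeric_limits<int64_t>::min() && b == -1)
        return std::nullopt;  // overflow: negating INT64_MIN
    return a / b;             // well-defined for all remaining inputs
}
```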
What I want to do is expand on the dangers of integer overflow and undefined behavior. First, I highly recommend watching Undefined Behavior is Awesome by Piotr Padlewski, and the Garbage In, Garbage Out talk by Chandler Carruth.
Also, consider how integer overflow is a recurring theme in CVEs (software vulnerability reports). The integer overflow itself does not usually cause direct damage, but many other problems can ensue as a result of it. You could liken the overflow to a pin prick: mostly harmless by itself, but able to help dangerous toxins and germs bypass your body's immune system.
There was at least one hole in OpenSSH which was directly related to integer overflow, for example, and this one did not even involve any "crazy" compiler optimizations, or, for that matter, any optimizations at all.
Finally, things like UBSAN (the undefined behavior sanitizer in Clang/GCC) exist. If you allow signed integer overflow in one place, and try to get meaningful results from UBSAN, you may get unexpected traps and/or too many false positives.
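For illustration, a sketch of how you might enable it; the exact diagnostic text varies between compiler versions, but both GCC and Clang accept `-fsanitize=undefined`:

```cpp
// Build with the sanitizer enabled, e.g.:
//   g++ -fsanitize=undefined -g overflow.cpp -o overflow
// (Clang accepts the same flag.)
#include <cstdint>
#include <limits>

int main()
{
    int64_t x = std::numeric_limits<int64_t>::max();
    ++x;  // signed overflow: UBSAN should report this at runtime
          // instead of letting it go unnoticed
    return 0;
}
```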
TL;DR: Avoid all undefined behavior.
John Zwinck has mentioned adding range checks as a remedy, carefully avoiding any intermediate operations that would overflow. Assuming you only have to support GCC, there are also two command-line options that should help you a lot if you feel lazy:
- `-ftrapv` will cause signed integer overflow to trap.
- `-fwrapv` will cause signed integers to wrap on overflow.
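For instance, compiling a tiny test program (the file name is made up) with each flag shows the difference, at least at low optimization levels where `-ftrapv` behaves most predictably:

```cpp
// Compile this once with each flag to see the difference:
//   g++ -ftrapv wrap.cpp && ./a.out   # should abort on the overflow below
//   g++ -fwrapv wrap.cpp && ./a.out   # prints the wrapped value instead
#include <cstdint>
#include <cstdio>
#include <limits>

int main()
{
    int64_t x = std::numeric_limits<int64_t>::max();
    x = x + 1;  // overflows: traps under -ftrapv, wraps under -fwrapv
    std::printf("%lld\n", static_cast<long long>(x));
    return 0;
}
```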
Which one is safer? That depends heavily on your application domain. Your opinion seems to be that a lower chance of crashing equals "safer". That may be so; however, consider the above-mentioned OpenSSH vulnerability. What would you rather have an SSH server do when fed garbage data, and possibly shellcode, from the remote client?
- A) terminate (as would happen with `-ftrapv`)
- B) proceed and possibly execute the shellcode (as would happen with `-fwrapv`)
I'm pretty sure most admins would go for A), even more so if the process to be terminated is not the one listening on the actual socket(s) but has been specifically `fork()`ed to handle the current connection, so there isn't even much of a DoS. In other words, while `-fwrapv` gives you defined behavior, that behavior is not necessarily the behavior expected at the point of use, and therefore not necessarily "safe".
Also, I recommend that you avoid making false dichotomies in your mind such as process crash vs. proceeding with garbage data. You can choose from a wide range of error handling strategies if you add the right checks, whether using special return values or exception handling, to safely get out of a tight space without having to stop servicing requests altogether.
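To make that concrete, here is a sketch of both styles; the names `checked_add` and `safe_negate` are only placeholders, and `__builtin_add_overflow` is a GCC/Clang extension rather than standard C++:

```cpp
#include <cstdint>
#include <limits>
#include <optional>
#include <stdexcept>

// "Special return value" style: report overflow through the return type.
// __builtin_add_overflow stores the wrapped result and returns true
// when the mathematical result does not fit in int64_t.
std::optional<int64_t> checked_add(int64_t a, int64_t b)
{
    int64_t out;
    if (__builtin_add_overflow(a, b, &out))
        return std::nullopt;
    return out;
}

// "Exception handling" style: turn the one problematic input into an error
// the caller can catch without taking down the whole process.
int64_t safe_negate(int64_t v)
{
    if (v == std::numeric_limits<int64_t>::min())
        throw std::range_error("negation would overflow int64_t");
    return -v;
}
```

Either way, the caller can reject the offending request and keep servicing everything else instead of terminating.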