Is it possible to test whether a type supports negative zero in C++ at compile time?
Is there a way to write a type trait to determine whether a type supports negative zero in C++ (including integer representations such as sign-and-magnitude)? I don't see anything that directly does that, and std::signbit doesn't appear to be constexpr.

To clarify: I'm asking because I want to know whether this is possible, regardless of what the use case might be, if any.

Incriminate answered 23/2, 2019 at 8:5 Comment(8)
As usual, I'm more interested in why you need this. What is the real and original problem you need to solve? Unless this is just plain curiosity (in which case you should mention it), please ask about your real problem directly instead. Right now your question is more of an XY problem.Immortal
You should be aware that the non-2s-complement signed integer option is most likely going away from C++. See open-std.org/jtc1/sc22/wg21/docs/papers/2018/p0907r0.html for details (which also references a similar effort in C).Pretended
@Pretended - Frankly, I hope it doesn't. If technology evolves in a way that makes non-2s complement integral types useful, then the C and C++ standards will explicitly discourage their use (or the specification will have to reintroduce ability to use them). It's one thing to say that current implementations work in a particular way. It's another thing to say that compiler vendors or programmers should ASSUME it in their development. Hardware designs do tend to introduce surprises for software developers, for all sorts of reasons.Westbrook
@Someprogrammerdude - I'm curious as to what you think the X might be in this case that makes you think this is an XY problem.Incriminate
@Peter, I didn't comment on whether it was a good/bad idea. Though I personally consider it good to get rid of cruft that no-one uses (and hasn't for quite a while), I can see your viewpoint. I merely wanted to point out that, at least for integers, this question may not matter long term. In any case, the standard is meant to progress, both in adding new features and removing questionable ones (such as gets in ISO C).Pretended
@mistertribs The XY problem is, in short, about asking for help to fix a single solution to an unknown problem. If asking about the actual problem instead, we might be able to help with other (and possibly better or simpler) solutions.Immortal
@Someprogrammerdude - I am aware of what an XY problem is, I have come across the term before your comment, and in any case I am capable of following links and reading. The question is as stated, I want to know whether it is possible or not. This is not an XY problem and I don't know why you would jump to that conclusion.Incriminate
Something like -std::numeric_limits<T>::max()==std::numeric_limits<T>::lowest() could work.Mesomorphic

The best one can do is to rule out the possibility of signed zero at compile time, but never be completely positive about its existence at compile time. The C++ standard goes a long way to prevent checking binary representation at compile time:

  • reinterpret_cast<char*>(&value) is forbidden in constexpr.
  • using union types to circumvent the above rule in constexpr is also forbidden.
  • Operations on zero and negative zero of integer types behave exactly the same per the C++ standard, with no way to differentiate them.
  • For floating-point operations, division by zero is forbidden in a constant expression, so testing 1/0.0 != 1/-0.0 is out of the question.

The only thing one can test is if the domain of an integer type is dense enough to rule-out signed zero:

#include <limits>
#include <type_traits>

template <typename T>
constexpr bool test_possible_signed_zero()
{
    using limits = std::numeric_limits<T>;
    if constexpr (std::is_fundamental_v<T> &&
                  limits::is_exact &&
                  limits::is_integer) {
        auto low = limits::min();
        auto high = limits::max();
        T carry = 1;
        // This is one of the simplest ways to check that
        // max() - min() + 1 == 2 ** bits
        // without stepping into undefined behavior.
        for (auto bits = limits::digits; bits > 0; --bits) {
            auto adder = low % 2 + high % 2 + carry;
            if (adder % 2 != 0) return true;
            carry = adder / 2;
            low /= 2;
            high /= 2;
        }
        return false;
    } else {
        return true;
    }
}

template <typename T>
struct is_possible_signed_zero
    : std::integral_constant<bool, test_possible_signed_zero<T>()> {};

template <typename T>
constexpr bool is_possible_signed_zero_v = is_possible_signed_zero<T>::value;

It is only guaranteed that if this trait returns false then no signed zero is possible. This assurance is very weak, but I can't see any stronger assurance. Also, it says nothing constructive about floating point types. I could not find any reasonable way to test floating point types.

Amphicoelous answered 24/2, 2019 at 10:17 Comment(4)
instead of checking max() - min() + 1 == 2 ** bits it'll be simpler to change the expression to min() == 2 ** bits - max() - 1 which can be done as min() == -max() - 1 without any arbitrary precision mathFriary
This is undefined behavior: 2 ** bits since it overflows to zero in type T, if T is signed. This - max() -1 is undefined behavior for systems with sign bit and negative zero when -max() == min(). I did not say that arbitrary precision is the shortest way to calculate it, just the safest way to avoid overflow (undefined behavior) bugs.Amphicoelous
min() == -max() - 1 doesn't invoke UB. The result is the same as evaluating your expression in higher precisionFriary
@phuclv, on a system where -numeric_limits<int>::max() == numeric_limits<int>::min(), writing (-max() - 1) is the same as (numeric_limits<int>::min() - 1). Underflowing int is undefined behavior see en.cppreference.com/w/cpp/language/… When signed integer arithmetic operation overflows (the result does not fit in the result type), the behavior is undefinedAmphicoelous

Unfortunately, I cannot imagine a way to do that. The fact is that the C standard takes the view that type representations should not be a programmer's concern (*); they are specified only to tell implementors what they must do.

As a programmer all you have to know is that:

  • two's complement is not the only possible representation for negative integers
  • a negative 0 could exist
  • an arithmetic operation on integers cannot return a negative 0; only bitwise operations can

(*) Opinion here: Knowing the internal representation could lead programmers back to the good old optimizations that blindly ignored the strict aliasing rule. If you see a type as an opaque object that can only be used in standard operations, you will have fewer portability questions...

Samarium answered 23/2, 2019 at 8:56 Comment(1)
since C++20, two's complement is the only representationWoll

Somebody's going to come by and point out this is all-wrong standards-wise.

Anyway, decimal machines aren't allowed anymore and through the ages there's been only one negative zero. As a practical matter, these tests suffice:

INT_MIN == -INT_MAX && ~0 == 0

but your code won't work in practice: despite what the standard says, constexprs are evaluated on the host using host rules, and there exists an architecture where this crashes at compile time.

Trying to massage out the trap is not possible. ~0 == -1 reliably tests for two's complement, so its inverse does indeed check for one's complement*; however, ~0 is the only way to generate negative zero on one's complement, and any use of that value as a signed number can trap, so we can't test for its behavior. Even using platform-specific code, we can't catch traps in constexpr, so forget about it.

*barring truly exotic arithmetic but hey

Everybody uses #defines for architecture selection. If you need to know, use it.

If you handed me an actually standards-compliant compiler that yielded a compile error on a trap in a constexpr, and evaluated with target-platform rules rather than host-platform rules with converted results, we could do this:

target.o: target.c++
    $(CXX) -c target.c++ || $(CC) -DTRAP_ZERO -c target.c++

#include <climits>

bool has_negativezero() {
#ifndef TRAP_ZERO
        return INT_MIN == -INT_MAX && ~0 == 0;
#else
        return false;
#endif
}
Mould answered 6/3, 2019 at 15:47 Comment(10)
As I commented on another answer, ~0 is unsafe with respect to the C standard and I couldn't find any wording or guarantee in the C++ standard. (1) on a one's complement system ~0 is negative zero; (2) negative zero might trap according to the C standard open-std.org/jtc1/SC22/wg14/www/docs/n1548.pdf (6.2.6.2 Integer types), quote: "as is whether the value ... with sign bit and all value bits 1 (for ones' complement), is a trap representation or a normal value".Amphicoelous
@MichaelVeksler: You are correct on both counts. But the fact that constexprs get evaluated on the host machine with host arithmetic rules burns harder. Yes I know it's not supposed to. I've hunted bugs that boiled down to this too many times.Mould
in that case, we both agree that it's a buggy area in some compilers, and that the correct compiler behavior (for constexpr) would be to emit diagnostic (for such trap cases) and neither accept the code nor crash. Am I correct?Amphicoelous
@MichaelVeksler: Yes. If the compiler reliably emitted a diagnostic I could write code that actually works by using multiple build steps, but it doesn't. :(Mould
~0 == 0 is true on a one's complement machine but not on a sign-magnitude oneFriary
@Mould what example? On sign-magnitude systems ~0 is equal to -INT_MAX so the best way you can get is return INT_MIN == -INT_MAX && (~0 == 0 || ~0 == -INT_MAX);Friary
@phuclv: I looked up sign-magnitude systems and couldn't find any even remotely likely to have a 21st-century C++ compiler. As this is the practical answer, I'm not interested in platforms that don't exist anymore.Mould
the question asks about "negative zero" which means that the OP is interested in all systems that C++ standard allow. If you don't care about platforms that don't exist anymore then there's no point testing it because even one's complement systems already died decades agoFriary
@phuclv: I'm pretty sure I saw a one's complement this century.Mould
@Mould sure you saw it with your eyes? I'm pretty sure they were in the last century, just like sign-magnitude computers, except UNISYS, which went on much longer due to legacy issues. No modern architecture in the last ~4 decades uses those 2 ancient sign representations, to the extent that the majority of code assumes computers always run two's complement, since it's impossible to find consumer one's complement hardware even in the long past. There is a proposal to remove obsolete sign systems in C++20Friary

The standard std::signbit function in C++ has an overload that accepts an integral value

  • bool signbit( IntegralType arg ); (4) (since C++11)

So you might hope to check with static_assert(signbit(-0)). However, there's a footnote on that overload (emphasis mine)

  1. A set of overloads or a function template accepting the arg argument of any integral type. Equivalent to (2) (the argument is cast to double).

which unfortunately means you still have to rely on a floating-point type having a negative zero. You can verify that the implementation uses IEEE-754, which mandates signed zero, with std::numeric_limits<double>::is_iec559

Similarly std::copysign has the overload Promoted copysign( Arithmetic1 x, Arithmetic2 y ); that could be used for this purpose. Unfortunately, neither signbit nor copysign is constexpr in the current standard, although there are proposals to change that

Yet Clang and GCC already evaluate those as constexpr, if you don't want to wait for the standard to catch up.

Systems with a negative zero also have a balanced range, so one can just check whether the positive and negative ranges have the same magnitude

if constexpr(-std::numeric_limits<int>::max() != std::numeric_limits<int>::min() + 1) // or
if constexpr(-std::numeric_limits<int>::max() == std::numeric_limits<int>::min())
    // has negative zero

In fact -INT_MAX - 1 is also how libraries define INT_MIN on two's-complement systems

But the simplest solution would be eliminating non-two's complement cases, which are pretty much non-existent nowadays

static_assert(-1 == ~0, "This requires the use of 2's complement");


Friary answered 6/3, 2019 at 5:51 Comment(3)
@MichaelVeksler fair enough. Fixed thatFriary
The test of -1 == ~0 is not convincing since: (1) on a one's complement system ~0 is negative zero; (2) negative zero might trap according to the C standard open-std.org/jtc1/SC22/wg14/www/docs/n1548.pdf (6.2.6.2 Integer types), quote: "as is whether the value ... with sign bit and all value bits 1 (for ones’ complement), is a trap representation or a normal value". The C++ standard is not clear about that.Amphicoelous
@MichaelVeksler On a one's complement system ~0 is negative zero, which isn't equal to -1 (whose representation is ~1). On a sign-magnitude one ~0 is -INT_MAX, which is also not -1. So it works in both cases. But you're right that it won't work in the case of a trapping zero representationFriary
