Why does int plus uint return uint?

int plus unsigned int returns an unsigned int. Should it be so?

Consider this code:

#include <boost/static_assert.hpp>
#include <boost/typeof/typeof.hpp>
#include <boost/type_traits/is_same.hpp>

class test
{
    static const int          si = 0;
    static const unsigned int ui = 0;

    typedef BOOST_TYPEOF(si + ui) type;
    BOOST_STATIC_ASSERT( ( boost::is_same<type, int>::value ) ); // fails
};


int main()
{
    return 0;
}
Carburize answered 6/4, 2012 at 18:28 Comment(4)
Maybe you should ask the guys who designed the language.Alver
There is another post on the site on the same topic you can find here: https://mcmap.net/q/322936/-comparison-operation-on-unsigned-and-signed-integers Hope this helps!Trousers
+1 for providing a complete test case. sscce.orgMadalynmadam
possible duplicate of How do promotion rules work when the signedness on either side of a binary operator differ?Kaufman

If by "should it be" you mean "does my compiler behave according to the standard": yes.

C++2003: Clause 5, paragraph 9:

Many binary operators that expect operands of arithmetic or enumeration type cause conversions and yield result types in a similar way. The purpose is to yield a common type, which is also the type of the result. This pattern is called the usual arithmetic conversions, which are defined as follows:

  • blah
  • Otherwise, blah,
  • Otherwise, blah, ...
  • Otherwise, if either operand is unsigned, the other shall be converted to unsigned.

If by "should it be" you mean "would the world be a better place if it didn't": I'm not competent to answer that.

Isthmus answered 6/4, 2012 at 18:36 Comment(2)
I am sure the C++ standards committee aims to make the world a better place, so by "should it be" I actually mean "why the world wouldn't be a better place if not so".Carburize
@Carburize - Then it is so: (int)0x70000000 + (unsigned int)0x70000000 will result in a positive value.Madalynmadam

Unsigned integer types mostly behave as members of a wrapping abstract algebraic ring of values which are equivalent mod 2^N; one might view an N-bit unsigned integer not as representing a particular integer, but rather the set of all integers with a particular value in the bottom N bits. For example, if one adds together two binary numbers whose last 4 digits are ...1001 and ...0101, the result will be ...1110. If one adds ...1111 and ...0001, the result will be ...0000; if one subtracts ...0001 from ...0000, the result will be ...1111.

Note that the concepts of overflow or underflow don't really mean anything here, since the upper-bit values of the operands are unknown and the upper-bit values of the result are of no interest. Note also that adding a signed integer whose upper bits are known to one whose upper bits are "don't know/don't care" should yield a number whose upper bits are "don't know/don't care" (which is what unsigned integer types mostly behave as).

The only places where unsigned integer types fail to behave as members of a wrapping algebraic ring are when they participate in comparisons, are used in numerical division (which implies comparisons), or are promoted to other types. If the only way to convert an unsigned integer type to something larger were to use an operator or function for that purpose, the use of such an operator or function could make clear that it was making assumptions about the upper bits (e.g. turning "some number whose lower bits are ...00010110" into "the number whose lower bits are ...00010110 and whose upper bits are all zeroes"). Unfortunately, C doesn't do that.

Adding a signed value to an unsigned value of equal size yields a like-size unsigned value (which makes sense with the interpretation of unsigned values above), but adding a larger signed integer to an unsigned type will cause the compiler to silently assume that all upper bits of the latter are zeroes. This behavior can be especially vexing in cases where, depending upon a compiler's promotion rules, some compilers may deem two expressions as having the same size while others may view them as different sizes.

Koblenz answered 30/3, 2015 at 16:8 Comment(1)
Downvoter: care to comment? Unsigned integer types are often used to effectively compute the lower bits and ignore the upper bits of a larger value (e.g. using a uint8_t value to tally up the sum of a bunch of bytes). My description does not match the normal one, but it is consistent with such usage and represents the intention of most programmers which relies upon unsigned integer wrapping behavior; it is also consistent with compiler's generated code where e.g. uint16_t x,y,z; ... x=y*z;, after computing y*z, is required to ignore all but the lower 16 bits of the result.Koblenz

It is likely that the behavior stems from the logic behind pointer arithmetic: a memory location (e.g. std::size_t) plus a memory-location difference (std::ptrdiff_t) is again a memory location.

In other words, std::size_t = std::size_t + std::ptrdiff_t.

When this logic is translated to the underlying types, it means unsigned long = unsigned long + long, or unsigned = unsigned + int.

The "other" explanation from @supercat is also possibly correct.

What is clear is that unsigned integers were not designed to be, and should not be interpreted as, mathematical positive numbers, not even in principle. See https://www.youtube.com/watch?v=wvtFGa6XJDU

Gadoid answered 26/12, 2017 at 12:19 Comment(4)
No. size_t and ptrdiff_t are way newer than the C type promotion rules.Palaearctic
@MicheldeRuiter, No to what? They are typedefs to some underlying builtin type, which will follow the C promotion rules. I am just saying that they behave consistently with the analogy.Gadoid
How can the behavior stem from things that didn't exist? This just doesn't answer the question. But the link is valuable!Palaearctic
@MicheldeRuiter, the analogy between unsigned-as-address and signed-as-offset predates C++; that is what I wanted to point out. (And since this is a C++ question it is fine to bring up the standard typedefs.)Gadoid

© 2022 - 2024 — McMap. All rights reserved.