Is there any code to find the maximum value of an integer (according to the compiler) in C/C++, like Integer.MAX_VALUE in Java?
In C++:
#include <limits>
then use
int imin = std::numeric_limits<int>::min(); // minimum value
int imax = std::numeric_limits<int>::max();
std::numeric_limits is a class template that can be instantiated with other types:
float fmin = std::numeric_limits<float>::min(); // minimum positive value
float fmax = std::numeric_limits<float>::max();
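For a quick check, here is a minimal complete program (just the lines above wrapped in main) that prints these values:
#include <iostream>
#include <limits>

int main() {
    std::cout << "int min:   " << std::numeric_limits<int>::min() << '\n';
    std::cout << "int max:   " << std::numeric_limits<int>::max() << '\n';
    std::cout << "float min: " << std::numeric_limits<float>::min() << '\n'; // smallest positive normalized value
    std::cout << "float max: " << std::numeric_limits<float>::max() << '\n';
}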
In C:
#include <limits.h>
then use
int imin = INT_MIN; // minimum value
int imax = INT_MAX;
or
#include <float.h>
float fmin = FLT_MIN; // minimum positive value
double dmin = DBL_MIN; // minimum positive value
float fmax = FLT_MAX;
double dmax = DBL_MAX;
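Likewise, a small self-contained program using these macros (it compiles as either C or C++):
#include <limits.h>
#include <float.h>
#include <stdio.h>

int main(void) {
    printf("INT_MIN = %d\n", INT_MIN);
    printf("INT_MAX = %d\n", INT_MAX);
    printf("FLT_MAX = %e\n", FLT_MAX);
    printf("DBL_MAX = %e\n", DBL_MAX);
    return 0;
}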
Note that the floating-point min values are the minimum positive value, whereas the integer min values are the minimum value. The same goes for the C macros/constants. – Runt
uint64_t and int64_t, not of int. – Jodiejodo
I tried #include <limits> and int imax = std::numeric_limits<int>::max();, but I get the error Can't resolve struct member 'max'. Any ideas as to why this occurs, and how to fix it? I am using the CLion IDE, with CMake and C++11, on Ubuntu 14.04. I think it is linked to this issue. – Interclavicle
(unsigned)-1/2 – Valley
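As an aside, this works because (unsigned)-1 is guaranteed to be UINT_MAX, and halving it matches INT_MAX on the usual two's-complement platforms (an assumption about the platform, not a guarantee of the standard); a quick check:
#include <iostream>
#include <climits>

int main() {
    // (unsigned)-1 is UINT_MAX; dividing by 2 yields 0x7FFFFFFF on 32-bit platforms,
    // which equals INT_MAX on typical two's-complement implementations.
    std::cout << (unsigned)-1/2 << '\n';              // typically 2147483647
    std::cout << ((unsigned)-1/2 == INT_MAX) << '\n'; // typically 1
}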
I know it's an old question, but maybe someone can use this solution:
int size = 0; // Fill all bits with zero (0)
size = ~size; // Flip all bits, so every bit is set to one (1)
At this point size holds -1, since it is a signed int.
size = (unsigned int)size >> 1; // Shift the bits of size one position to the right.
Right-shifting an unsigned value (or a non-negative signed value) shifts in 0 bits; right-shifting a negative signed value typically shifts in copies of the sign bit (strictly speaking, that result is implementation-defined).
Because size is signed and negative, shifting it directly would bring in the sign bit (1), which does not help, so we first cast to unsigned int, forcing a 0 to be shifted in; this clears the sign bit while leaving all other bits set to 1.
cout << size << endl; // Prints size, which now holds the maximum positive value.
We could also use a mask and XOR, but then we would have to know the exact bit size of the variable. By shifting bits in at the front, we never need to know how many bits an int has on this machine or compiler, nor do we need to include any extra headers.
cout << "INT_MAX:\t" << (int) ((~((unsigned int) 0)) >> 1) << '\n' << "UINT_MAX:\t" << ~((unsigned int) 0) << endl;
– Bate
#include <climits>
#include <iostream>
using namespace std;
int main() {
    cout << INT_MAX << endl;
}
numeric_limits<int>::max() also works in template contexts, but (for some reason unfathomable to me) cannot be used as a compile-time constant. INT_MAX is a macro, pretty useless within template functions, but can be used as a compile-time constant. – Fredela
numeric_limits has to be usable for non-integer types as well. – Fredela
constexpr. – Runt
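To spell out that last remark: since C++11, numeric_limits<int>::max() is declared constexpr, so it can be used where a compile-time constant is needed; a minimal sketch, assuming a C++11 (or later) compiler:
#include <iostream>
#include <limits>

// Since C++11, std::numeric_limits<T>::max() is constexpr and usable in constant expressions:
constexpr int imax = std::numeric_limits<int>::max();
static_assert(imax > 0, "int max must be positive");

int main() {
    std::cout << imax << '\n';
}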
Why not write a piece of code like:
int max_neg = ~(1 << 31);         // assumes a 32-bit int: ~0x80000000 == 0x7FFFFFFF
int all_ones = -1;                // every bit set
int max_pos = all_ones & max_neg; // INT_MAX
Here is a macro I use to get the maximum value for signed integers, which is independent of the size of the signed integer type used, and for which gcc -Woverflow won't complain:
#define SIGNED_MAX(x) (~(-1 << (sizeof(x) * 8 - 1)))
int a = SIGNED_MAX(a);
long b = SIGNED_MAX(b);
char c = SIGNED_MAX(c); /* if char is signed for this target */
short d = SIGNED_MAX(d);
long long e = SIGNED_MAX(e);
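Note that shifting -1 left into the sign bit is technically undefined in ISO C, although gcc accepts it as the answer describes; here is a small sanity check of the macro against <limits.h> for int and short, assuming a typical two's-complement gcc target:
#include <stdio.h>
#include <limits.h>

#define SIGNED_MAX(x) (~(-1 << (sizeof(x) * 8 - 1)))

int main(void) {
    int a = SIGNED_MAX(a);
    short d = SIGNED_MAX(d);
    printf("int:   %d (INT_MAX  = %d)\n", a, INT_MAX);
    printf("short: %d (SHRT_MAX = %d)\n", d, SHRT_MAX);
    return 0;
}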
O.K., I neither have the rep to comment on the previous answer (by Philippe De Muyter) nor to raise its score, hence a new example using his SIGNED_MAX define, trivially extended for unsigned types:
// We can use it to define limits based on actual compiler built-in types also:
#define INT_MAX SIGNED_MAX(int)
// based on the above, we can extend it for unsigned types also:
#define UNSIGNED_MAX(x) ( (SIGNED_MAX(x)<<1) | 1 ) // We reuse SIGNED_MAX
#define UINT_MAX UNSIGNED_MAX(unsigned int) // on ARM: 4294967295
// then we can have:
unsigned int width = UINT_MAX;
Unlike pulling the value from this or that header, here we derive it from the compiler's actual type.
#include <iostream>
#include <cstdint>

int main() {
    int32_t maxSigned = -1U >> 1; // all bits of an unsigned int set, then the shift clears the top bit
    std::cout << maxSigned << '\n';
    return 0;
}
It might be architecture-dependent, but it works, at least in my setup.
int with long long int in Gregories' answer... – Darksome
-pedantic) support it. – Darksome
int with "integer". There are several integer types; int is just one of them. – Jodiejodo