I have a set of bit flags that are used in a program I am porting from C to C++.
To begin...
The flags in my program were previously defined as:
/* Define feature flags for this DCD file */
#define DCD_IS_CHARMM 0x01
#define DCD_HAS_4DIMS 0x02
#define DCD_HAS_EXTRA_BLOCK 0x04
...Now I've gathered that #defines for constants (versus class constants, etc.) are generally considered bad form.
This raises two questions: how best to store bit flags in C++, and why C++ doesn't support assigning binary literals to an int the way it allows hex literals to be assigned (via "0x"). These questions are summarized at the end of this post.
One simple solution I can see is to create individual constants:
namespace DCD {
    const unsigned int IS_CHARMM = 1;
    const unsigned int HAS_4DIMS = 2;
    const unsigned int HAS_EXTRA_BLOCK = 4;
}
Let's call this option 1.
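Whichever option I end up with, I'd expect to use the flags roughly like this (just a sketch; flags stands in for the 32-bit value I'll eventually read from the file):
// Sketch only: assumes the DCD namespace from option 1 above.
// 'flags' is a placeholder for the 32-bit field read from the DCD file.
unsigned int flags = DCD::IS_CHARMM | DCD::HAS_EXTRA_BLOCK;
if (flags & DCD::IS_CHARMM) {
    // handle a CHARMM-format file
}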
Another idea I had was to use an integer enum:
namespace DCD {
    enum e_Feature_Flags {
        IS_CHARMM = 1,
        HAS_4DIMS = 2,
        HAS_EXTRA_BLOCK = 4
    };
}
But one thing that bothers me about this is that it seems less intuitive once the values get higher, e.g.:
namespace DCD {
    enum e_Feature_Flags {
        IS_CHARMM = 1,
        HAS_4DIMS = 2,
        HAS_EXTRA_BLOCK = 4,
        NEW_FLAG = 8,
        NEW_FLAG_2 = 16,
        NEW_FLAG_3 = 32,
        NEW_FLAG_4 = 64
    };
}
Let's call this approach option 2.
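As far as I can tell the usage would be the same as with option 1, since the enumerators convert to integers when combined or tested; the only wrinkle is that ORing two enumerators yields a plain integer rather than an e_Feature_Flags, so the combined value has to live in an unsigned int. A sketch, assuming the enum above:
// Sketch only: assumes the DCD namespace/enum from option 2 above.
// The result of the | is an int, not an e_Feature_Flags, so it is
// stored in a plain unsigned int.
unsigned int flags = DCD::IS_CHARMM | DCD::HAS_4DIMS;
if (flags & DCD::HAS_4DIMS) {
    // trajectory stores a fourth dimension
}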
I'm considering using Tom Torfs's macro solution:
#define B8(x) ((int) B8_(0x##x))
#define B8_(x) \
    ( ((x) & 0xF0000000) >> ( 28 - 7 ) \
    | ((x) & 0x0F000000) >> ( 24 - 6 ) \
    | ((x) & 0x00F00000) >> ( 20 - 5 ) \
    | ((x) & 0x000F0000) >> ( 16 - 4 ) \
    | ((x) & 0x0000F000) >> ( 12 - 3 ) \
    | ((x) & 0x00000F00) >> ( 8 - 2 ) \
    | ((x) & 0x000000F0) >> ( 4 - 1 ) \
    | ((x) & 0x0000000F) >> ( 0 - 0 ) )
converted to inline functions, e.g.
#include <iostream>
#include <sstream>
#include <stdexcept>
#include <string>
....
/* TAKEN FROM THE C++ FAQ LITE [39.2]... */
class BadConversion : public std::runtime_error {
public:
    BadConversion(std::string const& s)
        : std::runtime_error(s)
    { }
};
inline unsigned int convertToUI(std::string const& s)
{
    std::istringstream i(s);
    unsigned int x;
    if (!(i >> std::hex >> x))  // parse as hex, since the caller prepends "0x"
        throw BadConversion("convertToUI(\"" + s + "\")");
    return x;
}
/** END CODE **/
inline unsigned int B8(std::string x) {
    unsigned int my_val = convertToUI(x.insert(0, "0x"));
    return ((my_val) & 0xF0000000) >> ( 28 - 7 ) |
           ((my_val) & 0x0F000000) >> ( 24 - 6 ) |
           ((my_val) & 0x00F00000) >> ( 20 - 5 ) |
           ((my_val) & 0x000F0000) >> ( 16 - 4 ) |
           ((my_val) & 0x0000F000) >> ( 12 - 3 ) |
           ((my_val) & 0x00000F00) >> ( 8 - 2 ) |
           ((my_val) & 0x000000F0) >> ( 4 - 1 ) |
           ((my_val) & 0x0000000F) >> ( 0 - 0 );
}
namespace DCD {
    enum e_Feature_Flags {
        IS_CHARMM = B8("00000001"),
        HAS_4DIMS = B8("00000010"),
        HAS_EXTRA_BLOCK = B8("00000100"),
        NEW_FLAG = B8("00001000"),
        NEW_FLAG_2 = B8("00010000"),
        NEW_FLAG_3 = B8("00100000"),
        NEW_FLAG_4 = B8("01000000")
    };
}
Is this crazy? Or does it seem more intuitive? Let's call this option 3.
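For comparison, if I kept Tom Torfs's macro in its original preprocessor form, it could be used directly as the enum initializers, since it folds to integer constants at compile time. A sketch using the macro exactly as quoted above:
// Sketch only: uses the preprocessor B8 macro from above, which
// evaluates to integer constant expressions at compile time.
namespace DCD {
    enum e_Feature_Flags {
        IS_CHARMM       = B8(00000001),  // == 1
        HAS_4DIMS       = B8(00000010),  // == 2
        HAS_EXTRA_BLOCK = B8(00000100)   // == 4
    };
}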
So to recap, my overarching questions are:
1. Why doesn't C++ support a "0b" prefix for binary literals, similar to the "0x" prefix for hex?
2. Which is the best style to define flags:
i. Namespace-wrapped constants.
ii. Namespace-wrapped enum of unsigned ints assigned directly.
iii. Namespace-wrapped enum of unsigned ints assigned via readable binary strings.
Thanks in advance! And please don't close this thread as subjective, because I really want help on which style is best and on why C++ lacks built-in binary literals.
EDIT 1
A bit of additional info: I will be reading a 32-bit bit field from a file and then testing it against these flags, so bear that in mind when you post suggestions.
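In other words, something roughly like the following (just a sketch; the real DCD header parsing is more involved, and read_flag_field plus its file handling are placeholders I made up, not the actual layout):
#include <cstdio>
// Sketch only: read the 32-bit flag field and test it against the DCD constants.
// Assumes the file is already positioned at the flag field and that the
// byte order matches the host (the real reader has to handle both).
unsigned int read_flag_field(std::FILE* fp)
{
    unsigned int field = 0;
    std::fread(&field, sizeof(field), 1, fp);
    return field;
}
// Usage:
// unsigned int flags = read_flag_field(fp);
// if (flags & DCD::HAS_EXTRA_BLOCK) { /* read the extra block */ }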