Why doesn't Cocoa use the same enum declaration style everywhere?

I was wondering what the rationale is behind the different styles of enum declaration in Cocoa.

Like this:

enum { constants.. }; typedef NSUInteger sometype;

Is the reason to use the typedef to get assignments to NSUInteger to work without casting?

Sometimes the typedef is NSInteger and sometimes NSUInteger; why not always use NSInteger? Is there a real benefit to using NSUInteger?

Enum tag names are still used sometimes, as with _NSByteOrder.
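
For reference, the tag-name style looks roughly like this (paraphrased from Foundation's NSByteOrder.h; the real header ties the values to the CFByteOrder constants):

  // Tag-name style: the enum itself gets a name, no typedef involved.
  enum _NSByteOrder {
      NS_UnknownByteOrder,
      NS_LittleEndian,
      NS_BigEndian
  };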

This answer was very useful too: What is a typedef enum in Objective-C?.

Glottalized answered 11/2, 2010 at 18:19 Comment(1)
typedef and enum are very differen... I don't really understand your question? enums are used to set up collections of named integer constants. typedefs are used to give a data type a new name, often to support different architectures...Petree

Several reasons:

Reason 1: Flexibility:

enum lickahoctor { yes = 0, no = 1, maybe = 2 };

declares an enumeration. You can use the values yes, no and maybe anywhere and assign them to any integral type. You can also use this as a type, by writing

enum lickahoctor myVar = yes;

This makes it nice because if a function takes a parameter with the type enum lickahoctor you'll know that you can assign yes, no or maybe to it. Also, the debugger will know, so it'll display the symbolic name instead of the numerical value. Trouble is, the compiler will only let you assign values you've defined in enum lickahoctor to myVar. If you for example want to define a few flags in the base class, then add a few more flags in the subclass, you can't do it this way.

If you use an int instead, you don't have that problem. So you want some sort of int, which lets you assign arbitrary constants.
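
Here is a minimal sketch of that situation (the class and flag names are made up for illustration): because the property is typed as a plain NSUInteger rather than as an enum, a subclass can add flags the base class never defined.

  #import <Foundation/Foundation.h>

  // Hypothetical flags defined for a base class...
  enum {
      BaseViewOptionNone    = 0,
      BaseViewOptionResizes = 1 << 0,
      BaseViewOptionClips   = 1 << 1
  };

  // ...plus an extra flag introduced alongside a subclass.
  enum {
      FancyViewOptionAnimates = 1 << 2
  };

  @interface BaseView : NSObject
  // Typed as NSUInteger, not as an enum type, so any mix of the above fits here,
  // including flags the base class knows nothing about.
  @property (nonatomic, assign) NSUInteger options;
  @end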

Reason 2: Binary compatibility:

The compiler chooses a nice size that fits all the constants you've defined in an enum. There's no guarantee what you will get. So if you write a struct containing such a variable directly to a file, there is no guarantee that it will still be the same size when you read it back in with the next version of your app (according to the C standard, at least -- it's not quite that bleak in practice).

If you use some kind of int instead, the platform usually guarantees a particular size for that number. Especially if you use one of the types guaranteed to be a particular size, like int32_t/uint32_t.
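
As a sketch of the difference (the struct and constant names are invented): a field declared with a fixed-width integer type keeps the same layout from one build to the next, which is what you want if the struct gets written to disk.

  #include <stdint.h>

  // Constants in an anonymous enum, storage pinned to uint32_t.
  enum { kRecordCompressed = 1 << 0, kRecordEncrypted = 1 << 1 };
  typedef uint32_t RecordFlags;        /* always 4 bytes, on every platform */

  struct RecordHeader {
      uint32_t    version;
      RecordFlags flags;               /* same size when a later build reads it back */
  };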

Reason 3: Readability and self-documentation

When you declare myVar above, it's immediately obvious what values you can put in it. If you just use an int, or a uint32_t, it isn't. So what you do is use

enum { yes, no, maybe };
typedef uint32_t lickahoctor;

to define a nice name for the integer somewhere near the constants that will remind people that a variable of this type can hold this value. But you still get the benefit of a predictable, fixed size and the ability to define additional values in subclasses, if needed.

Reason 4: Support for bitfields

enum-typed variables are expected to hold exactly one of the values defined for them. So if you're trying to implement a bit field, where several flags are ORed together, you can't type the variable as the enum. Furthermore, you need to use unsigned variables to keep sign extension from screwing you over.
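
A small sketch of the flag pattern (again with made-up names): typing the variable as NSUInteger lets you OR flags together and test them, which an enum-typed variable would not express.

  #import <Foundation/Foundation.h>

  // Made-up flag names, purely for illustration:
  enum {
      WidgetOptionNone     = 0,
      WidgetOptionBordered = 1 << 0,
      WidgetOptionShadowed = 1 << 1
  };
  typedef NSUInteger WidgetOptions;

  static void demo(void) {
      WidgetOptions opts = WidgetOptionBordered | WidgetOptionShadowed;  // combine freely
      BOOL shadowed = (opts & WidgetOptionShadowed) != 0;                // test a single flag
      (void)shadowed;
  }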

Blessington answered 12/2, 2010 at 20:42 Comment(3)
NSInteger isn't guaranteed to be a particular size, it's guaranteed to be different on i386 and x86_64. If you wrote out a binary file with a lickahoctor in on a Core Duo, upgraded your Mac and read it back in, hilarity would ensue.Sergent
Of course, Graham. Brainfart there. Thanks for correcting that.Blessington
In the interest of people googling this article, I've changed the mention of NSInteger to uint32_t above. Still, Apple often uses NSInteger, which means you get an int32_t on 32 bit but an int64_t on 64 bit. This means code compiled for 32 bit and code compiled for 64 bit are binary-incompatible with each other (you can't write this to disk), but at least it's reliable no matter how many more constants you add to the enum later.Blessington

Is the reason to use the typedef to get assignments to NSUInteger to work without casting?

The typedef is used to specify the base type for the enumeration values. You can always cast an enumeration value to another type, bearing in mind that casting to a smaller type (NSUInteger to unsigned short) truncates the value.

NSInteger and NSUInteger were introduced to ease the 64-bit migration of applications by providing architecture- and platform-independent types for signed and unsigned integers. This way, no matter how many bits the CPU has, applications do not need to be rewritten.
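
For reference, Foundation defines them roughly like this (paraphrased; the real NSObjCRuntime.h condition also checks a few other macros):

  #if __LP64__
  typedef long NSInteger;             // 64 bits on 64-bit platforms
  typedef unsigned long NSUInteger;
  #else
  typedef int NSInteger;              // 32 bits on 32-bit platforms
  typedef unsigned int NSUInteger;
  #endif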

Sometimes the typedef is NSInteger and sometimes NSUInteger; why not always use NSInteger? Is there a real benefit to using NSUInteger?

The choice depends on the values in the enumeration. Some enumerations have a lot of values, so they need all the bits available:

  • NSInteger covers -2^31 through 2^31 - 1 (on 32-bit architectures).
  • NSUInteger covers 0 through 2^32 - 1 (on 32-bit architectures).
  • If your enumeration is meant to contain only positive values, use NSUInteger.
  • If your enumeration is meant to contain both positive and negative values, use NSInteger.
  • NSUInteger is usually used for flag enumerations, as it provides 32 distinct bits (on 32-bit architectures) to be combined at will (see the sketch after this list).
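
As a sketch of the two cases (the names are invented, though the first is modelled on Cocoa's NSComparisonResult): an enumeration with a negative member wants NSInteger, while a flag enumeration wants NSUInteger.

  #import <Foundation/Foundation.h>

  // Signed base type, because one member is negative:
  enum { MyOrderedAscending = -1, MyOrderedSame = 0, MyOrderedDescending = 1 };
  typedef NSInteger MyComparisonResult;

  // Unsigned base type for flags that are combined with |:
  enum { MyOptionA = 1 << 0, MyOptionB = 1 << 1, MyOptionC = 1 << 2 };
  typedef NSUInteger MyOptions;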

I don't know if there is a rule of choice for this in Apple's development teams. I hope so...

Templetempler answered 12/2, 2010 at 10:21 Comment(2)
You mean 2 to the power of 31 and 32, right? Also, both types offer 2^32 distinct values; the difference is that with a signed type (such as NSInteger), half of them (2^31) are negative, and about half (2^31 - 1) are positive. With an unsigned type, all 2^32 are non-negative. And that's the real difference: If all your enumeration values, ever, are going to be positive, you can declare the type as NSUInteger. Conversely, if you're going to have negative values, you must use NSInteger.Sulfate
You are right. I meant flag values. I have updated my answer accordingly.Templetempler

Whilst you could use something like

  typedef enum { constants... } sometype;

there is no guarantee about the eventual bit size of the data type. Well, that's not strictly true, but it's true enough. It's better for APIs to be defined in terms of concrete data sizes than with something that can change depending on the compiler settings being used.
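
To make the contrast concrete (the type names here are invented): the first form leaves the storage size up to the compiler, while the Cocoa-style form pins it to a known-width integer.

  #import <Foundation/Foundation.h>

  // Storage size picked by the compiler; it can vary with compiler and settings:
  typedef enum { SomeValueA, SomeValueB } sometype_compiler_sized;

  // Cocoa style: constants in an anonymous enum, storage pinned to a known-width type:
  enum { OtherValueA, OtherValueB };
  typedef NSUInteger sometype_fixed_width;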

Terrie answered 11/2, 2010 at 22:30 Comment(0)
