I am trying to understand how development is affected when writing an app for both 32-bit and 64-bit architectures. From what I have researched so far, an int is always 4 bytes regardless of the architecture of the device running the app, but an NSInteger is 4 bytes on a 32-bit device and 8 bytes on a 64-bit device. I get the impression NSInteger is "safer" and recommended, but I'm not sure of the reasoning behind that.
My question is: if you know the value you're storing is never going to be large (say you're using it to index into an array of 200 items, or to hold the count of objects in an array), why declare it as an NSInteger? On a 64-bit device that takes up 8 bytes you'll never use. Is it better to declare it as an int in those cases? And if so, in what case would you actually want an NSInteger (as opposed to int, long, etc.)? Obviously, if you needed larger numbers, you could get them on the 64-bit architecture. But if you also needed the code to work on 32-bit devices, wouldn't you use long long instead, since it's 8 bytes on 32-bit devices as well? I don't see why one would use NSInteger, at least when creating an app that runs on both architectures.
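To make my question concrete, here is the kind of choice I mean (a made-up snippet that would sit inside some method; the array contents are just placeholders):

    NSArray *items = @[@"a", @"b", @"c"]; // imagine at most a couple hundred elements

    // Option A: int is plenty for a couple hundred elements and is 4 bytes everywhere.
    for (int i = 0; i < (int)items.count; i++) {
        NSLog(@"%@", items[i]);
    }

    // Option B: NSInteger matches the platform word size, but on a 64-bit device
    // that's 8 bytes for a value that will never exceed a few hundred.
    for (NSInteger i = 0; i < (NSInteger)items.count; i++) {
        NSLog(@"%@", items[i]);
    }

    // Option C: if I genuinely need 64 bits on both architectures,
    // long long seems like the explicit way to ask for it.
    long long bigValue = 3000000000LL; // larger than INT32_MAX, still fine on 32-bit
    NSLog(@"%lld", bigValue);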
Also, I can't think of a framework method that takes or returns a plain int; they all seem to use NSInteger instead, which makes me wonder if there is more to it than just the size of the values. For example, - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section. I'd like to understand why that is. Assuming it's possible to have a table with 2,147,483,647 rows, what would happen on a 32-bit device when you add one more row: does it wrap around to -2,147,483,648? On a 64-bit device it would simply be 2,147,483,648. (And why return a signed value at all? I'd think it should be unsigned, since you can't have a negative number of rows.)
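For instance, this is the kind of data source implementation I'm picturing (MyTableViewController and its items property are just hypothetical names for the sake of the example), along with the wrap-around case I'm asking about:

    #import <UIKit/UIKit.h>
    #include <limits.h>

    // A hypothetical table view controller with a backing array.
    @interface MyTableViewController : UITableViewController
    @property (nonatomic, strong) NSArray *items;
    @end

    @implementation MyTableViewController

    // -count returns an NSUInteger, yet the return type UIKit declares is a
    // signed NSInteger, which is part of what I'm asking about.
    - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
        return (NSInteger)self.items.count;
    }

    // The wrap-around scenario I mean (signed overflow is technically undefined
    // behaviour in C, but two's-complement wrapping would look like this):
    - (void)overflowExample {
        int rows = INT_MAX;   // 2,147,483,647
        rows = rows + 1;      // wraps to -2,147,483,648 where int is 32 bits
        NSLog(@"rows is now %d", rows);
    }

    @end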
Ultimately, I'd like to get a better understanding of the real-world use of these number types; some code examples would be great!