When to use size_t vs uint32_t?

When to use size_t vs uint32_t? I saw a method in a project that receives a parameter called length (of type uint32_t) to denote the length of the byte data the method deals with; the method calculates a CRC of the byte data it receives. The type of the parameter was later refactored to size_t. Is there a technical advantage to using size_t in this case?

e.g.

- (uint16_t)calculateCRC16FromBytes:(unsigned char *)bytes length:(uint32_t)length;

- (uint16_t)calculateCRC16FromBytes:(unsigned char *)bytes length:(size_t)length;
Hyacinthhyacintha answered 23/2, 2015 at 22:0 Comment(5)
AFAICS, there'd be no meaningful reason to do this. A given CRC is defined to work at a particular word size, and that's exactly what uint32_t is. From a semantic point of view, size_t doesn't correspond to an explicit size. – Ephor
I'm guessing it's because you'd want size_t to be the largest data type the platform can support natively (i.e. you want a large range while remaining fast). E.g. on a 32-bit system you'd want it to be 32 bits, and on a 64-bit system you'd want it to be 64 bits, but you wouldn't want size_t to be a 64-bit type on a 32-bit system (or a uint32_t on a 16-bit system). Otherwise, it's probably just a good indicator that the parameter represents some sort of size quantity. I don't know why they specifically chose to use size_t here, though. – Royroyal
Thanks all - I added more details to the question. – Hyacinthhyacintha
@Royroyal So per your rationale, is it better to use size_t over NSUInteger, and are they technically the same? – Hyacinthhyacintha
Ok, with the added context, this makes a lot more sense. The size_t isn't being used for the CRC maths itself; it's just the loop bound. That's totally reasonable. – Ephor
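
To make that last comment concrete, here is a minimal sketch of what such a method might look like (not the project's actual code; the CRC-16/CCITT polynomial 0x1021 and initial value 0xFFFF are assumptions for illustration). The CRC arithmetic is pinned to uint16_t; the length type only controls the loop bound.

- (uint16_t)calculateCRC16FromBytes:(unsigned char *)bytes length:(size_t)length
{
    uint16_t crc = 0xFFFF;                      // assumed initial value (CRC-16/CCITT)
    for (size_t i = 0; i < length; i++) {       // length is only the loop bound
        crc ^= (uint16_t)(bytes[i] << 8);       // feed the next byte into the high bits
        for (int bit = 0; bit < 8; bit++) {
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021) : (uint16_t)(crc << 1);
        }
    }
    return crc;                                 // the CRC width stays fixed at 16 bits
}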

According to the C specification

size_t ... is the unsigned integer type of the result of the sizeof operator

So any variable that holds the result of a sizeof operation should be declared as size_t. Since the length parameter in the sample prototype could be the result of a sizeof operation, it is appropriate to declare it as a size_t.

e.g.

unsigned char array[2000] = { 1, 2, 3 /* ... */ };
uint16_t result = [self calculateCRC16FromBytes:array length:sizeof(array)];

You could argue that refactoring the length parameter was pointlessly pedantic, since you'll see no difference unless both of the following hold (illustrated below):
a) size_t is wider than 32 bits
b) the size of the array is more than 4 GB
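
For example (illustrative only, and assuming a 64-bit size_t), a length above UINT32_MAX survives a size_t parameter but would be silently truncated by a uint32_t one:

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    size_t length = (size_t)UINT32_MAX + 1;      /* 4 GiB; assumes a 64-bit size_t */
    uint32_t narrowed = (uint32_t)length;        /* what a uint32_t parameter would receive */
    printf("as size_t: %zu, as uint32_t: %" PRIu32 "\n", length, narrowed);  /* 4294967296 vs 0 */
    return 0;
}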

Liggitt answered 23/2, 2015 at 22:37 Comment(3)
BUT: You may run on a system where size_t is 16 or 64 bits. Its size is implementation-defined. uint32_t, OTOH, is guaranteed(?) to be 32 bits wide. A CRC has a specified size; running a 32-bit CRC with a 16-bit size_t will cause problems. – Mimamsa
@ColeJohnson Yes and no. First, please note that it was the length parameter that was changed. You can apply a CRC to any length of data. So the only question is whether the length of the input should always be specified as a 32-bit number, or should be specified as a size_t. You could argue that size_t won't work if a) size_t is 16 bits and b) the buffer is more than 64 KB. However, that assumes that there are 16-bit systems that run Objective-C code. – Liggitt
size_t will always be a type that can hold the size of the largest array your system can handle, so even that last argument is moot. – Concuss
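
A quick way to see how the two types compare on a given platform (illustrative, nothing project-specific) is to print their widths and limits; size_t is whatever the implementation needs to describe its largest object, while uint32_t is exactly 32 bits wherever it exists:

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    printf("size_t:   %zu bytes, max %zu\n", sizeof(size_t), (size_t)SIZE_MAX);
    printf("uint32_t: %zu bytes, max %" PRIu32 "\n", sizeof(uint32_t), (uint32_t)UINT32_MAX);
    return 0;
}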
