When to use size_t vs uint32_t?

I saw a method in a project that calculates the CRC-16 of a block of byte data; it takes a parameter called length (of type uint32_t) giving the number of bytes to process. The parameter's type was later refactored to size_t. Is there a technical advantage to using size_t in this case?
e.g.
- (uint16_t)calculateCRC16FromBytes:(unsigned char *)bytes length:(uint32_t)length;
- (uint16_t)calculateCRC16FromBytes:(unsigned char *)bytes length:(size_t)length;
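For reference, the body of such a method might look roughly like the following. This is a minimal sketch, assuming the common CRC-16/CCITT-FALSE parameters (polynomial 0x1021, initial value 0xFFFF) and a hypothetical class name; the project's actual algorithm may differ. The point is that length is only used as the loop bound, never in the CRC arithmetic itself.

#import <Foundation/Foundation.h>
#include <stddef.h>
#include <stdint.h>

@interface CRCCalculator : NSObject
- (uint16_t)calculateCRC16FromBytes:(unsigned char *)bytes length:(size_t)length;
@end

@implementation CRCCalculator
- (uint16_t)calculateCRC16FromBytes:(unsigned char *)bytes length:(size_t)length {
    uint16_t crc = 0xFFFF;                     // assumed initial value (CRC-16/CCITT-FALSE)
    for (size_t i = 0; i < length; i++) {      // size_t index: can cover a buffer of any size
        crc ^= (uint16_t)(bytes[i]) << 8;
        for (int bit = 0; bit < 8; bit++) {
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)   // assumed polynomial
                                 : (uint16_t)(crc << 1);
        }
    }
    return crc;
}
@end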
From a semantic point of view, size_t doesn't correspond to an explicit size the way uint32_t does. – Ephor

size_t is meant to be the largest data type the platform can support natively (i.e. you want a large range while remaining fast). E.g. on a 32-bit system you'd want it to be 32 bits, and on a 64-bit system you'd want it to be 64 bits, but you wouldn't want size_t to be a 64-bit type on a 32-bit system (or a uint32_t on a 16-bit system). Otherwise, it's probably just a good indicator that the parameter represents some sort of size quantity. I don't know why they specifically chose size_t here, though. – Royroyal

size_t isn't being used for the CRC maths itself; it's just the loop bound. That's totally reasonable. – Ephor
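To illustrate the width point raised in the comments above: uint32_t is always exactly 32 bits, whereas size_t tracks the platform's native word size. A minimal check (the second value printed depends on the build target; typically 8 on a 64-bit system and 4 on a 32-bit one):

#import <Foundation/Foundation.h>
#include <stdint.h>

int main(void) {
    @autoreleasepool {
        // uint32_t has a fixed width; size_t varies with the platform.
        NSLog(@"sizeof(uint32_t) = %zu", sizeof(uint32_t));
        NSLog(@"sizeof(size_t)   = %zu", sizeof(size_t));
    }
    return 0;
}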