Sorry for the confusing title, but I can't think of a better way to explain it.
While browsing the source code of BitConverter
recently, I came across a strange segment of code:
public static unsafe int ToInt32(byte[] value, int startIndex)
{
    fixed (byte* pbyte = &value[startIndex])
    {
        if (startIndex % 4 == 0) // data is aligned
            return *((int*)pbyte);
        else
        {
            if (IsLittleEndian)
            {
                return (*pbyte) | (*(pbyte + 1) << 8) | (*(pbyte + 2) << 16) | (*(pbyte + 3) << 24);
            }
            else
            {
                return (*pbyte << 24) | (*(pbyte + 1) << 16) | (*(pbyte + 2) << 8) | (*(pbyte + 3));
            }
        }
    }
}
How can casting pbyte to an int* (in the aligned branch) violate data alignment in this scenario? I left it out for brevity, but the code has proper argument validation, so I'm pretty sure it can't be a memory access violation. Does it lose precision when cast?
In other words, why can't the code be simplified to:
public static unsafe int ToInt32(byte[] value, int startIndex)
{
    fixed (byte* pbyte = &value[startIndex])
    {
        return *(int*)pbyte;
    }
}
Edit: Here is the section of code in question.
pbyte + 1, pbyte + 2, etc. (3 of which are not going to be "aligned") just so you can avoid that non-alignment. Seems pretty overkill to me. – Pandybatint
So what alternative implementation do you suggest compared to reading byte-by-byte? – Donndonna