Let me reply regarding SHA-256 as well as SHA-512.
In short:
The algorithm itself is endian-agnostic. The endian-sensitive parts are where data is imported from a byte buffer into the algorithm's working variables, and where it is exported back into the digest result, which is also a byte buffer. If that import/export involves casting, then endianness matters.
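For example, the following standalone sketch (not taken from the SHA code; it just illustrates the point) prints a different value depending on the machine's byte order:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    unsigned char bytes[4] = { 0x01, 0x02, 0x03, 0x04 };
    uint32_t word;

    /* Overlaying the bytes on a 32-bit type is effectively a cast:  */
    /* a little-endian machine sees 0x04030201, a big-endian machine */
    /* sees 0x01020304.                                              */
    memcpy(&word, bytes, sizeof word);
    printf("0x%08X\n", (unsigned) word);
    return 0;
}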
Where could casting occur?
In SHA-512 there is a working buffer of 128 bytes. In my code it is defined like this:
union
{
    U64  w[80];       /* see the U64 definition below */
    byte buffer[128];
};
Input data is copied into this byte buffer, and the work is then done on w. This means the data was cast to some 64-bit type, so it has to be byte-swapped; in my case it is swapped on little-endian machines.
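On a little-endian machine that swap could look something like this (a sketch, not my exact code; the names SWAP64 and swap_block_words are hypothetical, and a native 64-bit type is assumed):

#include <stdint.h>

/* Reverse the byte order of a 64-bit value. */
#define SWAP64(x)                                \
    ((((x) & 0x00000000000000FFULL) << 56) |     \
     (((x) & 0x000000000000FF00ULL) << 40) |     \
     (((x) & 0x0000000000FF0000ULL) << 24) |     \
     (((x) & 0x00000000FF000000ULL) <<  8) |     \
     (((x) & 0x000000FF00000000ULL) >>  8) |     \
     (((x) & 0x0000FF0000000000ULL) >> 24) |     \
     (((x) & 0x00FF000000000000ULL) >> 40) |     \
     (((x) & 0xFF00000000000000ULL) >> 56))

/* Swap the 16 message words of one 1024-bit block in place,
 * right after the block was memcpy'd into the union.        */
static void swap_block_words(uint64_t w[16])
{
    int i;

    for (i = 0; i < 16; i++)
        w[i] = SWAP64(w[i]);
}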
A better method is to prepare a "get" macro that takes each byte and places it at its correct position in the 64-bit type.
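For SHA-512 such a macro assembles eight bytes. A 64-bit version of the 32-bit macros shown at the end of this post could look like this (NQ_UINT64 is an assumption; it mirrors NQ_UINT32 below):

/* Build a 64-bit value from 8 bytes stored in big-endian order. */
#define GET_BE_BYTES_FROM64(a)           \
    ((((NQ_UINT64) (a)[0]) << 56) |      \
     (((NQ_UINT64) (a)[1]) << 48) |      \
     (((NQ_UINT64) (a)[2]) << 40) |      \
     (((NQ_UINT64) (a)[3]) << 32) |      \
     (((NQ_UINT64) (a)[4]) << 24) |      \
     (((NQ_UINT64) (a)[5]) << 16) |      \
     (((NQ_UINT64) (a)[6]) <<  8) |      \
     ((NQ_UINT64) (a)[7]))

Because the macro addresses each byte individually, it produces the same logical value on any machine, with no cast and no swap.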
When the algorithm is done, the digest result is output from the working variables to some byte buffer; if this is done with memcpy, it will also have to be swapped.
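The export side can use the mirror-image macro instead of memcpy; a sketch (the macro name PUT_BE_BYTES_INTO32 is mine):

/* Store one 32-bit working variable into the digest buffer in
 * big-endian order, regardless of the machine's byte order.   */
#define PUT_BE_BYTES_INTO32(v, a)                  \
    do {                                           \
        (a)[0] = (byte) (((v) >> 24) & 0xFF);      \
        (a)[1] = (byte) (((v) >> 16) & 0xFF);      \
        (a)[2] = (byte) (((v) >>  8) & 0xFF);      \
        (a)[3] = (byte) ((v) & 0xFF);              \
    } while (0)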
Another cast can occur when implementing SHA-512, which is designed for 64-bit machines, on a 32-bit machine. In my case I have a 64-bit type defined like this:
typedef struct
{
    uint high;
    uint low;
} U64;
Assume I define it for little-endian machines as well, as follows:

typedef struct
{
    uint low;
    uint high;
} U64;
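(As an aside, with such a type all 64-bit arithmetic has to be composed from 32-bit operations. A minimal sketch of addition with carry, using the hypothetical helper name u64_add; note it accesses the fields by name, so the same code works with either field order:)

/* Hypothetical helper: *r = *a + *b on the two-word U64 type.
 * The carry test works because unsigned addition wraps around. */
static void u64_add(U64 *r, const U64 *a, const U64 *b)
{
    uint low = a->low + b->low;    /* may wrap */

    r->high = a->high + b->high + (low < a->low ? 1 : 0);
    r->low  = low;
}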
The k constant array is then initialized like this:

static const SHA_U64 k[80] =
{
    {0xD728AE22, 0x428A2F98}, {0x23EF65CD, 0x71374491}, ...
    ...
};
But I need the logical value of k[0].high to be the same on any machine, so in this example I will need a second k array with the high and low values swapped.
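A sketch of that, keyed off a hypothetical configuration macro SHA_BIG_ENDIAN (only the first two of the 80 constants are shown):

#ifdef SHA_BIG_ENDIAN
/* field order: { high, low } */
static const SHA_U64 k[80] =
{
    {0x428A2F98, 0xD728AE22}, {0x71374491, 0x23EF65CD},
    /* ... the remaining 78 constants ... */
};
#else
/* field order: { low, high } */
static const SHA_U64 k[80] =
{
    {0xD728AE22, 0x428A2F98}, {0x23EF65CD, 0x71374491},
    /* ... the remaining 78 constants ... */
};
#endif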
Once the data is stored in the working variables, any bitwise manipulation gives the same result on big- and little-endian machines.
A good method is to avoid any casting altogether:

- Import bytes from the input buffer into your working variables using a macro.
- Work with logical values, without thinking about the memory layout.
- Export the output to the digest result with a macro.
Macros for taking 32 bits from a byte buffer into a 32-bit integer (BE = big-endian, LE = little-endian):

#define GET_BE_BYTES_FROM32(a)          \
    ((((NQ_UINT32) (a)[0]) << 24) |     \
     (((NQ_UINT32) (a)[1]) << 16) |     \
     (((NQ_UINT32) (a)[2]) <<  8) |     \
     ((NQ_UINT32) (a)[3]))

#define GET_LE_BYTES_FROM32(a)          \
    ((((NQ_UINT32) (a)[3]) << 24) |     \
     (((NQ_UINT32) (a)[2]) << 16) |     \
     (((NQ_UINT32) (a)[1]) <<  8) |     \
     ((NQ_UINT32) (a)[0]))
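Usage is then straightforward. A sketch of the import step for one 512-bit SHA-256 block (the function and variable names are mine):

/* Import one 64-byte block into the message schedule without any
 * cast, so the result is identical on every machine.             */
static void import_block(NQ_UINT32 w[16], const byte *block)
{
    int i;

    for (i = 0; i < 16; i++)
        w[i] = GET_BE_BYTES_FROM32(block + 4 * i);
}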