The only thing you can send over a TCP socket is bytes. You cannot send an integer over a TCP socket without first creating some byte representation for that integer. The C/C++ int type can be stored in memory in whatever way the platform likes. If that just happens to be the form in which you need to send it over the TCP socket, then fine. But if it's not, then you have to convert into the form the protocol requires before you send it, and back into your native format after you receive it.
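To make that concrete, here is a minimal sketch in C, assuming (purely as an example) a protocol that transmits a 32-bit integer in big-endian "network" byte order. The names send_u32 and recv_u32 are made-up helpers, not part of any API; the point is that what crosses the socket is the four bytes in buf, never the in-memory int itself:

    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>   /* htonl, ntohl */
    #include <sys/socket.h>  /* send, recv */

    /* Convert to the wire format the (assumed) protocol requires, then send the bytes. */
    int send_u32(int sock, uint32_t value)
    {
        uint32_t wire = htonl(value);      /* host format -> big-endian wire format */
        unsigned char buf[4];
        memcpy(buf, &wire, sizeof buf);
        /* A robust version would loop on partial sends; this is only a sketch. */
        return send(sock, buf, sizeof buf, 0) == (ssize_t)sizeof buf ? 0 : -1;
    }

    /* Receive the bytes, then convert from the wire format to the host's format. */
    int recv_u32(int sock, uint32_t *out)
    {
        unsigned char buf[4];
        if (recv(sock, buf, sizeof buf, MSG_WAITALL) != (ssize_t)sizeof buf)
            return -1;
        uint32_t wire;
        memcpy(&wire, buf, sizeof wire);
        *out = ntohl(wire);                /* big-endian wire format -> host format */
        return 0;
    }

However your platform stores a uint32_t internally, the bytes that go out on the wire are the same.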
As a bit of a sloppy analogy, consider the way I communicate with you. My native language might be Spanish, and who knows what goes on in my brain. Internally, I might represent the number three as "tres" or some weird pattern of neurons. Who knows? But when I communicate with you, I must represent the number three as "3" or "three", because that's the protocol you and I have agreed to: the English language. So unless I'm a terrible English speaker, how I internally store the number three won't affect my communication with you.
Since this group requires me to produce streams of English characters to talk to you, I must convert my internal number representations to streams of English characters. Unless I'm terrible at doing that, how I store numbers internally will not affect the streams of English characters I produce.
So unless you do foolish things, this will never matter. Since you will be sending and receiving bytes over the TCP socket, the in-memory format of the int type won't matter, because you won't be sending or receiving instances of the C/C++ int type, only logical integers.
For example, if the protocol specification for the data you are sending over TCP says that you need to send a four-byte integer in little-endian format, then you should write code to do exactly that. If the code takes your platform's endianness into consideration, that should be purely an optimization, one that does not affect the code's observable behavior.
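For instance, here is one way to do exactly that, sketched in C. put_u32_le and get_u32_le are hypothetical helper names; the shifts pick out the logical bytes of the value, so the code produces the same four little-endian bytes on any host, whatever its native byte order:

    #include <stdint.h>

    /* Write a 32-bit value as four bytes, least significant byte first,
       as the (assumed) little-endian protocol requires. */
    void put_u32_le(unsigned char out[4], uint32_t value)
    {
        out[0] = (unsigned char)(value & 0xFF);
        out[1] = (unsigned char)((value >> 8) & 0xFF);
        out[2] = (unsigned char)((value >> 16) & 0xFF);
        out[3] = (unsigned char)((value >> 24) & 0xFF);
    }

    /* Read four little-endian bytes back into a 32-bit value. */
    uint32_t get_u32_le(const unsigned char in[4])
    {
        return (uint32_t)in[0]
             | ((uint32_t)in[1] << 8)
             | ((uint32_t)in[2] << 16)
             | ((uint32_t)in[3] << 24);
    }

On a little-endian host a decent compiler will typically turn this into a plain copy anyway; detecting the host's endianness yourself and using memcpy would be exactly the kind of optimization mentioned above, one that must not change what ends up on the wire.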