When acting on a UDP socket, what can cause sendto() to send fewer bytes than requested?

The motivation for asking this is to figure out what precautions I need to take to ensure a complete message is always sent in a single call to sendto(), and to understand what further steps I need to take to fit a message into a single IP packet. Do I just need to ensure my message is below some size, and if so, how big is that size? Besides OS-specific UDP datagram size limits and MTUs, are there other forces at work (e.g. I/O buffer capacities, capricious OSes)?

Having asked the principal question above, and in the title of this post, I'll continue with some related follow-on questions, and then put things into context at the end.

Further Questions

Going into more detail, again assuming we're acting on a UDP socket:

  1. Does each successful call to sendto() result in exactly 1 UDP datagram being sent? (I appreciate this may fragment into multiple IP packets)

  2. Will each successful call to recvfrom() retrieve exactly 1 UDP datagram?

  3. If a single message takes N calls to sendto() to send, will it take exactly N calls to recvfrom() to receive, even if the receiving machine is a different platform? (I appreciate the datagram order will be unpredictable)

  4. Suppose I attempt to send a message whose size is equal to or less than the smaller of the maximum UDP datagram sizes supported by the local and remote systems (and, barring some error which would result in a return value of -1), is sendto() guaranteed to send my whole message in one go? Or might it report that it's sent fewer bytes than I've asked it to send? If so, why? Back to question 1.

  5. In addition to the suppositions in question 4, supposing that my message is no bigger than the (MTU - UDP header - IP header) size, is the UDP datagram that results guaranteed to fit into 1 IP packet (on my local network at least)?

Context

I've just started writing my first UDP-based communications protocol (cross-platform: e.g. Linux, Mac, Windows, iOS, Android, and more). I'm a socket newbie, but I'm aware of the 'cost' that comes with using a protocol as simple as UDP, and I've researched algorithms/strategies for compensating at the application level (reliability, splitting, encoding and encryption).

I'm trying to break down all my communications into atomic messages (i.e. single, self-contained UDP datagrams) which might (but not necessarily) be required to fit into a single IP packet (e.g. 1500 bytes). Real-time assessment of throughput and packet loss will determine whether I have to shrink datagrams to fit into single IP packets (this would incur a size penalty from additional headers). Some of this will be over wifi/radio links, so I'm hoping to determine the 'optimal' datagram size adaptively. I know the MTU of all my interfaces, and appreciate that outside my local network packets may get fragmented further, but that's out of my control, so I can live with it.

But everything hinges on being able to construct an atomic message and have 100% confidence that I can send it successfully with a single call to sendto(), and receive it with a single call to recvfrom(). All my application-level reliability, splitting, encoding and encryption information lives in my own protocol header, and I can't re-split messages after a call to sendto() comes back short. E.g. think of a message checksum: if the whole message doesn't get through in one go, the checksum in the header is no longer valid for the portion of the message that has been sent.

Gelt answered 4/6, 2014 at 18:56

everything hinges on being able to construct an atomic message and have 100% confidence that I can send it successfully with a single call to sendto(), and receive it with a single call to recvfrom()

UDP guarantees that. Datagrams arrive intact and entire, or not at all. All you need to do is ensure that your socket send and receive buffers are large enough, and, if you're traversing routers, don't put more than about 508 bytes of payload in a datagram: every host must accept a 576-byte IP datagram (RFC 791), and subtracting the maximum 60-byte IP header and the 8-byte UDP header leaves 508 bytes, which is the generally accepted safe limit.
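To make that concrete, here is a minimal POSIX C sketch of the sending side under those assumptions. The destination address, port, buffer size, and the 508-byte payload limit are illustrative values (not part of the answer above), and error handling is trimmed to the essentials:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define SAFE_UDP_PAYLOAD 508   /* assumed conservative per-datagram payload limit */

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* Ask for a generous send buffer; the OS may round or cap this value. */
    int sndbuf = 1 << 20;
    setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf));

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(9999);                      /* example port */
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);  /* example address */

    char msg[SAFE_UDP_PAYLOAD] = "one self-contained message";
    ssize_t n = sendto(fd, msg, sizeof(msg), 0,
                       (struct sockaddr *)&dst, sizeof(dst));

    /* On a SOCK_DGRAM socket the datagram goes out whole or not at all:
       expect either -1 (error, e.g. EMSGSIZE) or the full length, so the
       "short send" branch below is only a sanity check. */
    if (n < 0)
        perror("sendto");
    else if ((size_t)n != sizeof(msg))
        fprintf(stderr, "unexpected short send: %zd bytes\n", n);

    close(fd);
    return 0;
}
```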

Sacken answered 4/6, 2014 at 22:11

I have experience with many protocols that make exactly this fundamental assumption: any UDP datagram is sent in one call and received in one call (or not at all). Any fragmentation that occurs due to MTUs etc. is not seen at the application level. Many real-time streaming protocols limit packet size to reduce delays due to fragmentation, but this is all under the covers. Just make sure the receive buffer is large enough.
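As a rough illustration of the receive side, here is a minimal POSIX C sketch. The port, the 1500-byte MAX_DGRAM bound and the socket buffer size are assumptions for the example, and error handling is kept minimal:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_DGRAM 1500   /* assumed upper bound on datagram size for this protocol */

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in local;
    memset(&local, 0, sizeof(local));
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    local.sin_port = htons(9999);                    /* example port */
    if (bind(fd, (struct sockaddr *)&local, sizeof(local)) < 0) {
        perror("bind");
        return 1;
    }

    /* A larger kernel receive buffer reduces the chance of dropping
       datagrams that arrive while the application is busy. */
    int rcvbuf = 1 << 20;
    setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf));

    char buf[MAX_DGRAM];
    struct sockaddr_in src;
    socklen_t srclen = sizeof(src);

    /* Each successful recvfrom() returns exactly one datagram. If buf is
       smaller than that datagram, the excess is silently discarded, so size
       buf for the largest message the protocol allows. */
    ssize_t n = recvfrom(fd, buf, sizeof(buf), 0,
                         (struct sockaddr *)&src, &srclen);
    if (n < 0)
        perror("recvfrom");
    else
        printf("received one %zd-byte datagram\n", n);

    close(fd);
    return 0;
}
```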

Squirm answered 4/6, 2014 at 19:32
