Java dropping half of UDP packets
I have a simple client/server setup. The server is in C and the client that is querying the server is Java.

My problem is that, when I send bandwidth-intensive data over the connection, such as video frames, it drops up to half the packets. I make sure that I properly fragment the UDP packets on the server side (a UDP datagram has a maximum size of 65,535 bytes). I verified that the server is sending the packets (by printf-ing the result of sendto()). But Java doesn't seem to be getting half the data.

Furthermore, when I switch to TCP, all the video frames get through, but the latency starts to build up, adding several seconds of delay after a few seconds of runtime.

Is there anything obvious that I'm missing? I just can't seem to figure this out.

Laissezfaire answered 14/3, 2010 at 1:41 Comment(2)
Are you sure it's "Java" and not your network? Also, UDP makes no guarantees about packet delivery, order, or duplicates - unlike TCP.Unrobe
The fact that the use of TCP results in latency tells me that you're trying to pour into your network more data than it can carry. Since you have logs from the server side sending packets, you should be able to get a general idea how much data you're sending per second. Is it compatible with the capacity of your network?Utile

Get a network tool like Wireshark so you can see what is happening on the wire.

UDP makes no retransmission attempts, so if a packet is dropped somewhere, it is up to the program to deal with the loss. TCP will work hard to deliver all packets to the program in order, discarding dups and requesting lost packets on its own. If you are seeing high latency, I'd bet you'll see a lot of packet loss with TCP as well, which will show up as retransmissions from the server. If you don't see TCP retransmissions, perhaps the client isn't handling the data fast enough to keep up.

Divide answered 14/3, 2010 at 2:4 Comment(2)
I opened up wireshark and dumped the packets using TCP. I didn't see any dropped packets. The RTT to ACK (round trip time to ACK) was ~40ms. My Java client uses a dedicated thread to parse the bytes.Laissezfaire
Thanks for your help, it has been solved. It was a problem in the java receiver thread (the thread was blocked for a short time not listening for packets). Your TCP dropped packet monitor suggestion was good.Laissezfaire

Any UDP-based application protocol will inevitably be susceptible to packet loss, reordering and (in some circumstances) duplicates. The "U" in UDP could stand for "Unreliable", as in Unreliable Datagram Protocol. (OK, it really stands for "User" ... but it is certainly a good way to remember UDP's characteristics.)

UDP packet losses typically occur because your traffic is exceeding the buffering capacity of one or more of the "hops" between the server and client. When this happens, packets are dropped ... and since you are using UDP, there is no transport protocol-level notification that this is occurring.

If you use UDP in an application, the application needs to take account of UDP's unreliable nature, implementing its own mechanisms for dealing with dropped and out-of-order packets and for doing its own flow control. (An application that blasts out UDP packets with no thought to the effect that this may have on an already overloaded network is a bad network citizen.)

(In the TCP case, packets are probably being dropped as well, but TCP is detecting and resending the dropped packets, and the TCP flow control mechanism is kicking in to slow down the rate of data transmission. The net result is "latency".)

EDIT - based on the OP's comment, the cause of his problem was that the client was not "listening" for a period, causing the packets to (presumably) be dropped by the client's OS. The way to address this is to:

  1. use a dedicated Java thread that just reads the packets and queues them for processing, and

  2. increase the size of the kernel packet queue for the socket.

But even when you take these measures you can still get packets dropped. For example, if the machine is overloaded, the application may not get execution time-slices frequently enough to read and queue all packets before the kernel has to drop them.
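As a rough sketch of those two measures (the class, queue, and buffer sizes here are my own illustrative choices, not code from the question): a dedicated thread drains the DatagramSocket into a BlockingQueue, while a larger kernel receive buffer is requested. The main method is a loopback self-test.

```java
import java.net.*;
import java.util.concurrent.*;

public class UdpReceiver {
    // Dedicated reader: drain the socket as fast as possible and hand
    // packets to a queue, so slow processing never leaves the socket unread.
    static Thread startReader(DatagramSocket sock, BlockingQueue<byte[]> queue) {
        Thread t = new Thread(() -> {
            byte[] buf = new byte[2048];                  // bigger than any expected datagram
            DatagramPacket pkt = new DatagramPacket(buf, buf.length);
            try {
                while (!sock.isClosed()) {
                    pkt.setLength(buf.length);            // reset: receive() shrinks the length
                    sock.receive(pkt);                    // blocks until a datagram arrives
                    byte[] copy = new byte[pkt.getLength()];
                    System.arraycopy(pkt.getData(), 0, copy, 0, pkt.getLength());
                    queue.offer(copy);                    // never blocks the reader
                }
            } catch (SocketException e) {
                // socket closed: normal shutdown
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        t.setDaemon(true);
        t.start();
        return t;
    }

    public static void main(String[] args) throws Exception {
        DatagramSocket sock = new DatagramSocket(0);      // any free local port
        sock.setReceiveBufferSize(1 << 20);               // ask the kernel for ~1 MiB (it may clamp this)
        BlockingQueue<byte[]> queue = new LinkedBlockingQueue<>();
        startReader(sock, queue);

        // Loopback self-test: send 10 small datagrams to ourselves.
        try (DatagramSocket sender = new DatagramSocket()) {
            InetSocketAddress dest = new InetSocketAddress("127.0.0.1", sock.getLocalPort());
            for (int i = 0; i < 10; i++) {
                byte[] data = ("frame-" + i).getBytes();
                sender.send(new DatagramPacket(data, data.length, dest));
            }
        }
        int received = 0;
        while (received < 10 && queue.poll(2, TimeUnit.SECONDS) != null) {
            received++;
        }
        System.out.println("received " + received + " datagrams"); // on loopback this should be all 10
        sock.close();
    }
}
```

Note that an unbounded queue just trades memory for packets; the point is only that receive() is called again immediately, so the kernel buffer is drained as fast as packets arrive.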

EDIT 2 - There is some debate about whether UDP is susceptible to duplicates. It is certainly true that UDP has no innate duplicate detection or prevention. But it is also true that the IP packet routing fabric that is the internet is unlikely to spontaneously duplicate packets. So duplicates, if they do occur, are likely to occur because the sender has decided to resend a UDP packet. Thus, to my mind while UDP is susceptible to problems with duplicates, it does not cause them per se ... unless there is a bug in the OS protocol stack or in the IP fabric.

Yahwistic answered 14/3, 2010 at 1:56 Comment(10)
If two packets end up taking different routes to the client, could you not then end up with reordered packets (from the client's point of view)?Boutte
@Eric Yes with UDP, no with TCP.Divide
Why not duplicates? AFAIK, duplicates are indeed possible.Nagle
A duplicate should only occur if the sender sends two packets with identical contents (e.g. resends). And in that case, the packets are conceptually different, even if there is no way for the receiver to detect this. AFAIK, a router will not resend a given UDP packet.Yahwistic
UDP != "Unreliable" Datagram Protocol (even though it is unreliable) - it's "User" Datagram Protocol - faqs.org/rfcs/rfc768.htmlUnrobe
@Fred: That's what I thought :-)Boutte
'the IP packet routing fabric that is the internet is unlikely to spontaneously duplicate packets'. I don't know about that. Surely the mere presence of multiple paths makes it possible that the same packet can arrive twice via different paths? Quite a lot of the TCP specification is there to protect against duplicate packets.Homogeneity
TCP needs to do that because it retransmits packets.Yahwistic
@EJP - "Surely the mere presence of multiple paths makes it possible that the same packet can arrive twice via different paths?" I don't see the logic in that. The mere existence of multiple paths does not imply that a single packet will be sent by more than one of them.Yahwistic
(a) It doesn't imply that it will but it does imply that it can. (b) RFC 793 #1.5: 'The TCP must recover from data that is damaged, lost, duplicated, or delivered out of order by the internet communication system.' #3.7: 'Duplicate segments may arrive due to network or TCP retransmission.'Homogeneity

Although UDP supports packets up to 65535 bytes in length (including the UDP header, which is 8 bytes - but see note 1), the underlying transports between you and the destination do not support IP packets that long. For example, an Ethernet frame carries at most 1500 bytes of payload - taking into account the 28 bytes of overhead for the IP and UDP headers, that means that any UDP packet with a data payload length of more than about 1472 bytes is likely to be fragmented into multiple IP datagrams.

A maximum size UDP packet is going to be fragmented into at least 45 separate IP datagrams - and if any one of those fragments is lost, the entire UDP packet is lost. If your underlying packet loss rate is 1%, your application will see a loss rate of about 36%!
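The arithmetic behind those figures can be checked directly; the 1% per-fragment loss rate is just the illustrative figure used above:

```java
public class FragmentLoss {
    // Probability that a datagram spanning n fragments is lost, if each
    // fragment is lost independently with probability p.
    static double datagramLossRate(double p, int fragments) {
        return 1.0 - Math.pow(1.0 - p, fragments);
    }

    public static void main(String[] args) {
        // 65507-byte UDP payload + 8-byte UDP header, split into
        // 1480-byte IP fragments (1500-byte Ethernet MTU - 20-byte IP header).
        int fragments = (int) Math.ceil((65507 + 8) / 1480.0);
        System.out.println(fragments);                                          // 45
        System.out.printf("%.0f%%%n", 100 * datagramLossRate(0.01, fragments)); // 36%
    }
}
```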

If you want to see fewer packets lost, don't send huge packets - limit the data in each packet to about 1400 bytes (or even do your own "path MTU discovery" to figure out the maximum size you can safely send without fragmentation).


  1. Of course, UDP is also subject to the limitations of IP, and IP datagrams have a maximum size of 65535, including the IP header. The IP header ranges in size from 20 to 60 bytes, so the maximum amount of application data transportable within a UDP packet might be as low as 65467.
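A minimal sketch of that advice in Java (my own illustration, not code from the question): split each frame into chunks of at most 1400 bytes and send them as separate datagrams. A real protocol would also prepend a frame/sequence number to each chunk so the receiver can reassemble frames and detect loss.

```java
import java.net.*;

public class ChunkedSender {
    static final int CHUNK = 1400;  // stays under a 1500-byte Ethernet MTU after IP/UDP headers

    // Send one large buffer as several small datagrams; returns how many were sent.
    static int sendChunked(DatagramSocket sock, byte[] frame, InetSocketAddress dest)
            throws Exception {
        int sent = 0;
        for (int off = 0; off < frame.length; off += CHUNK) {
            int len = Math.min(CHUNK, frame.length - off);
            sock.send(new DatagramPacket(frame, off, len, dest));
            sent++;
        }
        return sent;
    }

    public static void main(String[] args) throws Exception {
        try (DatagramSocket sock = new DatagramSocket()) {
            byte[] frame = new byte[60_000];  // one large "video frame"
            InetSocketAddress dest = new InetSocketAddress("127.0.0.1", 9999); // arbitrary test port
            System.out.println(sendChunked(sock, frame, dest)); // 60000 / 1400 -> 43 datagrams
        }
    }
}
```

With this scheme, losing one network packet costs you one 1400-byte chunk rather than the whole frame.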
Keith answered 14/3, 2010 at 3:6 Comment(2)
Several inaccuracies there, see my answer.Homogeneity
Good practical advice - even if there is some debate over the size of the UDP header (eight bytes, from memory). For most streaming applications, keeping UDP datagram size small enough to fit in an ethernet frame has a number of advantages: minimises data loss when a frame is lost; reduces latency as don't need to wait for the whole datagram; and - for some applications - processing load can be reduced by turning off UDP checksumming for the stream and relying on ethernet error detection to discard faulty frames/datagrams. MPEG is often sent as 7 x 188 byte TS frames per UDP datagram.Waechter

The problem might be that the transmit buffer of your DatagramSocket is getting filled up. Only send the number of bytes in one go indicated by DatagramSocket.getSendBufferSize(). Use setSendBufferSize(int size) to increase this value.

If send() is used to send a DatagramPacket that is larger than the setting of SO_SNDBUF, it is implementation-specific whether the packet is sent or discarded.
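For illustration, you can query and raise the buffer like this (the actual sizes are OS-dependent, and the kernel may clamp the requested value):

```java
import java.net.DatagramSocket;

public class SendBuffer {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket sock = new DatagramSocket()) {
            System.out.println("default SO_SNDBUF: " + sock.getSendBufferSize());
            sock.setSendBufferSize(1 << 20);  // request ~1 MiB; the OS may grant less (or more)
            System.out.println("after request:    " + sock.getSendBufferSize());
        }
    }
}
```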

Boeotian answered 14/3, 2010 at 2:41 Comment(0)

IP supports packets up to 65535 bytes including a 20 byte IP packet header. UDP supports datagrams up to 65507 bytes, plus the 20 byte IP header and the 8 byte UDP header. However the network MTU is the practical limit, and don't forget that that includes not just these 28 bytes but also the Ethernet frame header. The real practical limit for unfragmented UDP is the minimum MTU of 576 bytes less all the overheads.
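The header arithmetic in this answer, worked out explicitly:

```java
public class UdpLimits {
    public static void main(String[] args) {
        int ipMax = 65535, ipHeader = 20, udpHeader = 8, minMtu = 576;
        // Largest possible UDP payload inside one IP datagram:
        System.out.println(ipMax - ipHeader - udpHeader);   // 65507
        // Payload guaranteed to fit in the 576-byte minimum datagram size:
        System.out.println(minMtu - ipHeader - udpHeader);  // 548
    }
}
```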

Homogeneity answered 14/3, 2010 at 4:35 Comment(1)
Ethernet has frames, not packets. The Ethernet MTU is 1500 bytes of payload, which defines the maximum size of an IP fragment; it doesn't include the frame header and trailer. 576 is the minimum MTU a node must support in order to transmit IP fragments; it is not the real practical limit for unfragmented UDP. That limit has to be determined via path MTU discovery. Also, it is possible to send datagrams bigger than the MTU without it being impractical, until either the datagrams or the packet loss get too big.Permian
