Tuning UDT's congestion control
I have an embedded device running Linux that serves sensor data across a LAN, but never across a WAN. Occasionally it may sit at one end of a long fat network (http://en.wikipedia.org/wiki/Long_fat_network).

The architecture I inherited uses TCP, but I'd like to add what amounts to real-time video over UDP. I don't care about dropped packets or ordering. I only want to know on the client side when packets have been dropped, and on the server side whether I'm sending too quickly. I never want to retransmit.
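For concreteness, here is a minimal sketch of the kind of sequence-numbered UDP receiver I have in mind (the port, payload size, and the decision to ignore reordering are arbitrary choices for this LAN, not anything UDT-specific):

    // Receiver side: count gaps in a 32-bit sequence number prefixed to each
    // datagram. The sender just prepends htonl(seq++) to every payload.
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    int main() {
        const uint16_t kPort = 9000;            // arbitrary port for the sketch
        const size_t   kMaxPayload = 1400;      // stay under the Ethernet MTU

        int s = socket(AF_INET, SOCK_DGRAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(kPort);
        bind(s, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));

        char buf[kMaxPayload + sizeof(uint32_t)];
        uint32_t expected = 0;
        uint64_t lost = 0;
        for (;;) {
            ssize_t n = recv(s, buf, sizeof(buf), 0);
            if (n < static_cast<ssize_t>(sizeof(uint32_t)))
                continue;                        // runt datagram, ignore
            uint32_t seq;
            std::memcpy(&seq, buf, sizeof(seq));
            seq = ntohl(seq);
            if (seq != expected)                 // gap => datagrams were dropped
                lost += static_cast<uint32_t>(seq - expected);   // ignores reordering
            expected = seq + 1;
            if (seq % 10000 == 0)
                std::fprintf(stderr, "lost so far: %llu\n",
                             static_cast<unsigned long long>(lost));
            // hand (buf + 4, n - 4) to the video consumer here
        }
        close(s);
    }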

Is there anywhere else I should look? UDT is currently too slow given my initial benchmarks: a naive UDP client/server with sequence numbers can sustain ~80 Mbit/s on this embedded system, whereas untuned UDT runs at about 30 Mbit/s. If I use its SOCK_DGRAM interface, UDT appears to back off too aggressively, to the point where it usually runs at 16 Mbit/s. Has anyone successfully tuned UDT's CCC (custom congestion control class) for this kind of application? The highest throughput I've seen is 35 Mbit/s, with UDT's sample applications.
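For reference, the kind of CCC override I have in mind is a fixed-rate controller along the lines of the CUDPBlast example bundled with UDT's sample code. This is a rough, untested sketch: the member names (m_dPktSndPeriod, m_dCWndSize, m_iMSS) come from UDT's CCC base class, the 80 Mbit/s target and window size are placeholders, and header locations may differ between UDT versions.

    // Fixed-rate congestion controller: never back off on loss or timeout,
    // just pace packets at a constant rate. Modeled on UDT's CUDPBlast sample.
    #include <udt.h>    // in the UDT4 tarball both headers live in udt4/src
    #include <ccc.h>

    class CFixedRateCC : public CCC {
    public:
        CFixedRateCC() {
            m_dPktSndPeriod = 1000000.0;   // placeholder until setRateMbps() is called
            m_dCWndSize = 83333.0;         // effectively unlimited window (packets)
        }
        void setRateMbps(double mbps) {
            // m_dPktSndPeriod is the inter-packet gap in microseconds and
            // m_iMSS the packet size in bytes, so gap_us = bits_per_packet / Mbps.
            m_dPktSndPeriod = (m_iMSS * 8.0) / mbps;
        }
        // Accept drops: do not slow down when UDT reports loss or timeouts.
        virtual void onLoss(const int32_t*, int) {}
        virtual void onTimeout() {}
    };

    // Installing it (before connect/bind), then setting the target rate:
    //   UDT::setsockopt(sock, 0, UDT_CC,
    //                   new CCCFactory<CFixedRateCC>, sizeof(CCCFactory<CFixedRateCC>));
    //   ...
    //   CFixedRateCC* cc = NULL; int len;
    //   UDT::getsockopt(sock, 0, UDT_CC, &cc, &len);
    //   if (cc) cc->setRateMbps(80.0);   // placeholder target for this LAN

As far as I understand, the CCC only governs pacing and window size; it doesn't by itself turn off UDT's reliability layer, which may be part of the CPU cost here.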

Should I just skip ahead to RTP (http://en.wikipedia.org/wiki/Real-time_Transport_Protocol)?
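Part of RTP's appeal here is that its fixed 12-byte header (RFC 3550) already carries the two fields I care about: a 16-bit sequence number for drop detection and a 32-bit timestamp. A rough sketch of packing it, purely for illustration:

    #include <arpa/inet.h>
    #include <cstdint>

    #pragma pack(push, 1)
    struct RtpHeader {               // fixed part of the RFC 3550 header
        uint8_t  vpxcc;              // version (2 bits), padding, extension, CSRC count
        uint8_t  mpt;                // marker bit + 7-bit payload type
        uint16_t seq;                // sequence number: detect drops on the client
        uint32_t timestamp;          // media clock timestamp
        uint32_t ssrc;               // synchronization source identifier
    };
    #pragma pack(pop)

    RtpHeader make_header(uint16_t seq, uint32_t ts, uint32_t ssrc, uint8_t pt) {
        RtpHeader h;
        h.vpxcc     = 2 << 6;        // RTP version 2, no padding/extension/CSRCs
        h.mpt       = pt & 0x7f;     // payload type, marker bit clear
        h.seq       = htons(seq);
        h.timestamp = htonl(ts);
        h.ssrc      = htonl(ssrc);
        return h;
    }

The companion RTCP receiver reports would cover the "am I sending too quickly" side, at the cost of pulling in more machinery than a hand-rolled sequence number.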

Dish answered 17/7, 2012 at 2:15 (2 comments)
Cat, did you ever solve this problem? I also use embedded Linux systems. My systems use cell modems, so we see very low TCP throughput. I'm checking out UDT to see if that can improve the throughput. – Dermatoid
I gave up and assumed that UDT was too CPU-intensive for the 400 MHz ARM XScale CPU. TCP throughput was fine, however! – Dish
