I found three commands that helped me reduce the delay of live streams. The first command is very basic and straightforward, the second one combines other options that might work differently in each environment, and the last one is a hacky version that I found in the documentation. It was useful at the beginning, but currently the first option is more stable and better suits my needs.
1. Basic using -fflags nobuffer
This format flag reduces the latency introduced by buffering during the initial analysis of the input stream. This command noticeably reduces the delay and does not introduce audio glitches.
ffplay -fflags nobuffer -rtsp_transport tcp rtsp://<host>:<port>
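The same flag also works for non-RTSP inputs. As a minimal sketch, assuming the stream is served over HTTP (like the one in the last example of this post), you can simply drop the -rtsp_transport option; the host, port and path below are placeholders:
ffplay -fflags nobuffer http://<host>:<port>/stream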
2. Advanced -flags low_delay and other options
We can combine the previous -fflags nobuffer format flag with other generic and advanced options for a more elaborate command:
-flags low_delay: this generic codec flag forces low delay.
-framedrop: drops video frames if the video is out of sync. It is enabled by default when the master clock is not set to video; use this option to enable frame dropping for all master clock sources.
-strict experimental: finally, -strict specifies how strictly to follow the standards. The experimental value allows non-standardized experimental things, such as experimental (unfinished, work-in-progress, or not well tested) decoders and encoders. This option is optional, but remember that experimental decoders can pose a security risk; do not use them to decode untrusted input.
ffplay -fflags nobuffer -flags low_delay -framedrop \
-strict experimental -rtsp_transport tcp rtsp://<host>:<port>
This command might introduce some audio glitches, but rarely.
You can also try adding -avioflags direct to reduce buffering, and -fflags discardcorrupt to discard corrupted packets, but I think this is a very aggressive approach that might break the audio-video synchronization:
ffplay -fflags nobuffer -fflags discardcorrupt -flags low_delay \
-framedrop -avioflags direct -rtsp_transport tcp rtsp://<host>:<port>
3. A hacky option (found in the old documentation)
This is a debugging solution based on setting -probesize and -analyzeduration to low values to help your stream start up more quickly.
-probesize 32: sets the probing size in bytes (i.e. the size of the data to analyze to get stream information). A higher value enables detecting more information in case it is dispersed into the stream, but increases latency. It must be an integer not less than 32; the default is 5000000.
-analyzeduration 0: specifies how many microseconds are analyzed to probe the input. A higher value enables detecting more accurate information, but increases latency. It defaults to 5000000 microseconds (5 seconds).
-sync ext: sets the master clock to an external source to try and stay realtime. The default is audio. The master clock is used to control audio-video synchronization, so this option selects which source (audio, video or ext) drives that synchronization.
ffplay -probesize 32 -analyzeduration 0 -sync ext -rtsp_transport tcp rtsp://<host>:<port>
This command might occasionally introduce some audio glitches.
The -rtsp_transport option can be set to udp or tcp, depending on how your stream is served. For these examples I'm using tcp.
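As a sketch of the UDP variant (host and port are placeholders, and the rest of the options work the same way), the basic command would be:
ffplay -fflags nobuffer -rtsp_transport udp rtsp://<host>:<port>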
ffplay -probesize 500000 http://192.168.1.2:8090/test.webm
(This sets the probe size to 500 KB; experiment with this value. The default is 5 MB if I'm not mistaken.) – Sturrock
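As an additional sketch (not one of the commands above), the low probing values can also be combined with -fflags nobuffer on the same RTSP stream; host and port are placeholders:
ffplay -fflags nobuffer -probesize 32 -analyzeduration 0 -rtsp_transport tcp rtsp://<host>:<port>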