Mirror of https://gitlab.freedesktop.org/gstreamer/gstreamer.git (synced 2024-09-21 03:20:22 +00:00)

Commit 8ed7ab178b
Turns out that the "big-gap" logic of the jitterbuffer has been horribly broken. For people using lost-events, an RTP stream with a gap in sequence numbers would immediately produce exactly that many lost-events. So if your sequence numbers jumped by 20000, you would get 20000 lost-events in your pipeline...

The test that covers this logic, "test_push_big_gap", incremented the DTS of each buffer by an amount equal to the gap that was introduced, so it was really more of a "large pause" test than a test of an actual gap/discontinuity in the sequence numbers.

Once the test was modified to not increment the DTS (buffer arrival time) by a similar gap, all sorts of crazy started happening, including the addition of thousands of timers. And the logic that should have kicked in, the "handle_big_gap_buffer" logic, was not called at all. Why? Because max_dropout is calculated using the packet rate, and in this particular test the packet-rate logic would report that the new packet rate was over 400000 packets per second!

I believe the right fix is to not try to update the packet rate if there is any jump in the sequence numbers, and to only do these calculations for nice, sequential streams.
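The idea behind the fix can be sketched in isolation. This is a minimal, illustrative C model (the names `PacketRateCtx` and `packet_rate_ctx_update` are hypothetical, not the actual GStreamer API): a packet only contributes to the running packet-rate estimate when its sequence number is the direct successor of the previous one, so a seqnum discontinuity can no longer inflate the rate and, with it, max_dropout.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical, simplified packet-rate estimator. */
typedef struct {
  uint16_t last_seqnum;   /* last RTP sequence number seen */
  uint64_t last_ts;       /* arrival time of last packet, nanoseconds */
  double   avg_rate;      /* smoothed packets-per-second estimate */
  int      initialized;
} PacketRateCtx;

static void
packet_rate_ctx_update (PacketRateCtx *ctx, uint16_t seqnum, uint64_t ts)
{
  if (!ctx->initialized) {
    ctx->last_seqnum = seqnum;
    ctx->last_ts = ts;
    ctx->initialized = 1;
    return;
  }

  /* Unsigned 16-bit subtraction handles seqnum wraparound. */
  uint16_t gap = (uint16_t) (seqnum - ctx->last_seqnum);

  /* The essence of the fix: a non-sequential packet (gap != 1) makes the
   * instantaneous rate meaningless, so record the packet but skip the
   * rate update entirely. */
  if (gap != 1 || ts <= ctx->last_ts) {
    ctx->last_seqnum = seqnum;
    ctx->last_ts = ts;
    return;
  }

  /* One packet arrived in (ts - last_ts) nanoseconds. */
  double inst_rate = 1e9 / (double) (ts - ctx->last_ts);

  /* Simple exponential moving average to smooth the estimate. */
  ctx->avg_rate = (ctx->avg_rate == 0.0)
      ? inst_rate
      : 0.9 * ctx->avg_rate + 0.1 * inst_rate;

  ctx->last_seqnum = seqnum;
  ctx->last_ts = ts;
}
```

With this guard, the failure mode described above cannot occur: a jump of 20000 sequence numbers arriving in a short wall-clock window leaves the estimate at the stream's true rate instead of reporting hundreds of thousands of packets per second.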
Changed files:

gstrtpbin.c
gstrtpbin.h
gstrtpdtmfmux.c
gstrtpdtmfmux.h
gstrtpfunnel.c
gstrtpfunnel.h
gstrtpjitterbuffer.c
gstrtpjitterbuffer.h
gstrtpmanager.c
gstrtpmux.c
gstrtpmux.h
gstrtpptdemux.c
gstrtpptdemux.h
gstrtprtxqueue.c
gstrtprtxqueue.h
gstrtprtxreceive.c
gstrtprtxreceive.h
gstrtprtxsend.c
gstrtprtxsend.h
gstrtpsession.c
gstrtpsession.h
gstrtpssrcdemux.c
gstrtpssrcdemux.h
Makefile.am
meson.build
rtpjitterbuffer.c
rtpjitterbuffer.h
rtpsession.c
rtpsession.h
rtpsource.c
rtpsource.h
rtpstats.c
rtpstats.h