Mirror of https://gitlab.freedesktop.org/gstreamer/gstreamer.git, synced 2024-11-18 15:51:11 +00:00
Commit 9c94f1187c
Consider the scenario where you have a gap of, say, 10 seconds in an otherwise steady flow of packets (500 packets, each with a duration of 20 ms). The jitterbuffer will idle until it receives the first buffer after the gap, but will then go on to produce 499 lost-events to "cover up" the gap. This is obviously wrong, since the last possible time for the earliest of those lost-events to be played out has long expired. However, the fact that the jitterbuffer has a "length", represented by its own latency combined with the total downstream latency, allows it to cover up at least some of the gap. With a "length" of 200 ms, and having just received packet 500, the jitterbuffer should still create a timeout for packet 491, whose time expires at 10.02 seconds, especially since it might actually arrive in time! But waiting for packet 100, whose time expired at 2 seconds (remember that the current time is 10), is obviously useless.

The patch creates one "big" lost-event for the first 490 packets, and then goes on to create individual ones for the packets that can still reach their playout deadline.

See https://bugzilla.gnome.org/show_bug.cgi?id=667838
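To make the logic concrete, here is a minimal sketch of the gap-handling idea, assuming the timestamp of each missing packet can be derived from the packet duration. The JBuf struct, the handle_gap() helper and the g_print() placeholders are all made up for illustration; the real gstrtpjitterbuffer.c works with GstClockTime timers and pushes lost events downstream rather than printing.

```c
/*
 * Minimal sketch of the gap-handling idea described above; NOT the actual
 * gstrtpjitterbuffer.c implementation.  All names here are hypothetical.
 *
 * Build (assuming GLib is installed):
 *   gcc sketch.c $(pkg-config --cflags --libs glib-2.0) -o sketch
 */
#include <glib.h>

typedef struct {
  guint16 next_seqnum;      /* next expected sequence number */
  guint64 packet_duration;  /* nanoseconds per packet, e.g. 20 ms */
  guint64 deadline;         /* jitterbuffer latency + downstream latency, ns */
} JBuf;

/* Called when a packet arrives whose seqnum leaves a gap before it.
 * 'gap_start_ts' is the timestamp of the first missing packet, 'now' the
 * current running time, both in nanoseconds. */
static void
handle_gap (JBuf * jb, guint16 arrived_seqnum, guint64 gap_start_ts,
    guint64 now)
{
  guint16 missing = (guint16) (arrived_seqnum - jb->next_seqnum);
  guint64 ts = gap_start_ts;
  guint16 too_late = 0;
  guint16 i;

  /* Count the leading packets whose playout deadline has already passed:
   * even with the jitterbuffer "length" added they can never be played. */
  while (too_late < missing && ts + jb->deadline < now) {
    too_late++;
    ts += jb->packet_duration;
  }

  if (too_late > 0) {
    /* One "big" lost event covering everything that is beyond saving. */
    g_print ("lost event: first seqnum %u, %u packets\n",
        (guint) jb->next_seqnum, (guint) too_late);
    jb->next_seqnum += too_late;
  }

  /* The remaining missing packets can still meet their deadline, so each
   * gets its own timeout and a chance to arrive late after all. */
  for (i = 0; i < missing - too_late; i++)
    g_print ("timeout for seqnum %u expires at %" G_GUINT64_FORMAT " ns\n",
        (guint) (jb->next_seqnum + i),
        ts + i * jb->packet_duration + jb->deadline);
}

int
main (void)
{
  /* The scenario from the commit message: 20 ms packets, a 200 ms "length",
   * packets 1-499 missing, packet 500 arriving just after t = 10 s. */
  JBuf jb = { 1, 20000000, 200000000 };

  handle_gap (&jb, 500, 20000000, G_GUINT64_CONSTANT (10010000000));
  return 0;
}
```

With the numbers from the scenario above, this prints a single lost event covering packets 1-490 and nine individual timeouts for packets 491-499, the first of which (packet 491) expires at 10.02 seconds.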
.gitignore
gstrtpbin-marshal.list
gstrtpbin.c
gstrtpbin.h
gstrtpjitterbuffer.c
gstrtpjitterbuffer.h
gstrtpmanager.c
gstrtpptdemux.c
gstrtpptdemux.h
gstrtpsession.c
gstrtpsession.h
gstrtpssrcdemux.c
gstrtpssrcdemux.h
Makefile.am
rtpjitterbuffer.c
rtpjitterbuffer.h
rtpsession.c
rtpsession.h
rtpsource.c
rtpsource.h
rtpstats.c
rtpstats.h