Because we did not properly account for the time when the last RTX was sent
when scheduling a new one, it could easily happen that, when the packet
being requested has a PTS that is slightly old (but not too old once the
latency of the jitterbuffer is added), both its calculated second and
third (etc.) timeouts had already passed. This would lead to a burst
of RTX requests, which works completely against their purpose, potentially
spending a lot more bandwidth than needed.
This has been properly reproduced in the test:
test_rtx_not_bursting_requests
The good news is that slightly rethinking the logic around
re-requesting RTX made it a lot simpler to understand, and allowed us
to remove two members of RtpTimer that no longer serve any purpose
after the refactoring. If desirable, the whole "delay" concept could
be removed from the timers entirely and simply added to the timeout
by the caller of the API, but that can be a change for another time.
The only external change (other than the improved behavior around RTX
bursts) is that the "delay" field now strictly represents the delay between
the PTS of the RTX-requested packet and the time at which it is requested,
whereas before it was more of a theoretical, calculated delay. This is
visible in three other RTX tests, where the delay had to be adjusted
slightly. I am confident, however, that this change is correct.
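A quick sketch of the idea behind the fix (the member and variable names are made up for illustration, not the actual code):
```
/* Schedule the next retry relative to when the last RTX request was
 * actually sent, instead of PTS + n * retry_period, so a slightly old
 * PTS cannot make several retry deadlines expire at once and trigger
 * a burst of requests. */
timer->timeout = last_rtx_sent_time + rtx_retry_period;

/* The reported "delay" is now simply the gap between the requested
 * packet's PTS and the time the request is made. */
GstClockTime delay = request_time - packet_pts;
```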
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/789>
This action signal delegates to the clear-ssrc signal on the rtpssrcdemux
element associated with the session. It allows rtpbin users to clear pads
and elements for a specific SSRC that is known to no longer be in use,
which happens when a pad is reused in rtpsrc or ristsrc.
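A minimal usage sketch (the session number and SSRC value are made up for illustration):
```
/* Ask rtpbin to tear down the pads and elements associated with an
 * SSRC that is known to no longer be in use. */
guint session_id = 0;
guint32 ssrc = 0x12345678;

g_signal_emit_by_name (rtpbin, "clear-ssrc", session_id, ssrc);
```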
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/736>
In order to support the g_enum_to_string symbol in various
projects using GStreamer (gst-validate, etc.), the GLib minimum
version should be 2.56.0, the version shipped by Ubuntu 18.04 LTS.
Remove compat code, as the GLib requirement is now >= 2.56.
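For reference, a small sketch of what the symbol provides (the enum type and value are just examples):
```
/* g_enum_to_string() pretty-prints a registered enum value; useful
 * for debug output in gst-validate and friends. */
gchar *str = g_enum_to_string (GST_TYPE_STATE, GST_STATE_PLAYING);

GST_DEBUG ("state: %s", str);  /* prints "GST_STATE_PLAYING" */
g_free (str);
```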
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/774>
Since !348, the jitterbuffer was only removed with the session. This restores
the original behaviour and removes the jitterbuffer when the stream is
removed. This avoids accumulating jitterbuffer objects in the bin when a
session is reused.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/735>
The rtpjitterbuffer is now part of the session elements, so we no longer need
to do the ref_sink() dance when signalling it: it is already owned by the bin
when signalled. Also, the code that handles generic session elements has
handled the ref_sink() calls since:
03dc22951b
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/735>
Until now, do_expected_timeout() was briefly dropping the JBUF_LOCK in order
to push the RTX event without causing a deadlock. As a side effect, CPU
hangs could happen as the timer queue would get filled while looping over
the due timers. To mitigate this, we were processing the lost timers first and
placing the remaining ones into a queue to be processed later.
In the gap caused by an unlock, we could end up receiving one of the seqnums
present in the pending timers. In that case, the timer would not be found and
a new one would be created. When we then updated the expected timer, the
seqnum would already exist and the updated timer would be lost.
In this patch we remove the unlock from do_expected_timeout() and place all
pending RTX events into a queue (instead of into pending timers). Then, as
soon as we have selected a timer to wait on (or determined there is no timer
to wait for), we send all the upstream RTX events. As we no longer unlock, we
no longer need to pop more than one timer from the queue, and we do so with
the lock held, which prevents any new colliding timers from being created.
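A rough sketch of the new pattern (the helper names are made up and do not match the actual jitterbuffer code):
```
static void
process_timers (GstRtpJitterBuffer * jitterbuffer)
{
  GstRtpJitterBufferPrivate *priv = jitterbuffer->priv;
  GQueue rtx_events = G_QUEUE_INIT;
  GstEvent *event;

  JBUF_LOCK (priv);
  /* No unlock while walking the due timers, so colliding timers for
   * the same seqnum can no longer be created behind our back. */
  collect_due_rtx_events (jitterbuffer, &rtx_events);
  select_timer_to_wait_on (jitterbuffer);
  JBUF_UNLOCK (priv);

  /* Only now, without the lock held, push the RTX events upstream. */
  while ((event = g_queue_pop_head (&rtx_events)))
    gst_pad_push_event (priv->sinkpad, event);
}
```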
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/616>
When a GST_EVENT_FLUSH_START reaches the jitterbuffer, there is a chance that
our task is currently blocked waiting on a timer.
There were two problems (a sketch of the corrected pattern follows the list):
* That wait wasn't checking for flushing situations
* The flush handling wasn't waking up that condition variable (so that it
  could check whether it should abort)
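Sketch of the corrected pattern (flag and macro names are illustrative; the macros stand for g_cond_wait()/g_cond_signal() on the timer condition):
```
/* Waiting side: re-check the flushing flag around the wait so a flush
 * aborts the wait instead of blocking forever. */
JBUF_LOCK (priv);
while (!priv->flushing && !timer_is_due (priv))
  JBUF_WAIT_TIMER (priv);
JBUF_UNLOCK (priv);

/* GST_EVENT_FLUSH_START side: set the flag and wake the waiter up. */
JBUF_LOCK (priv);
priv->flushing = TRUE;
JBUF_SIGNAL_TIMER (priv);
JBUF_UNLOCK (priv);
```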
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/608>
As we override the GList item with our own structure, we cannot use any
function from GList or GQueue that would try to free an RTPJitterBufferItem.
In this patch, we move away from g_queue_new(), which forces the use of
g_queue_free(). That function may call g_slice_free() on any items left in
the queue, and passing the wrong size to GSlice may cause data corruption
and crashes.
A better approach would be to use a proper intrusive linked list
implementation, but that's left as an exercise for the next person
running into crashes caused by this.
Be aware that this regression was introduced 6 years ago in the following
commit [0]; the call to flush() looked useless, as there was a g_queue_free()
afterward.
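A sketch of the pattern at play (names simplified from the real code):
```
/* The item embeds the list node as its first member, so the node *is*
 * the item, and GList/GQueue must never be allowed to free it:
 * g_list_free()/g_queue_free() would g_slice_free() it with
 * sizeof (GList), corrupting the slice allocator. */
typedef struct
{
  GList link;          /* must stay first */
  GstBuffer *buffer;
  guint seqnum;
} Item;

/* An embedded queue set up with g_queue_init() avoids g_queue_free();
 * the items are walked and freed manually instead. */
GQueue queue;
g_queue_init (&queue);
```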
Signed-off-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
[0] 479c7642fd
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/573>
The problem was this:
Due to the highly irregular arrival of RTX packets, the max-misorder
variable could be pushed very low (-10).
If you then at some point get a big gap in the sequence numbers (62 in the
test), you end up sending RTX requests for some of those packets, and if
the sender answers those requests, you are going to get a bunch of
RTX packets arriving (-13 and then 5 more packets in the test).
Now, if max-misorder has been pushed very low at this point, these RTX
packets will trigger the handle_big_gap_buffer() logic, and because they
arrive so neatly in order (as they would, since they were requested like
that), gst_rtp_jitter_buffer_reset() will be called, and two things
will happen:
1. priv->next_seqnum will be set to the first RTX packet
2. the 5 RTX packets will be pushed into the chain() function
However, at this point these RTX packets are no longer valid: the
jitterbuffer has already pushed lost-events for them, so they will now
be dropped on the floor and never make it to the waiting loop-function.
And since we now have a priv->next_seqnum that will never arrive
in the loop-function, the jitterbuffer is now stalled forever and will
not push out another buffer.
The proposed fixes (sketched below):
1. Don't use RTX packets in the calculation of the packet-rate.
2. Don't use RTX packets in the large-gap logic, as they are likely to be
   dropped.
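A sketch of both fixes (the surrounding names are illustrative):
```
/* RTX packets carry the retransmission buffer flag, so they can be
 * skipped both in the packet-rate statistics and in the big-gap
 * handling. */
gboolean is_rtx =
    GST_BUFFER_FLAG_IS_SET (buffer, GST_RTP_BUFFER_FLAG_RETRANSMISSION);

if (!is_rtx)
  packet_rate = gst_rtp_packet_rate_ctx_update (&priv->packet_rate_ctx,
      seqnum, rtptime);

if (gap >= max_gap && !is_rtx)
  handle_big_gap_buffer (jitterbuffer, buffer, pt, seqnum, gap);
```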
The RTP session starts a new thread for RTCP and names it
"rtpsession-rtcp-thread", which happens to be longer than the maximum of
16 bytes (including the terminating NUL) allowed by pthread_setname_np()
and causes the naming to fail. See the pthread_setname_np(3) docs for
more details.
This commit simply shortens the thread's name so it can actually be set.
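For illustration (the shorter replacement name here is an example, not necessarily the one the commit picked):
```
#define _GNU_SOURCE
#include <pthread.h>

static void
name_rtcp_thread (void)
{
  /* Fails with ERANGE: 22 chars + NUL = 23 bytes, over the 16-byte cap. */
  pthread_setname_np (pthread_self (), "rtpsession-rtcp-thread");

  /* Fits comfortably: 9 bytes including the terminating NUL. */
  pthread_setname_np (pthread_self (), "rtp-rtcp");
}
```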
Change the types from gboolean to guint due to the ++ operator being used
on them, and only call JBUF_SIGNAL_QUEUE after settling down;
otherwise you end up signalling the waiting code in chain() for every
buffer pushed out.
This is a concept that only applies when a buffer arrives in the chain
function and it has already been scheduled as part of a "multi"-lost
timer.
However, "multi"-lost timers are now a thing of the past, making this
whole concept superfluous; such a buffer is now simply counted as "late",
having already been pushed out (albeit as a lost-event).
There is a problem with the code today, where a single timer will
be scheduled for a series of lost packets, and then, if the first packet
in that series arrives, it will cause a rescheduling of that timer, going
from a "multi"-timer to a single-timer, leaving a lot of the packets
in that timer unaccounted for and creating a situation where
the jitterbuffer will never again push out another packet.
This patch solves the problem by not scheduling those lost packets
as another timer, but instead asking to have the lost-event pushed
straight out.
This very much goes with the intent of the code here: These packets are
so desperately late that no cure exists, and we might as well get the
lost-event out of the way and get on with it.
This change has some interesting knock-on effects, presented in
later commits. It completely removes the concept of "already-lost",
which is why that test has been disabled in this commit, to be
removed later.
When calling gst_rtp_jitter_buffer_reset() you pass in a seqnum, which is
considered the starting point for a new stream.
However, the old behavior would unref this buffer, basically lying to
the thread that is pushing out buffers by telling it that it could expect
this buffer when it would never arrive. The resulting effect was that no
more buffers were pushed out of the jitterbuffer, and it would buffer
incoming data indefinitely.
By instead inserting the buffer in the gap_packets queue, the _reset()
function will take responsibility for using that as the first buffer
of the new stream.
Fixes #703
We do not have a way to know the format modifiers to use with string
functions provided by the system. G_GUINT64_FORMAT and the other format
modifiers only work with GLib's string formatting functions; we cannot
use them with string functions provided by the C standard library. See:
https://developer.gnome.org/glib/stable/glib-Basic-Types.html#glib-Basic-Types.description
```
../gst/rtpmanager/gstrtpjitterbuffer.c: In function 'gst_jitter_buffer_sink_parse_caps':
../gst/rtpmanager/gstrtpjitterbuffer.c:1523:32: error: unknown conversion type character 'l' in format [-Werror=format=]
|| sscanf (mediaclk, "direct=%" G_GUINT64_FORMAT, &clock_offset) != 1)
^~~~~~~~~~
In file included from /home/nirbheek/cerbero/build/dist/windows_x86/include/glib-2.0/glib/gtypes.h:32,
from /home/nirbheek/cerbero/build/dist/windows_x86/include/glib-2.0/glib/galloca.h:32,
from /home/nirbheek/cerbero/build/dist/windows_x86/include/glib-2.0/glib.h:30,
from /home/nirbheek/cerbero/build/dist/windows_x86/include/gstreamer-1.0/gst/gst.h:27,
from /home/nirbheek/cerbero/build/dist/windows_x86/include/gstreamer-1.0/gst/rtp/gstrtpbuffer.h:27,
from ../gst/rtpmanager/gstrtpjitterbuffer.c:108:
/home/nirbheek/cerbero/build/dist/windows_x86/lib/glib-2.0/include/glibconfig.h:69:28: note: format string is defined here
#define G_GUINT64_FORMAT "llu"
^
../gst/rtpmanager/gstrtpjitterbuffer.c:1523:32: error: too many arguments for format [-Werror=format-extra-args]
|| sscanf (mediaclk, "direct=%" G_GUINT64_FORMAT, &clock_offset) != 1)
^~~~~~~~~~
```
See also: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/379
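A sketch of one portable way to parse the value without sscanf() (hypothetical; not necessarily the exact fix that landed):
```
/* Parse "direct=<offset>" with a GLib parser instead of relying on
 * sscanf() understanding GLib format modifiers (G_GUINT64_FORMAT
 * expands to "llu" or "I64u" depending on the C runtime). */
guint64 clock_offset;
const gchar *val = strstr (mediaclk, "direct=");

if (!val || !g_ascii_string_to_unsigned (val + strlen ("direct="), 10,
        0, G_MAXUINT64, &clock_offset, NULL))
  GST_WARNING ("failed to parse media clock offset");
```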
* Organize GstRtpFunnelPad and GstRtpFunnel separately
* Use G_GNUC_UNUSED instead of (void) casts
* Don't call an event "caps"
* Use semicolons after GST_END_TEST (helps gst-indent)
The proper way of capping the number of streams is to do it in rtpssrcdemux.
This patch uses the newly introduced property on rtpssrcdemux. The previous
behavior would not prevent rtpssrcdemux from spawning new pads for every new
SSRC, potentially causing performance trouble during teardown.
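A usage sketch of the capping (the limit value is arbitrary; the element handle is illustrative):
```
/* Cap the number of SSRCs rtpssrcdemux will spawn pads for, so a flood
 * of new SSRCs cannot create an unbounded number of pads. */
g_object_set (rtpssrcdemux, "max-streams", 32, NULL);
```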