One of the jitterbuffer's functions is to try to make sense of weird
network behavior.
It is quite unhelpful for the jitterbuffer to start dropping packets
itself when what you are trying to achieve is better network resilience.
In the case of a skew, this often means the sender has restarted in
some fashion, and dropping the very first buffer of this "new" stream
can mean missing valuable information, like an I-frame in the case of
video.
This patch simply reverts to the old behavior, prior to 8d955fc32b,
and includes the simplest test I could write to demonstrate the behavior,
where a single packet arrives "perfectly", then a 50ms gap happens,
and then two more packets arrive in perfect order after that.
This commit adds custom element messages for when rtpjitterbuffer
drops an incoming RTP packet, for example because it arrived too late.
Applications can listen for these messages on the bus, which enables
them to take action when packets are dropped due to, for example, high
network jitter.
Two properties have been added: one to enable posting drop messages,
and one to set a minimum time between each message, to allow throttling
the posting at high drop rates.
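A minimal sketch of how an application might consume this, assuming
the property names "post-drop-messages" and "drop-messages-interval"
(in milliseconds) and the message name "drop-msg"; check the element
documentation for the exact names and fields:

    #include <gst/gst.h>

    /* Bus callback that watches for the jitterbuffer's custom element
     * messages about dropped packets. */
    static gboolean
    on_bus_message (GstBus * bus, GstMessage * msg, gpointer user_data)
    {
      if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_ELEMENT) {
        const GstStructure *s = gst_message_get_structure (msg);

        if (s != NULL && gst_structure_has_name (s, "drop-msg"))
          gst_print ("dropped packet: %" GST_PTR_FORMAT "\n", s);
      }
      return G_SOURCE_CONTINUE;
    }

    static void
    enable_drop_messages (GstElement * jitterbuffer, GstBus * bus)
    {
      /* enable posting, and throttle to one message per 200ms */
      g_object_set (jitterbuffer,
          "post-drop-messages", TRUE,
          "drop-messages-interval", 200, NULL);

      gst_bus_add_watch (bus, on_bus_message, NULL);
    }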
When the queue is full (and adding more packets would risk a seqnum
roll-over), the best approach is to just start pushing out packets
from the other side. Simply pushing out the packets leaves the timers
hanging with old seqnums, so it's safer to execute them immediately in
this case. This does limit the timer space to the time it takes to
receive about 32k packets, but without extended sequence numbers, this
is the best RTP can do.
This also means the test no longer needs timeouts or timers, since
pushing packets in drives everything.
Fixes #619
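For reference, the ~32k figure follows from RTP's 16-bit sequence
numbers: serial-number arithmetic can only order two seqnums
unambiguously when they are less than 2^15 (32768) apart. A small
illustration using GStreamer's own helper:

    #include <gst/rtp/gstrtpbuffer.h>

    static void
    seqnum_window_demo (void)
    {
      /* 100 is "after" 65500 once the counter wraps: the forward
       * distance (136) fits within 2^15 */
      g_assert (gst_rtp_buffer_compare_seqnum (65500, 100) > 0);

      /* a gap of 32768 or more is ambiguous: the "later" seqnum
       * compares as older, which is why timers holding such old
       * seqnums must be executed before the queue wraps around */
      g_assert (gst_rtp_buffer_compare_seqnum (0, 40000) < 0);
    }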
If the jitterbuffer head changes, there is no need to systematically
wake up the timer thread. The timer thread only needs to be woken up
if an earlier timeout has been pushed. This prevents some more
spurious wakeups when the system is loaded. As a side effect, cranking
the clock may set the clock to an earlier position.
Implement a single timer queue for all timers. The goal is to always
use ordered queues for storing timers. This way, extracting timers for
execution becomes O(1). This also allows separating the clock-wait
scheduling from the timer itself and ensures that we only wake up the
timer thread when strictly needed.
The new data structure is still O(n) on insertions and reschedules,
but we now use a proximity optimization so that the normal cases are
really fast. The GList structure is also embedded into the RtpTimer
structure to reduce the number of allocations.
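A rough sketch of the embedded-list idea (the field names are
illustrative, not the exact layout of the real RtpTimer):

    #include <gst/gst.h>

    typedef struct
    {
      GList list;               /* embedded node; list.data points back */
      guint16 seqnum;
      GstClockTime timeout;
    } RtpTimer;

    typedef struct
    {
      GList *timers;            /* ordered by timeout, earliest first */
    } RtpTimerQueue;

    /* Because the queue stays sorted, the head is always the next
     * timer to fire, so extraction is O(1).  Insertion is O(n) worst
     * case; the real code also scans from whichever end is closer
     * (the proximity optimization), making in-order arrivals cheap. */
    static void
    rtp_timer_queue_insert_sorted (RtpTimerQueue * queue, RtpTimer * timer)
    {
      GList *walk;

      timer->list.data = timer;
      timer->list.next = NULL;
      timer->list.prev = NULL;

      for (walk = queue->timers; walk != NULL; walk = walk->next) {
        if (((RtpTimer *) walk->data)->timeout > timer->timeout) {
          /* link the embedded node in front of 'walk' */
          timer->list.next = walk;
          timer->list.prev = walk->prev;
          if (walk->prev != NULL)
            walk->prev->next = &timer->list;
          else
            queue->timers = &timer->list;
          walk->prev = &timer->list;
          return;
        }
        if (walk->next == NULL) {       /* reached the tail: append */
          walk->next = &timer->list;
          timer->list.prev = walk;
          return;
        }
      }
      queue->timers = &timer->list;     /* queue was empty */
    }

No allocation happens on insert: the node is part of the timer itself,
which is the point of embedding the GList.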
Add a property which explicitly maps splitmuxsink pads to the
muxer pads they should connect to, overriding the implicit logic
that tries to match pads but yields arbitrary names.
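A hedged usage sketch, assuming the property is the GstStructure-valued
"muxer-pad-map" where each field name is a splitmuxsink sink pad and
the value is the muxer pad it should feed (check the splitmuxsink
documentation for the exact property name and field orientation):

    #include <gst/gst.h>

    static void
    map_splitmux_pads (GstElement * splitmuxsink)
    {
      GstStructure *map;

      /* feed splitmuxsink's "video" pad into the muxer's "video_0"
       * pad, and "audio_0" into "audio_7" */
      map = gst_structure_new ("x-pad-map",
          "video", G_TYPE_STRING, "video_0",
          "audio_0", G_TYPE_STRING, "audio_7", NULL);

      g_object_set (splitmuxsink, "muxer-pad-map", map, NULL);
      gst_structure_free (map);
    }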
RTP and RTCP packets can be muxed together on the same channel (see
RFC 5761) and can arrive in the same buffer list.
The GStreamer rtpsession element supports RFC 5761, so add a test to
cover this case for buffer lists too.
Buffers with different timestamps (e.g. packets belonging to
different frames) can arrive together in the same buffer list.
Add a test to cover this case.
When a new source fails to pass the probation period (i.e. new packets
have non-consecutive sequence numbers), then no buffer shall be pushed
downstream. Add a test to validate this case.
Add a test to verify that stats about received packets are correct when
using buffer lists in the rtpsession receive path.
Split get_session_source_stats() in two to be able to get stats from
a GstRtpSession object directly.
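For reference, receive stats can also be read directly from the
rtpsession element via its "stats" property; the "source-stats" field
it contains holds one GstStructure per source:

    #include <gst/gst.h>

    static void
    print_session_stats (GstElement * rtpsession)
    {
      GstStructure *stats = NULL;

      /* "stats" returns a copy of the session statistics */
      g_object_get (rtpsession, "stats", &stats, NULL);
      if (stats != NULL) {
        gst_print ("session stats: %" GST_PTR_FORMAT "\n", stats);
        gst_structure_free (stats);
      }
    }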
The primary video stream is used to select fragment cut points
at keyframe boundaries. Auxiliary video streams may be
broken up at any packet - so fragments may not start with a keyframe
for those streams.
If the conflict is detected when sending a packet, then also send an
upstream event to tell the source to reconfigure itself.
Also, ignore the collision if we see more than one collision from the
same remote source, to avoid problems with loops.
Add a new property "do-aggregate"* to the H.264 RTP payloader which
enables STAP-A aggregation as per [RFC 6184][1]. With aggregation
enabled, packets are bundled instead of sent immediately, up to the
MTU size. Bundles also end at access-unit boundaries or when packets
have to be fragmented.
*: The property name is kept generic since it might apply more widely,
e.g. to STAP-B or MTAP.
[1]: https://tools.ietf.org/html/rfc6184#section-5.7
Closes https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/issues/434
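A minimal pipeline sketch exercising the new property (assumes the
x264enc element is available; any H.264 encoder would do):

    #include <gst/gst.h>

    static GstElement *
    build_aggregating_payloader (void)
    {
      GError *error = NULL;

      /* with do-aggregate=true, small NAL units are bundled into
       * STAP-A packets up to the MTU instead of sent one per packet */
      GstElement *pipeline = gst_parse_launch (
          "videotestsrc is-live=true ! x264enc tune=zerolatency "
          "! rtph264pay do-aggregate=true mtu=1400 "
          "! udpsink host=127.0.0.1 port=5004", &error);

      if (error != NULL) {
        g_printerr ("failed to build pipeline: %s\n", error->message);
        g_clear_error (&error);
      }
      return pipeline;
    }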
In the case of reordered packets, calculating skew would cause PTS
values to be off. Only calculate skew when packets come in as
expected. Also, late RTX packets should not trigger clock-skew
adjustments.
Fixes #612
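A sketch of the guard this describes (illustrative, not the actual
jitterbuffer code):

    #include <glib.h>

    static gboolean
    should_calculate_skew (guint16 seqnum, guint16 expected_seqnum,
        gboolean is_rtx)
    {
      /* a retransmitted packet's arrival time says nothing about the
       * sender's clock */
      if (is_rtx)
        return FALSE;

      /* only packets arriving in the expected order feed the skew
       * estimator */
      return seqnum == expected_seqnum;
    }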
If, say, an RTX packet arrives really late, this can have a dramatic
effect on the jitterbuffer's clock-skew logic, causing it to be reset
and lose track of the current dts-to-pts calculations, directly
affecting the packets that arrive later.
This is demonstrated in the test, where an RTX packet is pushed in
really late; without this patch the last packet will have its PTS
affected by this, whereas a late RTX packet should be redundant
information and not affect anything.
This patch corrects the delay set on EXPECTED timers that are added
when processing gaps. Previously the delay could be so small that
'timeout + delay' was much less than 'now', causing the following
retries to be scheduled too early. (They were sent earlier than
rtx-retry-timeout after the previous timeout.)
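A sketch of the correction (variable names are illustrative, not the
actual jitterbuffer code):

    #include <gst/gst.h>

    /* When an EXPECTED timer is created while filling a gap, its
     * timeout may lie well in the past.  Without an adjusted delay,
     * the first retry fires at 'timeout + delay', long before 'now',
     * and every following retry is early too.  Clamping anchors the
     * retry schedule to 'now'. */
    static GstClockTime
    expected_timer_delay (GstClockTime timeout, GstClockTime delay,
        GstClockTime now)
    {
      if (timeout + delay < now)
        delay = now - timeout;  /* so that timeout + delay == now */

      return delay;
    }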
It turns out that the "big-gap" logic of the jitterbuffer has been
horribly broken.
For people using lost events, an RTP stream with a gap in sequence
numbers would produce exactly that many lost events immediately.
So if your sequence numbers jumped by 20000, you would get 20000 lost
events in your pipeline...
The test that covers this logic, "test_push_big_gap", basically
incremented the DTS of each buffer by an amount equal to the gap that
was introduced, so that in fact this was more of a "large pause" test
than an actual gap/discontinuity in the sequence numbers.
Once the test was modified to not increment the DTS (buffer arrival
time) by a similar gap, all sorts of craziness started happening,
including the addition of thousands of timers, and the logic that
should have kicked in, the "handle_big_gap_buffer" logic, was not
called at all. Why?
Because max_dropout is calculated using the packet rate, and the
packet-rate logic would, in this particular test, report that the new
packet rate was over 400000 packets per second!
I believe the right fix is to not try to update the packet rate if
there is a jump in the sequence numbers, and to only do these
calculations for nice, sequential streams.
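A sketch of that fix (illustrative, not the actual code): guard the
packet-rate estimator (the RTPPacketRateCtx behind max_dropout) so it
is only fed by strictly consecutive packets:

    #include <glib.h>

    static gboolean
    should_update_packet_rate (guint16 seqnum, guint16 last_seqnum)
    {
      /* guint16 arithmetic wraps naturally, so this also holds across
       * a seqnum rollover: only the next consecutive packet updates
       * the estimator; a 20000-seqnum jump never does */
      return (guint16) (seqnum - last_seqnum) == 1;
    }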
sethostent() seems to use global state, and we end up with leaks from
that API when called through shout_init(). We had the option of only
ignoring the shout case, but the impression is that if we have shout
and another sethostent() user, as it's global state, we may end up
with a different stack trace for the same leak. So in the end, we just
ignore memory allocated by sethostent() in general.
In this change we now protect the internal srcpads list using the
stream lock, and limit usage of the internal stream lock to preventing
data from flowing on the other src pad type while creating and
signalling the new pad.
This fixes a deadlock with the RTPBin shutdown lock. These two locks
would end up being taken in different orders, which caused a deadlock.
More generally, we should not rely on a stream lock when handling
out-of-band data, so as a side effect, we should not take a stream
lock when iterating internal links.
We recently added code to remove outdated NACKs to avoid using
bandwidth for packets that have no chance of arriving on time. Though,
this had a side effect: it was possible to send an early RTCP packet
with no feedback in it. That was pretty useless, but it also meant
that the RTX RTT value would never be updated. So once we started
having late RTX requests due to a high RTT, we'd never manage to
recover.
This fixes the regression by making sure we keep at least one NACK in
this situation. This is really light on the bandwidth and allows for a
quick recovery after the RTT has spiked higher than the jitterbuffer
capacity.
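A sketch of the idea (the types and fields are illustrative): when
pruning NACKs whose deadline has passed, always leave at least one, so
the early RTCP packet still carries feedback and the RTX RTT keeps
being measured:

    #include <gst/gst.h>

    typedef struct
    {
      guint16 seqnum;
      GstClockTime deadline;
    } Nack;

    static void
    prune_outdated_nacks (GQueue * nacks, GstClockTime now)
    {
      /* keep at least one entry, even if it is already late */
      while (g_queue_get_length (nacks) > 1) {
        Nack *oldest = g_queue_peek_head (nacks);

        if (oldest->deadline >= now)
          break;

        g_free (g_queue_pop_head (nacks));
      }
    }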