When we have a large number of missing packets, generate one lost event for all
the packets that have no chance of being pushed out in time.
Fix and activate unit test for large gaps.
Keep track of the current time in the timeout loop.
Loop over all timers and trigger all the expired ones; we can do this in the
same loop that selects the new best timer.
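A rough sketch of what that combined scan could look like, assuming the timers
live in a plain GArray; TimerData, its fields and do_timeout() are illustrative
names, not the actual jitterbuffer code:

  #include <gst/gst.h>

  typedef struct { guint16 seqnum; GstClockTime timeout; } TimerData;

  static void do_timeout (TimerData * timer);   /* fire the lost/deadline/eos action */

  /* walk all timers once: trigger the expired ones and remember the
   * earliest pending one as the next timer to wait on */
  static TimerData *
  check_timers (GArray * timers, GstClockTime now)
  {
    TimerData *best = NULL;
    guint i;

    for (i = 0; i < timers->len; i++) {
      TimerData *t = &g_array_index (timers, TimerData, i);

      if (t->timeout == GST_CLOCK_TIME_NONE)
        continue;                 /* no deadline set for this timer */
      if (t->timeout <= now)
        do_timeout (t);           /* already expired: handle it right away */
      else if (best == NULL || t->timeout < best->timeout)
        best = t;                 /* candidate for the next wait */
    }
    return best;
  }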
Also update the timers when retransmission is disabled. We need to
do this because we add LOST timers when we detect missing packets, and
we need to remove those timers when the packet finally arrives.
We keep the DTS and PTS in running-time inside the jitterbuffer. Make sure to
transform them back to buffer timestamps before pushing out the buffer.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=707931
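For reference, both directions of that mapping are available as GstSegment
helpers; the two wrappers below are only a sketch of the idea (in newer
GStreamer, gst_segment_position_from_running_time() replaces
gst_segment_to_position()):

  #include <gst/gst.h>

  /* store timestamps as running-time internally */
  static GstClockTime
  to_running_time (const GstSegment * segment, GstClockTime ts)
  {
    return gst_segment_to_running_time (segment, GST_FORMAT_TIME, ts);
  }

  /* map the running-time back to a timestamp that is valid in the current
   * segment, right before pushing the buffer downstream */
  static GstClockTime
  to_buffer_timestamp (const GstSegment * segment, GstClockTime rt)
  {
    return gst_segment_to_position (segment, GST_FORMAT_TIME, rt);
  }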
Restructure the handling of the incoming packet and of the gap with the
expected seqnum, and register all timers from the _chain function.
Convert a timer to a LOST packet timer when the max amount of retransmission
requests has been reached.
If we change the seqnum of an existing timer and we were waiting for
that timer, unschedule it. If we change the timeout of an existing timer and we
were waiting on it, only unschedule when the new time is smaller.
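Spelled out, the rule is roughly the following; waiting_timer,
unschedule_wait() and the JBufPriv type are made-up names for illustration:

  static void
  reschedule_timer (JBufPriv * priv, TimerData * timer,
      guint16 seqnum, GstClockTime timeout)
  {
    gboolean seqchange = (timer->seqnum != seqnum);
    gboolean earlier = (timeout < timer->timeout);

    timer->seqnum = seqnum;
    timer->timeout = timeout;

    if (timer != priv->waiting_timer)
      return;

    if (seqchange) {
      /* the packet this timer refers to changed, restart the wait */
      unschedule_wait (priv);
    } else if (earlier) {
      /* only interrupt the wait when the deadline moved closer; a later
       * deadline is picked up after the current wait finishes anyway */
      unschedule_wait (priv);
    }
  }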
Make the jitterbuffer schedule the timeouts based on the DTS instead
of the PTS. This makes it all smoother with reordered frames and gives
the decoder time to reorder the frames in time.
Refactor the jitterbuffer code. Make separate function for peeking a buffer,
pushing the next buffer, waiting for timeouts and handling the timeouts.
The main loop now tries to push as many buffers as it can until it runs out of
buffers or detects a seqnum discont. Then it waits for some event to happen
before attempting to push more buffers.
Make methods to register timeouts in an array. These timeouts are registered
when we detect a missing packet, when we sync for the first packet, or when we
find an estimate for the end-of-stream.
This greatly simplifies and clarifies the code and also makes it possible to
register more complicated timeout schemes later.
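The resulting loop then has roughly this shape; pop_and_push_next_buffer()
and wait_and_handle_timeouts() are placeholders for the separate functions
mentioned above:

  static void
  jitterbuffer_loop (JBufPriv * priv)
  {
    while (priv->running) {
      /* push until we run out of buffers or hit a seqnum discont */
      while (pop_and_push_next_buffer (priv))
        ;
      /* then block until a timer expires or new input wakes us up,
       * and handle whatever timeouts have fired */
      wait_and_handle_timeouts (priv);
    }
  }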
Don't throw away the first RTCP packet if it arrives before the first
RTP packet but remember and use it to signal sync once we get the
RTP packet.
See https://bugzilla.gnome.org/show_bug.cgi?id=691400
Move the code that combines the last SR packet and the current jitterbuffer sync
values into a sync structure, into its own function. We want to reuse this bit
later.
Consider the scenario where you have a gap of, say, 10 seconds (500 packets
with a duration of 20ms each) in a steady flow of packets: the jitterbuffer
will idle until it receives the first buffer after the gap, but will
then go on to produce 499 lost-events to "cover up" the gap.
Now this is obviously wrong, since the last possible time for the earliest
lost-events to be played out has long since expired, but the fact that
the jitterbuffer has a "length", represented by its own latency combined
with the total latency downstream, allows for covering up at least some
of this gap.
So in the case of the "length" being 200ms, while having received packet
500, the jitterbuffer should still create a timeout for packet 491, which
will have its time expire at 10.02 seconds, especially since it might
actually arrive in time! But obviously, waiting for packet 100, which had
its time expire at 2 seconds (remembering that the current time is 10),
is useless...
The patch will create one "big" lost-event for the first 490 packets,
and then go on to create single ones if they can reach their
playout deadline.
See https://bugzilla.gnome.org/show_bug.cgi?id=667838
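With the numbers from the example (20ms packets, a 200ms "length", current
time at 10 seconds), the cutoff works out as sketched below; the variable
names are only illustrative:

  GstClockTime now = 10 * GST_SECOND;         /* running time of packet 500 */
  GstClockTime latency = 200 * GST_MSECOND;   /* jitterbuffer "length"      */
  GstClockTime duration = 20 * GST_MSECOND;   /* per-packet duration        */
  guint missing = 499;                        /* seqnums 1..499 are missing */

  /* packets with timestamp <= now - latency can no longer be played out:
   * (10s - 200ms) / 20ms = 490 packets go into the single big lost-event,
   * the remaining 9 still get individual timers */
  guint too_late = MIN (missing, (guint) ((now - latency) / duration));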
Add private replacements for deprecated functions such as
g_mutex_new(), g_mutex_free(), g_cond_new() etc., mostly
to avoid the deprecation warnings. We'll change these
over to the new API once we depend on glib >= 2.32.
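One possible shape for such a wrapper pair (the names and version check here
are just an example of the approach, not necessarily the actual code):

  #include <glib.h>

  static GMutex *
  priv_g_mutex_new (void)
  {
  #if GLIB_CHECK_VERSION (2, 32, 0)
    GMutex *mutex = g_slice_new (GMutex);
    g_mutex_init (mutex);
    return mutex;
  #else
    return g_mutex_new ();
  #endif
  }

  static void
  priv_g_mutex_free (GMutex * mutex)
  {
  #if GLIB_CHECK_VERSION (2, 32, 0)
    g_mutex_clear (mutex);
    g_slice_free (GMutex, mutex);
  #else
    g_mutex_free (mutex);
  #endif
  }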
... to at least having it trigger a/v synchronization, possibly without
using provided values which are still not considered sane
(as previously dropped).
GCC 4.6.x spits warnings about variables that are unused but set. Such
variables have been removed where trivial but with comments left behind
for informational purposes in some cases.
gst_rtp_session_chain_recv_rtcp () was changed in commit 490113d4
to always return GST_FLOW_OK instead of the return value of
rtp_session_process_rtcp (), so we'll keep it that way.
1) We need to lock and get a strong ref to the parent, if still there.
2) If it has gone away, we need to handle that gracefully.
This is necessary in order to safely modify a running pipeline. This has been
observed when a streaming thread is doing a buffer_alloc() while an
application thread sends an event on a pad further downstream, and from
within a pad probe (holding STREAM_LOCK) carries out the pipeline plumbing
while the streaming thread has its buffer_alloc() in progress.
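The pattern boils down to something like this sketch; process_with_parent()
is a stand-in for whatever the callback actually does with the parent:

  #include <gst/gst.h>

  static gboolean process_with_parent (GstObject * parent);

  static gboolean
  call_with_parent (GstPad * pad)
  {
    GstObject *parent;
    gboolean ret;

    /* 1) lock and take a strong ref to the parent, if it is still there */
    GST_OBJECT_LOCK (pad);
    parent = GST_OBJECT_PARENT (pad);
    if (parent)
      gst_object_ref (parent);
    GST_OBJECT_UNLOCK (pad);

    /* 2) the parent may already be gone; bail out gracefully */
    if (parent == NULL)
      return FALSE;

    ret = process_with_parent (parent);
    gst_object_unref (parent);

    return ret;
  }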