The problem here was that the jitterbuffer lock was unlocked to push
the event, but that caused another thread to remove the timer currently
being processed, probably because the number of rtx events
(and therefore timers) was getting too high. The solution is to
unlock and push the event only after timer processing has finished.
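Roughly, the fix looks like the following sketch (JBUF_LOCK/JBUF_UNLOCK and
process_timer() are stand-ins for the real jitterbuffer internals, not the
actual code):

    JBUF_LOCK (priv);
    /* finish all work on the timer while still holding the lock, so no
     * other thread can remove or reschedule it underneath us */
    event = process_timer (jitterbuffer, timer);
    JBUF_UNLOCK (priv);

    /* only now, with timer processing done, push the resulting event */
    if (event != NULL)
      gst_pad_push_event (priv->srcpad, event);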
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=711131
When flush-stop arrives before we process the result of the _push() in the
loop function, we might pause even though we are not flushing anymore. Fix this
race by waiting for the srcpad loop function to completely pause after doing the
flush-start.
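In the sink-pad event handler this amounts to something like the sketch below;
gst_rtp_jitter_buffer_flush_start() stands in for whatever does the internal
flushing:

    case GST_EVENT_FLUSH_START:
      ret = gst_pad_push_event (priv->srcpad, event);
      gst_rtp_jitter_buffer_flush_start (jitterbuffer);
      /* gst_pad_pause_task() takes the stream lock, so it only returns
       * once the srcpad loop function has completely paused */
      gst_pad_pause_task (priv->srcpad);
      break;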
If we are not waiting for the missing seqnum when we insert the lost packet
event in the jitterbuffer, we end up not updating the next_seqnum and wait
forever for the lost packets to arrive. Instead, keep track of the number of
packets contained by the jitterbuffer item and update the next expected
seqnum only after pushing the buffer/event. This makes sure we correctly
handle gaps in the sequence numbers.
Always prepare a lost event in the jitterbuffer; it is used to wake up the
pushing thread and make it continue. We drop the event when we are not
supposed to push lost events downstream.
Schedule the lost event by placing it inside the jitterbuffer with the seqnum
that was lost so that the pushing thread can interleave and push it properly.
Make the jitterbuffer operate on a structure containing all the packet
information. This avoids mapping the buffer multiple times just to get the RTP
information. It will also make it possible to store other miniobjects such as
events later.
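The item could look roughly like this (field names are illustrative, not
necessarily the exact ones used):

    typedef struct
    {
      GstMiniObject *data;      /* a GstBuffer now, possibly a GstEvent later */
      GstClockTime   dts, pts;  /* decode/presentation time of the packet */
      guint          seqnum;    /* RTP sequence number, parsed once */
      guint          rtptime;   /* RTP timestamp from the header */
      guint          count;     /* number of packets this item represents */
    } RtpJitterBufferItemSketch;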
Improve the order of the timeout events: if there are timers with the same
timeout, we want to trigger the lowest seqnum first. For this we need to loop
over the complete array of timers to find the best one before triggering the
timeout.
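A sketch of that selection loop (names hypothetical, seqnum wrap-around
ignored for brevity):

    TimerData *best = NULL;
    guint i;

    for (i = 0; i < priv->timers->len; i++) {
      TimerData *test = &g_array_index (priv->timers, TimerData, i);

      if (best == NULL || test->timeout < best->timeout ||
          (test->timeout == best->timeout && test->seqnum < best->seqnum))
        best = test;
    }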
First send the lost event, then update the next_seqnum counter and then
send the signal to the pushing thread that it can retry to push a buffer. This
avoids pushing out buffers before the lost event is pushed.
There is no need to unschedule the timer in flush-start, flush-stop will remove
the timers and unschedule.
Unschedule the current timer before attempting to join the timer thread.
Keep a separate delay in the timer so that we still know the original timestamp
of the packet that this timer refers to. We can then place the correct
running-time in the Retransmission event.
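For example (hypothetical field names), with the delay stored separately the
original packet time can be recovered when the event is created:

    timer->timeout   = packet_running_time + rtx_delay;
    timer->rtx_delay = rtx_delay;

    /* ... later, when the timer fires ... */
    running_time = timer->timeout - timer->rtx_delay;
    event = gst_event_new_custom (GST_EVENT_CUSTOM_UPSTREAM,
        gst_structure_new ("GstRTPRetransmissionRequest",
            "seqnum", G_TYPE_UINT, (guint) timer->seqnum,
            "running-time", G_TYPE_UINT64, running_time, NULL));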
Instead of pushing the lost event from the chain function, schedule a timeout
that will push the lost event from the timer thread. This avoids blocking the
upstream thread while we push and sync the event.
When we have a large number of missing packets, generate one lost event for all
the packets that have no chance of being pushed out in time.
Fix and activate unit test for large gaps.
Keep track of the current time in the timeout loop.
Loop over all timers and trigger all the expired ones; we can do this in the
same loop that selects the new best timer.
Also update the timers when retransmission is disabled. We need to do this
because we add LOST timers when we detect missing packets, and we need to
remove those timers when the packet finally arrives.
We keep the DTS and PTS in running-time inside the jitterbuffer. Make sure to
transform it back to a buffer timestamp before pushing out the buffer.
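A sketch of the conversion on the way out, assuming priv->segment is the
segment that produced the stored running-times:

    GST_BUFFER_DTS (outbuf) =
        gst_segment_to_position (&priv->segment, GST_FORMAT_TIME, item->dts);
    GST_BUFFER_PTS (outbuf) =
        gst_segment_to_position (&priv->segment, GST_FORMAT_TIME, item->pts);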
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=707931
Restructure the handling of incoming packets and the gap with the expected seqnum
and register all timers from the _chain function.
Convert a timer to a LOST packet timer when the maximum number of retransmission
requests has been reached.
If we change the seqnum of an existing timer and we were waiting for
that timer, unschedule it. If we change the timeout of an existing timer and we
were waiting on it, only unschedule when the new time is smaller.
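In a sketch (hypothetical names), the rule boils down to:

    if (priv->clock_id != NULL) {
      /* the timer thread is sleeping on this timer and its seqnum changed:
       * what we are waiting for no longer exists, so unschedule */
      if (seqchange && priv->timer_seqnum == oldseq)
        unschedule_current_timer (jitterbuffer);
      /* only the timeout changed: waking up is only useful when the new
       * timeout is earlier than the one we are currently sleeping on */
      else if (timechange && timeout < priv->timer_timeout)
        unschedule_current_timer (jitterbuffer);
    }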
Make the jitterbuffer schedule the timeouts based on the DTS instead
of the PTS. This makes it all smoother with reordered frames and gives
the decoder time to reorder the frames in time.
Refactor the jitterbuffer code. Make separate functions for peeking a buffer,
pushing the next buffer, waiting for timeouts and handling the timeouts.
The main loop now tries to push as many buffers as it can until it runs out of
buffers or detects a seqnum discont. Then it will wait for some event to
happen before attempting to push more buffers.
Make methods to register timeouts in an array. These timeouts are registered
when we detect a missing packet, when we sync for the first packet, or when we
find an estimate for the end-of-stream.
This greatly simplifies and clarifies the code and also makes it possible to
register more complicated timeout schemes later.
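A sketch of what such a registration helper could look like, with the timers
kept in a GArray (types and names illustrative):

    typedef enum
    {
      TIMER_TYPE_EXPECTED,          /* a missing packet we still expect */
      TIMER_TYPE_LOST,              /* a packet we will declare lost */
      TIMER_TYPE_DEADLINE,          /* sync deadline for the first packet */
      TIMER_TYPE_EOS                /* estimated end-of-stream */
    } TimerType;

    typedef struct
    {
      TimerType    type;
      guint16      seqnum;
      GstClockTime timeout;
    } TimerData;

    static TimerData *
    add_timer (GArray * timers, TimerType type, guint16 seqnum,
        GstClockTime timeout)
    {
      TimerData timer = { type, seqnum, timeout };

      g_array_append_val (timers, timer);
      return &g_array_index (timers, TimerData, timers->len - 1);
    }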
Don't throw away the first RTCP packet if it arrives before the first
RTP packet, but remember it and use it to signal sync once we get the
RTP packet.
See https://bugzilla.gnome.org/show_bug.cgi?id=691400
Move the code that combines the last SR packet and the current jitterbuffer
sync values into a sync structure out into its own function. We want to reuse
this bit later.
In the scenario where you have a gap in a steady flow of packets of,
say, 10 seconds (500 packets with a duration of 20ms), the jitterbuffer
will idle until it receives the first buffer after the gap, but will
then go on to produce 499 lost-events to "cover up" the gap.
Now this is obviously wrong, since the last possible time for the earliest
lost-events to be played out has obviously expired, but the fact that
the jitterbuffer has a "length", represented by its own latency combined
with the total latency downstream, allows for covering up at least some
of this gap.
So in the case of the "length" being 200ms, while having received packet
500, the jitterbuffer should still create a timeout for packet 491, which
will have its time expire at 10.02 seconds, especially since it might
actually arrive in time! But obviously, waiting for packet 100, which had
its time expire at 2 seconds (remembering that the current time is 10),
is useless...
The patch will create one "big" lost-event for the first 490 packets,
and then go on to create single ones if they can reach their
playout deadline.
See https://bugzilla.gnome.org/show_bug.cgi?id=667838
Add private replacements for deprecated functions such as
g_mutex_new(), g_mutex_free(), g_cond_new() etc., mostly
to avoid the deprecation warnings. We'll change these
over to the new API once we depend on glib >= 2.32.
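Such a replacement can be as simple as a private header that falls back to the
old API on older GLib; a sketch (names and layout hypothetical):

    #include <glib.h>

    #if GLIB_CHECK_VERSION (2, 31, 0)
    static inline GMutex *
    priv_g_mutex_new (void)
    {
      GMutex *mutex = g_slice_new (GMutex);
      g_mutex_init (mutex);
      return mutex;
    }

    static inline void
    priv_g_mutex_free (GMutex * mutex)
    {
      g_mutex_clear (mutex);
      g_slice_free (GMutex, mutex);
    }
    #else
    #define priv_g_mutex_new()    g_mutex_new ()
    #define priv_g_mutex_free(m)  g_mutex_free (m)
    #endif
    /* g_cond_new()/g_cond_free() etc. get the same treatment */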
... to at least having it trigger a/v synchronization, possibly without
using provided values which are still not considered sane
(as previously dropped).
GCC 4.6.x spits warnings about variables that are unused but set. Such
variables have been removed where trivial but with comments left behind
for informational purposes in some cases.
gst_rtp_session_chain_recv_rtcp () was changed in commit 490113d4
to always return GST_FLOW_OK instead of the return value of
rtp_session_process_rtcp (), so we'll keep it that way.
1) We need to lock and get a strong ref to the parent, if still there.
2) If it has gone away, we need to handle that gracefully.
This is necessary in order to safely modify a running pipeline. This has been
observed when a streaming thread is doing a buffer_alloc() while an application
thread sends an event on a pad further downstream and, from within a pad probe
(holding the STREAM_LOCK), carries out the pipeline plumbing while the
streaming thread's buffer_alloc() is still in progress.
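In a pad function this boils down to roughly the following (sketch, 0.10-era
API; the flow return and comments are illustrative):

    GstObject *parent;

    parent = gst_pad_get_parent (pad);   /* (1) takes a strong reference */
    if (parent == NULL) {
      /* (2) the element already went away, bail out gracefully */
      return GST_FLOW_WRONG_STATE;
    }

    /* ... safe to dereference the parent from here on ... */

    gst_object_unref (parent);
    return ret;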
Since we are using the clock for sync, we also need to provide a clock for good
measure. The reason is that even if downstream elements provide a clock, we
don't want to have that clock selected because it might not be running yet.
When using RTP_JITTER_BUFFER_MODE_BUFFER, make sure that the ringbuffer doesn't
get stuck buffering forever when there isn't enough data left to fill the
buffer.
Set ->active to TRUE in _init so it can be set to FALSE after creating the
jitterbuffer and it won't be mistakenly reset to TRUE in the change_state
function.
This is needed to start the jitterbuffer as inactive when rtpbin is buffering.