They are very confusing for people, and more often than not
also just not very accurate. Seeing 'last reviewed: 2005' in
your docs is not very confidence-inspiring. Let's just remove
those comments.
Make a new method to disable the jitterbuffer buffering.
Rework the update_estimated_eos() method. Calculate how much time
there is left to play. If we have less than the delay of the
jitterbuffer, disable buffering because we might never be able to
fill the complete jitterbuffer again.
If we receive an EOS event, disable buffering. We will drain the
buffer and eventually push the EOS event out.
When we reach the estimated NPT timeout and we didn't receive an EOS
event, make one and queue it so that it can be pushed.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=728017
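A minimal arithmetic sketch of the "time left to play" check described
above; estimated_eos, position and latency are illustrative names, not
the element's actual fields.
#include <stdbool.h>
#include <stdint.h>
/* All values are in nanoseconds. */
static bool
should_disable_buffering (uint64_t estimated_eos, uint64_t position,
    uint64_t latency)
{
  uint64_t left;
  if (position >= estimated_eos)
    return true;                /* nothing left to play */
  left = estimated_eos - position;
  /* with less than one jitterbuffer worth of data remaining we might
   * never fill the complete jitterbuffer again, so stop buffering */
  return left < latency;
}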
Rework the buffering-message logic a little; make sure we don't
create the same message multiple times.
Consider the buffer full when EOS was received.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=728017
When we are buffering, we can't block and wait for the serialized query
to complete because the jitterbuffer will not try to forward the query
while buffering. Instead, just refuse the query.
gstrtpjitterbuffer.c: In function 'gst_rtp_jitter_buffer_loop':
gstrtpjitterbuffer.c:2978:3: error: 'result' may be used uninitialized in this function
while (result == GST_FLOW_OK);
^
Several conditional statements perform comparison on RTP sequence
numbers without taking the sequence number rollover into account.
Instead, use the gst_rtp_buffer_compare_seqnum function to perform the
comparison.
https://bugzilla.gnome.org/show_bug.cgi?id=725159
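For illustration, a wraparound-safe comparison in the spirit of
gst_rtp_buffer_compare_seqnum(); this sketch is not the library's
implementation.
#include <stdint.h>
/* > 0 if seqnum2 comes after seqnum1, 0 if equal, < 0 if it comes
 * before; the signed 16-bit difference makes 65535 -> 0 count as
 * "one packet later" instead of a huge jump backwards. */
static int
seqnum_compare (uint16_t seqnum1, uint16_t seqnum2)
{
  return (int16_t) (seqnum2 - seqnum1);
}
/* seqnum_compare (65533, 2) ->  5   (2 is later, despite the rollover)
 * seqnum_compare (2, 65533) -> -5
 * a plain (seqnum1 < seqnum2) check gets both of these wrong */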
If the expected packet (do_next_seqnum is TRUE) is the one we requested
for retransmission earlier, also update the retransmission statistics
before setting up the timers for the next expected packet.
Also reset the retransmission counter if the timer is reused for another
seqnum.
Use the round-trip-time and average jitter to dynamically calculate the
retransmission interval and expected packet arrival time.
Based on patches from Torrie Fischer <torrie.fischer@collabora.co.uk>
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=711412
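A sketch of how such a dynamic interval could be derived; the "RTT plus
two times the jitter" margin is an assumption for illustration, not
necessarily the element's exact formula.
#include <stdint.h>
/* Pick how long to wait before (re)requesting a missing packet: one
 * round trip for the NACK and the retransmission to travel, plus some
 * headroom for jitter. Times in nanoseconds, names are illustrative. */
static uint64_t
rtx_retry_timeout_ns (uint64_t avg_rtt, uint64_t avg_jitter)
{
  return avg_rtt + 2 * avg_jitter;
}
/* The expected arrival time of a missing packet is then roughly its
 * estimated timestamp plus this timeout. */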
Don't use the current time calculated from the timeout loop for when we
last scheduled the NACK, because the timer might get unscheduled because
of a max packet misorder and then we wouldn't calculate the current time
accurately. Instead, take the current element running time using the clock.
Don't reset the expected output seqnum when clearing the pt map because this
could stall the jitterbuffer forever.
Add a unit test for this.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=709800
The problem here was that the jitterbuffer lock was unlocked to push
the event, but that caused another thread to remove the timer currently
being processed, probably because the amount of rtx events
(and therefore timers) was getting too high. The solution is to
unlock and push the event only after timer processing has finished.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=711131
When flush-stop arrives before we process the result of the _push() in the
loop function, we might pause even though we are not flushing anymore. Fix this
race by waiting for the srcpad loop function to completely pause after doing the
flush-start.
If we were not waiting for the missing seqnum when we insert the lost packet
event in the jitterbuffer, we end up not updating the next_seqnum and wait
forever for the lost packets to arrive. Instead, keep track of the amount of
packets contained by the jitterbuffer item and update the next expected
seqnum only after pushing the buffer/event. This makes sure we correctly handle
GAPS in the sequence numbers.
Always prepare a lost event in the jitterbuffer; it is used to wake up
the pushing thread and make it continue. We drop the event when we are
not supposed to push lost events downstream.
Schedule the lost event by placing it inside the jitterbuffer with the seqnum
that was lost so that the pushing thread can interleave and push it properly.
Make the jitterbuffer operate on a structure containing all the packet
information. This avoids mapping the buffer multiple times just to get the RTP
information. It will also make it possible to store other miniobjects such as
events later.
Improve the order of the timeout events, if there are timers with the same
timeout, we want to trigger the lowest seqnum first. For this we need to loop
over the complete array of timers to find the best one before triggering the
timeout.
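A sketch of that selection loop (illustrative types); the tie-break uses
a wraparound-safe seqnum comparison.
#include <stdint.h>
typedef struct { uint16_t seqnum; uint64_t timeout; } Timer;
/* Return the index of the timer to trigger first: earliest timeout,
 * and for equal timeouts the lowest (earliest) seqnum. */
static int
find_best_timer (const Timer * timers, int n_timers)
{
  int i, best = -1;
  for (i = 0; i < n_timers; i++) {
    if (best == -1 ||
        timers[i].timeout < timers[best].timeout ||
        (timers[i].timeout == timers[best].timeout &&
            (int16_t) (timers[best].seqnum - timers[i].seqnum) > 0))
      best = i;
  }
  return best;
}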
First send the lost event, then update the next_seqnum counter and then
send the signal to the pushing thread that it can retry to push a buffer. This
avoids pushing out buffers before the lost event is pushed.
There is no need to unschedule the timer in flush-start, flush-stop will remove
the timers and unschedule.
Unschedule the current timer before attempting to join the timer thread.
Keep a separate delay in the timer so that we still know the original timestamp
of the packet that this timer refers to. We can then place the correct
running-time in the Retransmission event.
Instead of pushing the lost event from the chain function, schedule a timeout
that will push the lost event from the timer thread. This avoids blocking the
upstream thread while we push and sync the event.
When we have a large number of missing packets, generate one lost event for all
the packets that have no chance of being pushed out in time.
Fix and activate unit test for large gaps.
Keep track of the current time in the timeout loop.
Loop over all timers and trigger all the expired ones, we can do this in the
same loop that selects the new best timer.
Also update the timers when retransmission is disabled. We need to
do this because we add LOST timers when we detect missing packets, and
we need to remove those timers when the packet finally arrives.
We keep the DTS and PTS in running-time inside the jitterbuffer. Make sure to
transform it back to a buffer timestamp before pushing out the buffer.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=707931
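For the simple case (rate 1.0, no offsets) the relation is just a shift;
a sketch with hypothetical names, noting that the real code should go
through the GstSegment API to cover all cases.
#include <stdint.h>
/* running-time = position - segment.start + segment.base (rate 1.0),
 * so converting a stored running-time back into a buffer timestamp is
 * the inverse shift. Illustrative only. */
static uint64_t
running_time_to_timestamp (uint64_t running_time, uint64_t seg_start,
    uint64_t seg_base)
{
  return running_time - seg_base + seg_start;
}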
Restructure the handling of an incoming packet and the gap with the
expected seqnum, and register all timers from the _chain function.
Convert a timer to a LOST packet timer when the max amount of retransmission
requests has been reached.
If we change the seqnum of an existing timer and we were waiting for
that timer, unschedule it. If we change the timeout of an existing timer and we
were waiting on it, only unschedule when the new time is smaller.
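A sketch of that rescheduling rule (hypothetical names and types).
#include <stdint.h>
typedef struct { uint16_t seqnum; uint64_t timeout; } RtxTimer;
/* Update an existing timer and decide whether the thread currently
 * waiting on it has to be woken up: always when the seqnum changed,
 * but for a pure timeout change only when the new time is earlier. */
static int
reschedule_timer (RtxTimer * timer, int is_waited_on,
    uint16_t new_seqnum, uint64_t new_timeout)
{
  int seq_changed = (timer->seqnum != new_seqnum);
  int timeout_earlier = (new_timeout < timer->timeout);
  timer->seqnum = new_seqnum;
  timer->timeout = new_timeout;
  return is_waited_on && (seq_changed || timeout_earlier);
}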
Make the jitterbuffer schedule the timeouts based on the DTS instead
of the PTS. This makes it all smoother with reordered frames and gives
the decoder time to reorder the frames in time.
Refactor the jitterbuffer code. Make separate function for peeking a buffer,
pushing the next buffer, waiting for timeouts and handling the timeouts.
The main loop now tries to push as many buffers as it can until it runs out of
buffers or when it detects a seqnum discont. Then it will wait for some event to
happen before attempting to push more buffers.
Make methods to register timeouts in an array. These timeouts are registered
when we detect a missing packet, sync for the first packet or when we find an
estimation for the end-of-stream.
This greatly simplifies and clarifies the code and also makes it possible to
register more complicated timeout schemes later.
Don't throw away the first RTCP packet if it arrives before the first
RTP packet but remember and use it to signal sync once we get the
RTP packet.
See https://bugzilla.gnome.org/show_bug.cgi?id=691400
Move the code that combines the last SR packet and the current
jitterbuffer sync values into a sync structure out into its own
function. We want to reuse this bit later.
Consider the scenario where you have a gap of say 10 seconds (500
packets with a duration of 20ms each) in a steady flow of packets: the
jitterbuffer will idle up until it receives the first buffer after the
gap, but will then go on to produce 499 lost-events to "cover up" the
gap.
Now this is obviously wrong, since the last possible time for the
earliest lost-events to be played out has obviously expired, but the
fact that the jitterbuffer has a "length", represented by its own
latency combined with the total latency downstream, allows for covering
up at least some of this gap.
So in the case of the "length" being 200ms, while having received packet
500, the jitterbuffer should still create a timeout for packet 491,
which will have its time expire at 10.02 seconds, especially since it
might actually arrive in time! But obviously, waiting for packet 100,
which had its time expire at 2 seconds (remembering that the current
time is 10), is useless...
The patch will create one "big" lost-event for the first 490 packets,
and then go on to create single ones if they can reach their playout
deadline.
See https://bugzilla.gnome.org/show_bug.cgi?id=667838
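A sketch of the deadline arithmetic (illustrative names, times in
nanoseconds): a missing packet still gets its own timer if its timestamp
plus the jitterbuffer "length" lies in the future; everything earlier is
covered by the single big lost event.
#include <stdint.h>
static uint32_t
packets_past_deadline (uint64_t first_missing_ts, uint64_t duration,
    uint32_t gap_packets, uint64_t length, uint64_t now)
{
  uint32_t i;
  for (i = 0; i < gap_packets; i++) {
    uint64_t deadline = first_missing_ts + i * duration + length;
    if (deadline > now)
      break;                    /* this packet can still make it in time */
  }
  return i;                     /* put these in one big lost event */
}
/* With the numbers above (20ms packets, a 200ms "length" and the
 * current time at 10 seconds) this returns 490: packets 491-499 still
 * get individual timers, the first 490 are declared lost at once. */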
Add private replacements for deprecated functions such as
g_mutex_new(), g_mutex_free(), g_cond_new() etc., mostly
to avoid the deprecation warnings. We'll change these
over to the new API once we depend on glib >= 2.32.
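One way such a private replacement could look when the new API is
available at build time; the real compatibility header is
version-guarded and the names here are hypothetical.
#include <glib.h>
/* Keep the old heap-allocating calling convention, but implement it
 * with the non-deprecated API so no deprecation warnings are emitted. */
static inline GMutex *
priv_g_mutex_new (void)
{
  GMutex *mutex = g_slice_new (GMutex);
  g_mutex_init (mutex);
  return mutex;
}
static inline void
priv_g_mutex_free (GMutex * mutex)
{
  g_mutex_clear (mutex);
  g_slice_free (GMutex, mutex);
}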
... to at least having it trigger a/v synchronization, possibly without
using provided values which are still not considered sane
(as previously dropped).
GCC 4.6.x spits warnings about variables that are unused but set. Such
variables have been removed where trivial but with comments left behind
for informational purposes in some cases.
gst_rtp_session_chain_recv_rtcp () was changed in commit 490113d4
to always return GST_FLOW_OK instead of the return value of
rtp_session_process_rtcp (), so we'll keep it that way.
1) We need to lock and get a strong ref to the parent, if still there.
2) If it has gone away, we need to handle that gracefully.
This is necessary in order to safely modify a running pipeline. Has been
observed when a streaming thread is doing a buffer_alloc() while an
application thread sends an event on a pad further downstream, and from
within a pad probe (holding STREAM_LOCK) carries out the pipeline plumbing
while the streaming thread has its buffer_alloc() in progress.
Since we are using the clock for sync, we also need to provide a clock
for good measure. The reason is that even if downstream elements provide
a clock, we don't want that clock to be selected because it might not be
running yet.
When using RTP_JITTER_BUFFER_MODE_BUFFER, make sure that the ringbuffer doesn't
get stuck buffering forever when there isn't enough data left to fill the
buffer.
Set ->active to TRUE in _init so it can be set to FALSE after creating the
jitterbuffer and it won't be mistakenly reset to TRUE in the change_state
function.
This is needed to start the jitterbuffer as inactive when rtpbin is buffering.
There is no need to set the latency in the jitterbuffer in _init, we will set
that later when going to PAUSED.
Set the jitterbuffer active and not buffering when starting.
When deactivating jitterbuffers when the buffering starts, keep the current
percent of the jitterbuffer and also set the jitterbuffer in the buffering state
so that we know when it's filled again.
Add property to get the buffering percentage of the jitterbuffer.
When we are in buffer mode, adjust the buffering low/high thresholds based on
the total configured latency. If we don't and there is a huge queue or element
with a big latency downstream we might drain the complete queue immediately and
start buffering again.
Return the next timestamp in the jitterbuffer.
Use the min-timestamp of the jitterbuffers to calculate an offset so that the
next timestamp is pushed with a timestamp equal to running_time.
Start producing timestamps from 0 in the buffering case too.
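A sketch of the offset calculation (hypothetical names): the smallest
next-timestamp across the jitterbuffers is shifted onto the current
running time, so that the first buffer after buffering goes out "now"
and, in the buffering case, timestamps effectively start from 0.
#include <stdint.h>
/* Offset to apply to all streams so that the earliest pending
 * timestamp maps onto the given running time. */
static int64_t
compute_ts_offset (uint64_t min_next_timestamp, uint64_t running_time)
{
  return (int64_t) running_time - (int64_t) min_next_timestamp;
}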
Add signal to pause the jitterbuffer. This will be emitted from gstrtpbin when
one of the jitterbuffers is buffering.
Make rtpbin collect the buffering messages and post a new buffering message with
the min value.
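A sketch of that aggregation (illustrative): the pipeline is only as
buffered as its least-filled jitterbuffer, so the posted message carries
the minimum percentage.
/* Return the lowest buffering percentage of all active jitterbuffers. */
static int
min_buffering_percent (const int *percents, int n_buffers)
{
  int i, min = 100;
  for (i = 0; i < n_buffers; i++) {
    if (percents[i] < min)
      min = percents[i];
  }
  return min;
}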
Remove the stats callback from jitterbuffer but pass a percent integer to
functions that affect the buffering state of the jitterbuffer. This allows us
then to post buffering messages from outside of the jitterbuffer lock.
Add callback for buffering stats.
Configure the latency in the jitterbuffer instead of passing it with _insert.
Calculate buffering levels when pushing and popping.
Post buffering messages.
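A sketch of such a level calculation (hypothetical names): the fill
level is the running-time span between the oldest and newest queued
packet, expressed as a percentage of the configured latency.
#include <stdint.h>
static int
buffering_percent (uint64_t oldest_rt, uint64_t newest_rt, uint64_t latency)
{
  uint64_t level, percent;
  if (latency == 0)
    return 100;                 /* no latency configured, never buffer */
  level = newest_rt - oldest_rt;  /* queued duration in nanoseconds */
  percent = level * 100 / latency;
  return percent > 100 ? 100 : (int) percent;
}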
Release the jbuf lock before emitting the request-pt-map signal to avoid
deadlocks. We also need to catch the shutdown case when locking again.
Fixes #593354
When we receive a reordered packet with the same timestamp as the previous one
(which can happen for fragmented packets) don't consider the packet as lost but
instead wait for the reordered packet to arrive.
Switch the warning level so that a reordering does not trigger a warning;
only an actually produced lost packet does.
Fixes #594251
The priv->clock_rate variable could become -1 between the moment it is
checked not to be -1 and the moment it is used, causing an assertion.
Fixed by taking the mutex earlier in the chain() function.
Fixes #593955
When we construct a timestamp that would result in a timestamp that is earlier
than when the packet was received, reset the skew calculation as this is
probably a sign that the sender restarted or paused.
Fixes #593354
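A sketch of that sanity check; the skew model here is simplified and the
names are illustrative. If the skew-corrected output time would land
before the packet's actual arrival time, the estimate cannot be right,
so it is reset.
#include <stdint.h>
/* base_time/send_diff/skew mimic a typical skew estimator; all values
 * are nanoseconds relative to the same clock. */
static int
skew_needs_reset (uint64_t base_time, uint64_t send_diff, int64_t skew,
    uint64_t arrival_time)
{
  int64_t out_time = (int64_t) (base_time + send_diff) + skew;
  /* a packet cannot be scheduled before it was received; the sender
   * probably restarted or paused, so restart the estimation */
  return out_time < (int64_t) arrival_time;
}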