If we change the seqnum of an existing timer and we were waiting for
that timer, unschedule it. If we change the timeout of an existing timer
we were waiting on, only unschedule it when the new timeout is earlier.
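A sketch of those two rules; TimerData, reschedule_timer() and
unschedule_current_timer() are illustrative names, not the actual
rtpjitterbuffer symbols:

    #include <gst/gst.h>

    typedef struct
    {
      guint16 seqnum;
      GstClockTime timeout;
    } TimerData;

    /* Hypothetical stand-in for waking up the thread waiting on the timer. */
    static void
    unschedule_current_timer (void)
    {
      /* e.g. signal the GCond that the wait loop blocks on */
    }

    static void
    reschedule_timer (TimerData * timer, guint16 seqnum, GstClockTime timeout,
        gboolean currently_waiting_on_it)
    {
      if (currently_waiting_on_it) {
        if (timer->seqnum != seqnum) {
          /* the timer now refers to a different packet: stop the wait */
          unschedule_current_timer ();
        } else if (timeout < timer->timeout) {
          /* earlier deadline for the same packet: wake up so we don't
           * oversleep; a later deadline can simply expire and be re-armed */
          unschedule_current_timer ();
        }
      }
      timer->seqnum = seqnum;
      timer->timeout = timeout;
    }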
Make the jitterbuffer schedule the timeouts based on the DTS instead
of the PTS. This makes the scheduling smoother with reordered frames and
gives the decoder enough time to reorder the frames.
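Roughly, assuming the GStreamer 1.0 buffer macros, the timestamp selection
could look like this:

    #include <gst/gst.h>

    /* Sketch: schedule on the DTS and only fall back to the PTS when
     * no DTS is set. */
    static GstClockTime
    get_timeout_timestamp (GstBuffer * buf)
    {
      if (GST_BUFFER_DTS_IS_VALID (buf))
        return GST_BUFFER_DTS (buf);
      return GST_BUFFER_PTS (buf);
    }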
Refactor the jitterbuffer code. Make separate functions for peeking a buffer,
pushing the next buffer, waiting for timeouts and handling the timeouts.
The main loop now tries to push as many buffers as it can until it runs out of
buffers or detects a seqnum discont. Then it waits for some event to
happen before attempting to push more buffers.
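The shape of that loop, as a sketch; the type and helper names below are
made up, not the real symbols:

    #include <gst/gst.h>

    typedef struct
    {
      gboolean running;
      /* ... queue, timers, ... */
    } Jbuf;

    /* Illustrative stubs standing in for the new separate functions. */
    static GstBuffer *
    peek_next_buffer (Jbuf * j)
    {
      return NULL;              /* would peek the queue head */
    }

    static gboolean
    is_seqnum_discont (Jbuf * j, GstBuffer * b)
    {
      return FALSE;             /* would compare against the next expected seqnum */
    }

    static void push_next_buffer (Jbuf * j) { }
    static void wait_for_timeouts (Jbuf * j) { }
    static void handle_timeouts (Jbuf * j) { }

    static void
    jbuf_loop (Jbuf * jbuf)
    {
      while (jbuf->running) {
        GstBuffer *buf;

        /* push as many consecutive buffers as we have */
        while ((buf = peek_next_buffer (jbuf)) != NULL
            && !is_seqnum_discont (jbuf, buf))
          push_next_buffer (jbuf);

        /* out of buffers or hit a gap: block until something happens */
        wait_for_timeouts (jbuf);
        handle_timeouts (jbuf);
      }
    }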
Make methods to register timeouts in an array. These timeouts are registered
when we detect a missing packet, when we sync to the first packet, or when we
have an estimate for the end-of-stream.
This greatly simplifies and clarifies the code and also makes it possible to
register more complicated timeout schemes later.
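A possible shape for that array, as a sketch; the enum, type and field
names here are made up:

    #include <gst/gst.h>

    typedef enum
    {
      TIMER_TYPE_EXPECTED,      /* a missing packet we expect to arrive */
      TIMER_TYPE_DEADLINE,      /* sync deadline for the first packet */
      TIMER_TYPE_EOS            /* estimated end-of-stream */
    } TimerType;

    typedef struct
    {
      TimerType type;
      guint16 seqnum;
      GstClockTime timeout;
    } TimerData;

    static TimerData *
    add_timer (GArray * timers, TimerType type, guint16 seqnum,
        GstClockTime timeout)
    {
      TimerData timer = { type, seqnum, timeout };

      g_array_append_val (timers, timer);
      /* note: the returned pointer is only valid until the array grows */
      return &g_array_index (timers, TimerData, timers->len - 1);
    }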
Don't throw away the first RTCP packet if it arrives before the first
RTP packet; instead, remember it and use it to signal sync once we get
the first RTP packet.
See https://bugzilla.gnome.org/show_bug.cgi?id=691400
Move the code that combines the last SR packet and the current jitterbuffer
sync values into a sync structure out into its own function. We want to reuse
this bit later.
Consider the scenario where you have a gap of, say, 10 seconds (500
packets with a duration of 20ms each) in a steady flow of packets. The
jitterbuffer will idle until it receives the first buffer after the gap,
but will then go on to produce 499 lost-events to "cover up" the gap.
Now this is obviously wrong, since the last possible time for the earliest
lost-events to be played out has long expired. But the fact that
the jitterbuffer has a "length", represented by its own latency combined
with the total latency downstream, allows for covering up at least some
of this gap.
So in the case of the "length" being 200ms, while having received packet
500, the jitterbuffer should still create a timeout for packet 491, which
will have its time expire at 10.02 seconds, especially since it might
actually arrive in time! But obviously, waiting for packet 100, which had
its time expire at 2 seconds (remembering that the current time is 10),
is useless...
The patch will create one "big" lost-event for the first 490 packets,
and then go on to create single ones if they can reach their
playout deadline.
See https://bugzilla.gnome.org/show_bug.cgi?id=667838
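rtpjitterbuffer reports lost packets downstream with a custom
"GstRTPPacketLost" event; a combined event for the whole expired range
could be built roughly like this (treat the exact field list as an
approximation):

    #include <gst/gst.h>

    static GstEvent *
    make_big_lost_event (guint16 first_seqnum, GstClockTime timestamp,
        GstClockTime total_duration)
    {
      return gst_event_new_custom (GST_EVENT_CUSTOM_DOWNSTREAM,
          gst_structure_new ("GstRTPPacketLost",
              "seqnum", G_TYPE_UINT, (guint) first_seqnum,
              "timestamp", G_TYPE_UINT64, timestamp,
              "duration", G_TYPE_UINT64, total_duration, NULL));
    }

With the numbers above, this would be emitted once for the first 490
packets with a duration of 9.8 seconds, and packets 491 onwards would get
individual timeouts.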
Add private replacements for deprecated functions such as
g_mutex_new(), g_mutex_free(), g_cond_new() etc., mostly
to avoid the deprecation warnings. We'll change these
over to the new API once we depend on glib >= 2.32.
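One plausible shape for such a wrapper pair (the names are ours, not the
actual private API):

    #include <glib.h>

    /* Sketch: use the new API when building against glib >= 2.32 (where
     * g_mutex_new() is deprecated), fall back to the old calls otherwise. */
    static GMutex *
    priv_gst_mutex_new (void)
    {
    #if GLIB_CHECK_VERSION (2, 32, 0)
      GMutex *mutex = g_new (GMutex, 1);

      g_mutex_init (mutex);
      return mutex;
    #else
      return g_mutex_new ();
    #endif
    }

    static void
    priv_gst_mutex_free (GMutex * mutex)
    {
    #if GLIB_CHECK_VERSION (2, 32, 0)
      g_mutex_clear (mutex);
      g_free (mutex);
    #else
      g_mutex_free (mutex);
    #endif
    }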
... to at least have it trigger a/v synchronization, possibly without
using the provided values, which are still not considered sane
(and were previously dropped).
GCC 4.6.x spits warnings about variables that are unused but set. Such
variables have been removed where trivial but with comments left behind
for informational purposes in some cases.
gst_rtp_session_chain_recv_rtcp () was changed in commit 490113d4
to always return GST_FLOW_OK instead of the return value of
rtp_session_process_rtcp (), so we'll keep it that way.
1) We need to lock and get a strong ref to the parent, if still there.
2) If it has gone away, we need to handle that gracefully.
This is necessary in order to safely modify a running pipeline. It has been
observed when a streaming thread is doing a buffer_alloc() while an
application thread sends an event on a pad further downstream and, from
within a pad probe (holding STREAM_LOCK), carries out the pipeline plumbing
while the streaming thread still has its buffer_alloc() in progress.
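The pattern, sketched with 0.10-era API and a hypothetical function name:

    #include <gst/gst.h>

    static GstFlowReturn
    safe_use_parent (GstPad * pad)
    {
      GstElement *parent;
      GstFlowReturn ret;

      /* 1) take the object lock and grab a strong ref to the parent */
      GST_OBJECT_LOCK (pad);
      parent = GST_PAD_PARENT (pad);
      if (parent)
        gst_object_ref (parent);
      GST_OBJECT_UNLOCK (pad);

      /* 2) the parent may already be gone: fail gracefully */
      if (parent == NULL)
        return GST_FLOW_WRONG_STATE;

      /* ... safe to use parent from here on ... */
      ret = GST_FLOW_OK;

      gst_object_unref (parent);
      return ret;
    }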
Since we are using the clock for sync, we need to also provide a clock for good
measure. The reason is that even if downstream elements provide a clock, we
don't want to have that clock selected because it might not be running yet.
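A sketch of the corresponding provide_clock vfunc; returning the system
clock here is an assumption about the implementation:

    #include <gst/gst.h>

    /* Hooked up in class_init via element_class->provide_clock; the system
     * clock is guaranteed to be running, unlike a downstream clock. */
    static GstClock *
    gst_rtp_jitter_buffer_provide_clock (GstElement * element)
    {
      return gst_system_clock_obtain ();
    }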
When using RTP_JITTER_BUFFER_MODE_BUFFER, make sure that the ringbuffer doesn't
get stuck buffering forever when there isn't enough data left to fill the
buffer.
Set ->active to TRUE in _init so it can be set to FALSE after creating the
jitterbuffer and it won't be mistakenly reset to TRUE in the change_state
function.
This is needed to start the jitterbuffer as inactive when rtpbin is buffering.
There is no need to set the latency in the jitterbuffer in _init; we will set
that later when going to PAUSED.
Set the jitterbuffer active and not buffering when starting.
When the buffering starts and we deactivate the jitterbuffers, keep the current
percent of each jitterbuffer and also set it to the buffering state
so that we know when it's filled again.
Add a property to get the buffering percentage of the jitterbuffer.
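A sketch of how such a read-only property could be installed; the exact
name, nick, blurb and range are assumptions:

    #include <gst/gst.h>

    enum
    {
      PROP_0,
      PROP_PERCENT
    };

    /* Called from the element's class_init. */
    static void
    install_percent_property (GObjectClass * gobject_class)
    {
      g_object_class_install_property (gobject_class, PROP_PERCENT,
          g_param_spec_int ("percent", "Percent",
              "The buffer filled percent", -1, 100, -1,
              G_PARAM_READABLE | G_PARAM_STATIC_STRINGS));
    }

rtpbin can then read it back with
g_object_get (jitterbuffer, "percent", &percent, NULL);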