Rework the packet queue so that the most common action (inserting a packet
at the tail of the queue) is very fast.
Report if a packet was inserted at the head instead of the tail so that
we can know when to retry _pop or _peek.
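A minimal sketch of that fast path, assuming a seqnum-ordered GQueue of per-packet
structs (the names are illustrative, not the element's actual code):

    #include <glib.h>

    typedef struct {
      guint16 seqnum;
      /* payload, timestamps, ... */
    } RtpPacket;

    /* Returns TRUE when the packet became the new head, which is the signal
     * for a waiting _pop or _peek to retry. Duplicate handling is omitted. */
    static gboolean
    packet_queue_insert (GQueue *queue, RtpPacket *pkt)
    {
      RtpPacket *tail = g_queue_peek_tail (queue);
      GList *walk;

      /* common case: newer than the current tail, append in O(1) */
      if (tail == NULL || (gint16) (pkt->seqnum - tail->seqnum) > 0) {
        g_queue_push_tail (queue, pkt);
        return g_queue_get_length (queue) == 1;
      }

      /* reordered packet: walk backwards from the tail to find its spot */
      for (walk = queue->tail; walk; walk = walk->prev) {
        RtpPacket *cur = walk->data;
        if ((gint16) (pkt->seqnum - cur->seqnum) > 0) {
          g_queue_insert_after (queue, walk, pkt);
          return FALSE;
        }
      }

      g_queue_push_head (queue, pkt);
      return TRUE;
    }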
Add a new method to disable buffering in the jitterbuffer.
Rework the update_estimated_eos() method. Calculate how much time is left
to play. If that is less than the delay of the jitterbuffer, disable
buffering because we might never be able to fill the jitterbuffer
completely again.
If we receive an EOS event, disable buffering. We will drain the
buffer and eventually push the EOS event out.
When we reach the estimated NPT timeout and haven't received an EOS
event, create one and queue it so that it can be pushed.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=728017
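The buffering-disable idea above, as a hedged sketch (the function shape and
parameters are assumptions; only the GStreamer types are real):

    #include <gst/gst.h>

    static void
    update_estimated_eos (GstClockTime npt_stop, GstClockTime position,
        GstClockTime latency, gboolean *buffering_disabled)
    {
      GstClockTime left;

      if (!GST_CLOCK_TIME_IS_VALID (npt_stop) || npt_stop <= position)
        return;

      /* time left until the estimated end of the stream */
      left = npt_stop - position;

      /* less media remains than would be needed to fill the jitterbuffer
       * up to its delay, so stop buffering and just drain what is queued */
      if (left < latency)
        *buffering_disabled = TRUE;
    }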
It is possible that the DTS is invalid (when we receive RTP packets from
TCP, for example). As a fallback, use the reconstructed PTS value to
calculate the buffer level.
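A small sketch of that fallback, assuming the level is computed from buffer
timestamps:

    #include <gst/gst.h>

    /* Pick the timestamp used for buffer-level accounting: prefer DTS and
     * fall back to the reconstructed PTS when DTS is invalid (RTP over TCP). */
    static GstClockTime
    buffer_level_timestamp (GstBuffer *buf)
    {
      GstClockTime ts = GST_BUFFER_DTS (buf);

      if (!GST_CLOCK_TIME_IS_VALID (ts))
        ts = GST_BUFFER_PTS (buf);

      return ts;
    }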
Add a new timestamp mode that assumes the local and remote clocks are
synchronized. It takes the first timestamp as a base time and then uses the RTP
timestamps for the output PTS.
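Roughly, the new mode computes output timestamps like the sketch below;
wraparound extension of the 32-bit RTP timestamp is assumed to happen elsewhere
and the names are illustrative:

    #include <gst/gst.h>

    static GstClockTime
    calc_pts_synced (guint64 ext_rtptime, guint64 base_ext_rtptime,
        GstClockTime base_time, gint clock_rate)
    {
      guint64 diff = ext_rtptime - base_ext_rtptime;

      /* convert the RTP timestamp delta to nanoseconds, offset by the
       * timestamp of the first packet */
      return base_time + gst_util_uint64_scale (diff, GST_SECOND, clock_rate);
    }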
Make the jitterbuffer operate on a structure containing all the packet
information. This avoids mapping the buffer multiple times just to get the RTP
information. It will also make it possible to store other miniobjects such as
events later.
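A hypothetical shape for such an item structure (the field names are
assumptions): the RTP header fields are parsed once at insert time and a type
field leaves room for queueing events as well.

    #include <gst/gst.h>

    typedef enum {
      ITEM_TYPE_BUFFER,
      ITEM_TYPE_EVENT
    } ItemType;

    typedef struct {
      ItemType type;
      GstMiniObject *object;   /* the GstBuffer, or later a GstEvent */
      GstClockTime dts, pts;
      guint16 seqnum;
      guint32 rtptime;
    } RtpJitterBufferItem;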
Make the jitterbuffer schedule the timeouts based on the DTS instead
of the PTS. This makes everything smoother with reordered frames and gives
the decoder enough time to reorder the frames.
Only run the skew estimation code when we have a new RTP timestamp. If we have
the same RTP timestamp, simply reuse the previous estimation. This works
because a new observation with the same RTP timestamp must have a later
receive time and would therefore not influence the estimation except by
adding more jitter.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=640023
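The shortcut can be pictured like this (sketch only; the actual skew estimator
is elided and the names are hypothetical):

    #include <gst/gst.h>

    typedef struct {
      guint64 last_ext_rtptime;
      GstClockTime last_out_time;   /* result of the previous estimation */
    } SkewState;

    static GstClockTime
    maybe_estimate_skew (SkewState *state, guint64 ext_rtptime,
        GstClockTime receive_time)
    {
      /* packets of the same frame share an RTP timestamp: reuse the
       * previous result instead of feeding the estimator again */
      if (ext_rtptime == state->last_ext_rtptime)
        return state->last_out_time;

      /* ... run the real skew estimation here ... */
      state->last_ext_rtptime = ext_rtptime;
      state->last_out_time = receive_time;   /* placeholder for the estimate */
      return state->last_out_time;
    }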
... when operating in non-slave mode, and reset if detected.
This should avoid some (large) bogus outgoing timestamps due to jumps
in RTP time, as a result of PAUSE/PLAY or a seek or ...
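A hedged sketch of such a discontinuity check; the one-second threshold and
the names are made up for illustration:

    #include <gst/gst.h>

    #define RTP_GAP_THRESHOLD GST_SECOND

    /* Caller resets its base_time/base_rtptime when this returns TRUE. */
    static gboolean
    check_rtp_discont (guint64 prev_ext_rtptime, guint64 ext_rtptime,
        gint clock_rate)
    {
      guint64 gap_rtp = (ext_rtptime > prev_ext_rtptime) ?
          ext_rtptime - prev_ext_rtptime : prev_ext_rtptime - ext_rtptime;
      GstClockTime gap = gst_util_uint64_scale (gap_rtp, GST_SECOND, clock_rate);

      return gap > RTP_GAP_THRESHOLD;
    }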
* use G_DEFINE_TYPE
* adjust to new GstBuffer and corresponding rtp and rtcp buffer interfaces
* misc caps and segment handling changes
FIXME: this also relies on being able to pass caps along with a buffer,
which has no evident equivalent yet, so either that needs one,
or quite some code paths still need modification to drag the caps along.
When the jitterbuffer contains -1 timestamps, make sure we still calculate the
buffer fill level by skipping the -1 buffers.
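The fill-level calculation, sketched over a queue of buffers (illustrative
only, using the 1.x buffer API):

    #include <gst/gst.h>

    static GstClockTime
    queue_fill_level (GQueue *packets /* of GstBuffer* */)
    {
      GstClockTime low = GST_CLOCK_TIME_NONE, high = GST_CLOCK_TIME_NONE;
      GList *walk;

      for (walk = packets->head; walk; walk = walk->next) {
        GstClockTime ts = GST_BUFFER_DTS ((GstBuffer *) walk->data);

        if (!GST_CLOCK_TIME_IS_VALID (ts))
          continue;                 /* skip the -1 timestamps */
        if (!GST_CLOCK_TIME_IS_VALID (low))
          low = ts;
        high = ts;
      }

      return GST_CLOCK_TIME_IS_VALID (low) ? high - low : 0;
    }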
Try to be more resilient to weird input timestamps.
If we detect backward timestamps from the server but don't have an input
timestamp (such as when using RTSP over TCP), don't try to resync. Instead,
do nothing and assume the timestamp was OK; it will correct itself as time
moves forward.
When deactivating jitterbuffers as buffering starts, keep the jitterbuffer's
current percentage and also put the jitterbuffer in the buffering state
so that we know when it is filled again.
Add property to get the buffering percentage of the jitterbuffer.
Return the next timestamp in the jitterbuffer.
Use the min-timestamp of the jitterbuffers to calculate an offset so that the
next timestamp is pushed with a timestamp equal to running_time.
Start producing timestamps from 0 in the buffering case too.
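The offset application amounts to something like this (names are
hypothetical):

    #include <gst/gst.h>

    /* Map timestamps so that min_timestamp comes out at running_time;
     * assumes in_ts >= min_timestamp for queued data. */
    static GstClockTime
    apply_ts_offset (GstClockTime in_ts, GstClockTime min_timestamp,
        GstClockTime running_time)
    {
      if (!GST_CLOCK_TIME_IS_VALID (in_ts))
        return in_ts;

      return in_ts - min_timestamp + running_time;
    }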
Pass the current running time to the jitterbuffer when pausing or resuming so
that it can calculate the right offsets.
Small cleanups and comments.
Set the default rtspsrc latency to 2 seconds.
Add signal to pause the jitterbuffer. This will be emitted from gstrtpbin when
one of the jitterbuffers is buffering.
Make rtpbin collect the buffering messages and post a new buffering message with
the min value.
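A sketch of the aggregation; gst_message_new_buffering() and
gst_element_post_message() are the real GStreamer calls, the per-stream
bookkeeping is assumed:

    #include <gst/gst.h>

    static void
    post_min_buffering (GstElement *bin, const gint *percents, guint n_streams)
    {
      gint min_percent = 100;
      guint i;

      /* the bin is only as full as its least-filled jitterbuffer */
      for (i = 0; i < n_streams; i++)
        if (percents[i] < min_percent)
          min_percent = percents[i];

      gst_element_post_message (bin,
          gst_message_new_buffering (GST_OBJECT_CAST (bin), min_percent));
    }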
Remove the stats callback from the jitterbuffer and instead pass a percent
integer to the functions that affect its buffering state. This allows us to
post buffering messages from outside of the jitterbuffer lock.
Add callback for buffering stats.
Configure the latency in the jitterbuffer instead of passing it with _insert.
Calculate buffering levels when pushing and popping.
Post buffering messages.
When the constructed timestamp would be earlier than the time the packet was
received, reset the skew calculation, as this is probably a sign that the
sender restarted or paused.
Fixes #593354
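The reset condition boils down to a check like this (sketch only, hypothetical
names):

    #include <gst/gst.h>

    /* An output timestamp computed from the skew that lands before the
     * packet's actual arrival time hints at a sender restart or pause. */
    static gboolean
    needs_skew_reset (GstClockTime out_time, GstClockTime arrival_time)
    {
      return GST_CLOCK_TIME_IS_VALID (out_time) &&
          GST_CLOCK_TIME_IS_VALID (arrival_time) && out_time < arrival_time;
    }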