The lost-event was using a different time domain (dts) than the outgoing
buffers (pts). Under certain network conditions these two could drift far
enough apart that the lost-event carried a timestamp/duration that was
simply wrong. As an example, GstAudioDecoder could produce a stream that
jumps back and forth in time after receiving a lost-event.
The previous behavior calculated the pts (based on the rtptime) inside the
rtp_jitter_buffer_insert function. This functionality has now been
refactored into a new function, rtp_jitter_buffer_calculate_pts, which is
called much earlier in the _chain function, making the pts available to
the various calculations that previously (and wrongly) used dts, like the
lost-event.
There are, however, two calculations where using dts is the right thing to
do: the receive-jitter and the rtx-round-trip-time, where the arrival time
of the buffer from the network is the correct metric (and that is what dts
in fact is today).
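For illustration, the receive-jitter is essentially the RFC 3550
interarrival-jitter computation fed with dts as the arrival time. A rough
sketch with made-up names, not the actual rtpjitterbuffer code:

    #include <gst/gst.h>

    typedef struct
    {
      gint64 prev_transit;
      gdouble jitter;           /* interarrival jitter, in clock-rate units */
    } JitterCtx;

    static void
    update_jitter (JitterCtx * ctx, GstClockTime dts, guint32 rtptime,
        guint32 clock_rate)
    {
      /* convert the network arrival time (dts) to clock-rate units */
      gint64 rtparrival = gst_util_uint64_scale (dts, clock_rate, GST_SECOND);
      gint64 transit = rtparrival - rtptime;
      gint64 d = transit - ctx->prev_transit;

      /* (the very first packet produces a meaningless d; the real code
       * guards against that) */
      ctx->prev_transit = transit;
      /* running average with gain 1/16, per RFC 3550 */
      ctx->jitter += (ABS (d) - ctx->jitter) / 16.0;
    }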
The patch also adds two tests covering B-frames and the
“rtptime-going-backwards” scenario, as there were some concerns that this
patch might break that behavior (which the tests show it does not).
The head of the queue is the oldest packet (the one with the lowest
seqnum), the tail is the newest. To calculate the fill level, we should
compute tail minus head while taking wraparounds into account, not the
other way around. Other code already does this in the correct order.
https://bugzilla.gnome.org/show_bug.cgi?id=764889
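Sketched out, the wraparound-safe distance is just unsigned 16-bit
subtraction in that order (helper name hypothetical):

    #include <glib.h>

    static guint16
    seqnum_distance (guint16 head_seqnum, guint16 tail_seqnum)
    {
      /* unsigned 16-bit arithmetic handles the wraparound naturally,
       * e.g. head = 65534, tail = 2 -> (guint16) (2 - 65534) = 4 */
      return (guint16) (tail_seqnum - head_seqnum);
    }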
This also happens at the very beginning, when we receive the first packet;
a warning would be very confusing there. In all places where a warning
about this is appropriate, one would already have been printed earlier.
Add a need-resync state; this indicates that we need to try to lock on to
a time/RTPtime pair.
Always check the RTP timestamps and if they go backwards, mark ourselves
as need-resync.
Only resync when need-resync is TRUE and we have a valid time. Otherwise
we keep the old values. This avoids locking on to an invalid time and
causing us to timestamp everything with -1.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=730417
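A minimal sketch of that logic, with illustrative names rather than the
actual rtpjitterbuffer fields:

    #include <gst/gst.h>

    typedef struct
    {
      gboolean need_resync;     /* a fresh context starts with TRUE */
      guint64 base_ext_rtptime; /* extended RTP time we locked on to */
      GstClockTime base_time;   /* local time we locked on to */
    } SyncCtx;

    static void
    maybe_resync (SyncCtx * ctx, guint64 ext_rtptime, GstClockTime time)
    {
      /* RTP timestamps going backwards: mark ourselves as need-resync
       * (ext_rtptime is already unwrapped, so a plain compare works) */
      if (ext_rtptime < ctx->base_ext_rtptime)
        ctx->need_resync = TRUE;

      /* only lock on to a new time/RTPtime pair when we have a valid
       * time; otherwise keep the old values, since locking on to an
       * invalid time would make us timestamp everything with -1 */
      if (ctx->need_resync && GST_CLOCK_TIME_IS_VALID (time)) {
        ctx->base_ext_rtptime = ext_rtptime;
        ctx->base_time = time;
        ctx->need_resync = FALSE;
      }
    }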
We never initialize clock_rate explicitly, so it is 0 by default. The
parameter is a uint32 and the only caller ensures that it is > 0, so it
can never become -1.
If we are inserting a packet into the jitter queue, we need to keep
looping through the items until the right position is found. Currently,
the code stops as soon as an event is found in the queue.
As for events, a packet should only be moved before an event if there is
another packet before that event with a larger seqnum.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=730078
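The intended walk looks roughly like this; the item type and helper name
are hypothetical stand-ins for the real queue code:

    #include <gst/gst.h>
    #include <gst/rtp/gstrtpbuffer.h>

    typedef struct
    {
      gboolean is_event;        /* TRUE for serialized events */
      guint16 seqnum;           /* only valid for packets */
    } Item;

    /* returns the node to insert the new packet before, or NULL to
     * append at the tail */
    static GList *
    find_insert_position (GQueue * queue, guint16 seqnum)
    {
      GList *walk, *pos = NULL;

      for (walk = queue->tail; walk; walk = walk->prev) {
        Item *item = walk->data;

        /* keep looping past events; the packet only moves before an
         * event when a packet with a larger seqnum sits before it */
        if (item->is_event)
          continue;

        if (gst_rtp_buffer_compare_seqnum (seqnum, item->seqnum) > 0)
          pos = walk;           /* item is newer, go before it */
        else
          break;                /* found an older packet, stop */
      }
      return pos;
    }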
Rework the packet queue so that the most common action (insert a packet
at the tail of the queue) goes very fast.
Report whether a packet was inserted at the head instead of the tail so
that we know when to retry _pop or _peek.
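Building on the sketch above, the fast path and the head-insert report
could look like this (again with hypothetical names):

    /* returns TRUE when the packet became the new head, so the caller
     * knows a blocked _peek or _pop must be retried */
    static gboolean
    queue_insert (GQueue * queue, Item * item)
    {
      Item *tail = queue->tail ? (Item *) queue->tail->data : NULL;
      GList *pos;
      gboolean head;

      /* common case: in-order packet, newer than the tail: O(1) append */
      if (tail == NULL || (!tail->is_event &&
              gst_rtp_buffer_compare_seqnum (tail->seqnum, item->seqnum) > 0)) {
        g_queue_push_tail (queue, item);
        return queue->length == 1;    /* head insert iff queue was empty */
      }

      pos = find_insert_position (queue, item->seqnum);
      if (pos == NULL) {
        g_queue_push_tail (queue, item);
        return FALSE;
      }
      head = (pos == queue->head);
      g_queue_insert_before (queue, pos, item);
      return head;
    }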
Make a new method to disable the jitterbuffer buffering.
Rework the update_estimated_eos() method: calculate how much time is left
to play. If that is less than the delay of the jitterbuffer, disable
buffering, because we might never be able to fill the complete
jitterbuffer again.
If we receive an EOS event, disable buffering. We will drain the
buffer and eventually push the EOS event out.
When we reach the estimated NPT timeout and we didn't receive an EOS
event, make one and queue it so that it can be pushed.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=728017
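Sketched as a predicate, with illustrative names:

    #include <gst/gst.h>

    static gboolean
    should_disable_buffering (GstClockTime estimated_eos, GstClockTime pts,
        GstClockTime latency)
    {
      GstClockTime left;

      if (!GST_CLOCK_TIME_IS_VALID (estimated_eos) ||
          !GST_CLOCK_TIME_IS_VALID (pts))
        return FALSE;

      /* how much time is left to play until the estimated end? */
      left = estimated_eos > pts ? estimated_eos - pts : 0;

      /* with less left than the jitterbuffer delay, the buffer can
       * never completely fill up again */
      return left < latency;
    }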
It is possible that the DTS is invalid (when we receive RTP packets over
TCP, for example). As a fallback, use the reconstructed PTS value to
calculate the buffer level.
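In other words (variable names illustrative):

    /* prefer the arrival time, fall back to the reconstructed pts when
     * the dts is invalid (e.g. RTP delivered over TCP) */
    GstClockTime level_ts = GST_CLOCK_TIME_IS_VALID (dts) ? dts : pts;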
Add a new timestamp mode that assumes the local and remote clocks are
synchronized. It takes the first timestamp as a base time and then uses
the RTP timestamps for the output PTS.
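A sketch of the resulting PTS computation, with illustrative names:

    /* first_pts/first_ext_rtptime are latched from the first buffer;
     * ext_rtptime is the extended (unwrapped) RTP time of this buffer */
    GstClockTime out_pts = first_pts +
        gst_util_uint64_scale (ext_rtptime - first_ext_rtptime,
        GST_SECOND, clock_rate);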
Make the jitterbuffer operate on a structure containing all the packet
information. This avoids mapping the buffer multiple times just to get the RTP
information. It will also make it possible to store other miniobjects such as
events later.
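A sketch of such an item structure (the real field layout may differ):

    #include <gst/gst.h>

    typedef struct
    {
      GstMiniObject *data;      /* a GstBuffer now; possibly a
                                 * GstEvent later */
      guint16 seqnum;           /* RTP info parsed once, at insert time */
      guint32 rtptime;
      GstClockTime dts, pts;
    } RTPJitterBufferItem;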
Make the jitterbuffer schedule its timeouts based on the DTS instead of
the PTS. This makes everything smoother with reordered frames and gives
the decoder enough time to reorder the frames.
Only run the skew estimation code when we have a new RTP timestamp. If we
get the same RTP timestamp again, simply reuse the previous estimation.
This works because a new observation with the same RTP timestamp
necessarily has a later receiver time, so it would not improve the
estimation; it would only add jitter.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=640023
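Roughly, with illustrative names and the estimator itself elided:

    #include <gst/gst.h>

    typedef struct
    {
      guint64 prev_ext_rtptime;
      GstClockTime prev_out_time;
    } SkewCtx;

    /* placeholder for the actual skew estimator */
    static GstClockTime
    run_skew_estimation (SkewCtx * ctx, guint64 ext_rtptime, GstClockTime time)
    {
      /* the real code smooths the skew between the RTP clock and the
       * local clock here */
      return time;
    }

    static GstClockTime
    calculate_out_time (SkewCtx * ctx, guint64 ext_rtptime, GstClockTime time)
    {
      /* a repeated RTP timestamp necessarily has a later receiver time;
       * feeding it to the estimator would only add jitter, so reuse the
       * previous result */
      if (ext_rtptime == ctx->prev_ext_rtptime)
        return ctx->prev_out_time;

      ctx->prev_ext_rtptime = ext_rtptime;
      ctx->prev_out_time = run_skew_estimation (ctx, ext_rtptime, time);
      return ctx->prev_out_time;
    }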
... when operating in non-slave mode, and reset if detected.
This should avoid some (large) bogus outgoing timestamps due to jumps in
RTP time, e.g. as a result of PAUSE/PLAY or a seek or ...
* use G_DEFINE_TYPE
* adjust to new GstBuffer and corresponding rtp and rtcp buffer interfaces
* misc caps and segment handling changes
FIXME: this also relies on being able to pass caps along with a buffer,
which has no evident equivalent yet; either that needs one, or this still
needs quite some code-path modification to drag the caps along.
When the jitterbuffer contains -1 timestamps, make sure we still calculate the
buffer fill level by skipping the -1 buffers.
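A sketch of the level calculation skipping invalid timestamps, using a
plain GQueue of GstBuffers and illustrative names:

    #include <gst/gst.h>

    static GstClockTime
    queue_fill_level (GQueue * queue)
    {
      GList *walk;
      GstClockTime head_ts = GST_CLOCK_TIME_NONE;
      GstClockTime tail_ts = GST_CLOCK_TIME_NONE;

      /* first valid timestamp from the head... */
      for (walk = queue->head; walk; walk = walk->next) {
        head_ts = GST_BUFFER_PTS (walk->data);
        if (GST_CLOCK_TIME_IS_VALID (head_ts))
          break;
      }
      /* ...and from the tail */
      for (walk = queue->tail; walk; walk = walk->prev) {
        tail_ts = GST_BUFFER_PTS (walk->data);
        if (GST_CLOCK_TIME_IS_VALID (tail_ts))
          break;
      }

      if (!GST_CLOCK_TIME_IS_VALID (head_ts) ||
          !GST_CLOCK_TIME_IS_VALID (tail_ts) || tail_ts < head_ts)
        return 0;
      return tail_ts - head_ts;
    }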
Try to be more resilient to weird input timestamps.
If we detect backwards timestamps from the server but have no input
timestamp to resync against (such as when using RTSP over TCP), don't try
to resync; instead, do nothing and assume the timestamp was ok. It will
correct itself as time moves forward.
When deactivating jitterbuffers as buffering starts, keep the current
buffering percent of each jitterbuffer and also put it in the buffering
state so that we know when it is filled again.
Add a property to get the buffering percentage of the jitterbuffer.
Return the next timestamp in the jitterbuffer.
Use the min-timestamp of the jitterbuffers to calculate an offset so that the
next timestamp is pushed with a timestamp equal to running_time.
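A sketch of that offset computation (names illustrative):

    /* shift all output so the packet carrying min_timestamp is pushed
     * out exactly at the current running_time */
    gint64 ts_offset = (gint64) running_time - (gint64) min_timestamp;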
Start producing timestamps from 0 in the buffering case too.
Pass the current running time to the jitterbuffer when pausing or resuming
so that it calculates the right offsets.
Small cleanups and comments.
Set the default rtspsrc latency to 2 seconds.