This reverts commit f1ceaab02f.
This broke atomic file writes in "buffer" mode. It did make
sure that any streamheaders are prepended to each file in
buffer mode as well, but that's not really needed in practice,
whereas atomic file writes are, so let's restore the status
quo ante for now since this was primarily a code cleanup anyway,
and if anyone needs streamheaders in buffer mode too, they
can make a patch to implement that differently. Re-implementing
the atomic writes in the element also seems way too much work.
https://bugzilla.gnome.org/show_bug.cgi?id=766990
GstAudioRingBuffer needs us to have at least 2 segments. We make
sure that if our buffer parameters are such that the maxlength is not at
least 2x fragsize, we still request the ringbuffer to keep that much
space so it continues to work.
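A minimal sketch of that clamping, with illustrative names (the actual
buffer-attribute handling in the element differs):

    #include <gst/audio/audio.h>

    /* Illustrative helper: request enough ringbuffer space for at
     * least 2 segments, even when maxlength < 2 * fragsize. */
    static void
    clamp_spec_to_two_segments (GstAudioRingBufferSpec * spec,
        gint maxlength, gint fragsize)
    {
      spec->segsize = fragsize;
      spec->segtotal = MAX (maxlength / fragsize, 2);
    }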
https://bugzilla.gnome.org/show_bug.cgi?id=770446
We were just picking the timestamp of the last buffer pushed into our
adapter before we had enough data to push out.
This fixes that by figuring out how large each frame is and what
duration it covers, so we can set both the timestamp and duration
correctly.
Also adds some DISCONT handling.
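A minimal sketch of the per-frame timestamping, assuming a known sample
count per frame and a fixed rate (names are illustrative):

    #include <gst/gst.h>

    /* Illustrative: derive each frame's duration from its sample
     * count and advance the running timestamp accordingly. */
    static void
    stamp_frame (GstBuffer * buf, GstClockTime * next_pts,
        guint frame_samples, gint rate)
    {
      GstClockTime duration =
          gst_util_uint64_scale_int (frame_samples, GST_SECOND, rate);

      GST_BUFFER_PTS (buf) = *next_pts;
      GST_BUFFER_DURATION (buf) = duration;
      *next_pts += duration;
    }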
The basic idea is this:
1. For *larger* rtx-rtt, weigh a new measurement as before
2. For *smaller* rtx-rtt, be a bit more conservative and weigh a bit less
3. For very large measurements, consider them "outliers"
and count them a lot less
The idea being that reducing the rtx-rtt is much more harmful than
increasing it, since we don't want to underestimate the rtt of the
network, and when using this number to estimate the latency you need for
your jitterbuffer, you would rather have it a bit larger than a bit
smaller and risk losing rtx-packets. The "outlier-detector" is there
to prevent a single skewed measurement from affecting the outcome too much.
On wireless networks, these are surprisingly common.
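A minimal sketch of such an asymmetric, outlier-aware running average;
the weights below are assumptions for illustration, not the element's
actual constants:

    #include <glib.h>

    static guint64
    update_avg_rtx_rtt (guint64 avg, guint64 rtt)
    {
      guint weight;

      if (avg == 0)
        return rtt;

      if (rtt > 2 * avg)
        weight = 48;          /* outlier: count it a lot less */
      else if (rtt < avg)
        weight = 16;          /* smaller rtt: be more conservative */
      else
        weight = 4;           /* larger rtt: weigh as before */

      return (rtt + (weight - 1) * avg) / weight;
    }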
https://bugzilla.gnome.org/show_bug.cgi?id=769768
Assuming equidistant packet spacing when that's not the case leads to more
loss than necessary when there is reordering and jitter. This is typically
true for video, where one frame often consists of multiple packets
with the same rtp timestamp. In this case it's better to assume that the
missing packets have the same timestamp as the last received packet, so
that the scheduled lost timer does not time out too early causing the
packets to be considered lost even though they may arrive in time.
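A minimal sketch of that estimate (illustrative names): interpolate only
when the stream is known to be equidistant, otherwise reuse the last
received timestamp:

    #include <gst/gst.h>

    static GstClockTime
    estimate_missing_pts (GstClockTime last_pts, GstClockTime spacing,
        gboolean equidistant, guint gap_index)
    {
      if (equidistant && GST_CLOCK_TIME_IS_VALID (spacing))
        return last_pts + (gap_index + 1) * spacing;
      /* assume the missing packet shares the last received timestamp */
      return last_pts;
    }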
https://bugzilla.gnome.org/show_bug.cgi?id=769768
There is no need to schedule another EXPECTED timer if we're already
past the retry period. Under normal operation this won't happen, but if
there are more timers than the jitterbuffer is able to process in
real-time, scheduling more timers will just make the situation worse.
Instead, consider this packet as lost and move on. This scenario can
occur with high loss rate, low rtt and high configured latency.
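A minimal sketch of the decision, with illustrative names:

    #include <gst/gst.h>

    /* When an EXPECTED timer fires: retry only while still inside the
     * retry period, otherwise give up and consider the packet lost. */
    static gboolean
    should_schedule_another_rtx (GstClockTime now, GstClockTime rtx_base,
        GstClockTime retry_period)
    {
      return now - rtx_base < retry_period;
    }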
https://bugzilla.gnome.org/show_bug.cgi?id=769768
This patch fixes an issue with the estimated gap duration when there is
a gap immediately after a lost timer has been processed. Previously
there was a discrepancy between the gap in seqnum and gap in dts which
would cause a wrongly calculated duration. The issue would only be seen with
retransmission enabled, since when it's disabled lost timers are only
created when a packet is received and the actual gap length and last dts
are known.
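A minimal sketch of the duration estimate, spreading the dts gap evenly
over the seqnum gap (illustrative names):

    #include <gst/gst.h>

    static GstClockTime
    estimate_packet_duration (GstClockTime last_dts, GstClockTime dts,
        guint seqnum_gap)
    {
      if (seqnum_gap == 0 || !GST_CLOCK_TIME_IS_VALID (dts)
          || !GST_CLOCK_TIME_IS_VALID (last_dts))
        return GST_CLOCK_TIME_NONE;

      return (dts - last_dts) / seqnum_gap;
    }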
https://bugzilla.gnome.org/show_bug.cgi?id=769768
Stats should also be collected for unsuccessful packets.
rtx-rtt is very important for determining the necessary configured
latency on the jitterbuffer. It's especially important to be able to
increase the latency when retransmitted packets arrive too late and are
considered lost. This patch includes these late packets in the
calculation of the various rtx stats, making them more correct and
useful.
Also in the case where the original packet arrives after a NACK is sent,
the received RTX packet should update the stats since it provides useful
information about RTT.
The RTT is updated if and only if all requested retransmissions are
received. That way the RTT is guaranteed to make sense. If not, we don't
know which request the packet is a response to and the RTT may be bogus.
A consequence of this patch is that the RTT is not updated for a request
when one of the RTX packets for that seqnum is lost, but in that case the
measured RTT will be more accurate.
The implementation stores the RTX information from the timed-out timers
and uses it when the retransmitted packet arrives. These timers are
stored separately from the "normal" timers in order not to impact
performance (see attached performance test).
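A minimal sketch of the bookkeeping, with illustrative types and fields;
an RTT sample is only produced once every requested retransmission has
arrived:

    #include <gst/gst.h>

    typedef struct
    {
      guint16 seqnum;
      guint num_rtx_requested;
      guint num_rtx_received;
      GstClockTime last_rtx_request_time;
    } RtxInfo;

    /* Called when a (possibly late) rtx packet for a stored timed-out
     * timer arrives; returns an RTT sample or GST_CLOCK_TIME_NONE. */
    static GstClockTime
    late_rtx_arrived (RtxInfo * info, GstClockTime now)
    {
      info->num_rtx_received++;
      if (info->num_rtx_received == info->num_rtx_requested)
        return now - info->last_rtx_request_time;
      return GST_CLOCK_TIME_NONE;
    }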
https://bugzilla.gnome.org/show_bug.cgi?id=769768
When rtx is disabled we can save some iterations over timers.
There is probably an argument for rtx-delay-reorder to exist, but
for normal operations, handling jitter (reordering) is something a
jitterbuffer should do, and this variable feels like functionality that
is not "in-sync" with what the jitterbuffer is trying to achieve.
Example: You have 50ms jitter on your network, and are receiving
audio packets with 10ms durations. An audio packet should not be
considered late until its rtx-timeout has expired (and hence an rtx-event
is sent), but with rtx-delay-reorder, events will be sent pretty much
all the time due to the jitter on the network.
Point being: The jitterbuffer should adapt its size to the measured network
jitter, and then rtx-delay-reorder needs to adapt as well, or simply
get out of the way and let the other (better) rtx-mechanisms do their job.
Also change find_timer to only use seqnum as an argument, since there
will only ever be one timer per seqnum at any given time. In the
one case where the type matters, the caller simply checks the type.
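A minimal sketch of the simplified lookup (TimerData here is
illustrative, not the element's actual structure):

    #include <glib.h>

    typedef struct
    {
      guint16 seqnum;
      gint type;
    } TimerData;

    /* Only one timer can exist per seqnum, so seqnum alone is enough
     * as a key; callers that care about the type check it themselves. */
    static TimerData *
    find_timer (GArray * timers, guint16 seqnum)
    {
      guint i;

      for (i = 0; i < timers->len; i++) {
        TimerData *timer = &g_array_index (timers, TimerData, i);
        if (timer->seqnum == seqnum)
          return timer;
      }
      return NULL;
    }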
https://bugzilla.gnome.org/show_bug.cgi?id=769768
If jackd changes the buffer size or sample rate, jackaudiosink hangs
and can't be stopped. This also happens if jack is configured as slave
and a gstreamer pipeline is started on the slave machine while the jack
master isn't running yet. If the jack master is started, it changes
the buffer size / sample rate and jackaudiosink can't be stopped.
This fix calls jack_shutdown_cb when jack_sample_rate_cb or
jack_buffer_size_cb is called.
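A minimal sketch of the fix, assuming the element's existing
jack_shutdown_cb has JACK's JackShutdownCallback signature (the actual
wiring in the element may differ):

    #include <jack/jack.h>

    void jack_shutdown_cb (void *arg);    /* existing handler, elided */

    /* The element can't adapt to these changes at runtime, so treat
     * them like a server shutdown instead of hanging. */
    static int
    jack_buffer_size_cb (jack_nframes_t nframes, void *arg)
    {
      jack_shutdown_cb (arg);
      return 0;
    }

    static int
    jack_sample_rate_cb (jack_nframes_t nframes, void *arg)
    {
      jack_shutdown_cb (arg);
      return 0;
    }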
https://bugzilla.gnome.org/show_bug.cgi?id=771272
And actually calculate the field duration instead of a frame duration so
that we can properly timestamp output frames in fields=all mode.
This is probably still broken for reverse playback in telecine mode.
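A minimal sketch of the calculation, assuming two fields per frame:

    #include <gst/gst.h>

    /* fields=all outputs one buffer per field, so each buffer covers
     * half a frame: duration = fps_d / (2 * fps_n) seconds. */
    static GstClockTime
    field_duration (gint fps_n, gint fps_d)
    {
      return gst_util_uint64_scale (GST_SECOND, fps_d, 2 * fps_n);
    }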
When a qmlglsink app starts, it will set a NULL buffer on GstQSGTexture,
in which case qt_context_ will be a random value and cause
gst_gl_context_activate() to fail.
https://bugzilla.gnome.org/show_bug.cgi?id=770925