The streamheader and codec_data buffer fields are only meant to be
present in the negotiated caps, not in the template caps.
This fixes false-positive leaks of those buffers reported by the leaks
tracer: template caps are static, and we decided not to include code in
GStreamer core to handle the unusual case of template caps having
buffers in them.
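For illustration, a minimal sketch of the intended pattern (the
function name and media type are hypothetical): the streamheader
array is attached to the caps negotiated on a pad, never to the
static template caps:

    #include <gst/gst.h>

    static void
    set_negotiated_caps_with_header (GstPad * srcpad, GstBuffer * header)
    {
      GstCaps *caps = gst_caps_new_empty_simple ("application/ogg");
      GValue array = G_VALUE_INIT;
      GValue value = G_VALUE_INIT;

      /* wrap the header buffer in a GST_TYPE_ARRAY value */
      g_value_init (&array, GST_TYPE_ARRAY);
      g_value_init (&value, GST_TYPE_BUFFER);
      gst_value_set_buffer (&value, header);
      gst_value_array_append_value (&array, &value);
      g_value_unset (&value);

      /* streamheader goes on the caps being negotiated on the pad,
       * never on the static pad template caps */
      gst_structure_set_value (gst_caps_get_structure (caps, 0),
          "streamheader", &array);
      g_value_unset (&array);

      gst_pad_set_caps (srcpad, caps);
      gst_caps_unref (caps);
    }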
https://bugzilla.gnome.org/show_bug.cgi?id=768762
Some radio streams use StreamTitle='' to reset the title after a
track has stopped playing, e.g. while the host talks between tracks or
during news segments.
This change forces an empty tag object to be distributed if
StreamTitle or StreamUrl is received with an empty value, so that
downstream elements get notified about the reset.
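A minimal sketch of the idea (the helper name is hypothetical):

    #include <gst/gst.h>

    /* notify downstream that the previous title no longer applies */
    static void
    push_empty_tags (GstPad * srcpad)
    {
      GstTagList *tags = gst_tag_list_new_empty ();

      /* gst_event_new_tag() takes ownership of the tag list */
      gst_pad_push_event (srcpad, gst_event_new_tag (tags));
    }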
https://bugzilla.gnome.org/show_bug.cgi?id=778437
Add a new signal for formatting the filename, which receives
a GstSample containing the first buffer from the reference
stream that will be muxed into that file.
Useful for creating filenames that are based on the
running time or other attributes of the buffer.
To make this work, opening a file and setting its filename is now
deferred until there is some data to write to it, which also requires
some changes to how async state changes and gap events are handled.
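A rough sketch of such a handler; the exact signal signature is an
assumption here, modelled on the description above. The sample
carries the first buffer of the reference stream that will be muxed
into the new file, so e.g. its timestamp can go into the filename:

    #include <gst/gst.h>

    /* hypothetical signature; the returned string is the filename */
    static gchar *
    format_location_cb (GstElement * mux, guint fragment_id,
        GstSample * first_sample, gpointer user_data)
    {
      GstBuffer *buf = gst_sample_get_buffer (first_sample);

      /* name the file after the first buffer's timestamp */
      return g_strdup_printf ("/tmp/chunk-%" G_GUINT64_FORMAT "-%u.mkv",
          (guint64) GST_BUFFER_PTS (buf), fragment_id);
    }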
matroskamux now marks all packets of audio-only streams as keyframes,
so in test_block_group four buffers are produced after pushing the test
audio data, not two as before. The last buffer is the original data and
must match what was pushed; the remaining ones are the matroskamux
headers.
https://bugzilla.gnome.org/show_bug.cgi?id=754696
* Changed PCMU->TEST for common macros
* Changed verify-functions (lost & rtx) into macros.
* Remove option to add marker-bit for test-buffers (not used anywhere)
* Add a new push_test_buffer function that makes sure there is a
correlation between dts and the time on the clock (a classic
test-mistake); see the sketch after this list.
* Established a generic starting-point for tests with the
construct_deterministic_initial_state function and use it where
applicable, which removes lots of "boilerplate" everywhere.
* Add basic lost-event test
* Remove as many "magic constants" as possible.
* Remove 3 tests that no longer test anything the others don't,
and were completely unmaintainable.
* Remove unnecessary use of the testclock
* Verify each test is testing what it actually says it does (and modify
where it doesn't)
In general, make the tests much smaller, better, more maintainable and
readable.
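A rough sketch of the push_test_buffer idea, assuming a
GstHarness-based test (names are approximations, not the exact patch):

    #include <gst/check/gstcheck.h>
    #include <gst/check/gstharness.h>

    /* advance the test clock to the buffer's dts before pushing, so
     * dts and clock time stay correlated (the classic test-mistake
     * is pushing buffers whose dts has no relation to the clock) */
    static void
    push_test_buffer (GstHarness * h, GstBuffer * buf)
    {
      gst_harness_set_time (h, GST_BUFFER_DTS (buf));
      fail_unless_equals_int (GST_FLOW_OK, gst_harness_push (h, buf));
    }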
https://bugzilla.gnome.org/show_bug.cgi?id=774409
A new signal named on-bundled-ssrc is provided and can be
used by the application to redirect a stream to a different
GstRtpSession or to keep the RTX stream grouped within the
GstRtpSession of the same media type.
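A minimal sketch of a handler, assuming the callback returns the id
of the session the SSRC should be routed to (the routing rule below
is a made-up example):

    #include <gst/gst.h>

    static guint
    on_bundled_ssrc_cb (GstElement * rtpbin, guint ssrc,
        gpointer user_data)
    {
      /* toy rule: even SSRCs go to session 1, odd ones to session 2 */
      return (ssrc % 2 == 0) ? 1 : 2;
    }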
https://bugzilla.gnome.org/show_bug.cgi?id=772740
When doing rtx, the jitterbuffer will always add an rtx-timer for the next
sequence number.
If the packet corresponding to that sequence number arrives, that
same timer will be reused and simply moved on to wait for the
following sequence number, and so on.
Once an rtx-timer expires (after all retries), it will be rescheduled as
a lost-timer instead for the same sequence number.
Now, if that particular sequence number arrives (after the timer has
become a lost-timer), the reuse mechanism *should* set a new
rtx-timer for the next sequence number, but the bug is that it does
not change the timer type, and hence schedules a lost-timer for that
following sequence number. The result is that you get a very early
lost-event for a packet that might still arrive, and you will never
be able to send any rtx for it.
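A self-contained sketch of the fix, with all names hypothetical (the
real timer structures live inside the jitterbuffer):

    #include <glib.h>

    typedef enum { TIMER_TYPE_EXPECTED, TIMER_TYPE_LOST } TimerType;

    typedef struct {
      guint16 seqnum;
      TimerType type;
    } RtxTimer;

    /* reuse the timer for the next seqnum; the bug was that a timer
     * that had already turned into a lost-timer kept its LOST type,
     * scheduling a premature lost-event for the following packet */
    static void
    reuse_timer_for_next_seqnum (RtxTimer * timer)
    {
      timer->seqnum++;
      timer->type = TIMER_TYPE_EXPECTED;  /* the missing reset */
    }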
Found by Erlend Graff - erlend@pexip.com
https://bugzilla.gnome.org/show_bug.cgi?id=773891
The lost-event was using a different time-domain (dts) than the
outgoing buffers (pts). Under certain network conditions these two
would drift far enough apart that the lost-event carried a badly
wrong timestamp/duration. For example, GstAudioDecoder could produce
a stream that jumps back and forth in time after receiving a
lost-event.
Previously the pts was calculated (based on the rtptime) inside the
rtp_jitter_buffer_insert function; this has now been refactored into
a new function, rtp_jitter_buffer_calculate_pts, which is called much
earlier in the _chain function so that the pts is available to the
various calculations that previously (and wrongly) used dts, like
the lost-event.
There are however two calculations where using dts is the right thing to
do: calculating the receive-jitter and the rtx-round-trip-time, where the
arrival time of the buffer from the network is the right metric
(and is what dts in fact is today).
The patch also adds two tests regarding B-frames or the
“rtptime-going-backwards”-scenario, as there were some concerns that this
patch might break that behavior (which the tests show it does not).
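As a hedged sketch of the resulting rule of thumb (types and names
hypothetical, not the patch itself):

    #include <gst/gst.h>

    typedef struct {
      GstClockTime lost_event_time;  /* media time: from pts */
      GstClockTime jitter_input;     /* arrival time: from dts */
    } TimingState;

    static void
    track_packet_times (TimingState * state, GstClockTime pts,
        GstClockTime dts)
    {
      state->lost_event_time = pts;  /* previously this wrongly used dts */
      state->jitter_input = dts;     /* jitter and rtx-rtt keep dts */
    }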
The new timeout is always going to be (timeout + delay); the old
behavior, however, compared the current timeout to just (timeout),
which is (delay) off.
This would happen if rtx-delay == rtx-retry-timeout, with the result that
a second rtx attempt for any buffers would be scheduled immediately instead
of after rtx-delay ms.
Simply calculate (new_timeout = timeout + delay) and then use that instead.
https://bugzilla.gnome.org/show_bug.cgi?id=773905
It's been broken for years, and it's unlikely it will ever be fixed
for collectpads/videomixer now that there's compositor, which works
fine. So let's disable it, since all it does is create noise that
distracts from other failures.
Also see the corresponding adder bug as it failed in the same way:
https://bugzilla.gnome.org/show_bug.cgi?id=708891
It seems that the forked processes all attempt to handle the listening
socket from the server, and it only takes one of them shutting down the
socket to break the server completely.
Create a new server inside each test to avoid this.
https://bugzilla.gnome.org/show_bug.cgi?id=772656
The tests accumulate buffers in GstCheck's buffers list, but the list
is not (consistently) reset between tests. Reset it, and remove the
now-conflicting unrefs of outbuffers.
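Concretely, the per-test cleanup needs something like this sketch
(buffers is GstCheck's global list):

    #include <gst/check/gstcheck.h>

    /* reset GstCheck's global buffer list between tests so buffers
     * accumulated by one test don't leak into the next */
    static void
    cleanup_buffers (void)
    {
      g_list_free_full (buffers, (GDestroyNotify) gst_buffer_unref);
      buffers = NULL;
    }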
https://bugzilla.gnome.org/show_bug.cgi?id=772644
Work around source_root being the root directory of all projects in
the subproject case, and remove the now-unneeded getpluginsdir.
Bump meson requirement to 0.35
The basic idea is this:
1. For *larger* rtx-rtt, weigh a new measurement as before
2. For *smaller* rtx-rtt, be a bit more conservative and weigh a bit less
3. For very large measurements, consider them "outliers"
and count them a lot less
The idea is that reducing the rtx-rtt is much more harmful than
increasing it, since we don't want to underestimate the rtt of the
network; when using this number to estimate the latency you need for
your jitterbuffer, you would rather have it a bit too large than a bit
too small and risk losing rtx-packets. The "outlier detector" is there
to prevent a single skewed measurement from affecting the outcome too
much; on wireless networks, such measurements are surprisingly common.
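A sketch of the weighting scheme; the thresholds and weights below
are illustrative assumptions, not the values from the patch:

    #include <glib.h>

    static gdouble
    update_rtx_rtt (gdouble avg, gdouble measurement)
    {
      if (measurement > 2 * avg)   /* outlier: count it a lot less */
        return 0.99 * avg + 0.01 * measurement;
      if (measurement > avg)       /* larger rtx-rtt: weigh as before */
        return 0.5 * avg + 0.5 * measurement;
      /* smaller rtx-rtt: be more conservative, weigh a bit less */
      return 0.9 * avg + 0.1 * measurement;
    }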
https://bugzilla.gnome.org/show_bug.cgi?id=769768
Assuming equidistant packet spacing when that's not the case leads to
more loss than necessary under reordering and jitter. Non-equidistant
spacing is typical for video, where one frame often consists of
multiple packets with the same rtp timestamp. In that case it's better
to assume that the missing packets have the same timestamp as the last
received packet, so that the scheduled lost timer does not time out too
early, causing packets to be considered lost even though they may
arrive in time.
https://bugzilla.gnome.org/show_bug.cgi?id=769768
There is no need to schedule another EXPECTED timer if we're already
past the retry period. Under normal operation this won't happen, but if
there are more timers than the jitterbuffer is able to process in
real-time, scheduling more timers will just make the situation worse.
Instead, consider this packet as lost and move on. This scenario can
occur with high loss rate, low rtt and high configured latency.
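The added check amounts to something like this sketch (names
hypothetical):

    #include <gst/gst.h>

    /* once past the retry period there is no point scheduling yet
     * another EXPECTED (rtx) timer; declare the packet lost instead */
    static gboolean
    should_schedule_expected (GstClockTime now, GstClockTime expected,
        GstClockTime rtx_retry_period)
    {
      return now <= expected + rtx_retry_period;
    }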
https://bugzilla.gnome.org/show_bug.cgi?id=769768
This patch fixes an issue with the estimated gap duration when there
is a gap immediately after a lost timer has been processed. Previously
there was a discrepancy between the gap in seqnum and the gap in dts,
which would lead to a wrongly calculated duration. The issue would only
be seen with retransmission enabled, since when it is disabled lost
timers are only created when a packet is received and the actual gap
length and last dts are known.
https://bugzilla.gnome.org/show_bug.cgi?id=769768
Stats should also be collected for unsuccessful packets.
rtx-rtt is very important for determining the necessary configured
latency on the jitterbuffer. It's especially important to be able to
increase the latency when retransmitted packets arrive too late and are
considered lost. This patch includes these late packets in the
calculation of the various rtx stats, making them more correct and
useful.
Also in the case where the original packet arrives after a NACK is sent,
the received RTX packet should update the stats since it provides useful
information about RTT.
The RTT is updated if and only if all requested retransmissions are
received; that way the RTT is guaranteed to make sense. Otherwise we
don't know which request the packet is a response to, and the RTT may
be bogus.
A consequence of this patch is that the RTT is not updated for a
request when one of the RTX packets for that seqnum is lost, but the
RTT that is measured will then be more accurate.
The implementation stores the RTX information from the timed-out
timers and uses it when the retransmitted packet arrives. These timers
are stored separately from the "normal" timers so as not to impact
performance (see the attached performance test).
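A sketch of that bookkeeping with hypothetical names; only the
general shape is implied by the description above:

    #include <gst/gst.h>

    typedef struct {
      GstClockTime last_rtx_request;  /* when the last NACK was sent */
      guint num_requests;             /* retransmissions requested */
      guint num_received;             /* retransmissions received */
    } RtxInfo;

    /* called when a retransmitted packet arrives after its timer
     * already timed out; `timed_out` maps seqnum -> RtxInfo and is
     * kept separate from the normal timers for performance */
    static void
    late_rtx_arrived (GHashTable * timed_out, guint seqnum,
        GstClockTime now, GstClockTime * rtx_rtt)
    {
      RtxInfo *info =
          g_hash_table_lookup (timed_out, GUINT_TO_POINTER (seqnum));

      if (info == NULL)
        return;

      info->num_received++;
      /* only derive an rtt when every requested rtx was received,
       * otherwise we can't tell which request this packet answers */
      if (info->num_received == info->num_requests)
        *rtx_rtt = now - info->last_rtx_request;
    }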
https://bugzilla.gnome.org/show_bug.cgi?id=769768