Sometimes all the buffers are received before we lock the check_mutex,
in which case g_cond_wait will wait forever for another one. Just check
whether this is the case before waiting.
https://bugzilla.gnome.org/attachment.cgi?id=358397
This patch simplifies the tests (44% less code) and
makes them much more readable.
The provided SessionHarness also makes it much easier
to write new tests for rtpsession.
https://bugzilla.gnome.org/show_bug.cgi?id=791070
If the use-robust-muxing property is set, check whether the assigned
muxer has reserved-max-duration and reserved-duration-remaining
properties. If so, pass the configured maximum duration to the muxer's
reserved-max-duration property, and monitor the remaining space so a
new file is started if the reserved header space is about to run out -
even though it never ought to.
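Roughly, the property check looks like this (a sketch; names and the
rotation threshold are illustrative):

    #include <gst/gst.h>

    /* Ask a robust-muxing-capable muxer to reserve header space for the
     * configured maximum fragment duration. */
    static void
    maybe_enable_robust_muxing (GstElement * muxer, GstClockTime max_duration)
    {
      GObjectClass *klass = G_OBJECT_GET_CLASS (muxer);

      if (g_object_class_find_property (klass, "reserved-max-duration") &&
          g_object_class_find_property (klass, "reserved-duration-remaining"))
        g_object_set (muxer, "reserved-max-duration", max_duration, NULL);
    }

    /* Polled while muxing: rotate to a new file when the reservation is
     * nearly used up (threshold here is just an example). */
    static gboolean
    reservation_nearly_exhausted (GstElement * muxer)
    {
      guint64 remaining = GST_CLOCK_TIME_NONE;

      g_object_get (muxer, "reserved-duration-remaining", &remaining, NULL);
      return remaining != GST_CLOCK_TIME_NONE && remaining < GST_SECOND;
    }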
Switching to a new fragment because the input caps have
changed didn't properly end the previous file. Use the normal
EOS sequence to ensure that happens. Add a test that it works.
SoupSession's ssl-ca-file property is deprecated. Use the recommended
tls-database property.
This is a bit more complex as it requires creating a GTlsFileDatabase
object for an absolute (!) path to the CA certificates file.
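A sketch of the switch (the CA file path is passed in here; the real
code resolves it to an absolute path first, as GTlsFileDatabase
requires):

    #include <gio/gio.h>
    #include <libsoup/soup.h>

    static void
    set_ca_database (SoupSession * session, const gchar * abs_ca_path)
    {
      GError *error = NULL;
      GTlsDatabase *db = g_tls_file_database_new (abs_ca_path, &error);

      if (db == NULL) {
        g_warning ("failed to load CA database: %s", error->message);
        g_clear_error (&error);
        return;
      }
      /* replaces the deprecated ssl-ca-file property */
      g_object_set (session, "tls-database", db, NULL);
      g_object_unref (db);
    }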
https://bugzilla.gnome.org/show_bug.cgi?id=784005
Even though it's hooked up to the build system, it's clear that no one
has ever built or used this with GStreamer 1.x. It wants to link
against libgstinterfaces, which no longer exists, it uses 0.10-style
raw audio caps, and the last meaningful change was made in 2009.
Let's just remove it.
The streamheader and codec_data buffer fields are only meant to be
in the negotiated caps, not the template caps.
Fixes false-positive leaks of those buffers detected by the leaks
tracer, as template caps are static, and we decided to not include
code in gstreamer core to handle this unusual case of template caps
having buffers in them.
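For reference, a sketch of how streamheader buffers end up in the
negotiated caps only (not in the static template caps):

    #include <gst/gst.h>

    static void
    set_streamheader (GstCaps * caps, GstBuffer * header)
    {
      GValue array = G_VALUE_INIT;
      GValue value = G_VALUE_INIT;

      g_value_init (&array, GST_TYPE_ARRAY);
      g_value_init (&value, GST_TYPE_BUFFER);
      gst_value_set_buffer (&value, header);
      gst_value_array_append_value (&array, &value);
      gst_structure_set_value (gst_caps_get_structure (caps, 0),
          "streamheader", &array);
      g_value_unset (&value);
      g_value_unset (&array);
    }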
https://bugzilla.gnome.org/show_bug.cgi?id=768762
Some radio streams use StreamTitle='' to reset the title after a
track has stopped playing, e.g. while the host talks between tracks or
during news segments.
This change forces an empty tag object to be distributed if
StreamTitle or StreamUrl is received with an empty value, thus allowing
downstream elements to get notified about this.
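Conceptually something like this (a sketch, not the exact icydemux
code):

    #include <gst/gst.h>

    /* Push an empty tag list so downstream clears the previous title. */
    static void
    push_empty_tags (GstPad * srcpad)
    {
      GstTagList *tags = gst_tag_list_new_empty ();

      /* gst_event_new_tag() takes ownership of the tag list */
      gst_pad_push_event (srcpad, gst_event_new_tag (tags));
    }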
https://bugzilla.gnome.org/show_bug.cgi?id=778437
Add a new signal for formatting the filename, which receives
a GstSample containing the first buffer from the reference
stream that will be muxed into that file.
Useful for creating filenames that are based on the
running time or other attributes of the buffer.
To make it work, opening files and setting filenames is now deferred
until there is some data to write to them, which also requires some
changes to how async state changes and gap events are handled.
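An application callback might look like this (assuming the signal is
splitmuxsink's format-location-full and that it hands over a GstSample
of the first buffer; the filename scheme is just an example):

    #include <gst/gst.h>

    static gchar *
    format_location_cb (GstElement * splitmux, guint fragment_id,
        GstSample * first_sample, gpointer user_data)
    {
      GstBuffer *buf = gst_sample_get_buffer (first_sample);

      /* name the fragment after its index and the first buffer's pts */
      return g_strdup_printf ("/tmp/out-%05u-%" G_GUINT64_FORMAT ".mp4",
          fragment_id, GST_BUFFER_PTS (buf));
    }

    /* connected with:
     *   g_signal_connect (sink, "format-location-full",
     *       G_CALLBACK (format_location_cb), NULL);
     */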
matroskamux now marks all packets of audio-only streams as keyframes,
so in test_block_group, after pushing the test audio data, four buffers
are produced instead of two. The last buffer is the original data and
must match what was pushed; the remaining ones are matroskamux headers.
https://bugzilla.gnome.org/show_bug.cgi?id=754696
* Changed PCMU->TEST for common macros
* Changed verify-functions (lost & rtx) into macros.
* Remove option to add marker-bit for test-buffers (not used anywhere)
* Add new push_test_buffer function that makes sure there is correlation
  between dts and the time on the clock. (classic test-mistake)
* Established a generic starting-point for tests with the
construct_deterministic_initial_state function and use it where
applicable, which removes lots of "boilerplate" everywhere.
* Add basic lost-event test
* Remove as many "magic constants" as possible.
* Remove 3 tests that no longer test anything the others don't,
  and were completely unmaintainable.
* Remove unnecessary use of the testclock
* Verify each test is testing what it actually says it does (and modify
where it doesn't)
In general, make the tests much smaller, better, more maintainable and
readable.
https://bugzilla.gnome.org/show_bug.cgi?id=774409
A new signal named on-bundled-ssrc is provided and can be
used by the application to redirect a stream to a different
GstRtpSession or to keep the RTX stream grouped within the
GstRtpSession of the same media type.
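For example (a sketch; the callback is assumed to receive the new SSRC
and return the id of the session it should be routed to):

    #include <gst/gst.h>

    static guint
    on_bundled_ssrc_cb (GstElement * rtpbin, guint ssrc, gpointer user_data)
    {
      /* e.g. look the SSRC up in an application-level media-type map;
       * here everything simply goes to session 0 */
      return 0;
    }

    /* g_signal_connect (rtpbin, "on-bundled-ssrc",
     *     G_CALLBACK (on_bundled_ssrc_cb), NULL); */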
https://bugzilla.gnome.org/show_bug.cgi?id=772740
When doing rtx, the jitterbuffer will always add an rtx-timer for the next
sequence number.
If the packet corresponding to that sequence number arrives, that same
timer is reused and simply moved on to wait for the following sequence
number, etc.
Once an rtx-timer expires (after all retries), it will be rescheduled as
a lost-timer instead for the same sequence number.
Now, if this particular sequence number arrives after the timer has
become a lost-timer, the reuse mechanism *should* set a new rtx-timer
for the next sequence number, but the bug is that it does not change
the timer-type, and hence schedules a lost-timer for that following
sequence number. The result is that you get a very early lost-event
for a packet that might still arrive, and you will never be able to
send any rtx for it.
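In pseudo-code the fix amounts to resetting the timer type when the
timer is moved on (hypothetical names, not the actual jitterbuffer
code):

    #include <gst/gst.h>

    typedef enum { TIMER_TYPE_EXPECTED, TIMER_TYPE_LOST } TimerType;

    typedef struct {
      guint16 seqnum;
      GstClockTime timeout;
      TimerType type;
    } RtxTimer;

    static void
    reuse_timer_for_next_seqnum (RtxTimer * timer, guint16 next_seqnum,
        GstClockTime next_timeout)
    {
      timer->seqnum = next_seqnum;
      timer->timeout = next_timeout;
      /* the fix: force the type back to EXPECTED; previously a timer
       * that had already become a lost-timer kept that type and so
       * scheduled a premature lost-event for the following seqnum */
      timer->type = TIMER_TYPE_EXPECTED;
    }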
Found by Erlend Graff - erlend@pexip.com
https://bugzilla.gnome.org/show_bug.cgi?id=773891
The lost-event was using a different time-domain (dts) than the outgoing
buffers (pts). Given certain network conditions these two could become
different enough that the lost-event contained a timestamp/duration
that was simply wrong. As an example, GstAudioDecoder could produce
a stream that jumps back and forth in time after receiving a lost-event.
The previous behavior calculated the pts (based on the rtptime) inside the
rtp_jitter_buffer_insert function, but now this functionality has been
refactored into a new function rtp_jitter_buffer_calculate_pts that is
called much earlier in the _chain function to make pts available to
various calculations that wrongly used dts previously
(like the lost-event).
There are however two calculations where using dts is the right thing to
do: calculating the receive-jitter and the rtx-round-trip-time, where the
arrival time of the buffer from the network is the right metric
(and is what dts in fact is today).
The patch also adds two tests regarding B-frames or the
“rtptime-going-backwards”-scenario, as there were some concerns that this
patch might break this behavior (which the tests show it does not).
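For illustration, the lost-event now carries pts-based values, roughly
along these lines (a sketch using the GstRTPPacketLost custom event;
the exact fields in the real code may differ):

    #include <gst/gst.h>

    static GstEvent *
    make_lost_event (guint16 seqnum, GstClockTime pts, GstClockTime duration)
    {
      /* timestamp/duration are derived from pts, not dts */
      return gst_event_new_custom (GST_EVENT_CUSTOM_DOWNSTREAM,
          gst_structure_new ("GstRTPPacketLost",
              "seqnum", G_TYPE_UINT, (guint) seqnum,
              "timestamp", G_TYPE_UINT64, pts,
              "duration", G_TYPE_UINT64, duration, NULL));
    }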
The new timeout is always going to be (timeout + delay); however, the
old behavior compared the current timeout to just (timeout), basically
being (delay) off.
This would happen if rtx-delay == rtx-retry-timeout, with the result that
a second rtx attempt for any buffers would be scheduled immediately instead
of after rtx-delay ms.
Simply calculate (new_timeout = timeout + delay) and then use that instead.
https://bugzilla.gnome.org/show_bug.cgi?id=773905
It's been broken for years, and it's unlikely it will ever
be fixed for collectpads/videomixer now that there's compositor
which works fine. So let's disable it, since all it does is create
noise that distracts from other failures.
Also see the corresponding adder bug as it failed in the same way:
https://bugzilla.gnome.org/show_bug.cgi?id=708891
It seems that the forked processes all attempt to handle the listening
socket from the server, and only one has to shut down the socket to break
the server completely.
Create a new server inside each test to avoid this.
https://bugzilla.gnome.org/show_bug.cgi?id=772656
The tests accumulate buffers in GstCheck's buffers list, and the list is
not (consistently) reset between tests. Do that and remove the now
conflicting unrefs for outbuffers.
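A sketch of the per-test reset, using the gst_check_drop_buffers()
helper:

    #include <gst/check/gstcheck.h>

    static void
    cleanup (void)
    {
      /* unref and empty GstCheck's global "buffers" list so buffers
       * accumulated by one test don't leak into the next */
      gst_check_drop_buffers ();
    }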
https://bugzilla.gnome.org/show_bug.cgi?id=772644
The basic idea is this:
1. For *larger* rtx-rtt, weigh a new measurement as before
2. For *smaller* rtx-rtt, be a bit more conservative and weigh a bit less
3. For very large measurements, consider them "outliers"
and count them a lot less
The idea is that reducing the rtx-rtt is much more harmful than
increasing it, since we don't want to underestimate the rtt of the
network, and when using this number to estimate the latency you need
for your jitterbuffer, you would rather have it a bit too large than a
bit too small and potentially lose rtx-packets. The "outlier-detector"
is there to prevent a single skewed measurement from affecting the
outcome too much.
On wireless networks, these are surprisingly common.
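A sketch of such a weighting scheme (the weights and outlier threshold
here are illustrative, not the exact values used):

    #include <gst/gst.h>

    static GstClockTime
    update_rtx_rtt (GstClockTime avg_rtt, GstClockTime measurement)
    {
      gdouble weight;

      if (measurement > 2 * avg_rtt)
        weight = 0.02;      /* outlier: count it a lot less */
      else if (measurement > avg_rtt)
        weight = 0.25;      /* larger rtt: weigh as before */
      else
        weight = 0.1;       /* smaller rtt: be more conservative */

      return (GstClockTime) ((1.0 - weight) * avg_rtt + weight * measurement);
    }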
https://bugzilla.gnome.org/show_bug.cgi?id=769768
Assuming equidistant packet spacing when that's not the case leads to
more loss than necessary under reordering and jitter. Non-equidistant
spacing is typical for video, where one frame often consists of
multiple packets with the same rtp timestamp. In that case it's better
to assume that the missing packets have the same timestamp as the last
received packet, so that the scheduled lost timer does not time out too
early, causing the packets to be considered lost even though they may
arrive in time.
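In effect the timestamp estimate for a missing packet changes along
these lines (hypothetical names, illustrative only):

    #include <gst/gst.h>

    static GstClockTime
    estimate_missing_pts (GstClockTime last_pts, GstClockTime new_pts,
        guint gap, guint index, gboolean equidistant)
    {
      if (equidistant && gap > 0)
        /* old assumption: spread the gap evenly over missing packets */
        return last_pts +
            gst_util_uint64_scale (new_pts - last_pts, index + 1, gap + 1);
      /* non-equidistant streams (e.g. several packets per video frame):
       * reuse the last received timestamp so lost-timers don't fire early */
      return last_pts;
    }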
https://bugzilla.gnome.org/show_bug.cgi?id=769768
There is no need to schedule another EXPECTED timer if we're already
past the retry period. Under normal operation this won't happen, but if
there are more timers than the jitterbuffer is able to process in
real-time, scheduling more timers will just make the situation worse.
Instead, consider this packet as lost and move on. This scenario can
occur with high loss rate, low rtt and high configured latency.
https://bugzilla.gnome.org/show_bug.cgi?id=769768
This patch fixes an issue with the estimated gap duration when there is
a gap immediately after a lost timer has been processed. Previously
there was a discrepancy between the gap in seqnum and the gap in dts,
which would cause a wrongly calculated duration. The issue is only seen
with retransmission enabled, since when it's disabled lost timers are
only created when a packet is received, and the actual gap length and
last dts are known.
https://bugzilla.gnome.org/show_bug.cgi?id=769768