Always delay starting the RTCP thread until either an RTP or RTCP
packet is sent or received. Special handling is needed to make sure the
RTCP thread is started when an early RTCP packet is requested.
We want to delay starting the RTCP thread until it is needed in order
to not send RTCP packets for an inactive source.
https://bugzilla.gnome.org/show_bug.cgi?id=795139
This mode is useful for muxers that can take a long time to finalize a
file. Instead of blocking the whole upstream pipeline while the muxer is
finishing the file, we can unlink it and spawn a new muxer+sink
combination to continue running normally.
This requires us to receive the muxer and sink (if needed) as factories,
optionally accompanied by their respective properties structures. The
muxer-added and sink-added signals are also added, in case custom code
has to be run for the newly created elements.
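For illustration, configuring this mode could look roughly like the
sketch below (the property and signal names async-finalize,
muxer-factory, muxer-properties and muxer-added are assumptions based
on the description above, as is the matroskamux writing-app value):

    #include <gst/gst.h>

    static void
    muxer_added_cb (GstElement * splitmux, GstElement * muxer, gpointer user_data)
    {
      /* Run custom code for each newly spawned muxer here. */
      GST_INFO_OBJECT (muxer, "new muxer created for the next fragment");
    }

    static GstElement *
    make_async_finalizing_sink (void)
    {
      GstElement *splitmux = gst_element_factory_make ("splitmuxsink", NULL);
      GstStructure *muxer_props = gst_structure_new ("properties",
          "writing-app", G_TYPE_STRING, "example", NULL);

      g_object_set (splitmux,
          "async-finalize", TRUE,          /* don't block upstream while finalizing */
          "muxer-factory", "matroskamux",  /* a fresh muxer per fragment */
          "muxer-properties", muxer_props,
          NULL);
      g_signal_connect (splitmux, "muxer-added",
          G_CALLBACK (muxer_added_cb), NULL);

      gst_structure_free (muxer_props);
      return splitmux;
    }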
https://bugzilla.gnome.org/show_bug.cgi?id=783754
If obtain_internal_source() returns a source that is not internal, it
means there exists a non-internal source with the same ssrc. Such an
ssrc collision should be handled by sending a GstRTPCollision event
upstream and choosing a new ssrc, but for now we simply drop the packet.
Trying to process the packet further would cause it to be pushed
upstream (!) since the source is not internal (see source_push_rtp()).
https://bugzilla.gnome.org/show_bug.cgi?id=795139
If there is an external source which is about to time out and be
removed from the source hashtable, and we receive a feedback RTCP packet
with the media ssrc of that source, we unlock the session in
rtp_session_process_feedback before emitting the 'on-feedback-rtcp'
signal, allowing the RTCP timer to kick in and grab the lock. It will
get rid of the source, and rtp_session_process_feedback will be left
with an RTPSource whose ref count is 0.
The fix is to take a ref on the RTPSource object in
rtp_session_process_feedback.
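The general pattern is to keep a reference across the unlocked signal
emission. A simplified sketch of that pattern (not the actual rtpsession
code; the function and callback names here are illustrative):

    #include <glib-object.h>

    static void
    process_feedback (GMutex * lock, GHashTable * sources, guint32 ssrc,
        void (*emit_on_feedback_rtcp) (gpointer source))
    {
      gpointer source;

      g_mutex_lock (lock);
      source = g_hash_table_lookup (sources, GUINT_TO_POINTER (ssrc));
      if (source)
        g_object_ref (source);        /* the fix: own a ref across the emission */
      g_mutex_unlock (lock);          /* the timer thread may now drop the source */

      if (source) {
        emit_on_feedback_rtcp (source);   /* runs without the lock held */
        g_object_unref (source);
      }
    }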
https://bugzilla.gnome.org/show_bug.cgi?id=795139
These are the sources we send from, so there is no reason to
report receive statistics for them (as we do not receive on them,
and the remote side has no knowledge of them).
https://bugzilla.gnome.org/show_bug.cgi?id=795139
ULP FEC, as defined in RFC 5109, has the protected and protection
packets share the same ssrc while using different payload types, and
implies rewriting the seqnums of the protected stream when encoding
the protection packets. This has the unfortunate drawback of making it
impossible to tell whether a lost packet was a protection packet.
rtpbasedepayload relies on gaps in the seqnums to set the DISCONT
flag on buffers it outputs. Before this commit, this created two
problems:
* The protection packets don't make it as far as the depayloader,
which means it will mark buffers as DISCONT every time the previous
packets were protected
* While we could work around the previous issue by looking at
the protection packets ignored and dropped in rtpptdemux, we
would still mark buffers as DISCONT when a FEC packet was lost,
as we cannot know that it was indeed a FEC packet, even though
this should have no impact on the decoding of the stream
With this commit, we consider that when using ULPFEC, gaps in
the seqnums are not a reliable indicator of whether buffers should
be marked as DISCONT or not, and thus rewrite the seqnums on
the decoding side as well to form a perfect sequence. This
obviously doesn't prevent the jitterbuffer from doing its job,
as the ulpfec decoder is downstream from it.
https://bugzilla.gnome.org/show_bug.cgi?id=794909
This reverts commit af273b4de9.
While RFC 3264 (SDP) says that sendonly/recvonly are from the point of view of
the requester, the actual RTSP RFCs (RFC 2326 / 7826) disagree and say
the opposite, just like the ONVIF standard.
Let's follow those RFCs as we're doing RTSP here, and add a property at
a later time if needed to switch to the SDP RFC behaviour.
https://bugzilla.gnome.org/show_bug.cgi?id=793964
Tested on Linux with X11/Wayland and semi-tested on Windows.
Windows crashes on item destruction, but this is better than nothing.
Also fix up some win32 build issues along the way with mismatched {} and
G_STMT_{START,END}.
Looping the test 500 times to only execute the test once every
33 iterations means we initialized and deinitialized GStreamer 467 times
for no reason at all, which was annoying when running the test
under valgrind.
After investigating, we do dispose of the TLS connections
appropriately in the souphttpsrc test, which in turn
calls gnutls_deinit, but certificates get leaked anyway.
We expose a set of new elements:
* ULPFEC encoder / decoder
* A storage element, which should be placed before jitterbuffers,
and is used to store packets in order to attempt reconstruction
after the jitterbuffer has sent PacketLost events
* RED encoder / decoder (RFC 2198); these are necessary to
use FEC in WebRTC, as browsers will propose and expect ULPFEC
packets to be wrapped in RED packets
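As an illustration of how these elements fit together, here is a
hypothetical sender-side sketch; the element names (rtpulpfecenc,
rtpredenc) and properties (pt, percentage) are assumptions based on the
description above, not a verified pipeline:

    #include <gst/gst.h>

    int
    main (int argc, char **argv)
    {
      GstElement *pipeline;
      GError *error = NULL;

      gst_init (&argc, &argv);

      pipeline = gst_parse_launch (
          "videotestsrc ! vp8enc ! rtpvp8pay "
          "! rtpulpfecenc pt=122 percentage=50 "  /* generate FEC packets        */
          "! rtpredenc pt=123 "                   /* wrap them in RED (RFC 2198) */
          "! udpsink host=127.0.0.1 port=5000", &error);
      if (pipeline == NULL) {
        g_printerr ("parse error: %s\n", error->message);
        g_clear_error (&error);
        return 1;
      }

      gst_element_set_state (pipeline, GST_STATE_PLAYING);
      g_usleep (5 * G_USEC_PER_SEC);
      gst_element_set_state (pipeline, GST_STATE_NULL);
      gst_object_unref (pipeline);
      return 0;
    }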
With contributions from:
Mathieu Duponchelle <mathieu@centricular.com>
Sebastian Dröge <sebastian@centricular.com>
https://bugzilla.gnome.org/show_bug.cgi?id=792696
When the signal returns a floating reference, as its return type
is transfer full, we need to sink it ourselves before passing
it to gst_bin_add (which is transfer floating).
This allows us to unref it in bin_remove_element later on, and
thus to also release the reference we now own if the signal
returns a non-floating reference as well.
As we now still hold a reference to the element when removing it,
we also need to lock its state and set it to NULL before
unreffing it.
Also update the request_aux_sender test.
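An illustrative sketch of the resulting reference handling (not the
exact rtpbin code), where ownership is normalized so that we always hold
exactly one reference we can release on removal:

    #include <gst/gst.h>

    static void
    add_requested_element (GstBin * bin, GstElement * element)
    {
      /* A floating reference returned by the signal is sunk so it becomes
       * ours; a non-floating (transfer full) return is already ours.
       * Either way we now hold one reference that outlives gst_bin_add(). */
      if (g_object_is_floating (element))
        gst_object_ref_sink (element);

      gst_bin_add (bin, element);       /* the bin takes its own reference */

      /* ... later, when removing the element again:
       *   gst_element_set_locked_state (element, TRUE);
       *   gst_element_set_state (element, GST_STATE_NULL);
       *   gst_bin_remove (bin, element);
       *   gst_object_unref (element);
       */
    }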
https://bugzilla.gnome.org/show_bug.cgi?id=792543
TOC support in matroskamux has been deactivated for a couple of years. This commit updates it to the recent GstToc evolutions and introduces TOC unit tests for both matroska-mux and matroska-demux.
There are two UIDs for Chapters in Matroska's specifications:
- The ChapterUID is a mandatory unsigned integer which internally refers to a given chapter. Except for title & language which use dedicated fields, this UID can also be used to add tags to the Chapter. The tags come in a separate section of the container.
- The ChapterStringUID is an optional UTF-8 string which also uniquely refers to a chapter but from an external perspective. It can act as a "WebVTT cue identifier" which "can be used to reference a specific cue, for example from script or CSS".
During muxing, the ChapterUID is generated and checked for uniqueness, while the ChapterStringUID receives the user-defined UID. In order to be able to refer to chapters from the tags section, we maintain an internal Toc tree with the generated ChapterUIDs.
When demuxing, the ChapterStringUIDs (if available) are assigned to the GstTocEntry UIDs, and an internal TOC mirroring the external one is used to keep track of the ChapterUIDs and match the tags with the appropriate GstTocEntries.
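For reference, building a simple chapter TOC on the muxing side could look roughly like this (an illustrative sketch using the public GstToc API; the string UIDs are arbitrary examples):

    #include <gst/gst.h>

    static GstToc *
    build_example_toc (void)
    {
      GstToc *toc = gst_toc_new (GST_TOC_SCOPE_GLOBAL);
      GstTocEntry *edition, *chapter;

      /* One edition containing one chapter; the entry UIDs below are what
       * would end up as ChapterStringUIDs in the Matroska file. */
      edition = gst_toc_entry_new (GST_TOC_ENTRY_TYPE_EDITION, "edition.1");
      gst_toc_append_entry (toc, edition);

      chapter = gst_toc_entry_new (GST_TOC_ENTRY_TYPE_CHAPTER, "chapter.intro");
      gst_toc_entry_set_start_stop_times (chapter, 0, 10 * GST_SECOND);
      gst_toc_entry_set_tags (chapter,
          gst_tag_list_new (GST_TAG_TITLE, "Intro", NULL));
      gst_toc_entry_append_sub_entry (edition, chapter);

      /* Push toward the muxer with gst_event_new_toc (toc, FALSE). */
      return toc;
    }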
https://bugzilla.gnome.org/show_bug.cgi?id=790686
Sometimes all the buffers are received before we lock the
check_mutex, in which case g_cond_wait will wait forever for another
one. Just check whether this is the case before waiting.
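The usual pattern (a generic sketch, not the literal test code) is to
re-check the predicate under the lock before and after every wait:

    #include <gst/check/gstcheck.h>

    static void
    wait_for_buffers (guint expected)
    {
      g_mutex_lock (&check_mutex);
      /* If all buffers already arrived before we took the lock, we must
       * not wait, or we would wait forever for a signal that has already
       * fired. */
      while (g_list_length (buffers) < expected)
        g_cond_wait (&check_cond, &check_mutex);
      g_mutex_unlock (&check_mutex);
    }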
https://bugzilla.gnome.org/attachment.cgi?id=358397
This patch simplifies the tests (44% less code) and
makes them much more readable.
The provided SessionHarness also makes it much easier
to write new tests for rtpsession.
https://bugzilla.gnome.org/show_bug.cgi?id=791070
If the use-robust-muxing property is set, check whether the
assigned muxer has reserved-max-duration and
reserved-duration-remaining properties. If so, set
the configured maximum duration on the reserved-max-duration
property and monitor the remaining space, starting
a new file if the reserved header space is about to run out -
even though it never ought to.
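A small usage sketch (assuming splitmuxsink's use-robust-muxing, muxer
and location properties, and a qtmux-style muxer exposing the reserved-*
properties mentioned above):

    #include <gst/gst.h>

    static GstElement *
    make_robust_mp4_sink (const gchar * location_pattern)
    {
      GstElement *splitmux = gst_element_factory_make ("splitmuxsink", NULL);
      GstElement *muxer = gst_element_factory_make ("mp4mux", NULL);

      /* mp4mux exposes reserved-max-duration / reserved-duration-remaining,
       * which splitmuxsink detects and drives when use-robust-muxing is set. */
      g_object_set (splitmux,
          "location", location_pattern,
          "muxer", muxer,
          "use-robust-muxing", TRUE,
          "max-size-time", 60 * GST_SECOND,
          NULL);
      return splitmux;
    }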
Switching to a new fragment because the input caps have
changed didn't properly end the previous file. Use the normal
EOS sequence to ensure that happens. Add a test that it works.
Except for gst/gl/gstglfuncs.h
It is up to the client app to include these headers.
This is coherent with the fact that gstreamer-gl.pc does not
require any egl.pc/gles.pc, i.e. it is the responsibility
of the app to locate these headers within its build setup.
For example gstreamer-vaapi explicitly includes EGL/egl.h
and searches for it in its configure.ac.
For example with this patch, if an app includes the headers
gst/gl/egl/gstglcontext_egl.h
gst/gl/egl/gstgldisplay_egl.h
gst/gl/egl/gstglmemoryegl.h
it will *no longer* automatically include EGL/egl.h and GLES2/gl2.h.
This is good because the app might want to use only the gstgl API,
without having to bother about GL headers.
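In practice, an application that needs the EGL-specific API now pulls
in the platform headers itself, for example (a minimal sketch):

    /* The application includes the platform GL/EGL headers itself,
     * alongside the gstgl EGL headers. */
    #include <EGL/egl.h>
    #include <GLES2/gl2.h>

    #include <gst/gl/gl.h>
    #include <gst/gl/egl/gstgldisplay_egl.h>

    int
    main (void)
    {
      GstGLDisplayEGL *display;

      gst_init (NULL, NULL);
      display = gst_gl_display_egl_new ();
      gst_object_unref (display);
      return 0;
    }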
Also added a test: cd tests/check && make libs/gstglheaders.check
https://bugzilla.gnome.org/show_bug.cgi?id=784779
SoupSession's ssl-ca-file property is deprecated. Use the recommended
tls-database property.
This is a bit more complex as it requires creating a GTlsFileDatabase
object for an absolute (!) path to the CA certificates file.
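Roughly, the replacement looks like this (a sketch of the GIO/libsoup
calls involved, not the exact souphttpsrc code):

    #include <gio/gio.h>
    #include <libsoup/soup.h>

    static gboolean
    set_ca_file (SoupSession * session, const gchar * ca_file)
    {
      GError *error = NULL;
      GTlsDatabase *db;
      gchar *abs_path;

      /* GTlsFileDatabase wants an absolute path to the certificate file. */
      if (g_path_is_absolute (ca_file)) {
        abs_path = g_strdup (ca_file);
      } else {
        gchar *cwd = g_get_current_dir ();
        abs_path = g_build_filename (cwd, ca_file, NULL);
        g_free (cwd);
      }

      db = g_tls_file_database_new (abs_path, &error);
      g_free (abs_path);
      if (db == NULL) {
        g_warning ("failed to load CA file: %s", error->message);
        g_clear_error (&error);
        return FALSE;
      }

      g_object_set (session, "tls-database", db, NULL);
      g_object_unref (db);
      return TRUE;
    }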
https://bugzilla.gnome.org/show_bug.cgi?id=784005
Recent GnuTLS disregards the Common Name and only looks at the Subject
Alternative Name extension. Since our test-cert has no SAN extension,
validation fails.
Generate a new certificate with SAN. In addition to 127.0.0.1, for good
measure make it valid for localhost and ::1, too.
https://bugzilla.gnome.org/show_bug.cgi?id=784005
Even though it is hooked up to the build system, it's clear that no one
has ever built or used this with GStreamer 1.x. It wants to link
against libgstinterfaces, which no longer exists, uses 0.10-style
raw audio caps, and the last meaningful change was made in 2009.
Let's just remove it.
That example only tested the property probe interface, which has been removed.
The same kind of thing can now be done with the generic gst-device-monitor tool.
The streamheader and codec_data buffer fields are only meant to be
in the negotiated caps, not the template caps.
This fixes false-positive leaks of those buffers detected by the leaks
tracer, as template caps are static, and we decided not to include
code in GStreamer core to handle this unusual case of template caps
having buffers in them.
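For illustration, this is roughly how streamheader buffers are attached
to the negotiated (not template) caps (a generic sketch):

    #include <gst/gst.h>

    static GstCaps *
    add_streamheader (GstCaps * caps, GstBuffer * header)
    {
      GValue array = G_VALUE_INIT;
      GValue value = G_VALUE_INIT;

      caps = gst_caps_make_writable (caps);

      g_value_init (&array, GST_TYPE_ARRAY);
      g_value_init (&value, GST_TYPE_BUFFER);
      gst_value_set_buffer (&value, header);
      gst_value_array_append_value (&array, &value);
      g_value_unset (&value);

      /* Only the caps that get negotiated on the pad carry the buffers;
       * the static template caps never do. */
      gst_structure_set_value (gst_caps_get_structure (caps, 0),
          "streamheader", &array);
      g_value_unset (&array);

      return caps;
    }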
https://bugzilla.gnome.org/show_bug.cgi?id=768762
Some radio streams use StreamTitle='' to reset the title after a
track has stopped playing, e.g. while the host talks between tracks or
during news segments.
This change forces an empty tag object to be distributed if
StreamTitle or StreamUrl is received with an empty value, thus allowing
downstream elements to be notified about this.
https://bugzilla.gnome.org/show_bug.cgi?id=778437
Add a new signal for formatting the filename, which receives
a GstSample containing the first buffer from the reference
stream that will be muxed into that file.
Useful for creating filenames that are based on the
running time or other attributes of the buffer.
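A usage sketch (assuming the signal is exposed on splitmuxsink as
format-location-full, receiving a fragment id and the GstSample; the
exact name and signature are not spelled out above):

    #include <gst/gst.h>

    /* Hypothetical handler: name each fragment after the timestamp of the
     * first buffer that will go into it. */
    static gchar *
    format_location_cb (GstElement * splitmux, guint fragment_id,
        GstSample * first_sample, gpointer user_data)
    {
      GstBuffer *buf = gst_sample_get_buffer (first_sample);
      guint64 pts = GST_BUFFER_PTS (buf);

      return g_strdup_printf ("/tmp/recording-%" G_GUINT64_FORMAT "-%05u.mkv",
          pts, fragment_id);
    }

    /* g_signal_connect (splitmuxsink, "format-location-full",
     *     G_CALLBACK (format_location_cb), NULL); */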
To make it work, opening of files and setting filenames is
now deferred until there is some data to write to it,
which also requires some changes to how async state changes
and gap events are handled.
Now matroskamux marks all packets of audio-only streams as keyframes,
so in test_block_group, after pushing the test audio data, 4 buffers are
produced instead of just 2. The last buffer is the original data and must
match what was pushed. The remaining ones are matroskamux headers.
https://bugzilla.gnome.org/show_bug.cgi?id=754696
* Changed PCMU->TEST for common macros
* Changed verify-functions (lost & rtx) into macros.
* Remove option to add marker-bit for test-buffers (not used anywhere)
* Add new push_test_buffer function that makes sure there is correlation
between the dts and the time on the clock (a classic test mistake).
* Established a generic starting-point for tests with the
construct_deterministic_initial_state function and use it where
applicable, which removes lots of "boilerplate" everywhere.
* Add basic lost-event test
* Remove as many "magic constants" as possible.
* Remove 3 tests that no longer test anything the others don't,
and were completely unmaintainable.
* Remove unnecessary use of the testclock
* Verify each test is testing what it actually says it does (and modify
where it doesn't)
In general, make the tests much smaller, better, more maintainable and
readable.
https://bugzilla.gnome.org/show_bug.cgi?id=774409
A new signal named on-bundled-ssrc is provided and can be
used by the application to redirect a stream to a different
GstRtpSession or to keep the RTX stream grouped within the
GstRtpSession of the same media type.
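A connection sketch (assuming the signal lives on rtpbin and returns the
id of the session the SSRC should be routed to; the exact signature is
not spelled out above):

    #include <gst/gst.h>

    /* Hypothetical handler: route every bundled SSRC to RTP session 0. */
    static guint
    on_bundled_ssrc_cb (GstElement * rtpbin, guint ssrc, gpointer user_data)
    {
      g_print ("bundled ssrc %08x\n", ssrc);
      return 0;     /* id of the GstRtpSession to assign this SSRC to */
    }

    /* g_signal_connect (rtpbin, "on-bundled-ssrc",
     *     G_CALLBACK (on_bundled_ssrc_cb), NULL); */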
https://bugzilla.gnome.org/show_bug.cgi?id=772740
When doing rtx, the jitterbuffer will always add an rtx-timer for the next
sequence number.
In the case of the packet corresponding to that sequence number arriving,
that same timer will be reused, and simply moved on to wait for the
following sequence number etc.
Once an rtx-timer expires (after all retries), it will be rescheduled as
a lost-timer instead for the same sequence number.
Now, if this particular sequence number arrives (after the timer has
become a lost-timer), the reuse mechanism *should* set a new
rtx-timer for the next sequence number, but the bug is that it does
not change the timer-type, and hence schedules a lost-timer for that
following sequence number. The result is a very early lost-event for a
packet that might still arrive, and you will never be able to send any
rtx for this packet.
Found by Erlend Graff - erlend@pexip.com
https://bugzilla.gnome.org/show_bug.cgi?id=773891
The lost-event was using a different time-domain (dts) than the outgoing
buffers (pts). Given certain network conditions these two would become
sufficiently different that the lost-event contained a timestamp/duration
that was plainly wrong. As an example, GstAudioDecoder could produce
a stream that jumps back and forth in time after receiving a lost-event.
The previous behavior calculated the pts (based on the rtptime) inside the
rtp_jitter_buffer_insert function, but now this functionality has been
refactored into a new function rtp_jitter_buffer_calculate_pts that is
called much earlier in the _chain function to make pts available to
various calculations that wrongly used dts previously
(like the lost-event).
There are however two calculations where using dts is the right thing to
do: calculating the receive-jitter and the rtx-round-trip-time, where the
arrival time of the buffer from the network is the right metric
(and is what dts in fact is today).
The patch also adds two tests regarding B-frames and the
“rtptime-going-backwards” scenario, as there were some concerns that this
patch might break that behavior (which the tests show it does not).
The new timeout is always going to be (timeout + delay); however, the
old behavior compared the current timeout to just (timeout), basically
being (delay) off.
This would happen if rtx-delay == rtx-retry-timeout, with the result that
a second rtx attempt for any buffers would be scheduled immediately instead
of after rtx-delay ms.
Simply calculate (new_timeout = timeout + delay) and then use that instead.
https://bugzilla.gnome.org/show_bug.cgi?id=773905
It's been broken for years, and it's unlikely it will ever
be fixed for collectpads/videomixer now that there's compositor,
which works fine. So let's disable it, since all it does
is create noise that distracts from other failures.
Also see the corresponding adder bug as it failed in the same way:
https://bugzilla.gnome.org/show_bug.cgi?id=708891
It seems that the forked processes all attempt to handle the listening
socket from the server, and only one of them has to shut down the socket
to break the server completely.
Create a new server inside each test to avoid this.
https://bugzilla.gnome.org/show_bug.cgi?id=772656
The tests accumulate buffers in GstCheck's buffers list, and the list is
not (consistently) reset between tests. Reset it consistently and remove
the now-conflicting unrefs for outbuffers.
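A typical per-test cleanup looks like this (a generic sketch using
gstcheck's global buffers list):

    #include <gst/check/gstcheck.h>

    /* Drop anything still in gstcheck's global buffers list so the next
     * test starts from a clean state. */
    static void
    drop_collected_buffers (void)
    {
      g_list_free_full (buffers, (GDestroyNotify) gst_buffer_unref);
      buffers = NULL;
    }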
https://bugzilla.gnome.org/show_bug.cgi?id=772644
Work around source_root being the root directory of all projects in the
subproject case and remove the now-unneeded getpluginsdir.
Bump the Meson requirement to 0.35.
The basic idea is this:
1. For *larger* rtx-rtt, weigh a new measurement as before
2. For *smaller* rtx-rtt, be a bit more conservative and weigh a bit less
3. For very large measurements, consider them "outliers"
and count them a lot less
The idea being that reducing the rtx-rtt is much more harmful than
increasing it, since we don't want to be underestimating the rtt of the
network, and when using this number to estimate the latency you need for
your jitterbuffer, you would rather have it be a bit too large than a bit
too small and potentially lose rtx-packets. The "outlier-detector" is there
to prevent a single skewed measurement from affecting the outcome too much.
On wireless networks, such measurements are surprisingly common.
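A simplified sketch of such an asymmetric, outlier-aware running
average (illustrative only; the weights below are made up for the
example, not the actual jitterbuffer constants):

    #include <glib.h>

    static guint64
    update_rtx_rtt (guint64 avg_rtt, guint64 measurement)
    {
      gdouble weight;

      if (avg_rtt == 0)
        return measurement;           /* first measurement */

      if (measurement > 2 * avg_rtt)
        weight = 1.0 / 48;            /* outlier: count it a lot less */
      else if (measurement > avg_rtt)
        weight = 1.0 / 8;             /* larger rtt: weigh as before */
      else
        weight = 1.0 / 16;            /* smaller rtt: be more conservative */

      return (guint64) ((1.0 - weight) * avg_rtt + weight * measurement);
    }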
https://bugzilla.gnome.org/show_bug.cgi?id=769768
Assuming equidistant packet spacing when that's not true leads to more
loss than necessary in the case of reordering and jitter. Typically this
is the case for video, where one frame often consists of multiple packets
with the same rtp timestamp. In this case it's better to assume that the
missing packets have the same timestamp as the last received packet, so
that the scheduled lost timer does not time out too early, causing the
packets to be considered lost even though they may arrive in time.
https://bugzilla.gnome.org/show_bug.cgi?id=769768
There is no need to schedule another EXPECTED timer if we're already
past the retry period. Under normal operation this won't happen, but if
there are more timers than the jitterbuffer is able to process in
real-time, scheduling more timers will just make the situation worse.
Instead, consider this packet as lost and move on. This scenario can
occur with high loss rate, low rtt and high configured latency.
https://bugzilla.gnome.org/show_bug.cgi?id=769768
This patch fixes an issue with the estimated gap duration when there is
a gap immediately after a lost timer has been processed. Previously
there was a discrepancy between the gap in seqnum and the gap in dts
which would cause a wrongly calculated duration. The issue would only be
seen with retransmission enabled, since when it's disabled lost timers
are only created when a packet is received and the actual gap length and
last dts are known.
https://bugzilla.gnome.org/show_bug.cgi?id=769768
Stats should also be collected for unsuccessful packets.
rtx-rtt is very important for determining the necessary configured
latency on the jitterbuffer. It's especially important to be able to
increase the latency when retransmitted packets arrive too late and are
considered lost. This patch includes these late packets in the
calculation of the various rtx stats, making them more correct and
useful.
Also, in the case where the original packet arrives after a NACK is
sent, the received RTX packet should update the stats since it provides
useful information about the RTT.
The RTT is updated if and only if all requested retransmissions are
received. That way the RTT is guaranteed to make sense. If not, we don't
know which request the packet is a response to and the RTT may be bogus.
A consequence of this patch is that the RTT is not updated for a request
when one of the RTX packets for that seqnum is lost, but the RTT that is
measured will then be more accurate.
The implementation stores the RTX information from the timed-out timers
and uses it when the retransmitted packet arrives. These timers are
stored separately from the "normal" timers in order not to impact
performance (see attached performance test).
https://bugzilla.gnome.org/show_bug.cgi?id=769768