The order of the devices iterator from the SDK is undefined and can
randomly change.
Keep the device-number property for backwards compatibility and
simplicity but prefer the persistent-id property and also use it for the
device provider implementation.
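A minimal consumer-side sketch of selecting a device by that property
instead of relying on enumeration order (the "persistent-id" field name
comes from this change; the device-class filter and helper are
illustrative):
```
#include <gst/gst.h>

/* Illustrative only: pick a device by its persistent-id instead of the
 * enumeration index, which the SDK does not keep stable. */
static GstElement *
create_source_by_persistent_id (const gchar * wanted_id)
{
  GstDeviceMonitor *monitor = gst_device_monitor_new ();
  GstElement *element = NULL;
  GList *devices, *l;

  gst_device_monitor_add_filter (monitor, "Video/Source", NULL);
  gst_device_monitor_start (monitor);

  devices = gst_device_monitor_get_devices (monitor);
  for (l = devices; l != NULL; l = l->next) {
    GstDevice *device = l->data;
    GstStructure *props = gst_device_get_properties (device);
    const gchar *id =
        props ? gst_structure_get_string (props, "persistent-id") : NULL;

    if (g_strcmp0 (id, wanted_id) == 0)
      element = gst_device_create_element (device, NULL);

    if (props)
      gst_structure_free (props);
    if (element)
      break;
  }

  g_list_free_full (devices, (GDestroyNotify) gst_object_unref);
  gst_device_monitor_stop (monitor);
  gst_object_unref (monitor);

  return element;
}
```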
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3078>
GstDXGIGetDebugInterface() is unused when targeting UWP. We directly
call DXGIGetDebugInterface1() in that case.
Fixes build failure:
../gst-libs/gst/d3d11/gstd3d11device.cpp(271): error C2440: '=': cannot convert from 'HRESULT (__cdecl *)(UINT,const IID &,void **)' to 'DXGIGetDebugInterface_t'
../gst-libs/gst/d3d11/gstd3d11device.cpp(271): note: This conversion requires a reinterpret_cast, a C-style cast or function-style cast
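The intended shape, as a hedged C sketch (the UWP check via
winapifamily.h and the helper/loader below are illustrative; the actual
code uses its own macros):
```
#include <windows.h>
#include <dxgi1_3.h>
#include <dxgidebug.h>

#if WINAPI_FAMILY_PARTITION (WINAPI_PARTITION_DESKTOP)
/* Desktop only: the classic entry point has a different signature than
 * DXGIGetDebugInterface1() and must be resolved from dxgidebug.dll. */
typedef HRESULT (WINAPI * DXGIGetDebugInterface_t) (REFIID riid,
    void ** debug);
#endif

static HRESULT
get_dxgi_debug_interface (IDXGIDebug ** debug)
{
#if !WINAPI_FAMILY_PARTITION (WINAPI_PARTITION_DESKTOP)
  /* UWP: call the flags-taking entry point directly */
  return DXGIGetDebugInterface1 (0, &IID_IDXGIDebug, (void **) debug);
#else
  static DXGIGetDebugInterface_t get_debug_interface = NULL;

  if (!get_debug_interface) {
    HMODULE module = LoadLibraryW (L"dxgidebug.dll");

    if (!module)
      return E_FAIL;

    get_debug_interface = (DXGIGetDebugInterface_t)
        GetProcAddress (module, "DXGIGetDebugInterface");
    if (!get_debug_interface)
      return E_FAIL;
  }

  return get_debug_interface (&IID_IDXGIDebug, (void **) debug);
#endif
}
```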
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3118>
According to the W3C
specification (https://w3c.github.io/webrtc-pc/#datachannel-send) we
should return an InvalidStateError exception when trying to send while
the channel is not open. In the world of C/GLib/GStreamer we don't have
exceptions but have to rely on gboolean/GError instead. Introducing
these calls for a change in the function signature of the action signals
used to send data on the datachannel. Changing the signature of the
existing "send-string" and "send-data" signals would mean an immediate
breaking change, so instead we deprecate them. Furthermore, there is no
way to express GError** as an argument to an action signal in a way
that fits language bindings (pointer-to-pointer simply does not work),
so we have to use regular functions instead.
Therefore we introduce gst_webrtc_data_channel_send_data_full() and
gst_webrtc_data_channel_send_string_full() while deprecating the old
functions and corresponding signals.
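Usage then looks roughly like this (a sketch assuming an
already-established channel):
```
#include <gst/gst.h>
#include <gst/webrtc/webrtc.h>

/* Sketch: send on a data channel and inspect the failure reason instead
 * of relying on an action signal that cannot report a GError. */
static gboolean
send_message (GstWebRTCDataChannel * channel, const gchar * msg)
{
  GError *error = NULL;

  if (!gst_webrtc_data_channel_send_string_full (channel, msg, &error)) {
    g_printerr ("sending failed: %s\n", error->message);
    g_clear_error (&error);
    return FALSE;
  }

  return TRUE;
}
```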
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1958>
Always hold a reference to the soft volume element
provided by the playsinkaudioconvert bin helper, the
same as when volume is provided by a sink element;
otherwise the soft volume element gets unreffed too soon.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3108>
This is a regression that was introduced in
cca2f555d1 (yes, 9 years ago).
The only place where a demuxer streaming thread should be stopped is when the
sinkpad is deactivated from pull mode (i.e. PAUSED->READY).
Attempting to stop the task in this function would cause this to happen when a
FLUSH_STOP or STREAM_START event is received... which can cause deadlocks.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3109>
It would constantly want to renegotiate (and spam the debug log) even
though the channel layout hasn't actually changed. We use the same
fallback in gst_ffmpegauddec_negotiate() already.
This happens with WMA files for example.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3103>
We need to call this to register the MusicBrainz tags before we use
them in an XMP schema.
Fixes this critical when attempting to run jpegparse on a JPEG
containing MusicBrainz XMP tags:
GStreamer-CRITICAL **: 20:41:07.885: gst_tag_get_type: assertion 'info != NULL' failed
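A sketch of the required ordering (the registration call is
gst_tag_register_musicbrainz_tags(); the surrounding XMP parsing helper
is illustrative):
```
#include <gst/gst.h>
#include <gst/tag/tag.h>

/* Sketch: register the MusicBrainz tag names before parsing XMP, so that
 * gst_tag_get_type() can resolve them inside the XMP schema. */
static GstTagList *
parse_xmp_packet (GstBuffer * xmp_buffer)
{
  gst_tag_register_musicbrainz_tags ();

  return gst_tag_list_from_xmp_buffer (xmp_buffer);
}
```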
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3092>
Currently, if the user is not able to access the devices under /dev/media*,
either because no media devices are present on the system or simply due to
missing permissions, v4l2codecs initialises with no features and no debug
messages.
Since calling `GST_DEBUG="v4l2*:7" gst-inspect-1.0 v4l2codecs` is a typical way
to diagnose why element(s) failed to enumerate, we should be more verbose here
when the user is not able to access any /dev/media* device. So print a simple
debug message in this case to aid debugging.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3088>
The purpose of a deep buffer copy is to be able to release the source
buffer and all its dependencies. Attaching the parent buffer meta to
the newly created deep copy needlessly keeps holding a reference to the
parent buffer.
The issue this solves is that otherwise you need to allocate more
buffers, since free buffers are being held for no reason. In the good
case it will just use more memory, in the bad case it will stall your
pipeline (since codecs often need a minimum number of buffers to
actually work).
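A sketch of the difference (illustrative, not the actual element code):
```
#include <gst/gst.h>

/* Sketch: a deep copy duplicates all memories, so the copy must not pin
 * the original via a GstParentBufferMeta, otherwise the original (and
 * the pool buffer backing it) stays in use until the copy is freed. */
static GstBuffer *
make_independent_copy (GstBuffer * buffer)
{
  GstBuffer *copy = gst_buffer_copy_deep (buffer);

  /* What the fix removes: this keeps a reference to the source buffer
   * even though the copy no longer shares any memory with it. */
  /* gst_buffer_add_parent_buffer_meta (copy, buffer); */

  return copy;
}
```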
Fixes #283
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2928>
Since commit a79a756b79 we can switch to ignore-pcr automatically 500ms
into a live stream if no PCR has been seen by then. However the stream counting in
program change detection was wrongly considering ignore-pcr programs to have a
separate PCR PID, even though we are actually ignoring the PCR PID completely,
resulting in an erroneous program switch getting triggered from the different
stream count. This in turn would send an EOS and switch out the pads for what
actually is still the same program, while we intended to simply apply a
workaround for broken encoders.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3060>
If the SETUP request returns an IPv6 server address in the Transport
field, we would generate an incorrect URI, and multiudpsink would fail
to initialize:
```
rtspsrc gstrtspsrc.c:9780:dump_key_value:<source> key: 'Transport', value: 'RTP/AVP;unicast;source=fe80::dc27:25ff:fe5e:bd13:8080;client_port=62696-62697;server_port=4000-4001'
...
rtspsrc gstrtspsrc.c:4595:gst_rtspsrc_stream_configure_udp_sinks:<source> configure RTP UDP sink for fe80::dc27:25ff:fe5e:bd13:8080:4000
...
multiudpsink gstmultiudpsink.c:1229:gst_multiudpsink_configure_client:<udpsink0> error: Invalid address family (got 23)
```
We can't look at stream->is_ipv6 because we can't rely on the server
returning the right value there. In the issue reported about this, the
server reported itself as `KuP RTSP Server/0.1`, and the SDP was:
```
c=IN IP4
m=video 54608 RTP/AVP 96
a=rtpmap:96 H264/90000
```
So we need to parse the string value and figure out the family
ourselves.
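A sketch of the approach using GIO (illustrative helper, not the exact
rtspsrc code):
```
#include <gio/gio.h>

/* Sketch: decide the family from the address string itself and bracket
 * IPv6 literals when building the destination URI. */
static gchar *
build_udp_uri (const gchar * host, guint port)
{
  GInetAddress *addr = g_inet_address_new_from_string (host);
  gchar *uri;

  if (addr && g_inet_address_get_family (addr) == G_SOCKET_FAMILY_IPV6)
    uri = g_strdup_printf ("udp://[%s]:%u", host, port);
  else
    uri = g_strdup_printf ("udp://%s:%u", host, port);

  g_clear_object (&addr);
  return uri;
}
```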
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/1058
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1819>
Fixes warning with meson 0.62:
gst-plugins-bad| subprojects/gst-plugins-bad/meson.build:546: WARNING:
Project targets '>= 0.62' but uses feature deprecated since '0.62.0':
pkgconfig.generate variable for builtin directories. They will be
automatically included when referenced
and more.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3086>
Update unit tests for some MPD cases that were reporting
timestamps including the period start time, while dashdemux2
expects to add the period start time itself.
Fix the tests to not expect the period start time to be included.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3025>
These values will be treated as timestamps relative to the period start,
so the period start time needs to be subtracted from them.
Fixes a problem with determining the start position when playing Live content
with SegmentTimeline, presentationTimeOffset and a non-0 period start time.
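Roughly (hypothetical helper; the real code operates on the parsed MPD
structures):
```
#include <gst/gst.h>

/* Sketch: make a SegmentTimeline / presentationTimeOffset derived time
 * relative to the period start; dashdemux2 adds the start back later. */
static GstClockTime
to_period_relative (GstClockTime mpd_time, GstClockTime period_start)
{
  if (mpd_time < period_start)
    return 0;

  return mpd_time - period_start;
}
```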
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3025>
Starting with Meson 0.62, meson automatically populates the variables
list in the pkgconfig file if you reference builtin directories in the
pkgconfig file (whether via a custom pkgconfig variable or elsewhere).
We need this, because ${prefix}/libexec is a hard-coded value which is
incorrect on, for example, Debian.
Bump requirement to 0.62, and remove version compares that retained
support for older Meson versions.
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/1245
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3061>
Change the way streams are woken up to download more data.
Instead of checking the level on tracks that are being
output as data is dequeued, calculate a 'wakeup time'
at which it should download more data, and wake up
the stream when the global output position crosses
that threshold.
For efficiency, compute the earliest wakeup time
for all streams and store it on the period, so the
output loop can quickly check only a single value
to decide if something needs waking up.
Does the same buffering as the previous method,
but ensures that as we approach the end of
one period, the next period continues incrementally
downloading data so that it is fully buffered when
the period starts.
Fixes issues with multi-period VOD content where
download of the second period resumes only after
the first period is completely drained.
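Conceptually something like this (all names invented for illustration;
the real implementation lives in the adaptive demux base classes):
```
#include <gst/gst.h>

/* Hypothetical per-stream state for the sketch */
typedef struct
{
  /* global output position up to which this stream has data buffered */
  GstClockTime buffered_until;
  /* how far ahead of the output position the stream wants to stay */
  GstClockTime low_watermark;
} ExampleStream;

/* Sketch: compute the single earliest wakeup position for a period, so
 * the output loop only has to compare the output position against one
 * value instead of checking every track level on each dequeue. */
static GstClockTime
example_period_next_wakeup (GList * streams)
{
  GstClockTime earliest = GST_CLOCK_TIME_NONE;
  GList *l;

  for (l = streams; l != NULL; l = l->next) {
    ExampleStream *stream = l->data;
    GstClockTime wakeup;

    if (stream->buffered_until <= stream->low_watermark)
      wakeup = 0;
    else
      wakeup = stream->buffered_until - stream->low_watermark;

    if (!GST_CLOCK_TIME_IS_VALID (earliest) || wakeup < earliest)
      earliest = wakeup;
  }

  return earliest;
}
```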
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3055>
When pushing several buffers while the pipeline is in NULL state, meaning
that the actions are executed "interleaved", the previous code was
deadlocking.
This new implementation makes it so the override is always on and we
expect all buffers that go through to be associated with a function,
which is a safe assumption.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3052>
Some servers can return playlists with "old" media playlists and a
different Discont Sequence.
In those cases, the segment stream times would be negative when creating
a new time mapping. In order to properly handle such scenarios, shift the
stored values accordingly to end up with a non-negative reference stream
time.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3054>
This allows users to let videorate fully fill the segments when receiving
EOS or on a new segment, removing the arbitrary limit of 25 duplicates
which might not be what the user wants (for example on low-FPS streams in
GES, this sometimes led to broken behavior).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3000>
AV1 supports multiple spatial layers within one TU with different
resolutions, and only the highest spatial layer needs to be output.
For example, with two spatial layers, a base layer of 800x600 and a
higher layer of 1920x1080, we need to decode both because the higher
layer uses the base layer as a reference, but we only need to output
the 1920x1080 frames.
The current manner always renegotiates the caps once we detect that the
current picture resolution changes, so we renegotiate again and again
between different layers. That is a big waste and performs very poorly.
We now only do the renegotiation for the highest output layer. For the
other, non-output layers, we just keep an internal buffer pool which is
big enough to handle the surface allocation.
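A sketch of the decision (hypothetical field and helper names; the real
logic lives in the AV1 decoder base class and subclasses):
```
#include <gst/gst.h>

/* Hypothetical picture type for the sketch */
typedef struct
{
  guint spatial_id;
  guint width;
  guint height;
} ExampleAV1Picture;

/* Sketch: only renegotiate downstream caps for the highest spatial layer
 * (the only one that is output); lower layers just need a large enough
 * internal pool for their decode surfaces. */
static gboolean
needs_renegotiation (ExampleAV1Picture * picture, guint highest_spatial_id,
    guint negotiated_width, guint negotiated_height)
{
  if (picture->spatial_id != highest_spatial_id)
    return FALSE;

  return picture->width != negotiated_width ||
      picture->height != negotiated_height;
}
```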
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2382>
As the spec says, when multiple spatial layers exist, we should only
output the frame with the highest spatial id from each TU. We now store
the highest spatial layer information in the base class in order to let
subclasses handle the different layers easily.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2382>