The libwebp API doesn't match very well with the GstVideoEncoder
API, as it only delivers an unframed bitstream once all pictures
have been processed, which means we can only push a single buffer
manually on our srcpad on finish().
Supporting animated webp is still valuable, and the feature is
behind an opt-in property.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5994>
Remove the optional sprop-stereo and sprop-maxcapturerate fields from Opus
remote offer caps before intersecting with the local codec preferences.
According to https://datatracker.ietf.org/doc/html/rfc7587#section-7.1
those fields are sender-only informative, and don't affect
interoperability.
Fixes cases where the webrtc media will end up receive-only if the
local side wants to send stereo but the remote is sending mono, or
vice versa.
There may be other fields in other codecs, so the implementation
anticipates needing to add further fields and codecs in the future.
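A minimal sketch of the idea (not the exact webrtcbin code; the helper name is made up and the caps are assumed to be writable):
```
#include <gst/gst.h>

/* Strip sender-only informative Opus fields before caps intersection. */
static void
strip_opus_informative_fields (GstCaps * remote_caps)
{
  guint i;

  for (i = 0; i < gst_caps_get_size (remote_caps); i++) {
    GstStructure *s = gst_caps_get_structure (remote_caps, i);
    const gchar *enc = gst_structure_get_string (s, "encoding-name");

    if (g_strcmp0 (enc, "OPUS") == 0)
      gst_structure_remove_fields (s, "sprop-stereo", "sprop-maxcapturerate",
          NULL);
  }
}
```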
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5993>
The pool currently defaults to performing a layout transition to
VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, with some special exceptions for
video usages. This may not be a legal transition depending on the usage.
Provide an API to explicitly control the initial image layout.
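A sketch of how a caller might use such an API; the setter shown below and its exact signature are assumptions, the point is only that the initial VkImageLayout becomes an explicit pool-config parameter instead of a hard-coded default.
```
GstStructure *config = gst_buffer_pool_get_config (pool);

gst_buffer_pool_config_set_params (config, caps, size, min_buffers, max_buffers);
/* Assumed setter: request no implicit transition (or any other legal layout)
 * instead of the previous TRANSFER_DST_OPTIMAL default. */
gst_vulkan_image_buffer_pool_config_set_allocation_params (config,
    VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_TRANSFER_DST_BIT,
    VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT,
    VK_IMAGE_LAYOUT_UNDEFINED, VK_ACCESS_NONE);
gst_buffer_pool_set_config (pool, config);
```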
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5881>
If the ladspa plugin is enabled explicitly or via auto-features, the
liblrdf dependency cannot be disabled.
As the RDF parsing currently provides hardly any features, being able
to disable it is fairly useful.
Fixes: #3168
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5794>
With the way the runtime checks are currently set up, every single
openh264 release, no matter how minor, is considered an ABI break and
requires gst-plugins-bad recompilation. This is unnecessarily strict
because it doesn't allow downstream distributions to ship any openh264
bug fix version updates without breaking gstreamer's openh264 support.
Years ago, at the time when gstreamer's openh264 support was merged,
openh264 releases were done without a versioned soname (the library was
just libopenh264.so, unversioned). Since then, starting with version
1.3.0, openh264 has started using versioned sonames and the intent has
been to bump the soname every time there's a new release with an ABI
change.
This patch drops the strict version check. meson.build already has a
minimum requirement on openh264 version 1.3.0 where soname versioning
was added, which should be good enough to ensure that the library is
using soname versioning.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5780>
This avoids a build failure when compiling against OpenSSL 3.2.0. The
problem occurs when windows.h is included before WinSock2.h, because
windows.h includes winsock.h [1]. Defining _WINSOCKAPI_ stops windows.h
from including winsock.h.
Error:
```
[748/1041] Compiling C object ext/dtls/gstdtls.dll.p/gstdtlscertificate.c.obj
FAILED: ext/dtls/gstdtls.dll.p/gstdtlscertificate.c.obj
[...]
Windows Kits\10\include\10.0.17763.0\shared\ws2def.h(235): error C2011: 'sockaddr': 'struct' type redefinition
Windows Kits\10\include\10.0.17763.0\um\winsock.h(482): note: see declaration of 'sockaddr'
```
[1] https://stackoverflow.com/a/1372836
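A minimal sketch of the guard (not the exact patch):
```
/* Define _WINSOCKAPI_ before anything that drags in windows.h, so windows.h
 * skips the legacy winsock.h and the WinSock2 definitions win. */
#ifdef _WIN32
#ifndef _WINSOCKAPI_
#define _WINSOCKAPI_
#endif
#endif
#include <openssl/ssl.h>
```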
Closes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3167
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5770>
This is a bit of a hack, as I think the correct solution is to
expose model caps on the sinkpad (eventually sinkpads). Until then I think
this is reasonable.
- Add a property to onnxinference to set the datatype.
- Fix the internal buffer allocation size based on the datatype
(sketched below).
- Extract a method that removes the alpha channel and converts to a planar
image when requested. Also template the method to support writing to
buffers of different datatypes.
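A purely illustrative helper for the buffer-size fix; the real element's enum and property names may differ:
```
#include <glib.h>

typedef enum { DATATYPE_UINT8, DATATYPE_FLOAT32 } TensorDataType;

/* Size the internal tensor buffer from the configured datatype. */
static gsize
tensor_buffer_size (TensorDataType type, gsize width, gsize height,
    gsize channels)
{
  gsize element_size =
      (type == DATATYPE_FLOAT32) ? sizeof (float) : sizeof (guint8);

  return width * height * channels * element_size;
}
```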
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5761>
If an upstream element proposes a buffer pool, use it to allocate the
image buffers with the parameters set by that element.
Additionally, the buffer pool handling is synced with the GstBaseTransform
base class.
See the case of vulkanupload ! vulkanh264enc
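The generic GStreamer pattern looks roughly like this (placement and names are illustrative, not the actual vulkanh264enc code): pick up a pool already present in the ALLOCATION query instead of always creating a fresh one.
```
GstBufferPool *pool = NULL;
guint size = 0, min_bufs = 0, max_bufs = 0;

if (gst_query_get_n_allocation_pools (query) > 0)
  gst_query_parse_nth_allocation_pool (query, 0, &pool, &size, &min_bufs,
      &max_bufs);

if (pool == NULL)
  pool = gst_vulkan_image_buffer_pool_new (device);   /* fall back to our own */
```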
Co-authored-by: Victor Jaquez <vjaquez@igalia.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5651>
The code seems to validate that the media-level fingerprint matches
the fingerprint of the previous media or of the whole session. There
is no such requirement in any RFC I found. The session-level one
is just meant to act as a fallback when there is no media-level
fingerprint.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1118>
A payload of 0x80 0x80 means that it's padding. It's not a good idea to
throw this away though, because of the cc_valid field.
According to CEA 10-B section 25.2.1, if cc_valid is zero, the run-in
clock and start bit should not be generated. In practice, this means
that any closed captions will be erased and the end-user TV will show
that captions are not available for this stream. This might have
undesired consequences, e.g. we were just showing a long line of
captions and we disable it before the user has had time to read it, or
you can't enable closed captions during silence/music intervals.
We cannot reliably detect whether there's a currently-silent closed
caption stream or just nothing, but we have this information coming from
upstream, so we can at least not discard it.
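For reference, a small check of the kind involved (illustrative, not the element's exact code): a byte pair is padding when both bytes carry no data bits, but such pairs are still forwarded so the cc_valid signalling survives.
```
#include <glib.h>

/* 0x80 0x80 is odd parity with all-zero data, i.e. padding. */
static gboolean
cc_byte_pair_is_padding (guint8 cc1, guint8 cc2)
{
  return (cc1 & 0x7f) == 0 && (cc2 & 0x7f) == 0;
}
```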
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5508>
The src caps of libde265 are now fixed to I420, so if the stream is in
another format, such as 4:4:4 or a 10-bit format, the pipeline will
crash because the downstream element accesses the video buffer as I420.
We now restrict the input caps to the "main" profile, which only
contains 4:2:0 8-bit streams.
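Roughly what the restriction amounts to (the element's actual template string may contain more fields):
```
#include <gst/gst.h>

static GstStaticPadTemplate sink_template = GST_STATIC_PAD_TEMPLATE ("sink",
    GST_PAD_SINK, GST_PAD_ALWAYS,
    /* only "main" profile streams, i.e. 8-bit 4:2:0, decode to I420 */
    GST_STATIC_CAPS ("video/x-h265, profile = (string) main"));
```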
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5573>
This can happen with the dummy "noopenh264" library that the freedesktop
flatpak runtime ships, and Fedora is planning on shipping as well. In
both cases the dummy implementation gets replaced with the actual
openh264 library that's downloaded directly from Cisco, but just to be
on the safe side, this patch carefully checks the return values to
avoid crashing if the underlying library hasn't been swapped out yet.
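A sketch of the defensive pattern (error handling simplified):
```
ISVCEncoder *encoder = NULL;

/* The noopenh264 stub fails here instead of returning a usable encoder. */
if (WelsCreateSVCEncoder (&encoder) != 0 || encoder == NULL) {
  GST_ELEMENT_ERROR (self, LIBRARY, INIT, (NULL),
      ("Failed to create openh264 encoder"));
  return FALSE;
}
```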
The patch is taken from freedesktop-sdk and was originally written by
Valentin David <valentin.david@codethink.co.uk>.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5581>
Previously we were checking for the opencv dependency in two different
places, and the checks varied in how complex and exhaustive they were.
Move the check into the libs module and reuse the result later on.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3016>
This element refactors functionality from gstonnxinference element,
namely separating out the ONNX inference from the subsequent analysis.
The new element runs an ONNX model on each video frame, and then
attaches a TensorMeta meta with the output tensor data. This tensor data
will then be consumed by downstream elements such as gstobjectdetector.
At the moment, a provisional TensorMeta is used just in the ONNX
plugin, but in the future this will be upgraded to a GStreamer API for
other plugins to consume.
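Downstream consumption would look roughly like this; since the TensorMeta is still provisional and private to the ONNX plugin, the type and lookup names below are assumptions:
```
GstTensorMeta *tmeta =
    (GstTensorMeta *) gst_buffer_get_meta (buffer, GST_TENSOR_META_API_TYPE);

if (tmeta != NULL) {
  /* e.g. gstobjectdetector decodes bounding boxes from the output tensors */
}
```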
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4916>
There are a bunch of plugins that you need for webrtc support, and
it's not obvious at all to users which those are.
With this commit, srtp, sctp and dtls options will be auto-enabled if
the webrtc option is enabled.
Requires meson 1.1
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5505>
Use gst_codec_utils_caps_get_mime_codec() in pbutils for codec
strings. That function gives more elaborate RFC 6381 compatible
strings, such as "avc1.F4000D", than the helper functions in
gstmpdhelper.c.
Remove the helper functions, as they were only used from dashsink.
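Usage is a one-liner; the returned string is owned by the caller:
```
#include <gst/pbutils/codec-utils.h>

gchar *mime_codec = gst_codec_utils_caps_get_mime_codec (caps);

if (mime_codec != NULL) {
  /* e.g. "avc1.F4000D", as in the example above */
  g_free (mime_codec);
}
```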
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5404>
There is no need to use the DRM dumb buffer pool if the buffer to
render is already a DMABuf: just import it and render it.
This fixes a DMABuf memory leak when the buffers reaching gtkwaylandsink
are already DMABufs while the sink is configured to be the
DMABuf exporter (drm-device=/dev/dri/card0):
gst-launch-1.0 v4l2src io-mode=4 ! gtkwaylandsink drm-device=/dev/dri/card0
Leak identified with the command:
watch "cat /sys/kernel/debug/dma_buf/bufinfo | grep attached "
Fixes #2729
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5350>
There is no need to use the DRM dumb buffer pool if the buffer to
render is already a DMABuf: just import it and render it.
This fixes a DMABuf memory leak when the buffers reaching waylandsink
are already DMABufs while the sink is configured to be the
DMABuf exporter (drm-device=/dev/dri/card0):
gst-launch-1.0 v4l2src io-mode=4 ! waylandsink drm-device=/dev/dri/card0
Leak identified with the command:
watch "cat /sys/kernel/debug/dma_buf/bufinfo | grep attached "
Fixes #2729
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5350>
* Library versioning should not be used for plugins, since it adds a
-{version}.dll suffix (and versioned libraries with symlinks on Linux).
The resulting mismatch between the library file name and the plugin
init function name gets the plugin blacklisted.
* Don't define BUILDING_GST_CODECS, it makes no sense here
* Don't define G_LOG_DOMAIN, which should only be used for libraries,
not plugins
* Depend on the gstcodecparsers library, not gstcodecs.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5249>
Section 3.4 in RFC8835 states that if a WebRTC endpoint uses an HTTP
proxy to access the Internet it MUST include the "ALPN" header. This
commit adds this header.
By default the ALPN used when connecting to the TURN/TCP server via a
proxy is set to "webrtc". It can be changed by adding an alpn url
option for the http-proxy. For example:
http://user:pass@my.http.proxy.com:8080?alpn=c-webrtc
This will add the header "ALPN: c-webrtc" to the HTTP proxy CONNECT
request.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4212>
Pass GstVideoInfoDmaDrm or GstVideoInfo whenever possible, avoiding passing
a strange combination of GstVideoFormat + modifier. Even though we don't have
any at the moment, this also allows supporting GstVideoFormats that are not
supported in our DRM integration.
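For illustration, the format and modifier then travel together, e.g.:
```
#include <gst/video/video-info-dma.h>

GstVideoInfoDmaDrm drm_info;

if (gst_video_info_dma_drm_from_caps (&drm_info, caps)) {
  /* drm_info.drm_fourcc / drm_info.drm_modifier describe the DRM layout,
   * drm_info.vinfo keeps the plain GstVideoInfo view where applicable */
}
```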
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5120>
lc3enc:
- encodes raw audio into lc3 format
- uses the default bitrate property and frame duration
from the caps to determine the byte count of
the encoded frames if it is not specified in
the downstream caps after negotiation
- uses the same byte count value for all the channels
- all the common session configuration parameters
are passed in the src caps
lc3dec:
- decodes lc3 encoded audio
- sink caps should contain all the common session configuration
params
- uses frame_duration and frame_bytes (byte count) from the sink
caps as parameters, along with the sample rate and channel count
(see the caps sketch below)
- the byte count is the same for all the channels
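An illustrative caps sketch using the field names mentioned above (the exact caps layout and units used by the elements may differ):
```
GstCaps *caps = gst_caps_new_simple ("audio/x-lc3",
    "rate", G_TYPE_INT, 48000,
    "channels", G_TYPE_INT, 2,
    "frame_duration", G_TYPE_INT, 10000,   /* assumed microseconds */
    "frame_bytes", G_TYPE_INT, 100,        /* byte count per channel */
    NULL);
```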
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4376>
`srt_rejectreason_str` doesn't give us a unique string for every
possible reason. Peers can define their own reasons and SRT just gives
us the string `"Application-defined rejection reason"` for all of them.
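Hence it helps to log the numeric reason alongside the string, e.g.:
```
#include <srt/srt.h>

int reason = srt_getrejectreason (sock);

/* Application-defined codes all stringify to the same generic message,
 * so keep the numeric value in the log/error as well. */
GST_WARNING ("SRT connection rejected: %d (%s)", reason,
    srt_rejectreason_str (reason));
```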
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4948>
Add a Direct3D11-backed Qt6 QML videosink element, qml6d3d11sink.
Implementation details are similar to the qt6 plugin in -good
but there are a few notable differences.
* qml6d3d11sink accepts all GstD3D11 supported video formats (e.g., NV12).
* The scene graph (owned by qml6d3d11sink) will hold a dedicated, sharable
RGBA texture belonging to Qt6's Direct3D11 device, instead of sharing
GStreamer's own texture with Qt6.
* All rendering operations will be done using GStreamer's Direct3D11 device.
Specifically, the upstream texture will be copied (in the RGBA case)
or converted to the above-mentioned sharable Qt6 texture.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3707>
Even if we don't yet know what the echo probe format is, we want to be able to
provide silence for the reverse path, so that when the probe becomes available,
there is no ambiguity about which time period the new set of samples covers.
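Conceptually, the reverse path just gets zeroed samples of the right size for the missing period (sketch only; the surrounding size bookkeeping is assumed):
```
GstBuffer *silence = gst_buffer_new_allocate (NULL, size, NULL);

/* Fill with zeroes so the probe's timeline stays contiguous until the
 * real format and data show up. */
gst_buffer_memset (silence, 0, 0, size);
```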
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4849>