There was a potential busy loop because, when taking data from the
internal ccbuffer, we were not resetting which field had data
written. As a result, the next time data was retrieved from the
ccbuffer, it was always taken from field 0 and never from field 1.
This only affects cc_buffer_take_separated(), which is only used by
the cdp → raw cea608 conversion.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6423>
CEA-608 (valid) padding removal is available on the input side of
ccconverter, and is configurable on cccombiner. cccombiner can now be
configured to emit either valid or invalid CEA-608 padding and, for
valid padding, how long after the last valid non-padding data to keep
sending valid padding.
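As a rough illustration of the kind of application-side configuration this
enables (the property names below are placeholders invented for the example,
not necessarily the ones cccombiner actually exposes):
```
/* Sketch only: the property names here are hypothetical placeholders for
 * the padding options described above, not necessarily cccombiner's real
 * property names. */
static GstElement *
make_configured_combiner (void)
{
  GstElement *combiner = gst_element_factory_make ("cccombiner", NULL);

  g_object_set (combiner,
      "cea608-padding-valid", TRUE,                     /* hypothetical */
      "cea608-padding-valid-timeout", 10 * GST_SECOND,  /* hypothetical */
      NULL);

  return combiner;
}
```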
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6300>
The transport stream only returned the CAPS for the first matching PT entry
from the `ptmap`. Other SSRCs with the same PT were not included. For a stream
which bundles multiple audio streams, for instance, only the first SSRC was
known to the SSRC demux and downstream elements.
This commit adds all the `ssrc-` attributes from the matching PT entries.
The RTP jitter buffer can now find the CNAME corresponding to its SSRC even if
it was not the first to be registered for a particular PT.
The RTP PT demux removes the `ssrc-*` attributes corresponding to other SSRCs
before pushing SSRC-specific CAPS to downstream elements.
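For illustration only, the resulting caps might carry per-SSRC attributes
along these lines (SSRC values and attribute suffixes are made up; the point
is that `ssrc-` attributes for every matching SSRC are present, and the PT
demux strips those belonging to other SSRCs):
```
/* Illustrative sketch: made-up SSRC values and attribute suffixes, just to
 * show several `ssrc-` attributes coexisting on the same PT entry. */
static GstCaps *
example_pt_caps (void)
{
  return gst_caps_from_string (
      "application/x-rtp, media=(string)audio, payload=(int)96, "
      "encoding-name=(string)OPUS, "
      "ssrc-11111111-cname=(string)user-a, "
      "ssrc-22222222-cname=(string)user-b");
}
```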
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6119>
Synchronizing buffer commits to the streaming thread can lead to
dropped frames when frame callbacks are not processed before the
next frame is ready for rendering. Depending on the drift between
the wayland compositor and buffer source timings, this can lead to
periods of significant frame drop, especially when the media frame
rate is close to the display frame rate.
Cache buffers in the streaming thread and perform commits on the
display thread to eliminate the buffer commit race.
The implementation is the same for both waylandsink and gtkwaylandsink,
so move it to the common wayland library under gst-libs.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6133>
Some subtitle "decoders" had a wrong category of "Parser", which `parsebin`
relies on to identify elements which do not *decode* streams but *parse* them.
This would cause such subtitle decoders to be plugged in within parsebin,
preventing the original stream to be properly used by (more efficient)
downstream decoders or subtitle renderers.
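For reference, the classification is the metadata string set in the element's
class_init; a minimal sketch (with a made-up element) of the distinction:
```
/* Sketch with a made-up element: a real subtitle decoder should advertise a
 * "Decoder" classification, not "Parser", so parsebin doesn't plug it in. */
static void
my_subtitle_dec_class_init (MySubtitleDecClass * klass)
{
  GstElementClass *element_class = GST_ELEMENT_CLASS (klass);

  gst_element_class_set_static_metadata (element_class,
      "My subtitle decoder",
      "Codec/Decoder/Subtitle",       /* not "Codec/Parser/Subtitle" */
      "Decode subtitle streams",
      "Example Author <author@example.com>");
}
```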
Fixes #1757
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6153>
In rtpbin we already systematically check for all property names
except latency; correct that.
In webrtcbin we need to check before trying to use the do-retransmission
property.
This is useful for the case where an element like identity gets passed
to rtpbin's request-jitterbuffer property, when the application wants
to use webrtcbin in an SFU situation, with no reordering and no added
latency.
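A minimal sketch of the kind of check this refers to, with a generic helper
name for illustration:
```
/* Sketch: only touch a property if the user-provided element actually has it. */
static void
maybe_set_latency (GstElement * jitterbuffer, guint latency_ms)
{
  if (g_object_class_find_property (G_OBJECT_GET_CLASS (jitterbuffer),
          "latency") != NULL)
    g_object_set (jitterbuffer, "latency", latency_ms, NULL);
}
```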
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6112>
* Bump the rank of the musepack v7/v8 FFmpeg demuxers to SECONDARY
* Bump the rank of the musepack v7/v8 FFmpeg audio decoders to SECONDARY
* Demote the rank of the musepackdec element to MARGINAL
This is for two reasons:
* The musepack library is no longer maintained, whereas the FFmpeg
implementation can/will receive fixes
* The `musepackdec` implementation was an all-in-one "parsing and decoding" blob
which doesn't play nicely with decodebin3 and others
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3033
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6074>
Send a gap event if there is nothing to output for a given input buffer.
For example, an input buffer might not contain any caption data for the
field requested downstream, in which case we need to inform downstream
about it.
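A minimal sketch of what pushing such a gap event looks like (helper and
variable names are illustrative):
```
/* Sketch: cover the input buffer's timestamp range with a gap event when
 * there is no caption data to output for the requested field. */
static gboolean
push_gap_for_buffer (GstPad * srcpad, GstBuffer * inbuf)
{
  GstEvent *gap = gst_event_new_gap (GST_BUFFER_PTS (inbuf),
      GST_BUFFER_DURATION (inbuf));

  return gst_pad_push_event (srcpad, gap);
}
```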
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6073>
WebKit commit b12e7ed2ad3a ("[WPE] Upstream the new WPE platform API
https://bugs.webkit.org/show_bug.cgi?id=265286")[1] added a `WPEView` typedef
which clashes with our `WPEView` class.
Rename the `WPEView` class to `GstWPEThreadedView` to avoid the collision.
Also prefix the `WPEContextThread` class with `Gst`, and rename the
source files to reflect the new class names, using lowercase while at it
for consistency.
[1] b12e7ed2ad
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6065>
The libwebp API doesn't match very well with the GstVideoEncoder
API, as it only delivers an unframed bitstream once all pictures
have been processed, which means we can only push a single buffer
manually on our srcpad on finish().
Supporting animated webp is still valuable, and the feature is
behind an opt-in property.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5994>
Remove the optional sprop-stereo and sprop-maxcapturerate fields from Opus
remote offer caps before intersecting with local codec preferences.
According to https://datatracker.ietf.org/doc/html/rfc7587#section-7.1
those fields are sender-only informative, and don't affect
interoperability.
Fixes cases where the webrtc media will end up receive-only if the
local side wants to send stereo but the remote is sending mono, or
vice versa.
There may be other fields in other codecs, so the implementation
anticipates needing to add further fields and codecs in the future.
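A rough sketch of the field removal, assuming the (writable) remote offer caps
are at hand (helper name is illustrative):
```
/* Sketch: strip the sender-only informative Opus fields before intersecting
 * the remote offer with the local codec preferences. Assumes offer_caps is
 * writable. */
static void
strip_opus_sender_only_fields (GstCaps * offer_caps)
{
  guint i;

  for (i = 0; i < gst_caps_get_size (offer_caps); i++) {
    GstStructure *s = gst_caps_get_structure (offer_caps, i);

    gst_structure_remove_fields (s, "sprop-stereo", "sprop-maxcapturerate",
        NULL);
  }
}
```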
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5993>
The pool currently defaults to performing a layout transition to
VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, with some special exceptions for
video usages. This may not be a legal transition depending on the usage.
Provide an API to explicitly control the initial image layout.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5881>
If the ladspa plugin is enabled explicitly or via auto-features, the
liblrdf dependency can not be disabled.
As the RDF parsing currently provides hardly any features, the possibility
to disable it is fairly useful.
Fixes: #3168
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5794>
With the way the runtime checks are currently set up, every single
openh264 release, no matter how minor, is considered an ABI break and
requires gst-plugins-bad recompilation. This is unnecessarily strict
because it doesn't allow downstream distributions to ship any openh264
bug fix version updates without breaking gstreamer's openh264 support.
Years ago, at the time when gstreamer's openh264 support was merged,
openh264 releases were done without a versioned soname (the library was
just libopenh264.so, unversioned). Since then, starting with version
1.3.0, openh264 has started using versioned sonames and the intent has
been to bump the soname every time there's a new release with an ABI
change.
This patch drops the strict version check. meson.build already has a
minimum requirement on openh264 version 1.3.0 where soname versioning
was added, which should be good enough to ensure that the library is
using soname versioning.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5780>
This avoids a build failure when compiling against OpenSSL 3.2.0. The
problem occurs when windows.h is included before WinSock2.h, because
windows.h includes winsock.h [1]. Defining _WINSOCKAPI_ stops windows.h
from including winsock.h.
Error:
```
[748/1041] Compiling C object ext/dtls/gstdtls.dll.p/gstdtlscertificate.c.obj
FAILED: ext/dtls/gstdtls.dll.p/gstdtlscertificate.c.obj
[...]
Windows Kits\10\include\10.0.17763.0\shared\ws2def.h(235): error C2011: 'sockaddr': 'struct' type redefinition
Windows Kits\10\include\10.0.17763.0\um\winsock.h(482): note: see declaration of 'sockaddr'
```
[1] https://stackoverflow.com/a/1372836
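A minimal sketch of the include-order guard described above:
```
/* Sketch: define _WINSOCKAPI_ before windows.h so it does not pull in
 * winsock.h, which would clash with the WinSock2.h used by OpenSSL 3.2.0. */
#ifdef G_OS_WIN32
#define _WINSOCKAPI_
#include <windows.h>
#endif

#include <openssl/ssl.h>
```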
Closes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3167
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5770>
This is a bit of a hack solution, as I think the correct solution is to
expose model caps on the sinkpad (eventually sinkpads). Till then I think
this is reasonable.
- Add a property to onnxinference to set the datatype.
- Fix the internal buffer allocation size based on the datatype.
- Extract a method to remove the alpha channel and convert to a planar image
when requested. Also template the method to support writing to buffers
of different datatypes.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5761>
In the case of an upstream element proposing a buffer pool, use it to
allocate the buffer image with the parameters set by the upstream
element.
Besides, the buffer pool handling is synced with the GstBaseTransform
base class.
See the case of vulkanupload ! vulkanh264enc.
Co-authored-by: Victor Jaquez <vjaquez@igalia.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5651>
The code seems to validate that the media-level fingerprint matches
the fingerprint of the previous media or of the whole session. There
is no such requirement in any RFC I could find. The session-level one
is just meant to act as a fallback when there is no media-level
fingerprint.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1118>
A payload of 0x80 0x80 means that it's padding. It's not a good idea to
throw this away though, because of the cc_valid field.
According to CEA 10-B section 25.2.1, if cc_valid is zero, the run-in
clock and start bit should not be generated. In practice, this means
that any closed captions will be erased and the end-user TV will show
that captions are not available for this stream. This might have
undesired consequences, e.g. we were just showing a long line of
captions and we disable it before the user has had time to read it, or
you can't enable closed captions during silence/music intervals.
We cannot reliably detect whether there's a currently-silent closed
caption stream or just nothing, but we have this information coming from
upstream, so we can at least not discard it.
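For context, a sketch of what a valid CEA-608 field 1 padding triplet looks
like in cc_data terms ('11111' marker bits, cc_valid set, cc_type 0, then the
0x80 0x80 payload):
```
/* Sketch: one cc_data triplet. 0xFC = '11111' marker bits, cc_valid = 1,
 * cc_type = 0 (CEA-608 field 1); 0x80 0x80 is the padding payload, but the
 * cc_valid bit still signals that a caption service is present. */
static const guint8 valid_cea608_field1_padding[] = { 0xFC, 0x80, 0x80 };

static gboolean
is_valid_cea608_padding (const guint8 * cc_data)
{
  gboolean cc_valid = (cc_data[0] & 0x04) != 0;
  guint cc_type = cc_data[0] & 0x03;

  return cc_valid && cc_type <= 1 && cc_data[1] == 0x80 && cc_data[2] == 0x80;
}
```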
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5508>
The src caps of libde265 are currently fixed to I420, so if the
stream is in another format, such as 4:4:4 or a 10-bit format, the pipeline
will crash because the downstream element accesses the video buffer as
I420.
We now restrict the input caps to the "main" profile, which only contains
4:2:0 8-bit streams.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5573>
This can happen with the dummy "noopenh264" library that the freedesktop
flatpak runtime ships, and Fedora is planning on shipping as well. In
both cases the dummy implementation gets replaced with the actual
openh264 library that's downloaded directly from Cisco, but just to be
on the safe side, this patch makes sure to check the return values to
avoid crashing if the underlying library hasn't been swapped out yet.
The patch is taken from freedesktop-sdk and was originally written by
Valentin David <valentin.david@codethink.co.uk>.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5581>
Previously we were checking for opencv dep in 2 different places,
and the checks would vary in terms of how complex and exhaustive
they were.
Move the check into the libs module and reuse the result later on.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3016>
This element refactors functionality from gstonnxinference element,
namely separating out the ONNX inference from the subsequent analysis.
The new element runs an ONNX model on each video frame, and then
attaches a TensorMeta meta with the output tensor data. This tensor data
will then be consumed by downstream elements such as gstobjectdetector.
At the moment, a provisional TensorMeta is used just in the ONNX
plugin, but in the future this will be upgraded to a GStreamer API for other
plugins to consume.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4916>
There are a bunch of plugins that you need for webrtc support, and
it's not obvious at all to users which those are.
With this commit, srtp, sctp and dtls options will be auto-enabled if
the webrtc option is enabled.
Requires meson 1.1
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5505>
Use gst_codec_utils_caps_get_mime_codec() in pbutils for codec
strings. That function gives more elaborate RFC 6381 compatible
strings than the helper functions in gstmpdhelper.c, such as
"avc1.F4000D".
Remove the helper functions, as they were only used from dashsink.
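A minimal usage sketch of the pbutils helper now used instead:
```
#include <gst/pbutils/pbutils.h>

/* Sketch: derive an RFC 6381 codec string (e.g. "avc1.F4000D") from caps. */
static gchar *
codec_string_for_caps (GstCaps * caps)
{
  return gst_codec_utils_caps_get_mime_codec (caps);
}
```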
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5404>
There is no need to use the DRM dumb pool if the buffer to
render is already a DMABuf; just import it and render it.
This fixes a DMABuf memory leak when the element upstream of the sink
exports DMABuf while the sink is configured to be the
DMABuf exporter (drm-device=/dev/dri/card0):
gst-launch-1.0 v4l2src io-mode=4 ! gtkwaylandsink drm-device=/dev/dri/card0
Leak identified with the command:
watch "cat /sys/kernel/debug/dma_buf/bufinfo | grep attached "
Fixes #2729
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5350>
There is no need to use the DRM dumb pool if the buffer to
render is already a DMABuf; just import it and render it.
This fixes a DMABuf memory leak when the element upstream of waylandsink
exports DMABuf while waylandsink is configured to be the
DMABuf exporter (drm-device=/dev/dri/card0):
gst-launch-1.0 v4l2src io-mode=4 ! waylandsink drm-device=/dev/dri/card0
Leak identified with the command:
watch "cat /sys/kernel/debug/dma_buf/bufinfo | grep attached "
Fixes #2729
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5350>
* Library versioning should not be used for plugins since it will add
a -{version}.dll suffix (and versioned libraries with symlinks on Linux).
The mismatch between the library file name and the plugin init function
name will then result in a blacklisted plugin.
* Don't define BUILDING_GST_CODECS, it makes no sense
* Don't define G_LOG_DOMAIN, which should be used only for libraries,
not plugins
* Depend on the gstcodecparsers library, not gstcodecs.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5249>
Section 3.4 in RFC8835 states that if a WebRTC endpoint uses an HTTP
proxy to access the Internet it MUST include the "ALPN" header. This
commit adds this header.
By default the ALPN used when connecting to the TURN/TCP server via a
proxy is set to "webrtc". It can be changed by adding an alpn URL
option to the http-proxy. For example:
http://user:pass@my.http.proxy.com:8080?alpn=c-webrtc
This will add the header "ALPN: c-webrtc" to the HTTP proxy CONNECT
request.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4212>
Pass GstVideoInfoDmaDrm or GstVideoInfo whenever possible, avoiding passing a
strange combination of GstVideoFormat + modifier. Even though we don't have any
at the moment, this also allows supporting GstVideoFormats that are not
supported in our DRM integration.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5120>
lc3enc:
- encodes raw audio into the LC3 format
- uses the default bitrate property and the frame duration from the
caps to determine the byte count of the encoded frames if it is not
specified in the downstream caps after negotiation
- uses the same byte count value for all the channels
- all the common session configuration parameters are passed in the
src caps
lc3dec:
- decodes LC3-encoded audio
- sink caps should contain all the common session configuration
params
- uses frame_duration and frame_bytes (byte count) in the sink caps
as parameters, along with sample rate and channel count
- the byte count is the same for all the channels
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4376>
`srt_rejectreason_str` doesn't give us a unique string for every
possible reason. Peers can define their own reasons and SRT just gives
us the string `"Application-defined rejection reason"` for all of them.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4948>
Add a Direct3D11-backend Qt6 QML video sink element, qml6d3d11sink.
Implementation details are similar to the qt6 plugin in -good,
but there are a few notable differences:
* qml6d3d11sink accepts all GstD3D11-supported video formats (e.g., NV12).
* The scene graph (owned by qml6d3d11sink) will hold a dedicated and sharable
RGBA texture which belongs to Qt6's Direct3D11 device, instead of sharing
GStreamer's own texture with Qt6.
* All rendering operations will be done by using GStreamer's Direct3D11 device.
Specifically, upstream texture will be copied (in case of RGBA)
or converted to the above mentioned Qt6's sharable texture.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3707>
Even if we don't yet know what the echo probe format is, we want to be able to
provide silence for the reverse path, so that when the probe becomes available,
there is no ambiguity around what time period the new set of samples are for.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4849>
The probe's info may not precisely match the dsp's info. For instance,
the number of channels or their layout might be different.
```
GStreamer-Audio-CRITICAL **: 16:21:32.899: the GstAudioInfo argument is not equal to the GstAudioMeta's attached info
```
This broke in d5755744c3.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4849>
A race condition can occur in `srtpdec` during the READY -> NULL transition:
an RTCP buffer can make its way to `gst_srtp_dec_chain` while the element is
partially stopped, resulting in the following critical warning:
> Got data flow before segment event
The problematic sequence is the following:
1. An RTCP buffer is being handled by the chain function for the
`rtcp_sinkpad`. Since this is the first buffer, we try pushing the sticky
events to `rtcp_srcpad`.
2. At the same moment, the element is being transitioned from PAUSED to READY.
3. While checking and pushing the sticky events for `rtcp_srcpad`, we reach the
Segment event. For this, we try to get it from the "otherpad", in this case
`rtp_srcpad`. In the problematic case, `rtp_srcpad` has already been
deactivated so its sticky events have been cleared. We won't be pushing any
Segment event to `rtcp_srcpad`.
4. We return to the chain function for `rtcp_sinkpad` and try pushing the
buffer to `rtcp_srcpad` for which deactivation hasn't started yet, hence the
"Got data flow before segment event".
This commit:
- Adds a boolean return value to `gst_srtp_dec_push_early_events`: in case the
Segment event can't be retrieved, `gst_srtp_dec_chain` can return an error
instead of calling `gst_pad_push`.
- Replaces the obsolete `gst_pad_set_caps` with `gst_pad_push_event`. The
additional preconditions checked by the former function are guaranteed here
since we push fixed Caps which were built in the same function.
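For reference, a sketch of the event-based replacement for the removed
`gst_pad_set_caps` call:
```
/* Sketch: push the fixed caps as a CAPS event instead of going through
 * gst_pad_set_caps and its additional precondition checks. */
static gboolean
push_fixed_caps (GstPad * srcpad, GstCaps * caps)
{
  return gst_pad_push_event (srcpad, gst_event_new_caps (caps));
}
```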
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4844>
A race condition can occur in `srtpdec` during the READY -> NULL transition:
an RTCP buffer can make its way to `gst_srtp_dec_chain` while the element is
partially stopped, resulting in the following critical warning:
> assertion 'parent->numsinkpads <= 1' failed
This can occur when the first RTCP buffer is received during the READY -> NULL
transition. If deactivation of the `rtp_srcpad` has already reached
`post_activate`, the sticky events are removed from this Pad. In this case,
`gst_srtp_dec_push_early_events` branches to the generation of a stream id
using `gst_pad_create_stream_id`. This function ensures that the element
doesn't own more than 1 sink pad. Since `srtpdec` owns two of them, the
assertion fails.
This commit uses `gst_element_decorate_stream_id`, which doesn't perform this
check. The precondition is not necessary in this particular context since the
stream ids for the RTP / RTCP pads are derived from the same id.
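A sketch of what the replacement amounts to (helper name is illustrative):
```
/* Sketch: gst_element_decorate_stream_id() builds a stream id for the element
 * without asserting that it has at most one sink pad, unlike
 * gst_pad_create_stream_id() which srtpdec (two sink pads) cannot use here. */
static void
push_stream_start (GstElement * element, GstPad * srcpad, const gchar * id)
{
  gchar *stream_id = gst_element_decorate_stream_id (element, id);

  gst_pad_push_event (srcpad, gst_event_new_stream_start (stream_id));
  g_free (stream_id);
}
```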
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4844>
Updated API usage appropriately, and now we have a versioned package to
track breaking vs. non-breaking updates.
Deprecates a number of properties (and we have to plug in our own values
for related enums which are now gone):
* echo-suppression-level
* experimental-agc
* extended-filter
* delay-agnostic
* voice-detection-frame-size-ms
* voice-detection-likelihood
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2943>
The `switch (n_rear)` supports up to 5 rear channels, but our channel
set only had space for 3. Size the set properly to fix this.
This didn't actually cause any memory unsafety as `PUSH_CHAN` would stop
incrementing `n_rear` if the channel set is already full.
Thanks to @alatiera for noticing this.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4712>
New Vulkan formats don't match the number of planes with the number of memories
attached to the buffer. This patch changes the pattern of traversing the
memories per plane to traversing them using the number of attached memories.
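A sketch of the traversal pattern, iterating the attached memories rather than
the format's planes:
```
/* Sketch: walk the buffer's attached memories directly instead of assuming
 * one memory per video plane. */
static void
foreach_image_memory (GstBuffer * buffer)
{
  guint i, n_mems = gst_buffer_n_memory (buffer);

  for (i = 0; i < n_mems; i++) {
    GstMemory *mem = gst_buffer_peek_memory (buffer, i);

    /* ... process this memory (e.g. a GstVulkanImageMemory) ... */
    (void) mem;
  }
}
```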
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4351>
When transitioning from state PAUSED to READY, the sctpenc element
could previously be stuck in an endless loop trying to resend data
in case the underlying sctp stream was in the process of
resetting. usrsctp_sendv() would repeatedly return EAGAIN with the
result that 0 bytes were sent and then sctpenc would retry forever.
To bring sctpenc out of the resend loop we just need to inform the
sink pad that it is flushing, which is already done for the associated
data queue, but we also need to set the bools associated with the
sinkpads that are used as the loop criterion.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4601>