To achieve maximum throughput, waiting on the command commit thread is not
ideal, and adding render delay would introduce unwanted latency. The best
approach is to split off a dedicated output thread that waits for finished
decoding jobs.
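A hedged sketch of the general pattern, using a srcpad task as the output
thread; the pop_finished_job() helper is hypothetical and this is not the
commit's actual code:
```
#include <gst/video/gstvideodecoder.h>

/* Hypothetical helper: blocks until a decode job finishes, NULL on flush/stop. */
static GstVideoCodecFrame *pop_finished_job (GstVideoDecoder * self);

/* Output loop running on the srcpad task: the commit thread never waits for
 * finished jobs, it only queues them; this loop pushes them downstream. */
static void
output_loop (gpointer user_data)
{
  GstVideoDecoder *self = GST_VIDEO_DECODER (user_data);
  GstVideoCodecFrame *frame = pop_finished_job (self);

  if (frame == NULL) {
    gst_pad_pause_task (GST_VIDEO_DECODER_SRC_PAD (self));
    return;
  }

  gst_video_decoder_finish_frame (self, frame);
}

static gboolean
start_output_thread (GstVideoDecoder * self)
{
  return gst_pad_start_task (GST_VIDEO_DECODER_SRC_PAD (self),
      output_loop, self, NULL);
}
```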
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5812>
When creating a new VA pool, set the config size to zero because it's not used.
Also, given the potentially different sizes reported by software buffer pools
and VA buffer pools, this patch handles those different values.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5805>
VA drivers allocate surfaces according to their properties, so there's no need
to provide a buffer size to the VA pool.
Still, the buffer size is reported by the driver, or the canonical size is used
for single-plane surfaces.
This patch removes the need to provide a size to gst_va_pool_new_with_config()
and adds a helper method, gst_va_pool_get_buffer_size(), to retrieve the
surface size. The callers are changed accordingly.
Changes for custom VA pool creation will be addressed in the following commits.
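A minimal sketch of the intent; the exact signature of the new
gst_va_pool_get_buffer_size() helper (pool plus an out size) and the VA pool
header path are assumptions here:
```
#include <gst/gst.h>
#include <gst/va/gstvapool.h>       /* header path assumed */

/* Configure a VA pool with size 0 (the driver decides the real size),
 * then query the surface size it actually allocated. */
static gboolean
configure_va_pool (GstBufferPool * pool, GstCaps * caps,
    guint min_buffers, guint max_buffers)
{
  GstStructure *config = gst_buffer_pool_get_config (pool);
  guint size = 0;

  /* size is ignored by the VA pool, so pass 0 */
  gst_buffer_pool_config_set_params (config, caps, 0, min_buffers, max_buffers);
  if (!gst_buffer_pool_set_config (pool, config))
    return FALSE;

  if (gst_va_pool_get_buffer_size (pool, &size))
    GST_DEBUG ("driver-reported surface size: %u", size);

  return TRUE;
}
```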
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5805>
If the ladspa plugin is enabled explicitly or via auto-features, the
liblrdf dependency cannot be disabled.
As the RDF parsing currently provides hardly any features, the possibility
to disable it is fairly useful.
Fixes: #3168
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5794>
With the way the runtime checks are currently set up, every single
openh264 release, no matter how minor, is considered an ABI break and
requires gst-plugins-bad recompilation. This is unnecessarily strict
because it doesn't allow downstream distributions to ship any openh264
bug fix version updates without breaking gstreamer's openh264 support.
Years ago, at the time when gstreamer's openh264 support was merged,
openh264 releases were done without a versioned soname (the library was
just libopenh264.so, unversioned). Since then, starting with version
1.3.0, openh264 has started using versioned sonames and the intent has
been to bump the soname every time there's a new release with an ABI
change.
This patch drops the strict version check. meson.build already has a
minimum requirement on openh264 version 1.3.0 where soname versioning
was added, which should be good enough to ensure that the library is
using soname versioning.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5780>
This avoids a build failure when compiling against OpenSSL 3.2.0. The
problem occurs when windows.h is included before WinSock2.h, because
windows.h includes winsock.h [1]. Defining _WINSOCKAPI_ stops windows.h
from including winsock.h.
Error:
```
[748/1041] Compiling C object ext/dtls/gstdtls.dll.p/gstdtlscertificate.c.obj
FAILED: ext/dtls/gstdtls.dll.p/gstdtlscertificate.c.obj
[...]
Windows Kits\10\include\10.0.17763.0\shared\ws2def.h(235): error C2011: 'sockaddr': 'struct' type redefinition
Windows Kits\10\include\10.0.17763.0\um\winsock.h(482): note: see declaration of 'sockaddr'
```
[1] https://stackoverflow.com/a/1372836
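A minimal sketch of the workaround described above (not the exact patch):
```
/* Keep windows.h from pulling in the legacy winsock.h, whose sockaddr
 * definitions clash with the ws2def.h declarations used by WinSock2.h. */
#ifdef _WIN32
#ifndef _WINSOCKAPI_
#define _WINSOCKAPI_
#endif
#include <windows.h>
/* WinSock2.h can now be included (directly or via OpenSSL) without clashes. */
#endif
```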
Closes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3167
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5770>
The original idea was to select the type of mapping (either deriving the image
or downloading it) at runtime, under the assumption that both methods
shared the same memory layout (offsets and strides), because a single
GstVideoMeta is assigned by the buffer pool at allocation time. Nonetheless, on
recent hardware this assumption is invalid, raising memory access errors.
This patch completely removes the mapping type selection at runtime, using the
method selected when the allocator is configured, synced with the buffer pool
allocation.
This problem was originally fixed only for the iHD driver, but now it makes
sense to remove the runtime selection entirely.
Original-patch-by: Mengkejiergeli Ba <mengkejiergeli.ba@intel.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5760>
When exporting a DMABuf from a VASurface, the user might say that the surface
was allocated with a certain fourcc, but the returned VADRMPRIMESurfaceDescriptor
might report a different fourcc, as in the case of the radeonsi driver with
duplicated fourccs such as YUY2 and YUYV.
Originally this was treated as a failed export. This patch relaxes the
validation by allowing different fourccs.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5760>
In a multi-card scenario, the user can set the GST_MSDK_DRM_DEVICE environment
variable to choose the device. This patch aligns VPL's queried results with the
user's choice by passing the deviceID when creating the MFX implementation.
Co-authored-by: Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5697>
This is a bit of a hack solution, as I think the correct solution is to
expose model caps on the sinkpad (eventually sinkpads). Till then I think
this is reasonable.
- Add a property to onnxinference to set the datatype.
- Fix internal buffer allocation size based on the datatype.
- Extract a method to remove the alpha channel and convert to a planar image
when requested. Also template the method to support writing to buffers
of different datatypes.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5761>
There is an existing PMT mapping between PCR_%s and an mpegtsmux sink
pad name, where %s equals the program number that the PCR corresponds
to. We re-purpose this functionality to also support a mapping between
PCR_%s and an arbitrary PID. If this mapping is set, then the header PCR
PID is set to this value, and PCR is attached to the stream with this
PID.
Note: the current implementation also attaches PCR to the video stream,
so this may be inefficient.
Co-authored-by: Jordan Yelloz <jordan.yelloz@collabora.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5726>
Clip tile rows and cols to 64 as described in the AV1 specification
to avoid writing outside the array range, but preserve the sb_cols
and sb_rows values, which are used in further computation.
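A sketch of the clamping with hypothetical struct and field names; only the
stored tile counts are capped, the superblock counts are preserved:
```
#include <glib.h>

#define MAX_TILE_COLS 64        /* AV1 spec limit */
#define MAX_TILE_ROWS 64

/* Hypothetical struct, for illustration only. */
typedef struct { guint tile_cols, tile_rows, sb_cols, sb_rows; } TileInfo;

static void
store_tile_info (TileInfo * ti, guint tile_cols, guint tile_rows,
    guint sb_cols, guint sb_rows)
{
  /* cap what indexes the fixed-size arrays ... */
  ti->tile_cols = MIN (tile_cols, MAX_TILE_COLS);
  ti->tile_rows = MIN (tile_rows, MAX_TILE_ROWS);
  /* ... but keep the superblock counts intact for further computation */
  ti->sb_cols = sb_cols;
  ti->sb_rows = sb_rows;
}
```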
Fixes ZDI-CAN-22226 / CVE-2023-44429
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5702>
In the case of an upstream element proposing a buffer pool,
use it to allocate the buffer image with the parameters
set by the upstream element.
Besides, the buffer pool handling is now sync'd with the GstBaseTransform
base class.
See the case of vulkanupload ! vulkanh264enc.
Co-authored-by: Victor Jaquez <vjaquez@igalia.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5651>
Direct3D feature level 10 supported GPUs were released
more than 15 years ago, around the time when Windows
Vista / 7 were released. Also, our d3d11 plugin/library
already does not support feature level 9.x very well.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5709>
We were previously always querying index 0, and while the number of planes per
buffer will never change, it seems more proper to query the right buffer rather
than always the first one.
This was found while reading strace logs, and wondering why the
V4L2_BUF_FLAG_MAPPED flag was present on all non-0 indices even though that
happened before VIDIOC_EXPBUF.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5647>
- GstAnalyticRelationMeta is a base class for analytics
metas. It's able to store analytics results (GstAnalyticRelatableMtd)
and describe the relations between analysis results.
- GstAnalyticRelationMeta also contains an algorithm able to explore
analysis-result relations using a BFS.
- Relations (edges) between analysis results (vertices) are stored in an
adjacency matrix that allows quickly identifying whether two analysis
results are related and by which relation. It also works for indirect
relations and can provide the path of analysis results by which two
analysis results are related.
- One allocation per buffer to store analysis results. Here we rely on
the application to guess how much space will be required to store all
analysis results. This is something that could be improved
significantly, but it's a starting point.
- Define common analysis results (classification, object detection,
tracking) that are subclasses of GstAnalyticRelatableMtd. They also
provide examples of how to extend GstAnalyticRelatableMtd so that
extensions benefit from the mechanism to express relations with other
analysis results.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4962>
During the video session memory allocation, the property flags can
be different from the expected ones, so do not expect all the
property flags; test with G_MAXUINT32 instead.
It's failing with driver 525.47.26 and NVIDIA GeForce
RTX 3050 and 2060 hardware.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4850>
Take into account the case where user filters have been set before the
source gets updated.
Note that the further linking of the filters, if present, happens below
in the `gst_camera_bin_check_and_replace_filter()` calls.
The audio filter is still affected by the same issue but left out for
now.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5527>
Even if IDXGIOutput6 says the current display colorspace is HDR, the
texture captured via IDXGIOutputDuplication::AcquireNextFrame()
is a frame already converted by the OS, unless we use IDXGIOutput5::DuplicateOutput1()
with the DXGI_FORMAT_R16G16B16A16_FLOAT format so that the captured
frame is in scRGB color space. The application should then perform a
tonemapping operation based on the reported display white level, color primaries, etc.
Since we don't have any tonemapping implementation, ignore the colorimetry
reported by IDXGIOutput6.
Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3128
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5671>
This is how it was documented and how it worked before the port to GstPlay.
Without this, applications expecting signals to be emitted directly
without anything running the main context will simply not receive any
signals.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5672>
The d2d runtime seems to execute the pending GPU command list
when the DXGI ID2D1RenderTarget is released, and this invokes
d3d11 immediate context APIs. So protect all rendering operations
and DXGI resources with a lock.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5659>
The code seems to validate that the media-level fingerprint matches
the fingerprint of the previous media or of the whole session. There
is no such requirement in any RFC I found. The session-level one
is just meant to act as a fallback when there is no media-level
fingerprint.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1118>
This commit corrects the mapping relationship between RGB and BGR in GST and DRM.
The previous mapping was incorrect, causing potential color mismatches in the output.
The changes are as follows:
{WL_SHM_FORMAT_RGB888, DRM_FORMAT_RGB888, GST_VIDEO_FORMAT_BGR},
{WL_SHM_FORMAT_BGR888, DRM_FORMAT_BGR888, GST_VIDEO_FORMAT_RGB},
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5620>
After talking with Vivia on IRC, she does not remember why the default
was FALSE and it is in my opinion preferable to stick to whatever
representation best represents time for a given framerate as a default
behavior.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5628>
In the case of encoders and filters when importing a DMABuf, use
GstVideoInfoDmaDrm to get the drm fourcc and modifier.
In both cases, instead of keeping the original GstVideoInfoDmaDrm from the caps,
the GstVideoInfo part of the structure is converted into a canonical one, given
the format from the fourcc. It's kept this way to handle V4L2 linear DMABufs and
to avoid too many changes in the current code.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5264>
Instead of guessing the DRM format and modifier, pass a DRM video info to
gst_va_dmabuf_memories_setup().
Still, it checks for the DRM parameters in the DRM info; if they are not
available, as in the case of V4L2 buffers, the video info part is used instead.
This is an API break, but since the plugin is still in an early stage, it's
still allowed.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5264>
To import DMABufs we used VASurfaceAttribExternalBuffers, which works, but it's
not specific to DRM PRIME 2, since it lacks a lot of metadata. This patch
replaces VASurfaceAttribExternalBuffers with VADRMPRIMESurfaceDescriptor in
va_create_surfaces().
Still, this patch assumes a linear modifier only.
The hack for RGB surfaces in the i965 driver was pushed down into
va_create_surfaces() to avoid handling both structures.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5264>
This should fix auto-generated follow-up sections like "Hierarchy" or
"Factory details" so they are listed under the element name in the
table of contents of the document, instead of in a stand-alone
"Duplex-Mode" section.
Also clean up some spurious colon suffixes after section names.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5625>
Now that nvidia-vaapi-driver has appeared and isn't yet supported by GstVA, we
have to add an allow list of supported drivers.
This patch implements it, adding an environment variable to disable this driver
check: GST_VA_ALL_DRIVERS.
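A hedged sketch of how such a check could look; the allow-list helper and the
driver names are illustrative, only the GST_VA_ALL_DRIVERS escape hatch comes
from this commit:
```
#include <glib.h>

/* Hypothetical allow-list check; vendor prefixes are illustrative only. */
static gboolean
vendor_is_allowed (const gchar * vendor)
{
  const gchar *allowed[] = { "Intel iHD", "Mesa Gallium", NULL };
  guint i;

  for (i = 0; allowed[i] != NULL; i++) {
    if (g_str_has_prefix (vendor, allowed[i]))
      return TRUE;
  }
  return FALSE;
}

static gboolean
driver_check (const gchar * vendor)
{
  /* GST_VA_ALL_DRIVERS bypasses the allow list entirely */
  if (g_getenv ("GST_VA_ALL_DRIVERS") != NULL)
    return TRUE;

  return vendor_is_allowed (vendor);
}
```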
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5616>
Suppose you have a project where GStreamer and wayland-protocols are
pulled in as dependencies via .wrap files. In that case, Meson's setup
step will fail for gst-plugins-bad with the message "Sandbox violation:
Tried to grab file viewporter.xml outside current (sub)project." To
avoid this exception, one should use Meson's `files` and `join_paths`
functions. The suggested solution is identical to how GTK 4 processes
Wayland files.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5593>
A payload of 0x80 0x80 means that it's padding. It's not a good idea to
throw this away though, because of the cc_valid field.
According to CEA 10-B section 25.2.1, if cc_valid is zero, the run-in
clock and start bit should not be generated. In practice, this means
that any closed captions will be erased and the end-user TV will show
that captions are not available for this stream. This might have
undesired consequences, e.g. we were just showing a long line of
captions and we disable it before the user has had time to read it, or
you can't enable closed captions during silence/music intervals.
We cannot reliably detect whether there's a currently-silent closed
caption stream or just nothing, but we have this information coming from
upstream, so we can at least not discard it.
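For reference, a sketch of how a cc_data triplet carries cc_valid alongside
possible padding bytes (field layout per CEA-708 cc_data; the function is
illustrative, not this element's code):
```
#include <glib.h>

/* Inspect one 3-byte cc_data triplet: byte 0 carries marker bits, cc_valid
 * and cc_type; bytes 1-2 are the payload (0x80 0x80 means padding).
 * Returns TRUE if the triplet should be kept. */
static gboolean
keep_cc_triplet (const guint8 cc[3])
{
  gboolean cc_valid = (cc[0] & 0x04) != 0;
  gboolean padding = (cc[1] == 0x80 && cc[2] == 0x80);

  /* Even a padding triplet is worth forwarding: cc_valid == TRUE tells the
   * receiver a caption service is present, just currently silent. */
  return cc_valid || !padding;
}
```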
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5508>
When decoding a stream using a hardware V4L2 decoder element, in any of the
currently supported formats, the decoding will fail once frame number
1000000 is reached. The reported error clearly indicates a wrap-around
occurred: instead of receiving decoded frame 1000000, frame 0 is received
from the hardware V4L2 decoder driver.
The problem is actually not in the driver itself, but rather in gstreamer,
which uses `struct v4l2_buffer` member `.timestamp` in a special way. The
timestamp of buffers with encoded data added to the SINK (input) queue of
the driver is copied by the driver into matching buffers with decoded data
added to the SOURCE (output) queue of the driver. In fact, the timestamp
is not a timestamp at all, but rather in this special case, only part of
it is used as an incrementing frame counter.
The `.timestamp` is of type `struct timeval`, which is defined in
`sys/time.h` [1]. Only the `tv_usec` member of this structure is used
for the incrementing frame counter. However, suseconds_t tv_usec [2]
may be limited to range [-1, 1000000]:
"
[XSI] The type suseconds_t shall be a signed integer type capable of
storing values at least in the range [-1, 1000000].
"
Therefore, once frame 1000000 is reached, a rollover occurs and decoding
fails.
Fix this by using both `struct timeval` members, `.tv_sec` and `.tv_usec`,
with matching modular arithmetic; this way the failure would only occur again
just short of 2^84 frames, which should be plenty.
[1] https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/sys_time.h.html
[2] https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/sys_types.h.html
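A sketch of the scheme under these assumptions (helper and macro names are
illustrative):
```
#include <glib.h>
#include <sys/time.h>

#define US_PER_S 1000000

/* Spread the frame counter across both timeval fields so tv_usec stays
 * within [0, 1000000) and the counter only wraps far beyond 2^63 frames. */
static void
frame_to_timeval (guint64 frame_num, struct timeval *tv)
{
  tv->tv_sec = frame_num / US_PER_S;
  tv->tv_usec = frame_num % US_PER_S;
}

static guint64
timeval_to_frame (const struct timeval *tv)
{
  return (guint64) tv->tv_sec * US_PER_S + tv->tv_usec;
}
```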
In a test case using a stateless hardware H.264 decoder, the WARN/ERROR output
in the gstreamer log indicates that a failure occurred. With this change, that
error no longer occurs and the WARN/ERROR lines are not present:
```
pc$ gst-launch-1.0 videotestsrc num-buffers=1001001 pattern=6 ! \
video/x-raw,width=16,height=16,format=I420 ! \
x264enc ! filesink location=/tmp/test.h264
dut$ GST_DEBUG="*:3" gst-launch-1.0 filesrc location=/tmp/test.h264 ! \
h264parse ! v4l2slh264dec ! fakesink
...
0:03:51.393677606 12111 0x370df400 WARN \
v4l2codecs-decoder gstv4l2decoder.c:1157:gst_v4l2_request_set_done:<v4l2decoder2> \
Requested frame 1000000, but driver returned frame 0.
0:03:51.394140597 12111 0x370df400 WARN \
v4l2codecs-decoder gstv4l2decoder.c:1157:gst_v4l2_request_set_done:<v4l2decoder2> \
Requested frame 1000001, but driver returned frame 1.
0:03:51.394425216 12111 0x370df400 WARN \
v4l2codecs-decoder gstv4l2decoder.c:1157:gst_v4l2_request_set_done:<v4l2decoder2> \
Requested frame 1000002, but driver returned frame 2.
0:03:51.394665211 12111 0x370df400 WARN \
v4l2codecs-decoder gstv4l2decoder.c:1157:gst_v4l2_request_set_done:<v4l2decoder2> \
Requested frame 1000003, but driver returned frame 3.
0:03:51.394785833 12111 0x370df400 WARN \
v4l2codecs-h264dec gstv4l2codech264dec.c:1059:gst_v4l2_codec_h264_dec_output_picture:<v4l2slh264dec0> \
error: Failed to decode frame 1000000
ERROR: from element /GstPipeline:pipeline0/v4l2slh264dec:v4l2slh264dec0: Failed to decode frame 1000000
```
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5598>
This pair of elements, inspired by shmsink/shmsrc, sends unix file
descriptors (e.g. memfd, dmabuf) from one sink to multiple source
elements in other processes.
The unixfdsink proposes a memfd/shm allocator, which causes, for example,
videotestsrc to write directly into memories that can be transferred to
other processes without copying.
Sponsored-by: Netflix Inc.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5328>
The src caps of libde265 are now fixed to I420, so if the
stream is in another format, such as 4:4:4 or a 10-bit format, the pipeline
will crash because the downstream element accesses the video buffer as
I420.
We now restrict the input caps to the "main" profile, which only contains
4:2:0 8-bit streams.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5573>
This can happen with the dummy "noopenh264" library that the freedesktop
flatpak runtime ships, and that Fedora is planning on shipping as well. In
both cases the dummy implementation gets replaced with the actual
openh264 library that's downloaded directly from Cisco, but just to be
on the safe side, this patch makes sure to check the return values to
avoid crashing if the underlying library hasn't been swapped out yet.
The patch is taken from freedesktop-sdk and was originally written by
Valentin David <valentin.david@codethink.co.uk>.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5581>
If users update geometry-related properties very frequently
for a stream to be animated, redrawing on every update
can make rendering choppy or become a performance bottleneck.
To address the issue, add a property to control whether the scene
is redrawn when geometry-related properties are updated.
Also, do not resize the swapchain on such property updates, since
re-allocating the backbuffer and the multi-sampled render target is
unnecessary in that case.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5575>
Other Windows applications allow window switching even when
an application window is in fullscreen mode. Also fix a
regression introduced in 15248d8b84
which made the restored window always be located topmost,
since we do not call SetWindowPos() anymore when restoring.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5574>
ISimpleAudioVolume controls the volume of the corresponding audio session,
and there would be only a single input/output audio session
in the case of share mode, which means that it controls the audio volume of
the process. Instead, use the IAudioStreamVolume interface, which controls
the volume of the stream.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5549>
Ignore the alpha component of the source (mouse cursor texture)
when blending the alpha channel, otherwise the background area of the source
(which has zeros) will be written to the render target. This would result
in a black rectangle if the output texture is converted to a premultiplied
alpha texture.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5566>
When hotdoc documentation is enabled and the opencv plugin is set as
auto-detected, but the library isn't installed, meson configuration fails
with this message:
../subprojects/gst-plugins-bad/docs/meson.build:139:21: ERROR: Unknown variable "gstopencv_dep".
This patch fixes this case by defining gstopencv_dep as disabler() when the
dependencies aren't found.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5560>
The current code tries to derive the image in _update_image_info() first. If
deriving returns no error, va_allocator->info is the one from the derived
image. But in va_map_unlocked() we disable the derive path for the d3d backend
because it doesn't seem to work, which causes issues for the d3d path,
i.e. possibly using derived info in va_get_image() to do the mapping.
This patch disables image deriving for the d3d backend in _update_image_info()
as well, to ensure we only use the info from va_create_image() for the d3d path.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5495>
An operation is an arbitrary amount of work to be executed on the host, a
device, or an external entity such as a presentation engine.
The purpose of this object is to help with the operations' synchronization
by declaring explicit execution dependencies and memory dependencies
between two sets of operations defined by the command's two synchronization
scopes.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5079>
While VkPipelineStageFlags is an enum (arguably backed as uint32 in 32bit
platforms), VkPipelineStageFlags2 is a redefinition of guint64; likewise for
VkAccessFlags and VkAccessFlags2.
This patch types both members in GstVulkanBarrierMemoryInfo as guint64 for
compatibility, so it can be used with or without the synchronization2 Vulkan
extension.
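A sketch of the compatibility idea; the struct below is a stand-in, not the
actual GstVulkanBarrierMemoryInfo definition:
```
#include <glib.h>

/* Stand-in struct: 64-bit members can hold either the 32-bit core flags
 * (VkPipelineStageFlags / VkAccessFlags) or the 64-bit synchronization2
 * flags (VkPipelineStageFlags2 / VkAccessFlags2) without truncation. */
typedef struct
{
  guint64 pipeline_stages;      /* VkPipelineStageFlags or VkPipelineStageFlags2 */
  guint64 access_flags;         /* VkAccessFlags or VkAccessFlags2 */
} BarrierInfoSketch;
```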
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5079>
Previously we were checking for opencv dep in 2 different places,
and the checks would vary in terms of how complex and exhaustive
they were.
Move the check into the libs module and reuse the result later on.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3016>
This field is used to store gbooleans (which are ints), but if it's
a 1-bit-wide bit-field, assigning ints to it changes its value, as the only
valid values are -1 and 0.
Make it a guint instead so the assignment is correct.
```
../subprojects/gst-plugins-bad/gst-libs/gst/vulkan/xcb/gstvkwindow_xcb.c:151:25: error:
implicit truncation from 'int' to a one-bit wide bit-field changes value from 1 to -1
[-Werror,-Wsingle-bit-bitfield-constant-conversion]
window_xcb->visible = TRUE;
```
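A minimal illustration of the fix; the structs are reduced stand-ins, not the
real GstVulkanWindowXCB:
```
#include <glib.h>

/* gboolean is a signed int: a 1-bit signed bit-field can only hold 0 and -1,
 * so assigning TRUE (1) gets truncated to -1. */
struct before { gboolean visible:1; };

/* An unsigned 1-bit field stores 0 and 1 exactly, so TRUE/FALSE round-trip. */
struct after { guint visible:1; };
```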
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5432>
This was wrongly calling the base class method, which unnecessarily took the stream lock, already taken by
handle_frame(). The drain() call in negotiate() would then wait for the output loop to pause, while that loop
is stuck waiting to take the stream lock, thus causing a deadlock.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5521>
This element refactors functionality from gstonnxinference element,
namely separating out the ONNX inference from the subsequent analysis.
The new element runs an ONNX model on each video frame, and then
attaches a TensorMeta meta with the output tensor data. This tensor data
will then be consumed by downstream elements such as gstobjectdetector.
At the moment, a provisional TensorMeta is used just in the ONNX
plugin, but in the future this will be upgraded to a GStreamer API for other
plugins to consume.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4916>
- Don't try to make the parameters match `GHFunc`. Use a dedicated
callback for `g_hash_table_foreach` (see the sketch after this list).
- Don't try to be clever with buffer memories. We're allocating a full
packet anyway, might as well memcpy and save on a lot of complexity.
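A minimal illustration of the dedicated-callback point from the first bullet,
with hypothetical key/value semantics (not the element's actual code):
```
#include <glib.h>

/* Dedicated callback matching GHFunc exactly, instead of coercing an
 * existing function with a different signature. */
static void
count_entries (gpointer key, gpointer value, gpointer user_data)
{
  guint *count = user_data;

  (void) key;
  (void) value;
  (*count)++;
}

static guint
count_table (GHashTable * table)
{
  guint count = 0;

  g_hash_table_foreach (table, count_entries, &count);
  return count;
}
```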
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5496>
There are a bunch of plugins that you need for webrtc support, and
it's not obvious at all to users which those are.
With this commit, srtp, sctp and dtls options will be auto-enabled if
the webrtc option is enabled.
Requires meson 1.1
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5505>
The v4l2codecs H.265 decoder uses the
GstH265SliceHdr::entry_point_offset_minus1 array so make sure that it is not
freed before decoding the frame.
Before this patch, some H.265 input would segfault in
gst_v4l2_codec_h265_dec_fill_slice_params() when executing the line:
guint32 entry_point_offset = slice_hdr->entry_point_offset_minus1[i] + 1;
Make sure that the array is not freed before using it.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5499>
While adding arbitrary tile support, a round-up operation was badly
converted. This caused the Y component of the stride to be 0, which
eventually led to a crash in glupload preceded by the following
assertion:
gst_gl_buffer_allocation_params_new: assertion 'alloc_size > 0' failed
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5458>
The caps that were sent by the caps event can be retrieved from the sinkpad
using gst_pad_get_current_caps(). This is more reliable than using cur_caps, as
we know exactly which caps upstream selected when the UVC host hasn't selected
a format yet.
This further allows simplifying the check of whether the uvcsink has to wait
for the caps event before switching to the internal v4l2sink.
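A trivial sketch of the idea (the pad variable is illustrative):
```
#include <gst/gst.h>

/* The caps from the most recent CAPS event can always be retrieved from the
 * sink pad itself, instead of tracking a separate cur_caps field.
 * Returns NULL if no caps event has arrived yet; caller unrefs. */
static GstCaps *
get_upstream_caps (GstPad * sinkpad)
{
  return gst_pad_get_current_caps (sinkpad);
}
```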
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4994>
The probe passes all events except EVENT_CAPS. Installing and removing the
probe doesn't provide any additional value.
Install an event function and always handle EVENT_CAPS. Use the caps_changed
field to decide if the element has to do anything special on an EVENT_CAPS.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4994>
Move the sanity checks to the beginning of the function. Make the actual effect
of the function more obvious and reset the flags at the end.
This should make it easier to understand what this function is doing.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4994>
The probe that installs the buffer probe is already on the correct pad. There is
no need for a separate function to install the probe.
While at it, change the signature of the probe functions to GstPadProbeCallback
to avoid the cast when installing the probes.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4994>
The uvcsink calculates the caps for the format that the UVC host selected. The
gst_uvc_sink_parse_cur_caps() sets these caps as cur_caps as a side effect. This
behavior is surprising as cur_caps is later updated to reflect the actually used
caps.
Just return the configured caps to avoid side effects. This makes the function
easier to understand. Update the function name to reflect the new behavior.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4994>
The only job of the event peer probe is to catch the upcoming caps event
and be able to react with the sink change. All other events that pass
the pad shall be passed through and ignored.
Since the probe is a blocking probe, there is no use in returning
GST_PAD_PROBE_OK on other events; otherwise the event would just
be blocked.
Since we already handle the probe removal in the event switch, we can
remove the second explicit probe removal.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4994>
This commit ports functionality from the `rtpsrc` to make the `ristsrc`
work with dynamic payload types.
It adds two properties:
- `caps`
- `encoding-name`
These can be used to make the `ristsrc` receive other payload types than
the MPEG TS one.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5422>
Since DXVA does not support some profiles, such as HEVC RExt, a
vendor-specific decoding API is still required.
When the decoder is negotiated with d3d11 caps, it will convert
semi-planar frames to planar, since semi-planar formats (e.g.,
DXGI_FORMAT_NV12) are not supported by CUDA/D3D11 interop.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5409>
Use gst_codec_utils_caps_get_mime_codec() in pbutils for codec
strings. That function gives more elaborate RFC 6381 compatible
strings than the helper functions in gstmdphelper.c, such as
"avc1.F4000D".
Remove the helper functions, as they were only used from dashsink.
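A hedged usage sketch of that pbutils helper; the caps string and the exact
resulting codec string are illustrative:
```
#include <gst/gst.h>
#include <gst/pbutils/pbutils.h>

/* Returns an RFC 6381 string such as "avc1.64001f"; the exact output depends
 * on the profile/level (and codec_data) present in the caps. Caller frees. */
static gchar *
mime_codec_for_h264 (void)
{
  GstCaps *caps = gst_caps_from_string (
      "video/x-h264, profile=(string)high, level=(string)3.1");
  gchar *codec = gst_codec_utils_caps_get_mime_codec (caps);

  gst_caps_unref (caps);
  return codec;
}
```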
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5404>
The interaudiosrc might take buffers of different sizes from the audio adapter,
so keeping the metas consistent would be an issue. The sink now strips the audio
metas away and the src adds them back (for non-interleaved layouts only) when
taking buffers from the adapter.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5324>
Moves outputting frames to a task on the source pad, bringing vtdec in line with vtenc.
This brings possible performance improvements thanks to decoupling queueing new frames from outputting processed ones.
The queue length is limited to `2*DPB size` to prevent decoding too far ahead compared to what we're pushing downstream.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5163>
This was easy to trigger when testing with e.g. vtenc ! vtdec ! glimagesink and closing the sink via the window button,
causing GST_FLOW_ERROR to be received by the output loop, stopping it with the queue still full. This made the
enqueue_buffer() callback block waiting for space in our queue, while handle_frame() was waiting for the internal
VideoToolbox queue to free up, so that VTCompressionSessionEncodeFrame could finish. As the output loop was not
running, both functions waited forever.
Fixed by 1) immediately emptying our queue when GST_FLOW_ERROR is received (like we already did with _FLUSHING)
and 2) unconditionally setting the flushing flag in finish_encoding() when it sees the output loop stopped because
of GST_FLOW_ERROR, so that enqueue_buffer() will immediately discard any new frames coming out of VideoToolbox.
Both of those make sure we never run into the both-queues-full scenario.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5303>
As a short-term solution before full d3d12 rendering feature,
copy decoded d3d12 texture to shared d3d11 texture in order to use
existing various d3d11 implementations such as conversion, resizing,
and videosink.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5356>
There is no need to use the DRM dumb pool if the buffer to
render is already a DMABuf; just import it and render it.
This fixes a DMABuf memory leak when the waylandsink downstream
element exports DMABuf while waylandsink is configured to be the
DMABuf exporter (drm-device=/dev/dri/card0):
gst-launch-1.0 v4l2src io-mode=4 ! gtkwaylandsink drm-device=/dev/dri/card0
The leak was identified with this command:
watch "cat /sys/kernel/debug/dma_buf/bufinfo | grep attached "
Fixes #2729
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5350>
There is no need to use the DRM dumb pool if the buffer to
render is already a DMABuf; just import it and render it.
This fixes a DMABuf memory leak when the waylandsink downstream
element exports DMABuf while waylandsink is configured to be the
DMABuf exporter (drm-device=/dev/dri/card0):
gst-launch-1.0 v4l2src io-mode=4 ! waylandsink drm-device=/dev/dri/card0
The leak was identified with this command:
watch "cat /sys/kernel/debug/dma_buf/bufinfo | grep attached "
Fixes #2729
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5350>
Don't update the info's size with the VA image's reported data size for
single-plane images, since drivers might allocate a bigger space than strictly
required to store the image, but when we dump the buffer as is (using filesink,
for example) the produced stream is corrupted. For multi-plane images a video
meta is required to read/write them.
We updated the info's size because gstreamer-vaapi did it too, but the reason to
update it there was for uploading and rendering surfaces (commit c698a015).
Furthermore, this patch adds an error message if the data size allocated for the
image by the driver is less than expected, because that would indicate a buggy
driver.
Fixes: #2959
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5308>
Even if the decoder is negotiated with the CUDA memory feature, if downstream
proposed no buffer pool, assume that the pool size is unknown,
and disable zero-copy if there are no more free output surfaces.
Or, in the case of reverse playback, always copy frames.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5338>
Even if the segmentation feature value is not updated,
the parsed "segmentation_update_map" and "segmentation_temporal_update"
values should not be cleared, as they are referenced during lower-level
bitstream parsing. Also, don't use assert() in the parser
unless it's a clearly impossible condition.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5334>
If the DPB is already full, GstH265Decoder::new_picture() might fail if the
subclass uses a fixed-size picture pool and its size is equal to the DPB
size. Call new_picture() after the DPB is cleared in gst_h265_decoder_dpb_init().
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5333>
The issue is that when amc was producing a codec-data buffer, a
GstVideoCodecFrame was being popped off the internal queue. This meant
that the codec-data was being associated with the first input frame and
the second output buffer (the first encoded buffer) with the second input
frame. At the end (assuming one input produces one output, which seems
to hold in my testing and matches how the encoder is currently implemented)
there would be an input frame missing, and it would be pushed without any
timing information. This would lead to e.g. muxers rejecting the buffer
without PTS and failing to mux.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5330>