This would cause an integer overflow a little further down because we
allocate a bit more memory to allow for a NUL terminator.
The caller should have avoided passing in that much data in the first
place, as it's not going to be a valid image and there's likely not
even that much data available.
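As an illustration only (the helper name and the exact limit are
assumptions, not the actual patched code), the kind of guard this
implies looks like:
```
#include <glib.h>
#include <string.h>

/* Illustrative sketch only: refuse sizes for which "size + 1" (room
 * for the NUL terminator) would wrap around; the helper name is a
 * placeholder, not the patched function. */
static gchar *
copy_with_nul (const guint8 * data, gsize size)
{
  gchar *out;

  if (size >= G_MAXSIZE)        /* "size + 1" below would overflow */
    return NULL;

  out = g_malloc (size + 1);
  memcpy (out, data, size);
  out[size] = '\0';
  return out;
}
```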
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4894>
glfilter unrefs the input buffer immediately after the _transform()
call, but the GPU may still be reading from it because the GL API
executes asynchronously. Keep the input buffer alive by adding a
parent-buffer meta to the output buffer.
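A minimal sketch of the approach (the transform function and variable
names are placeholders; only gst_buffer_add_parent_buffer_meta() is the
real API):
```
#include <gst/gst.h>
#include <gst/base/gstbasetransform.h>

/* Sketch: keep inbuf alive for as long as outbuf exists, so the GPU
 * can still read from it after _transform() returned and the base
 * class dropped its own reference. */
static GstFlowReturn
my_filter_transform (GstBaseTransform * trans, GstBuffer * inbuf,
    GstBuffer * outbuf)
{
  /* ... queue the asynchronous GL rendering that samples from inbuf ... */

  gst_buffer_add_parent_buffer_meta (outbuf, inbuf);

  return GST_FLOW_OK;
}
```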
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4801>
appsink only unrefs the previous sample in its dispose function, which
runs too late when a V4L2 video decoder is linked with appsink, because
the decoder closes its V4L2 device fd during GST_STATE_CHANGE_READY_TO_NULL.
If a video buffer returns to the V4L2 video decoder after the decoder
has closed the V4L2 device fd, V4L2 cannot release the frame buffer
that was allocated in MMAP mode, since the application can no longer
call VIDIOC_REQBUFS with a count of 0 to have the driver release it.
The memory of the video frame then leaks.
Unref the GstBuffer in the stop() function instead, so the V4L2 video
decoder receives all video frame buffers back and can release them
before closing the V4L2 device fd.
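A sketch of the idea only; the cached-sample field is an assumption
about appsink internals, not the real private structure:
```
#include <gst/gst.h>

/* Illustrative state holding the sample kept around between pulls;
 * the field name is an assumption. */
typedef struct
{
  GstSample *sample;
} SinkState;

static void
release_cached_buffer_on_stop (SinkState * state)
{
  /* Drop the buffer held by the cached sample already in stop(), so it
   * returns to the decoder's pool before the V4L2 fd is closed. */
  if (state->sample)
    gst_sample_set_buffer (state->sample, NULL);
}
```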
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4818>
This patch adds the gst_egl_image_from_dmabuf_direct_target_with_dma_drm()
and gst_egl_image_from_dmabuf_with_dma_drm() functions.
gst_egl_image_from_dmabuf_direct_target() is now a specialization of
the new gst_egl_image_from_dmabuf_direct_target_with_dma_drm(), and
gst_egl_image_from_dmabuf() is a specialization of the new
gst_egl_image_from_dmabuf_with_dma_drm().
Co-authored-by: Victor Jaquez <vjaquez@igalia.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4680>
It internally uses gst_gl_context_egl_get_dma_formats() instead of fetching
modifiers by itself.
Thus gst_egl_image_check_dmabuf_direct() is a decorator of this new function.
Co-authored-by: He Junyan <junyan.he@intel.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4680>
Calling the internal function gst_gl_context_egl_fetch_dma_formats()
stores an array of structures, each holding a DMA fourcc format and its
modifiers (itself an array of structures holding a modifier and whether
it is external-only).
Users then call gst_gl_context_egl_get_format_modifiers() to get the
array of modifiers for a specific DMA fourcc format.
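A sketch of the cached layout described above; the struct and field
names here are assumptions, not the real declarations:
```
#include <glib.h>

/* Illustrative layout only; the real type and field names may differ. */
typedef struct
{
  guint64 modifier;        /* DRM format modifier */
  gboolean external_only;  /* usable only as an external-only texture */
} ModifierEntry;

typedef struct
{
  guint32 fourcc;          /* DMA fourcc format */
  GArray *modifiers;       /* array of ModifierEntry */
} FormatEntry;
```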
Co-authored-by: He Junyan <junyan.he@intel.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4680>
When the alignment contains nothing, all its fields are 0 and it can
always be satisfied, so there is no need to validate it in this case.
A lot of places simply set this alignment to the default all-zero
value, and validating it there generates lots of warnings.
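A minimal sketch of the resulting check (the helper name is a
placeholder; the GstVideoAlignment fields are the public ones):
```
#include <gst/video/video.h>

/* Sketch: an all-zero alignment can always be satisfied, so callers
 * can skip validation for it. The helper name is a placeholder. */
static gboolean
alignment_is_empty (const GstVideoAlignment * align)
{
  guint i;

  if (align->padding_top || align->padding_bottom ||
      align->padding_left || align->padding_right)
    return FALSE;

  for (i = 0; i < GST_VIDEO_MAX_PLANES; i++) {
    if (align->stride_align[i])
      return FALSE;
  }

  return TRUE;
}
```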
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4674>
Add a d3d11 conversion path to make gst_video_convert_sample() work
for GstD3D11Memory.
Note that just adding "d3d11download" to the existing code would be
suboptimal from the GstD3D11 point of view because:
* the d3d11convert element can do cropping, colorspace conversion and
  scaling all at once, while the existing software pipeline needs
  intermediate steps for the conversion
* "process everything on the GPU, then download it to CPU memory" is
  likely faster than "download the GPU memory to the CPU, then process
  it on the CPU"
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2715>
Subclasses may want to override the pad template with different formats
or with a different pad subclass.
The original behaviour is still available by calling
gst_gl_mixer_class_add_rgba_pad_templates() in the _class_init() of the
subclass.
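A minimal sketch of a subclass restoring the original behaviour; the
subclass names are placeholders and the exact argument of the helper is
assumed to be the mixer class:
```
/* Sketch: "my_mixer" is a placeholder subclass of GstGLMixer. */
static void
my_mixer_class_init (MyMixerClass * klass)
{
  /* ... element metadata, vmethods, ... */

  /* keep the pre-change RGBA pad templates */
  gst_gl_mixer_class_add_rgba_pad_templates (GST_GL_MIXER_CLASS (klass));
}
```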
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4608>
Upon creating a window, glimagesink and osxvideosink now set the policy to
NSApplicationActivationPolicyRegular, which lets us show an icon in the Dock
for convenience and appear in the top menu bar like other apps.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4573>
This is no longer needed since the introduction of `gst_macos_main()` in 1.22.
Before that existed, we had a patch for GLib in Cerbero, which did work but made it
impossible to update GLib at all. The code being removed was a fail-safe in case of
running without said patch applied. It's no longer needed, since on macOS
we now just wrap the application in an NSApplication using `gst_macos_main()`.
Warnings will be displayed if no NSApp/NSRunLoop is found wherever needed,
pointing the user towards using the new API.
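For reference, a minimal sketch of the expected usage on macOS
(function names other than the GStreamer API are placeholders):
```
#include <gst/gst.h>

/* Placeholder application entry point matching GstMainFunc. */
static int
app_main (int argc, char **argv, gpointer user_data)
{
  gst_init (&argc, &argv);
  /* ... build and run the pipeline ... */
  return 0;
}

int
main (int argc, char **argv)
{
  /* Wraps app_main in an NSApplication/NSRunLoop; this sketch assumes
   * a macOS build where gst_macos_main() is available. */
  return gst_macos_main (app_main, argc, argv, NULL);
}
```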
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4366>
The proxy and queue are created in gst_gl_window_wayland_egl_open()
and are created again on every open without the previous ones being
destroyed, which leaks both objects. The Wayland client documentation
mentions that they should be destroyed using the appropriate destroy
functions.
Found during valgrind memory leak testing, where these blocks were
marked as definitely lost.
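A sketch of the matching cleanup (whether the proxy is a display proxy
wrapper, and the parameter names, are assumptions; only the
libwayland-client destroy calls are real API):
```
#include <wayland-client.h>

/* Sketch only: destroy the display proxy wrapper and the event queue
 * created in open(). */
static void
close_wayland_objects (struct wl_proxy *display_wrapper,
    struct wl_event_queue *queue)
{
  if (display_wrapper)
    wl_proxy_wrapper_destroy (display_wrapper);
  if (queue)
    wl_event_queue_destroy (queue);
}
```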
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4354>
The first serialized events that can be sent on a src pad are a CAPS
event and then a SEGMENT event.
When handling events from the user in appsrc, we used to send a SEGMENT
automatically if one had not been sent yet.
This broke if the CAPS event had not been sent either, as we would then
send a SEGMENT before the CAPS.
Fix this by delaying such events until the CAPS has been configured.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4297>
Add propose_allocation to meet applications' requirement to request
buffers. An application sometimes needs to create a buffer pool and
request buffers so it can do its own buffer management, with the
GStreamer plugin importing the application's buffers for use. So, add
propose_allocation to appsink, like waylandsink, kmssink, etc.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4185>
In gst_video_info_dma_drm_to_caps() the caps are newly created, so there's
no need to make them writable. In gst_video_info_dma_drm_from_caps() a copy
of the caps is made, which already implies a gst_caps_make_writable().
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4195>
This allows allocating memory from any DRM driver that supports this
method. It additionally allows exporting DMABufs. The allocator depends
on libdrm and will be stubbed out if the dependency is missing. It is
derived from the kmssink dumb allocator.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3801>
These parameters are not actually `out` parameters but must
be allocated and zero-initialized by the calling function.
Marking them as `out caller-allocates` will cause memory
corruption when calling these APIs from e.g. Python code.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4051>
Do not store cached EGL images in GstMemory QData. Instead, use a
per-DmabufUpload GHashTable to store cache entries with a weak
reference to the GstMemory.
This allows two glupload elements on separate tee branches to have
their own EGL image cache. For this pipeline:
  gst-launch-1.0 v4l2src ! tee name=t \
    t. ! queue ! glupload ! fakesink \
    t. ! queue ! glupload ! fakesink
this gets rid of the occasional critical error message:
GStreamer-CRITICAL **: 08:26:33.194: gst_mini_object_unref: assertion 'GST_MINI_OBJECT_REFCOUNT_VALUE (mini_object) > 0' failed
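A sketch of the cache bookkeeping described above (helper names are
placeholders, and it assumes the hash table was created with a value
destroy function that frees the cached EGLImage; the GStreamer/GLib
calls themselves are real):
```
#include <gst/gst.h>
#include <gst/gl/egl/gsteglimage.h>

/* Sketch: cache entries are keyed by the GstMemory; a weak ref on the
 * memory removes the entry once the memory goes away, which releases
 * the cached EGLImage via the table's value destroy function. */
static void
memory_gone_cb (gpointer user_data, GstMiniObject * obj)
{
  GHashTable *cache = user_data;

  g_hash_table_remove (cache, obj);
}

static void
cache_egl_image (GHashTable * cache, GstMemory * mem, GstEGLImage * image)
{
  gst_mini_object_weak_ref (GST_MINI_OBJECT_CAST (mem), memory_gone_cb,
      cache);
  g_hash_table_insert (cache, mem, image);
}
```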
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3880>
If we have caps then we can only set exactly those caps; if we have no
caps yet then negotiating anything is not very meaningful, because the
caps are defined by the application and not by downstream.
Avoids, among other things, an unnecessary allocation query and spurious
useless caps being set before the first buffer.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3757>
We create a new context in `gst_gl_context_create_thread()` and then
activate it on the current thread. Thereafter we assume that the
current thread continues to be the active thread for that context and
call `gst_gl_context_fill_info()` which asserts that the current
thread is the active thread.
However, if at the same time a different thread calls
`send_message_async()`, it will call into
`gst_gl_window_cocoa_send_message_async()` which will schedule the
message to be invoked using GCD. That anonymous function will also
call `gst_gl_context_activate()`, which creates a race, which can lead
to:
```
gst_gl_context_fill_info: assertion 'context->priv->active_thread == g_thread_self ()' failed
```
Fix it by using `gst_gl_context_thread_add()` to invoke `fill_info()`
on the context.
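A minimal sketch of that change (the wrapper names are placeholders):
```
#include <gst/gl/gl.h>

/* Sketch: marshal fill_info onto the context's GL thread so the
 * active-thread assertion always holds. */
static void
fill_info_on_gl_thread (GstGLContext * context, gpointer data)
{
  GError **error = data;

  gst_gl_context_fill_info (context, error);
}

static void
fill_info_safely (GstGLContext * context, GError ** error)
{
  gst_gl_context_thread_add (context, fill_info_on_gl_thread, error);
}
```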
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3732>
This should make pipelines such as this one work as expected:
  ... ! opusenc ! capsfilter caps='audio/x-opus,
      channels=1; audio/x-opus, channels=2' ! ...
The expectation is that the encoder will propose the first structure
to the source before the second one.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3673>
gst-launch-1.0 audiotestsrc ! udpsink host=127.0.0.1
gst-launch-1.0 udpsrc ! audioconvert ! autoaudiosink
would crash with a floating point exception when clipping the input
buffer owing to a division by zero because no caps event was received.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3469>
Currently, when the rtspsrc property add-reference-timestamp-metadata
is set to true, a downstream rtph264depay element will attach multiple
copies of the same GstReferenceTimestampMeta to the depayloaded media
buffers. This can have a significant performance impact further
downstream in a pipeline like the following:
rtspsrc add-reference-timestamp-metadata=true ! rtph264depay ! h264parse ! ... ! rtph264pay ! ...
For example, if there are 10 packet buffers for a frame of RTP H.264
video, each of those packet buffers will contain the same reference
timestamp meta. The rtph264depay element will then attach all 10
metadata to the depayloaded frame. When we then payload the frame
buffer again for proxying, we end up with 10 more buffers, each with
10 instances of the same metadata. Allocating/deallocating 100+
instances of metadata at 30fps for multiple streams has a pretty large
performance impact.
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/1578
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3431>
The tile width in pixels is not always available. Notably, for the
8L128 10-bit format the tile is 8x128 bytes and the pixel format is
fully packed. That means a tile line contains at least 6 whole pixels
(8 bytes at 10 bits per pixel is 6.4 pixels), but it also holds some
bits of pixels from the same line that continue in the previous or
next tile.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3424>
In the current tile representation, only tiles whose width and height
in bytes are powers of two are supported. This limitation prevents
adding more complex tile formats.
This patch deprecates tile_ws and tile_hs in GstVideoFormatInfo and
replaces them with an array of GstVideoTileInfo. The tiles of each
plane are then described with their width and height in pixels, the
line stride and the total size.
The helper gst_video_format_info_get_tile_sizes(), which depends on the
deprecated API, is also removed. It can simply be removed because it
was not part of any stable release yet.
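A sketch of the per-plane description this introduces; the field names
are derived from the text above and may not match the actual header:
```
#include <glib.h>

/* Illustrative only; mirrors what the per-plane tile info is described
 * to carry, field names may differ from the real GstVideoTileInfo. */
typedef struct
{
  guint width;    /* tile width in pixels */
  guint height;   /* tile height in pixels */
  guint stride;   /* stride of one tile line in bytes */
  guint size;     /* total size of one tile in bytes */
} TileInfoSketch;
```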
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3424>
Setting force_live lets an aggregator behave as if it had at least one
of its sinks connected to a live source, which should let us get rid of
the fake live test source hack that is probably present in dozens of
applications by now.
+ Expose API for subclasses to set and get force_live (see the sketch below)
+ Expose the force-live property on GstVideoAggregator and GstAudioAggregator
+ Add a simple test
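A minimal usage sketch, assuming the setter and property names match
the description above (the subclass names are placeholders):
```
#include <gst/base/gstaggregator.h>

/* Sketch: a subclass opting in from its instance init; "MyMixer" is a
 * placeholder type. */
static void
my_mixer_init (MyMixer * self)
{
  gst_aggregator_set_force_live (GST_AGGREGATOR (self), TRUE);
}

/* Applications can do the same through the exposed property, e.g.:
 *   g_object_set (compositor, "force-live", TRUE, NULL);
 */
```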
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3008>
When a tiled format is padded and imported as a DMABuf, the stride
carries the information about the actual width and height in number of
tiles. This information is needed by the detiling shader in order to
accurately calculate the location of pixels. To fix that, we also copy
the offsets and strides into the output format, and the converter will
ensure that the shader is recompiled whenever the stride changes.
This fixes video corruption observed when decoding on MT8195 with
videos whose width is not aligned to 64 bytes.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3365>
Posting latency messages causes a full and potentially expensive latency
recalculation of the pipeline. While subclasses should check whether the latency
really changed or not before calling this function, we ensure that we do not
post such messages if it didn't change.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3282>
duplicate symbol '__invoke_on_main' in:
/Library/Frameworks/GStreamer.framework/Versions/1.0/lib/libgstvulkan-1.0.a(cocoa_gstvkwindow_cocoa.m.o)
/Library/Frameworks/GStreamer.framework/Versions/1.0/lib/libgstgl-1.0.a(cocoa_gstglwindow_cocoa.m.o)
ld: 1 duplicate symbol for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
Also make the same change on iOS for consistency.
Continuation of https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/1132
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3242>
This allows correct handling of wrapping around backwards during the
first wraparound period and avoids the infamous "Cannot unwrap, any
wrapping took place yet" error message.
It also makes sure that for actual timestamp jumps a valid value is
returned instead of 0, which then allows the caller to handle it
properly. Without this, the caller could see the same timestamp (0)
for a very long time, which for example can cause rtpjitterbuffer to
output the same timestamp for a very long time.
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/1500
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3202>
The implementation was inconsistent between create and destroy. The
EGLImage creation and destruction entry points are required for EGL 1.5
and up, while on older versions the KHR variants are only available if
the EGL_KHR_image_base extension is present. Not doing these checks may
lead to getting a function pointer to a stub, which is notably the case
when using apitrace.
Fixes #1389
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2925>
We need to call this to register the MusicBrainz tags before we use
them in an XMP schema.
Fixes this critical when attempting to run jpegparse on a JPEG
containing MusicBrainz XMP tags:
GStreamer-CRITICAL **: 20:41:07.885: gst_tag_get_type: assertion 'info != NULL' failed
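Presumably the registration in question is the libgsttag helper; a
minimal sketch of the call (where exactly it is called from is an
assumption):
```
#include <gst/tag/tag.h>

/* Sketch: make sure the MusicBrainz tags are registered before the
 * XMP schema refers to them. */
static void
ensure_musicbrainz_tags_registered (void)
{
  gst_tag_register_musicbrainz_tags ();
}
```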
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3092>
The purpose of a deep buffer copy is to be able to release the source
buffer and all its dependencies. Attaching the parent buffer meta to
the newly created deep copy needlessly keeps holding a reference to the
parent buffer.
The issue this solves is that you otherwise need to allocate more
buffers, as free buffers are being held for no reason. In the good
case it will use more memory; in the bad case it will stall your
pipeline (since codecs often need a minimum number of buffers to
actually work).
Fixes #283
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2928>