Use the VIDIOC_CREATE_BUFS ioctl to create buffers instead of VIDIOC_REQBUFS,
because it allows creating buffers while streaming.
To prepare for the introduction of VIDIOC_REMOVE_BUFFERS, create
the buffers one by one instead of as a range. This way holes can be
filled in the future.
gst_v4l2_decoder_request_buffers() is still used to remove all
the buffers from the queue.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7684>
In my tests with the new GCC 14 compiler for Cerbero, I got the
following error:
> In file included from include/directxmath/DirectXMath.h:2275,
> from ../gst-libs/gst/d3d11/gstd3d11converter.cpp:46:
> include/directxmath/DirectXMathMatrix.inl: In function 'bool
> DirectX::XMMatrixDecompose(XMVECTOR*, XMVECTOR*, XMVECTOR*, FXMMATRIX)':
> include/directxmath/DirectXMathMatrix.inl:1161:16:
> error: variable 'aa' set but not used [-Werror=unused-but-set-variable]
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7658>
In some cases, decodebin3 will send us incomplete caps (not containing
codec_data), and then a GAP event, which will force a negotiation.
This segfaults due to a null pointer deref because self->input_state
is NULL.
The only possible fix is to avoid negotiating when we get incomplete
caps (so we don't re-negotiate immediately afterwards, which some
muxers don't support), while also setting as much input state as
possible so that a renegotiation triggered by a GAP event can complete
successfully.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7634>
The D3D12_HEAP_FLAG_CREATE_NOT_ZEROED flag was introduced in the
Windows 10 May 2020 Update, and older versions don't understand the
heap flag. Check the feature support and enable
D3D12_HEAP_FLAG_CREATE_NOT_ZEROED only if it's supported by the OS.
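A minimal sketch of the runtime check (device is an assumed ID3D12Device pointer); per the D3D12 documentation, a successful D3D12_FEATURE_D3D12_OPTIONS7 query indicates the runtime understands the flag:
```
/* Hedged sketch, not the plugin's actual code. */
D3D12_FEATURE_DATA_D3D12_OPTIONS7 options7 = { };
D3D12_HEAP_FLAGS heap_flags = D3D12_HEAP_FLAG_NONE;

if (SUCCEEDED (device->CheckFeatureSupport (D3D12_FEATURE_D3D12_OPTIONS7,
        &options7, sizeof (options7)))) {
  /* runtime is new enough to understand the flag */
  heap_flags |= D3D12_HEAP_FLAG_CREATE_NOT_ZEROED;
}
```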
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7573>
The uvcsink was limited to transferring only YUY2 and MJPEG. For the
other uncompressed formats there is no technical reason not to support them.
Since gst_video_format_to_string already supports more fourccs than
just YUY2, use the default path in gst_v4l2uvc_fourcc_to_bare_struct
to create structures for more formats, and bail out if the returned
format is not an uncompressed type.
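A rough sketch of that rule, illustrative only and not the actual gst_v4l2uvc_fourcc_to_bare_struct() implementation:
```
#include <gst/video/video.h>

/* Map a raw-video fourcc to a video/x-raw structure and reject anything
 * GStreamer does not know as an uncompressed format. */
static GstStructure *
bare_struct_from_fourcc (guint32 fourcc)
{
  GstVideoFormat format = gst_video_format_from_fourcc (fourcc);

  if (format == GST_VIDEO_FORMAT_UNKNOWN)
    return NULL;                /* not an uncompressed format we can handle */

  return gst_structure_new ("video/x-raw",
      "format", G_TYPE_STRING, gst_video_format_to_string (format), NULL);
}
```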
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6037>
In fact, the va decoder is just an internal helper class and its access
is under the control of the decoder elements. So far there is no
parallel operation on it.
On the other hand, some code-scanning tools report race condition
issues. For example, the "context" field is protected by the lock in
_open() but is not protected in _add_param_buffer().
So just delete all its lock usage.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7547>
Be smarter when allocating sink and source memory pools to reduce the
memory footprint. Use gst_v4l2_decoder_get_render_delay() to know the
number of buffers needed by the downstream element.
Handle memory allocation failures.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7544>
If the UVC gadget announces multiple formats in its descriptors, the uvcsink
doesn't select the actual format but lets the UVC host select it.
If the GStreamer pipeline is started before a UVC host has selected a format,
upstream decides on a format until the UVC host has decided. In this case, the
current format needs to be set from the caps of the caps event, to be able to
detect whether the format selected by the UVC host requires a format change in
the GStreamer pipeline.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7473>
The uvcsink may be put into the READY state to start listening for UVC requests.
Therefore, the UVC host may set a streaming format before the GStreamer pipeline
is started and before the uvcsink has received a caps event. In this case,
prev_caps will be NULL.
If the CAPS event has not been received, skip the check of whether the format
needs to be changed, since the sink will be started with the format selected by
the UVC host anyway.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7473>
Add a property to control the number of in-flight GPU commands
(the default is unlimited). Note that the actual maximum is defined
by the d3d12device's direct command queue object, which is currently 32,
so the total number of scheduled GPU commands cannot exceed 32.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7444>
Under certain loads, VT can error out with kVTVideoEncoderMalfunctionErr or kVTVideoEncoderNotAvailableNowErr.
These have been reported to happen more often than usual if CopyProperty/SetProperty() is used close to the encode call.
Both can be worked around by restarting the encoding session.
These errors can be returned either directly from VTCompressionSessionEncodeFrame() or later in the encoding callback.
This patch handles both scenarios the same way - a session restart is attempted on the next encode_frame() call.
If the error is returned immediately by the encode call, it's possible that some correct frames will still be given to
the output callback, but for simplicity (and because I wasn't able to verify this scenario) let's just discard those.
In addition, this commit also simplifies the beach/drop logic in enqueue_buffer.
Related bug reports in other projects:
http://www.openradar.me/45889262
https://github.com/aws/amazon-chime-sdk-ios/issues/170#issuecomment-741908622
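A minimal sketch of the direct-return case; session, pixel_buffer, frame_ctx and need_restart are assumed names, not the element's actual fields:
```
/* Flag a session restart when the encode call itself reports an
 * encoder malfunction; the next encode_frame() call recreates it. */
OSStatus status = VTCompressionSessionEncodeFrame (session, pixel_buffer,
    pts, duration, NULL, frame_ctx, NULL);

if (status == kVTVideoEncoderMalfunctionErr ||
    status == kVTVideoEncoderNotAvailableNowErr) {
  /* tear the session down and restart on the next encode_frame() */
  need_restart = TRUE;
}
```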
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7173>
Since bins can set the context of their child elements, the set_context()
vmethod shouldn't call the bus message post methods: posting locks the parent
object, the bin, which might already be locked, leading to a deadlock.
Fixes: #3706
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7378>
Add a new videosink element for applications based on the Windows
composition APIs. Unlike d3d12videosink, this element creates only a
DXGI swapchain by using IDXGIFactory2::CreateSwapChainForComposition()
without an actual window handle, so that the video scene can be composed
via native Windows composition APIs such as DirectComposition.
Note that, by design, this videosink does not support the GstVideoOverlay
interface.
The swapchain created by this element can be used with
* DirectComposition's IDCompositionVisual in Win32 app
* WinRT and WinUI3's UI.Composition in Win32/UWP app
* UWP and WinUI3 XAML's SwapChainPanel
See also the examples in this commit, which show usage of the videosink.
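A rough sketch of the composition swapchain creation, assuming raw factory (IDXGIFactory2) and queue (D3D12 direct command queue) pointers; this is illustrative, not the element's actual code:
```
/* Create a windowless composition swapchain that can later be attached
 * to an IDCompositionVisual or a SwapChainPanel. */
DXGI_SWAP_CHAIN_DESC1 desc = { };
desc.Width = 1280;
desc.Height = 720;
desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
desc.BufferCount = 2;
desc.Scaling = DXGI_SCALING_STRETCH;               /* required for composition swapchains */
desc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL;
desc.AlphaMode = DXGI_ALPHA_MODE_PREMULTIPLIED;

/* Microsoft::WRL::ComPtr from <wrl/client.h> */
ComPtr<IDXGISwapChain1> swapchain;
HRESULT hr = factory->CreateSwapChainForComposition (queue,
    &desc, nullptr, &swapchain);
```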
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7287>
num_backward_references > 0 means we need to cache several frames
after the current frame. But the basetransform class does not
provide any _drain()-like function, so we get no chance to push out
our cached frames when EOS or a caps event comes.
Rather than losing the last several frames, just give up on backward
references here.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7348>
The current code forgets to push the first several frames if the forward
reference count is > 0. They are just cached in the history array and are
never deinterlaced and pushed.
For the first several frames, even though there are not enough forward
reference frames yet, we still need to deinterlace them as normal and push
them afterwards.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7348>
Fix the following warning:
GStreamer-CRITICAL **: 01:21:25.862: gst_value_set_int_range_step:
assertion 'start < end' failed
Even when the QSV runtime reports that a codec is supported, the resolution
query sometimes fails, especially in the VP9 encoder case on Windows.
Don't try to register an element if the resolution query returned an error.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7250>
A fence configured in GstD3D12Memory should be used only to wait for
write access to complete. Because the d3d12 -> d3d11 copy path
is read access to the d3d12 resource, we should not set a fence on the
memory. Otherwise another read access to the d3d12 resource
would wait for the d3d11 device context's copy operation even though
simultaneous read access is allowed.
Instead, use a background thread to keep the d3d12 resource alive and
wait for the d3d11 device's copy operation.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7243>
Doing so resets the stride from the VideoMeta, and it wasn't done before
the commit below. While at it, drop the plane size check, as we can't
reliably predict the correct size when using DRM modifiers.
Fixes: 89b0a6fa23 ("va: refactor buffer import")
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7187>
Passing a null event NT handle to ID3D12Fence::SetEventOnCompletion()
already blocks the calling CPU thread, so there is no point in creating
an event NT handle just to immediately wait for the fence on the CPU side.
Note that passing a valid event NT handle to the fence API might be useful
when we need to wait for the fence value later (or a timeout is required),
or when we want to wait for multiple fences at once via WaitForMultipleObjects().
But that's not a use case we consider for now.
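A minimal sketch of the resulting CPU-side wait; fence and fence_value are assumed names:
```
/* With a null event handle, SetEventOnCompletion() itself blocks until
 * the fence reaches the requested value, so no event NT handle is needed
 * for an immediate CPU-side wait. */
if (fence->GetCompletedValue () < fence_value)
  fence->SetEventOnCompletion (fence_value, nullptr);
```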
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7176>
GstD3D12Window.priv.input_info is referenced by the mouse event handler
in order to calculate the corresponding original position
if the scene is rotated/flipped by the videosink.
Fix a regression introduced by the recent d3d12videosink refactoring.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7177>
The driver - AKA intel-vaapi-driver - has been unmaintained for four years
now and encoding appears to be broken in various cases. As it's unlikely
that the situation will improve, blocklist the driver for encoding.
Decoding appears to be stable enough to keep it enabled.
The driver can still be used by setting the `GST_VA_ALL_DRIVERS` env
variable.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7170>
The driver's frame counters (processed, dropped, buffer level) are
apparently not always correct and don't allow reliably assigning a frame
number to captured frames.
Instead of relying on them, count the frames directly here and
detect dropped frames based on the capture times: if more
than 1.75 frame durations pass between two frames, then there must have
been a dropped frame.
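A rough sketch of the heuristic; prev_time, capture_time, frame_duration and frame_number are assumed to be tracked by the element:
```
/* Illustrative sketch of the drop heuristic, not the element's actual code. */
GstClockTimeDiff gap = GST_CLOCK_DIFF (prev_time, capture_time);

if (gap > (GstClockTimeDiff) (1.75 * frame_duration)) {
  /* at least one frame was lost between the two captures */
  frame_number += (guint) (gap / frame_duration) - 1;
}
frame_number++;
```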
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7163>
If a d3d12 memory holds a non-direct-queue fence but the fence was
created with the D3D12_FENCE_FLAG_SHARED flag, use the fence instead of
waiting for it on the CPU side. Note that the d3d12ipcsrc and
d3d12screencapture elements hold such shareable fences.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7139>
This needs significant work to use the new Metal→Vulkan integration
extension `VK_EXT_metal_objects`:
```
MoltenVK/mvk_deprecated_api.h:132:1: note: 'vkGetMTLDeviceMVK' has been explicitly marked deprecated here
MVK_DEPRECATED_USE_MTL_OBJS
^
MoltenVK/mvk_deprecated_api.h:74:52: note: expanded from macro 'MVK_DEPRECATED_USE_MTL_OBJS'
#define MVK_DEPRECATED_USE_MTL_OBJS VKAPI_ATTR [[deprecated("Use the VK_EXT_metal_objects extension instead.")]]
^
../sys/applemedia/videotexturecache-vulkan.mm:303:20: error: 'vkSetMTLTextureMVK' is deprecated:
Use the VK_EXT_metal_objects extension instead.
VkResult err = vkSetMTLTextureMVK (memory->vulkan_mem.image, texture);
^
MoltenVK/mvk_deprecated_api.h:151:1: note: 'vkSetMTLTextureMVK' has been explicitly marked deprecated here
MVK_DEPRECATED_USE_MTL_OBJS
^
MoltenVK/mvk_deprecated_api.h:74:52: note: expanded from macro 'MVK_DEPRECATED_USE_MTL_OBJS'
#define MVK_DEPRECATED_USE_MTL_OBJS VKAPI_ATTR [[deprecated("Use the VK_EXT_metal_objects extension instead.")]]
^
2 errors generated.
```
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7091>