... based on MediaFoundation work queue API.
With this commit, the wasapi2 plugin makes use of pull-mode scheduling
via an audioringbuffer subclass.
Subclassing audiosrc/audiosink (rather than audiobasesrc/audiobasesink)
has several drawbacks for the WASAPI API:
* The audiosrc/audiosink classes try to raise the priority of the
read/write thread via MMCSS (Multimedia Class Scheduler Service),
but that is not allowed for UWP applications.
To use MMCSS in a UWP application, MMCSS has to be used indirectly
via the MediaFoundation work queue.
Since the audiosrc/audiosink scheduling model is not compatible with
MediaFoundation's work queue model, audioringbuffer subclassing
is required.
* A WASAPI capture device might report a larger packet size than
expected (i.e., more frames available to read than the expected
frame count per period), and in any case the application should
drain all packets at that moment.
To handle that case, the wasapi/wasapi2 plugins were making use of
GstAdapter, which is obviously sub-optimal because it requires
additional memory allocation and copying.
By subclassing audioringbuffer, we can avoid such inefficiency.
With this commit, all device read/write operations are moved to the
newly implemented wasapi2ringbuffer class, and the existing
wasapi2client class takes care of device enumeration and activation
only.
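Below is a minimal, hedged sketch of what such an audioringbuffer
subclass can look like; the type names and vfunc wiring are
illustrative rather than the plugin's actual code.

  #include <gst/audio/audio.h>

  typedef struct
  {
    GstAudioRingBuffer parent;
    /* IAudioClient handle, MediaFoundation work queue state, ... */
  } GstWasapi2RingBuffer;

  typedef struct
  {
    GstAudioRingBufferClass parent_class;
  } GstWasapi2RingBufferClass;

  G_DEFINE_TYPE (GstWasapi2RingBuffer, gst_wasapi2_ring_buffer,
      GST_TYPE_AUDIO_RING_BUFFER);

  static gboolean
  gst_wasapi2_ring_buffer_acquire (GstAudioRingBuffer * buf,
      GstAudioRingBufferSpec * spec)
  {
    /* initialize the audio client according to spec and allocate the
     * ring buffer segments */
    return TRUE;
  }

  static guint
  gst_wasapi2_ring_buffer_delay (GstAudioRingBuffer * buf)
  {
    /* report the device delay in frames */
    return 0;
  }

  static void
  gst_wasapi2_ring_buffer_class_init (GstWasapi2RingBufferClass * klass)
  {
    GstAudioRingBufferClass *ring_class =
        GST_AUDIO_RING_BUFFER_CLASS (klass);

    /* device I/O runs from MediaFoundation work queue callbacks that
     * read from / write into the ring buffer segments */
    ring_class->acquire = gst_wasapi2_ring_buffer_acquire;
    ring_class->delay = gst_wasapi2_ring_buffer_delay;
    /* open_device, release, start, stop, close_device, ... likewise */
  }

  static void
  gst_wasapi2_ring_buffer_init (GstWasapi2RingBuffer * self)
  {
  }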
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2306>
This adds src format validation, which avoids registering elements for
drivers where we don't support any of their src formats. This also
special-cases the AlphaDecodeBin wrapper, as we know that the
alphacombine element only supports I420 and NV12 for now.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2272>
codecalpha is a new plugin introduced to support VP8/VP9 alpha as
defined in the WebM and Matroska specifications. It splits the stream
into two streams, one for the alpha and one for the actual content,
then decodes them separately with vpxdec and finally combines the
results as A420 or AV12 (i.e. YUV + an extra alpha plane).
The workflow above is set up by means of a bin, gstcodecalphabin.
This patch replicates the same workflow in the v4l2codecs namespace,
thus using the new v4l2 stateless decoders for hardware acceleration.
This is so we can register the new alpha decode elements only if the
hardware produces formats we support, i.e. I420 or NV12 for now.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2272>
We already declare support for the HEVC screen content extension
profiles in the profile mapping list, but we fail to generate the
correct VA picture parameter buffers. This may cause a GPU hang.
We need to fill the VAPictureParameterBufferHEVCExtension buffer
correctly.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2255>
gst_h265_get_profile_from_sps() is better suited than
gst_h265_profile_tier_level_get_profile() for recognizing the profile
of the stream, because it also considers the compatibility flags.
It is also used by h265parse to recognize the profile, so it is
better to keep the same behaviour as the parser and the other
decoders.
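For illustration, a hedged sketch of the preferred call, assuming a
parsed SPS is at hand:

  #include <gst/codecparsers/gsth265parser.h>

  static GstH265Profile
  get_stream_profile (GstH265SPS * sps)
  {
    /* Preferred: also takes the compatibility flags into account.
     * The raw PTL lookup would be:
     *   gst_h265_profile_tier_level_get_profile (&sps->profile_tier_level);
     */
    return gst_h265_get_profile_from_sps (sps);
  }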
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2255>
We already declare support for the HEVC range extension profiles in
the profile mapping list, but we fail to generate the correct VA
picture and slice parameter buffers. This may cause a GPU hang.
We need to fill the VAPictureParameterBufferHEVCExtension and
VASliceParameterBufferHEVCExtension buffers correctly.
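For reference, a hedged sketch of how the extension parameter buffers
are laid out in libva (the individual fields are elided here):

  #include <string.h>
  #include <va/va.h>
  #include <va/va_dec_hevc.h>

  /* For range extension and screen content profiles, the *Extension
   * structs have to be submitted, so that the rext (and, for SCC, the
   * scc) members are filled in addition to the base parameters. */
  static void
  init_hevc_ext_params (VAPictureParameterBufferHEVCExtension * pic,
      VASliceParameterBufferHEVCExtension * slice)
  {
    memset (pic, 0, sizeof (*pic));
    memset (slice, 0, sizeof (*slice));

    /* pic->base  : the usual VAPictureParameterBufferHEVC fields  */
    /* pic->rext  : range extension fields from the SPS/PPS        */
    /* pic->scc   : screen content coding fields (SCC profiles)    */
    /* slice->base, slice->rext : slice parameters, likewise       */
  }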
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2255>
VA-API HEVC decoding needs to know which is the last slice of a
picture, but slices are processed sequentially, so we don't know the
last slice until all the slices have already been pushed into the
VABuffer array.
In order to mark the last slice, they are pushed into the VABuffer
array with a delay of one slice: the first slice is held, and when
the second slice comes, the first one is pushed while holding the
second, and so on. Finally, at end_picture(), the last slice is
marked and pushed into the array.
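A hedged sketch of that one-slice delay; the types and the helper
below are illustrative and not the actual element code:

  #include <glib.h>

  typedef struct
  {
    gconstpointer data;         /* slice bitstream + parameters */
    gsize size;
  } SliceData;

  /* hypothetical helper that appends a slice to the VABuffer array,
   * marking it as the last slice of the picture when 'last' is TRUE */
  void submit_to_va_buffer_array (const SliceData * slice, gboolean last);

  typedef struct
  {
    SliceData held_slice;
    gboolean has_held_slice;
  } SlicePusher;

  static void
  decode_slice (SlicePusher * p, const SliceData * slice)
  {
    if (p->has_held_slice) {
      /* the held slice is not the last one: submit it now */
      submit_to_va_buffer_array (&p->held_slice, FALSE);
    }
    /* hold the new slice until we know whether it is the last one */
    p->held_slice = *slice;
    p->has_held_slice = TRUE;
  }

  static void
  end_picture (SlicePusher * p)
  {
    if (p->has_held_slice) {
      /* no more slices will arrive: mark the held one as the last */
      submit_to_va_buffer_array (&p->held_slice, TRUE);
      p->has_held_slice = FALSE;
    }
  }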
Co-author: Victor Jaquez <vjaquez@igalia.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2246>
VA acceleration now has more users on Linux-like platforms, such as
MSDK. The different plugins based on VA acceleration need to share
some common logic and types. We now move the display related
functions and types into a common va library.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2196>
The VP9 related definitions in mfxvp9.h are only available under the
condition 'MFX_VERSION >= MFX_VERSION_NEXT', which implies that these
definitions are never used in a public release.
This is in preparation for oneVPL support, because mfxvp9.h was
removed from oneVPL.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1503>
We have only one copy of gst_va_base_dec_parent_class inside
vabasedec, so it cannot handle the case where there are multiple VA
decoders inside one pipeline. The pipeline:
gst-launch-1.0 filesrc location=xxx.h264 ! h264parse \
! vah264dec ! msdkh265enc ! vah265dec ! fakesink
generates an assertion of
"invalid cast from 'GstVaH264Dec' to 'GstH265Decoder'"
and crashes.
We should keep the parent_class for each decoder type.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2231>
VADecPictureParameterBufferAV1.mode_control_fields.bits was filled
twice, with the second assignment overwriting the first one with
zeros. This patch unifies both assignments.
It also makes an enum cast between libva and GStreamer explicit,
removes the assignment of zero to a deprecated parameter, and uses an
appropriate assertion.
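The bug pattern, illustrated with a purely hypothetical bitfield (the
names below are not the actual libva fields):

  typedef struct
  {
    unsigned delta_q_present:1;
    unsigned tx_mode:2;
    unsigned reference_select:1;
  } ModeControlBits;

  /* Before: two separate whole-struct assignments; the second one
   * resets every field not listed in it back to zero. */
  static void
  fill_twice (ModeControlBits * bits)
  {
    *bits = (ModeControlBits) { .delta_q_present = 1 };
    *bits = (ModeControlBits) { .tx_mode = 2 };  /* wipes delta_q_present */
  }

  /* After: one unified assignment keeps every field. */
  static void
  fill_once (ModeControlBits * bits)
  {
    *bits = (ModeControlBits) {
      .delta_q_present = 1,
      .tx_mode = 2,
      .reference_select = 1,
    };
  }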
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2223>
Before creating the output duplication interface, call
SetThreadDesktop() with the HDESK of the current input desktop, in
case a desktop switch has occurred.
This allows d3d11desktopdupsrc to capture Windows User Account Control
(UAC) prompts, which appear on a separate secure desktop. Otherwise
IDXGIOutput1::DuplicateOutput() will return E_ACCESSDENIED and the
element won't produce any frames as long as the UAC screen is active.
Note that in order to access the secure desktop the application still
has to run with LOCAL_SYSTEM privileges. For GStreamer applications
running with regular user privileges this change has no effect.
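A hedged sketch of that call sequence (error reporting trimmed); the
actual element code may differ in details:

  #include <windows.h>

  /* Attach the calling thread to whatever desktop currently receives
   * user input (e.g. the secure desktop while a UAC prompt is shown),
   * so that the following DuplicateOutput() call can succeed. */
  static void
  switch_to_input_desktop (void)
  {
    HDESK hdesk = OpenInputDesktop (0, FALSE, GENERIC_ALL);

    if (hdesk) {
      if (!SetThreadDesktop (hdesk)) {
        /* needs sufficient privileges (e.g. LOCAL_SYSTEM) */
      }
      CloseDesktop (hdesk);
    }
  }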
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2209>
IDXGIOutputDuplication can become invalid, for example when a desktop
switch or resolution change happens, or a Windows User Account
Control prompt appears on screen.
When that happens, try to re-create the duplication interface for the
changed output. Note that in the case of a UAC prompt this operation
will fail if the GStreamer process doesn't run with LOCAL_SYSTEM
privileges. In that situation the source element won't produce any
frames as long as the output is occupied by the UAC screen.
In order to enable UAC access to sufficiently privileged GStreamer
processes, call SetThreadDesktop() with the desktop handle that
currently receives user input before creating our output duplication.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2204>
The problem with uploading YUV frames using derived images is that
derived images imply tiling, so frames are uploaded wrongly.
Though derived images for reading might work, we cannot know the
Intel graphics generation to validate the caching. Overall, it's
safer to disable derived images for i965.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2127>
The iHD driver sets a tiled DRM modifier if the surface's usage hint
is set to VPP_WRITE. This results in garbled rendering when using
glimagesink. This patch changes the usage hint to generic if the caps
feature is DMABuf. Either way, only the iHD driver, so far, uses the
usage hint flag.
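A hedged sketch of the usage hint selection (the real code derives the
condition from the negotiated caps features):

  #include <glib.h>
  #include <va/va.h>

  static guint32
  pick_usage_hint (gboolean caps_feature_is_dmabuf)
  {
    /* iHD tiles surfaces hinted for VPP_WRITE, which breaks DMABuf
     * consumers such as glimagesink; fall back to the generic hint */
    if (caps_feature_is_dmabuf)
      return VA_SURFACE_ATTRIB_USAGE_HINT_GENERIC;

    return VA_SURFACE_ATTRIB_USAGE_HINT_VPP_WRITE;
  }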
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2127>
Some capture devices might not provide a channel mask value, which
will result in capture failure because of the unknown channel mask in
case the device generates more than 2 channels. Although it might not
be correct, we can assume a channel mask from the given number of
channels.
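One way to derive such a fallback mask is the GstAudio helper shown
below; a hedged sketch, the plugin code may differ:

  #include <gst/audio/audio.h>

  /* Derive a default channel positioning from the channel count when
   * the device does not report a channel mask. */
  static guint64
  guess_channel_mask (gint channels)
  {
    /* e.g. returns 0x3f (5.1 layout) for 6 channels */
    return gst_audio_channel_get_fallback_mask (channels);
  }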
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2177>
The DMABuf allocator already implements DMABuf sync, meaning that
doing mmap/munmap (unless the mode has changed) is not required. In
fact, on systems with an IOMMU it makes the kernel redo the MMU
table, which is visible in the CPU usage.
This change reduces CPU usage when decoding
bbb_sunflower_2160p_60fps_normal.mp4 on an RK3399 SoC from over 30%
to around 15%.
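A hedged sketch of bracketing CPU access with DMABuf sync ioctls
instead of re-mapping on every access:

  #include <linux/dma-buf.h>
  #include <sys/ioctl.h>

  static int
  cpu_access_begin (int dmabuf_fd)
  {
    struct dma_buf_sync sync = { .flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_RW };
    return ioctl (dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);
  }

  static int
  cpu_access_end (int dmabuf_fd)
  {
    struct dma_buf_sync sync = { .flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_RW };
    return ioctl (dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);
  }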
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2152>
Implementation of the mem_copy() virtual method for GstVaAllocator.
It's a deep copy where a new VA memory is popped out of the pool or,
if the pool is empty, a new memory is allocated. The original memory
is mapped for reading, and if its VAImage is not derived and the size
to copy is the whole surface, the mapped VAImage of the original
memory is put into the new memory. Otherwise a slow memcpy is done
between both memories.
Fixes: #1568
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2136>
The picture buffer (V4L2 CAPTURE buffer) was being released
immediately when the request was done. This was problematic since
even after the request is done, the picture buffer might still be
used as a reference and should not be reused for further decoding
yet.
This change effectively binds the picture buffer lifetime to the
request, so that if the picture is never shown (decode-only frame) or
the request queue is full before the buffer is displayed, the picture
buffer will remain alive.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2142>
Different VA drivers might have different definitions for RGB32 color
formats because of different bit interpretations. Sadly the
specification doesn't clarify these interpretations, so VA users have
to figure out the correct mapping to their rendering color format
definition.
This patch aims to fix the static map structure after the
VAImageFormats are queried. There is another static map with the
different interpretations of the RGB32 formats; they are compared
with the given VAImageFormat and then with the GStreamer color
format, updating the mapping table.
Finally, some RGB32 color formats were added.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2129>
Add missing include for sys/ioctl.h so that these warnings disappear
when compiling:
../subprojects/gst-plugins-bad/sys/v4l2codecs/gstv4l2decoder.c:179:9:
warning: implicit declaration of function ‘ioctl’
[-Wimplicit-function-declaration]
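The added include, for reference:

  #include <sys/ioctl.h>  /* declares ioctl() */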
Signed-off-by: Daniel Almeida <daniel.almeida@collabora.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2140>
If the application did not define one yet, define our own
VK_DEFINE_NON_DISPATCHABLE_HANDLE that is independent of the
architecture.
Vulkan, by default, provides a define that depends on the
architecture, which causes the symbol type to be different. This
causes an architecture-dependent .gir file, which then causes
multilib installation problems because the .gir files can't be
shared.
Make it possible to override the format specifier and provide
a default one that is compatible with the default non-dispatchable
handle.
Return VK_NULL_HANDLE from functions that return a non-dispatchable
handle.
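A hedged sketch of such an architecture-independent definition,
guarded so an application-provided define still wins (the exact
header layout in the plugin may differ):

  #include <stdint.h>

  /* Vulkan's default macro typedefs a pointer on 64-bit targets and a
   * uint64_t on 32-bit ones; always using uint64_t keeps the generated
   * .gir file architecture independent. It must be defined before the
   * Vulkan headers are included. */
  #ifndef VK_DEFINE_NON_DISPATCHABLE_HANDLE
  #define VK_DEFINE_NON_DISPATCHABLE_HANDLE(object) typedef uint64_t object;
  #endif

  #include <vulkan/vulkan.h>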
Fixes #1566
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2130>
vapostproc tries to be in passthrough mode as much as possible. But
there might be situations where the user wants to force processing of
the frames. For example, when upstream sets the crop meta and the
user wants VA to do that cropping, rather than downstream.
For those situations this property will disable the passthrough mode,
if it's enabled.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2058>
If incoming buffers have a crop meta, the cropping is done by
vapostproc, iff vapostproc is not in passthrough mode and downstream
doesn't handle it.
This patch announces the crop meta API in the proposed buffer pool,
and stops filtering meta APIs, since only the crop API was being
filtered.
Also, if downstream supports both crop and video metas, vapostproc
announces both meta APIs in the upstream buffer pool.
Finally, the meta is removed from the buffer if the cropping is
enabled.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2058>
This new struct describes the input and output GstBuffers to
post-process, including VA flags. It also contains the VASurfaceID
and VARectangle, but those are private and are filled inside
GstVaFilter.
It is used to pass arguments to the gst_va_filter_convert_surface()
function.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2058>
When an input buffer needs to be copied into a VA memory, a buffer
pool has to be created. This patch uses the propose_allocation()
caps to instantiate the allocator and pool, instead of the negotiated
caps, which rather represent the resolution to display.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2058>
Derived images are direct maps of the surface bits, but on Intel Gen7
to Gen9 that memory is not cacheable, thus reading can be very slow
(it might produce timeouts in tests such as fluster).
This patch first tries to determine whether derived images are
possible, and later uses them only if the mapping is not for reading.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2128>
This plugin, for decoders more concretely, assumes that a VA config
can do certain color conversions when mapping frames onto CPU
memory.
This assumption was valid for the i965 and Gallium drivers, which
generate valid output in bitstream testers (e.g. fluster).
Nonetheless with iHD, even though it renders acceptable frames, the
output MD5s of the tests weren't valid.
This patch appends the image formats, for color conversion when
mapping to memory, only for non-iHD drivers.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2128>
Enable zero-copy if downstream proposed a pool and therefore the
decoder can know the number of buffers required by downstream.
Otherwise the decoder will copy when our DPB pool doesn't have
sufficient buffers for later decoding operations.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2097>
Major changes:
* GstD3D11Allocator: This allocator is now a device-independent
object which can allocate GstD3D11Memory objects for any
GstD3D11Device.
Users can get this object via gst_allocator_find(GST_D3D11_MEMORY_NAME)
* GstD3D11PoolAllocator: A new allocator implementation for texture
pools. From now on GstD3D11BufferPool will make use of this memory
pool allocator to avoid frequent texture reallocation, which usually
happens because of buffer copies (gst_buffer_make_writable, for
example)
In addition to that, GstD3D11BufferPool will provide GstBuffers with
GstVideoMeta, because CPU access to a GstD3D11Memory without
GstVideoMeta is almost impossible since GPU drivers need padding for
stride alignment.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2097>
We've been retrying with a 1 ms sleep if DecoderBeginFrame() returned
E_PENDING, which means the application should call
DecoderBeginFrame() again because the GPU is busy.
The 1 ms sleep during retry usually results in about a 15 ms delay in
reality because of the coarse default clock precision on Windows.
To improve throughput performance, this commit enables the high
precision clock only for the NVIDIA platform, since the
DecoderBeginFrame() call on the other GPU vendors seems to succeed
without retry.
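One common way to get finer sleep granularity on Windows is the
multimedia timer API; a hedged sketch, the element may use a
different mechanism:

  #include <windows.h>
  #include <timeapi.h>    /* link against winmm */

  /* Raise the system timer resolution to 1 ms so that Sleep (1)
   * waits close to 1 ms instead of a full default timer tick. */
  static void
  begin_high_precision_clock (void)
  {
    timeBeginPeriod (1);
  }

  static void
  end_high_precision_clock (void)
  {
    timeEndPeriod (1);
  }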
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2099>
After the VA filter creation, when changing the element's state from
NULL to READY, immediately check for any filter operation requested
by the user. If there is any, the passthrough mode is disabled early,
so there's no need for a future renegotiation.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2094>
When we transform the caps from the sink to the src, or vice versa,
the caps passed to us may only contain part of the features, which
makes our vpp lose some features in the caps and get a negotiation
error.
The correct way is: clean the format and resolution of those caps,
but add all VA and DMA features to them, making them full-featured
caps. Then clip them against the pad template.
Fixes: #1551
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2081>
For reverse playback, we always copy the decoded frame to the
downstream buffer, so the pool size can be and needs to be large
enough.
For forward playback, however, we need to restrict the max pool size
for performance reasons. Otherwise the decoder will keep copying
decoded textures to the downstream buffer pool if decoding is faster
than the downstream throughput and there are queue elements between
them.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2083>
The decoder might copy the decoded texture to another buffer pool
during playback, depending on the context. In that case, the copied
texture has no D3D11_BIND_DECODER bind flag.
If we previously used ID3D11VideoProcessor for the decoder texture,
and the incoming texture supports ID3D11VideoProcessor as well even
though it has no D3D11_BIND_DECODER flag (having
D3D11_BIND_RENDER_TARGET, for example), allow zero-copy instead of
using our fallback texture.
Frequent switching of the conversion tool (between
ID3D11VideoProcessor and a generic shader) might result in
inconsistent image quality.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2084>
... instead of QueryInterface-ing per element. Note that the
ID3D11VideoDevice and ID3D11VideoContext objects might not be
available if the device doesn't support the video interface, so the
GstD3D11Device object will create those objects only when requested.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2079>
Direct3D11 objects are COM objects, and the COM C API is quite
verbose (the C++ one is a little better). So, by using the C++ API,
we can make the code shorter and more readable.
Moreover, the "ComPtr" helper class (which is C++ only) can be
utilized, which is very helpful for avoiding error-prone COM
refcounting issues/leaks.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2077>
Added a helper function, _update_passthrough(), which determines and
sets the pass-through mode of the filter, and either reconfigures
both pads, just marks the src pad for renegotiation, or does nothing
at all.
There are cases where both pads have to be reconfigured (direction
changed, for example), others where just the src pad has to be
(filters updated), and others where none do (changing to READY
state).
The need for renegotiation depends on the need to enable/disable its
VA buffer pools.
This patch sets pass-through mode by default, so the buffer pools
aren't allocated if no filtering/direction operations are defined,
which is the correct behavior.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2074>
... instead of the largest we have ever seen.
Note that the d3d11h264dec element holds the previously configured
DPB size for later decoder object re-open decisions.
This is to fix the case below:
1) Initial SPS, required DPB size is 6
- decoder object is opened with DPB size 6
- max_dpb_size is now 6
2) SPS update with resolution change, required DPB size is 1
- decoder object is re-opened with DPB size 1
- max_dpb_size should be updated to 1, but it didn't happen (BUG)
3) SPS update without resolution change, only the required DPB size is updated to 6
- decoder object should be re-opened, but that didn't happen
because we didn't update max_dpb_size at 2).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2056>
The new H.264 uAPI requires drivers to support the scaling matrix
control only as an option, for when a non-flat scaling matrix is
provided in the bitstream headers.
Take advantage of this and avoid passing the scaling matrix when it
is not needed.
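A hedged sketch of the conditional control setup (the real check
inspects the SPS/PPS scaling list flags):

  #include <linux/videodev2.h>

  static unsigned int
  add_scaling_matrix_control (struct v4l2_ext_control *ctrl,
      struct v4l2_ctrl_h264_scaling_matrix *matrix,
      int has_non_flat_scaling_matrix)
  {
    /* SPS, PPS and DECODE_PARAMS controls are always added; the
     * scaling matrix only when the bitstream actually carries one */
    if (!has_non_flat_scaling_matrix)
      return 0;

    ctrl->id = V4L2_CID_STATELESS_H264_SCALING_MATRIX;
    ctrl->ptr = matrix;
    ctrl->size = sizeof (*matrix);
    return 1;
  }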
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1624>
Frame-based decoding mode doesn't require the SLICE_PARAMS and
PRED_WEIGHTS controls.
Moreover, if the driver doesn't support these two controls, trying
to set them will fail. Fix this by only setting them in slice-based
decoding mode.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1624>
To convert the decoded texture into another format, downstream would
use a video processor instead of shaders. In order for downstream to
be able to use the video processor even if we copied the decoded
texture into the downstream pool, we should set this bind flag.
Otherwise, downstream would keep switching between the video
processor and shaders to convert the format, which would result in
inconsistent image quality.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2051>