Instead of using the dmabuf allocator in the source pad when raw video
caps are negotiated, use the VA allocator as before, since it is stable
in more use cases (for example, transcoding) and with more backend
drivers. The dmabuf allocator is only used when the dmabuf caps feature
is negotiated.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer-vaapi/-/merge_requests/352>
iHD doesn't provide a full implementation for rendering surfaces and
i965 has problems in Wayland. I suspect other driver implementations
follow the same path.
This patch demotes the rank of vaapisink to secondary, so it will not
be autoplugged, avoiding a bad user experience.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer-vaapi/-/merge_requests/336>
When the element's state changes to NULL, it can still receive
queries, such as one for the supported image formats. The display is
needed to answer such queries, but it is not protected in an MT-safe
way.
For example, ensure_allowed_raw_caps() may still use the display
while it is being disposed by gst_vaapi_plugin_base_close() because of
the state change.
We can keep the display until the element is destroyed. When the
state changes to NULL and then to PAUSED again, the display is either
set correctly (if its type changes) or left untouched.
Fix: #260
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer-vaapi/-/merge_requests/343>
This helper function iterates over all the profiles and entrypoints
belonging to the specified codec, queries VAConfigAttribRTFormat and
lists all possible video formats.
This function is used by each codec to get the template sink caps
(for encoders) or src caps (for decoders) at registration time, when
all possible formats just need to be listed and accuracy is not
critical. So, for performance reasons, no context is created. Most
codecs only take YUV-like formats as input/output, so we do not
include RGB-like formats. Users can specify extra formats in
extra_fmts if needed (for example, JPEG may need BGRA).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer-vaapi/-/merge_requests/315>
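The query itself boils down to something like this (a minimal libva
sketch; the codec check and the variable names are illustrative, not
the actual gstreamer-vaapi code):

  unsigned int rt_formats = 0;
  int i, j, num_profiles = 0, num_entrypoints = 0;
  VAProfile *profiles = g_new (VAProfile, vaMaxNumProfiles (va_display));
  VAEntrypoint *entrypoints =
      g_new (VAEntrypoint, vaMaxNumEntrypoints (va_display));

  vaQueryConfigProfiles (va_display, profiles, &num_profiles);
  for (i = 0; i < num_profiles; i++) {
    if (!profile_belongs_to_codec (profiles[i], codec))   /* hypothetical */
      continue;
    vaQueryConfigEntrypoints (va_display, profiles[i], entrypoints,
        &num_entrypoints);
    for (j = 0; j < num_entrypoints; j++) {
      VAConfigAttrib attrib = { .type = VAConfigAttribRTFormat };

      /* no vaCreateConfig/vaCreateContext needed for this query */
      vaGetConfigAttributes (va_display, profiles[i], entrypoints[j],
          &attrib, 1);
      if (attrib.value != VA_ATTRIB_NOT_SUPPORTED)
        rt_formats |= attrib.value;   /* e.g. VA_RT_FORMAT_YUV420 | ... */
    }
  }
  /* map the rt_formats bits to GstVideoFormat values (NV12, P010, ...) */
  g_free (profiles);
  g_free (entrypoints);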
We may build this plugin with window system support but run it without
a window system. Without this patch, the following pipeline triggers a
segfault when run without a window system:
gst-launch-1.0 filesrc location=input.264 ! h264parse ! vaapih264dec ! fakesink
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer-vaapi/-/merge_requests/319>
The bufferproxy may reference the surface and the surface may also
reference the bufferproxy, producing a circular reference which might
lead to serious resource leaks.
Now the relationship is clearer: the bufferproxy's reference is
transferred to the surface, while the bufferproxy just keeps the
surface's address without increasing its reference count.
The surface can be created through a bufferproxy, as in
gst_vaapi_surface_new_with_dma_buf_handle(), and the surface may
expose its bufferproxy via gst_vaapi_surface_get_dma_buf_handle(). In
both cases the surface holds a reference to the bufferproxy.
The old way of referencing the memory from the bufferproxy is not a
good one, since it makes the logic error prone.
Now a map is established between the surface/bufferproxy and its
GstMemory, caching the memory so the one bound to a given surface can
be looked up.
Delete the GST_VAAPI_VIDEO_BUFFER_POOL_ACQUIRE_FLAG_NO_ALLOC flag.
In fact, no one is using that flag, and all vaapi buffers should
have GstVaapiVideoMeta.
In HEVC, we can consider an -intra profile a subset of the
corresponding non-intra profile. The -intra profiles contain only I
frames, so we can certainly use the non-intra profile's context to
decode them.
Signed-off-by: Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
The strides and offsets could be the same, but the allocation
size might differ (e.g. because of alignment). Thus, ensure we also
set the flag to copy from VA memory to system memory when the
allocation size differs.
Fixes #243
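Conceptually the check becomes something like the following (a sketch
with assumed variable names, not the literal allocator code):

  /* decide whether a copy (VA memory -> system memory) is needed */
  gboolean need_copy =
      GST_VIDEO_INFO_SIZE (&surface_info) != GST_VIDEO_INFO_SIZE (&image_info);
  guint i;

  for (i = 0; i < GST_VIDEO_INFO_N_PLANES (&surface_info); i++) {
    if (GST_VIDEO_INFO_PLANE_STRIDE (&surface_info, i) !=
        GST_VIDEO_INFO_PLANE_STRIDE (&image_info, i) ||
        GST_VIDEO_INFO_PLANE_OFFSET (&surface_info, i) !=
        GST_VIDEO_INFO_PLANE_OFFSET (&image_info, i))
      need_copy = TRUE;
  }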
This reverts commit dd428cc4a1 because it rewrites the buffer size,
whereas, since fab890ce, the surface allocation flags are already
stored when allocator_params_init() is called.
Fix: #239
When a VAAPI allocator is instantiated, it first tries to generate a
surface with the specified configuration.
This patch adds the requested allocation flags to that trial surface.
Store the surface allocation flags passed to the VAAPI allocator in
the GObject's qdata, because they might be used by the
vaapivideobufferpool when recreating the allocator after a resolution
change.
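A minimal sketch of that qdata round trip, with an assumed quark name
and variables (not the exact gstreamer-vaapi symbols):

  /* stash the surface allocation flags on the allocator object */
  g_object_set_qdata (G_OBJECT (allocator),
      g_quark_from_static_string ("surface-alloc-flags"),
      GUINT_TO_POINTER (surface_alloc_flags));

  /* recover them later, e.g. when the bufferpool recreates the
   * allocator after a resolution change */
  guint flags = GPOINTER_TO_UINT (g_object_get_qdata (G_OBJECT (allocator),
          g_quark_from_static_string ("surface-alloc-flags")));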
The bug fix in commit 89f202ea only considers the case where the
surface's DMABuf is set through gst_buffer_pool_acquire_buffer(),
which is typically the decoder's behavior. But vaapipostproc doesn't
provide any surface when calling gst_buffer_pool_acquire_buffer();
instead, a surface is created when the GstMemory is allocated.
If the surface proxy in the buffer's meta is reset at
buffer_pool_reset_buffer(), that surface is destroyed and won't be
available anymore. But the GstBuffers are cached in the buffer pool
and reused, hence only those images are rendered repeatedly.
Fixes: #232
I've just discovered that the iHD driver on Skylake doesn't have the
VideoProc entry point, hence, on this platform, when vaapioverlay is
registered, critical warnings are raised because blend doesn't have a
display assigned.
Since it is possible to have drivers without VAEntrypointVideoProc, it
is required to handle that gracefully. This patch does that: it only
tries to register vaapioverlay if the test display has VPP, and the
finalize() vmethods in filter and blend bail out if the display is
NULL.
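Schematically (the display check and type names here are assumed for
illustration, not the exact code):

  /* plugin_init(): only register vaapioverlay when the test display
   * reports video processing support */
  if (display && gst_vaapi_display_has_video_processing (display))
    gst_element_register (plugin, "vaapioverlay", GST_RANK_NONE,
        GST_TYPE_VAAPI_OVERLAY);

  /* filter/blend finalize(): skip the teardown that needs the display */
  if (blend->display != NULL) {
    /* ... release the VPP filter and context resources ... */
  }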
The default output colorimetry is determined by the output
resolution, which is too naive when doing VPP cropping
and/or scaling. For example, scaling 4K(sink)->1080P(src)
resolution (i.e. both YUV) results in a bt2020(sink)->bt709(src)
colorimetry selection, and some drivers don't support that
mode in VPP.
Thus, if the output (i.e. downstream) does not specify a
colorimetry, we use the input resolution instead of the
output resolution to create the default colorimetry. Also,
note that we still use the output format since it may be in a
different color space than the input. In the example
above, this results in a bt2020(sink)->bt2020(src)
colorimetry selection, which all drivers (afaik) support
in VPP.
If the colorimetry has been set by a capsfilter (e.g.
vaapipostproc ! video/x-raw,colorimetry=bt709), then
don't try to override it. Previously, the aforementioned
capsfilter would fail to negotiate if the default colorimetry
was not the same as the capsfilter's (e.g. at 4K resolutions).
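A sketch of the idea (assumed variable names): build a temporary
GstVideoInfo with the output format but the input resolution, and take
its default colorimetry.

  GstVideoInfo tmp_info;

  gst_video_info_set_format (&tmp_info,
      GST_VIDEO_INFO_FORMAT (&out_info),   /* output format/color space */
      GST_VIDEO_INFO_WIDTH (&in_info),     /* but the input resolution */
      GST_VIDEO_INFO_HEIGHT (&in_info));
  out_info.colorimetry = tmp_info.colorimetry;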
Some code can be optimized, since the internal dmabuf flag is TRUE
only if the dmabuf allocator is set, thus there's no need to also
evaluate the allocator address.
If the allocator requested in set_config() is not a valid VAAPI one,
reject the configuration instead of lying and using a private one.
This patch supersedes !254 and !24.
The set_config() vmethod should fail gracefully, so upstream can
negotiate another pool if possible.
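The gist of the set_config() check (a sketch; the VAAPI allocator type
macro is assumed for illustration):

  GstAllocator *allocator = NULL;

  if (!gst_buffer_pool_config_get_allocator (config, &allocator, NULL))
    return FALSE;
  if (allocator && !GST_IS_VAAPI_VIDEO_ALLOCATOR (allocator)
      && !GST_IS_DMABUF_ALLOCATOR (allocator)) {
    GST_WARNING_OBJECT (pool, "unsupported allocator %" GST_PTR_FORMAT,
        allocator);
    return FALSE;   /* upstream may negotiate another pool */
  }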
Instead of sending error messages to the bus, demote their level to
warning.
Instead of creating a new allocator when upstream requests a different
allocator, this patch tries to reuse the internal allocator if it was
already initialized.
If the stream changes, then either one is unreffed and a new
allocator is created.
This mechanism comes from the FFmpeg VAAPI implementation, where they
have their own quirks.
A specific driver is identified by a substring present in the vendor
string. If that substring is found, a set of bitwise flags is stored.
These flags can be queried through the function
gst_vaapi_display_has_driver_quirks().
The purpose of this first quirk is to disable the PutImage try for the
AMD Gallium driver (see [1]).
1. https://gitlab.freedesktop.org/gstreamer/gstreamer-vaapi/merge_requests/72
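Roughly, the quirk detection looks like this (a sketch; the table
contents and flag name follow the description above but are
illustrative):

  static const struct {
    const char *match;   /* substring of the VA vendor string */
    guint quirks;
  } vendor_quirks[] = {
    { "AMD", GST_VAAPI_DRIVER_QUIRK_NO_PUT_IMAGE },  /* assumed flag name */
  };

  const char *vendor = vaQueryVendorString (va_display);
  guint i;

  for (i = 0; i < G_N_ELEMENTS (vendor_quirks); i++) {
    if (vendor && strstr (vendor, vendor_quirks[i].match))
      priv->driver_quirks |= vendor_quirks[i].quirks;
  }

  /* later, at the upload path */
  if (!gst_vaapi_display_has_driver_quirks (display,
          GST_VAAPI_DRIVER_QUIRK_NO_PUT_IMAGE)) {
    /* ... try vaPutImage() first ... */
  }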
Validate that the meta returned by gst_buffer_get_vaapi_video_meta()
on the acquired buffer is not NULL.
This situation should be very "pathological", but it is still better
to be safe, since that meta might be used later to create a new DMA
buffer.
gst_vaapi_video_buffer_pool_reset_buffer() is called when the sink
releases the last reference on an exported DMA buffer. This should
release the underlying surface proxy. To avoid releasing the wrong
surface due to a stale surface proxy reference in the buffer's
GstVaapiVideoMeta, always update the reference to the correct surface
in gst_vaapi_video_buffer_pool_acquire_buffer().
When baseline-as-constrained is set, the decoder will expose support
for baseline decoding and assume that the baseline content is
constrained-baseline. This can be handy to decode streams in hardware
that would otherwise not be possible to decode. A lot of baseline
content is in fact constrained.
And, if downstream requests a specific level, the caps are not
negotiated, because there is no mechanism right now to specify a
custom level in the internal encoder.
When the available caps don't intersect with the allowed caps in the
pipeline, new caps are proposed rather than just expecting to
iterate.
Later, the intersected caps (profile_caps) are fixated in order to
extract the configuration.
The FEI encoders are neither actively maintained nor tested, and they
use infrastructure that is changing, an effort which FEI is holding
back.
Also, how FEI can be used in GStreamer needs to be rethought.
Use the gst_video_aggregator_pad_has_current_buffer API
to check whether the current sinkpad has a queued buffer before
attempting to obtain an input buffer from the base plugin.
If the sinkpad does not have a current buffer, then it is
either not producing them yet (e.g. current time < sinkpad
start time) or it has reached EOS.
Previously, only the EOS case was handled. A sketch of the check
follows the example below.
Example:
gst-launch-1.0 videotestsrc num-buffers=100 \
! vaapipostproc ! vaapioverlay name=overlay \
! vaapisink videotestsrc timestamp-offset=1000000000 \
num-buffers=100 ! video/x-raw,width=160,height=120 \
! overlay.
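Schematically, the per-pad check looks like this (locking elided; the
surrounding loop is assumed, only the pad API comes from
GstVideoAggregator):

  GList *l;

  for (l = GST_ELEMENT (vagg)->sinkpads; l; l = l->next) {
    GstVideoAggregatorPad *pad = GST_VIDEO_AGGREGATOR_PAD (l->data);

    /* no queued buffer: the pad hasn't started yet or is EOS, skip it */
    if (!gst_video_aggregator_pad_has_current_buffer (pad))
      continue;

    /* ... obtain the input buffer from the base plugin and blend it ... */
  }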
Recursive functions are elegant but dangerous since they might
overflow the stack. It is better to turn them into a list traversal
if possible, as in this case.
Instead of using a parent structure that has to be derived by API
consumers, this change proposes a simplification using the common
GTK pattern of passing a function pointer along with user data that
is handed back as its parameter. The user data holds the state, and
the function is called to update that state.
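For illustration, the pattern being adopted looks like this
(hypothetical names, not the actual gstreamer-vaapi API):

  /* the caller supplies a function pointer plus user data; the user
   * data carries the state the callback updates */
  typedef void (*UpdateStateFunc) (gpointer user_data, guint value);

  typedef struct { guint total; } SumState;

  static void
  sum_update (gpointer user_data, guint value)
  {
    SumState *state = user_data;
    state->total += value;
  }

  static void
  walk_values (const guint * values, guint n_values,
      UpdateStateFunc func, gpointer user_data)
  {
    guint i;
    for (i = 0; i < n_values; i++)
      func (user_data, values[i]);
  }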
The current get_profile just returns one possible profile for the
encoder, which is not enough. For example, if we want to support the
HEVC 4:4:4 profile, the encoder's input should be VUYA rather than the
NV12 used for the HEVC main profile. So the command line:
gst-launch-1.0 videotestsrc num-buffers=200 ! capsfilter \
caps=video/x-raw,format=VUYA,width=800,height=600 ! vaapih265enc \
tune=low-power init-qp=30 ! fakesink
cannot work, because vaapih265enc only reports NV12 in its sink caps;
we need to specify the profile explicitly, like:
gst-launch-1.0 videotestsrc num-buffers=200 ! capsfilter \
caps=video/x-raw,format=VUYA,width=800,height=600 ! vaapih265enc \
tune=low-power init-qp=30 ! capsfilter caps=video/x-h265, \
profile=main-444 ! fakesink
The encoder should be able to choose the profile automatically, based
on the input format. If the input video format is VUYA, the main-444
profile should be chosen automatically.
We now let get_allowed_profiles of each encoder subclass return
an array of all supported profiles based on downstream's allowed caps,
or NULL if downstream specifies no valid profiles.
If no allowed profiles are found, all profiles belonging to the
current encoder's codec become the candidates.
The function gst_vaapi_encoder_get_surface_attributes collects the
surface attributes for the profile list we just obtained.
So in this case, both NV12 and VUYA should be returned.
TODO: some codecs, like VP9, need to implement the get_profile()
function.
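A rough sketch of the new flow (only the function names come from the
description above; the exact signatures and helpers are assumptions):

  /* collect candidate profiles from downstream's allowed caps */
  GArray *profiles = klass->get_allowed_profiles (encode, allowed_caps);

  /* fall back to every profile of the encoder's codec */
  if (!profiles)
    profiles = get_profiles_of_codec (codec);   /* hypothetical helper */

  /* query the surface attributes (formats, resolution ranges, ...) that
   * are valid for the whole candidate list; for HEVC with main and
   * main-444 as candidates this yields both NV12 and VUYA */
  GArray *surface_attributes =   /* assumed return type */
      gst_vaapi_encoder_get_surface_attributes (encoder, profiles);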
A plugin similar to the base compositor element, but using
VA-API VPP blend functions to accelerate the
overlay/compositing.
Simple example:
gst-launch-1.0 -vf videotestsrc ! vaapipostproc \
! tee name=testsrc ! queue \
! vaapioverlay sink_1::xpos=300 sink_1::alpha=0.75 \
name=overlay ! vaapisink testsrc. ! queue ! overlay.
We can get all the video format information in one shot when we
create the test context for querying the supported formats.
The current way of getting the width and height ranges is inefficient,
since it calls the function gst_vaapi_profile_caps_append_encoder(),
which creates another temporary context to detect the resolution
information.
Signed-off-by: Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
We now set encode->allowed_sinkpad_caps to NULL if we fail to get the
surface formats. This causes two problems:
1. gst_video_encoder_proxy_getcaps uses NULL as its caps parameter,
which changes its behavior: it clips against the encoder's sinkpad
template rather than against empty caps. So even if we fail to set
allowed_sinkpad_caps, gst_video_encoder_proxy_getcaps can still return
valid caps.
2. We should only set allowed_sinkpad_caps once. The NULL pointer
makes the ensure_allowed_sinkpad_caps function do its work again and
again.
Don't reset the can_dmabuf field. This restores the
close/reset logic that existed prior to commit
ca2942176b with regard to
dmabuf support.
Plugins only call gst_vaapi_plugin_base_set_srcpad_can_dmabuf
once during startup, but may need to reset the other private
fields multiple times during negotiation. Thus, can_dmabuf
should be exempt from the resets.
Fixes #208
GstVaapiMiniObject and GstVaapiObject are deprecated.
This is the first step to remove them, by porting GstVaapiSurface as
a GstMiniObject descendant.
Signed-off-by: Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
GstVaapiMiniObject and GstVaapiObject are deprecated. This is the
first step to remove them, by porting GstVaapiImage as a
GstMiniObject.
Signed-off-by: Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
The base plugin public API function implementations determine
which pad should be passed to the internal helper functions.
Currently, only the base plugin static sinkpad and static
srcpad are supported/used. However, this change enables future
API functions to be added that can accept a pad (i.e. request pad)
from an element subclass (e.g. a GstVideoAggregator subclass).
Define a struct (GstVaapiPadPrivate) to encapsulate the
pad-specific data (i.e. buffer pool, allocator, info,
caps, etc.).
Add an interface to retrieve the data struct for a given
pad.
Finally, update the base plugin to use the data struct
throughout the implementation.
This will enable us to easily extend the base plugin in the
future to allow for N-to-1 pad subclasses (e.g. overlay/
composite).
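For illustration, the encapsulation could look roughly like this (the
field list follows the description above; the names are otherwise
assumed):

  /* per-pad state, instead of separate static sinkpad and srcpad fields */
  typedef struct _GstVaapiPadPrivate GstVaapiPadPrivate;

  struct _GstVaapiPadPrivate
  {
    GstPad *pad;
    GstBufferPool *buffer_pool;
    GstAllocator *allocator;
    GstVideoInfo info;
    GstCaps *caps;
  };

  /* accessor: fetch the private data attached to a given pad */
  GstVaapiPadPrivate *
  gst_vaapi_plugin_base_pad_private (GstVaapiPluginBase * plugin,
      GstPad * pad);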