Instead of creating a new allocator when upstream requests a different
allocator, this patch tries to reuse the internal allocator if it was
already initialized.
If the stream changes, the existing allocator is unreffed and a new one
is created.
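A minimal sketch of the reuse logic; the private fields and the
create_allocator() helper are hypothetical, gst_video_info_is_equal()
and gst_clear_object() are the stock GStreamer helpers:

    /* Keep the internal allocator as long as the negotiated video info
     * has not changed; otherwise unref it and create a new one. */
    if (priv->allocator
        && gst_video_info_is_equal (&priv->allocator_info, new_info)) {
      allocator = gst_object_ref (priv->allocator);
    } else {
      gst_clear_object (&priv->allocator);
      priv->allocator = create_allocator (display, new_info);  /* hypothetical */
      priv->allocator_info = *new_info;
      allocator = gst_object_ref (priv->allocator);
    }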
This mechanism comes from the FFmpeg VA-API implementation, which has
its own driver quirks handling.
A specific driver is identified by a substring present in the vendor
string. If that substring is found, a set of bitwise flags is stored.
These flags can be accessed through the function
gst_vaapi_display_has_driver_quirks().
The purpose of this first quirk is to disable the image put attempt for
the AMD Gallium driver (see [1]).
1. https://gitlab.freedesktop.org/gstreamer/gstreamer-vaapi/merge_requests/72
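Roughly, the detection can be pictured like this (sketch: the flag name
and vendor substring are placeholders, only
gst_vaapi_display_has_driver_quirks() comes from this patch; uses
strstr() and GLib's G_N_ELEMENTS()):

    typedef enum {
      QUIRK_NO_PUT_IMAGE = 1 << 0,        /* placeholder flag name */
    } DriverQuirk;

    static const struct {
      const gchar *vendor_substring;      /* matched against the VA vendor string */
      guint quirks;
    } quirk_table[] = {
      { "Gallium", QUIRK_NO_PUT_IMAGE },  /* placeholder substring */
    };

    static guint
    detect_driver_quirks (const gchar * vendor)
    {
      guint i, quirks = 0;
      for (i = 0; i < G_N_ELEMENTS (quirk_table); i++) {
        if (strstr (vendor, quirk_table[i].vendor_substring))
          quirks |= quirk_table[i].quirks;
      }
      return quirks;
    }

    /* call site:
     *   if (!gst_vaapi_display_has_driver_quirks (display, QUIRK_NO_PUT_IMAGE))
     *     try the image put path;
     */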
Validate that the meta returned by gst_buffer_get_vaapi_video_meta() on
the acquired buffer is not NULL.
This situation should be very "pathological", but it is still better to
be safe, since that meta might be used later to create a new DMA
buffer.
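The check amounts to something like this sketch (the flow return chosen
on failure is an assumption):

    meta = gst_buffer_get_vaapi_video_meta (buffer);
    if (G_UNLIKELY (meta == NULL)) {
      /* pathological: the acquired buffer lost its GstVaapiVideoMeta */
      gst_buffer_unref (buffer);
      return GST_FLOW_ERROR;
    }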
gst_vaapi_video_buffer_pool_reset_buffer() is called when the sink
releases the last reference on an exported DMA buffer. This should
release the underlying surface proxy. To avoid releasing the wrong
surface due to a stale surface proxy reference in the buffer's
GstVaapiVideoMeta, always update the reference to the correct surface
in gst_vaapi_video_buffer_pool_acquire_buffer().
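A sketch of the acquire path, assuming the usual chain-up to the base
class and the existing gst_vaapi_video_meta_set_surface_proxy() helper;
how the correct proxy is obtained is left out here:

    ret = GST_BUFFER_POOL_CLASS (parent_class)->acquire_buffer (pool,
        buffer, params);
    if (ret != GST_FLOW_OK)
      return ret;

    meta = gst_buffer_get_vaapi_video_meta (*buffer);
    /* point the meta at the surface that will actually be used, so that
     * reset_buffer() later releases the right surface proxy */
    if (meta)
      gst_vaapi_video_meta_set_surface_proxy (meta, proxy);  /* 'proxy': the correct surface's proxy */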
This commit tries to centralize the selection of the vaCreateSurfaces
version, instead of having fallbacks everywhere.
These fallbacks are hacks, added because new drivers use the latest
version of vaCreateSurfaces (with surface attributes) [1], while old
drivers (or profiles, such as the JPEG decoder in i965) would rather
use the old version.
In order to select which method to use, there's a detection hack: each
config context has a list of valid formats; in the case of the JPEG
decoder, the list only contains "rare" 4:2:2 formats (IMC3, GRAY8)
which aren't handled correctly by the current gstreamer-vaapi code [2].
The hack consists in checking whether the format list contains an
arbitrary preferred format (one supposedly well supported by
gstreamer-vaapi, mostly NV12). If no preferred colour format is found,
the old version of vaCreateSurfaces is used, and the surfaces will be
mapped into an image with their own colour format.
1. https://bugzilla.gnome.org/show_bug.cgi?id=797143
2. https://bugzilla.gnome.org/show_bug.cgi?id=797222
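The heuristic boils down to a check like this sketch (the helper name
is illustrative; NV12 as the preferred format follows the description
above):

    static gboolean
    contains_preferred_format (GArray * formats, GstVideoFormat preferred)
    {
      guint i;
      for (i = 0; i < formats->len; i++) {
        if (g_array_index (formats, GstVideoFormat, i) == preferred)
          return TRUE;
      }
      return FALSE;
    }

    /* TRUE  -> vaCreateSurfaces with surface attributes (new style)
     * FALSE -> vaCreateSurfaces without attributes; the surfaces are
     *          later mapped into an image with their own colour format */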
When baseline-as-constrained is set, the decoder will expose support
for baseline decoding and assume that baseline content is
constrained-baseline. This can be handy for decoding, in hardware,
streams that it would otherwise not be possible to decode. A lot of
baseline content is in fact constrained.
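Internally this maps roughly to the following sketch (the profile
enums are the existing GstVaapiProfile values; the flag field name is
illustrative):

    if (priv->baseline_as_constrained
        && profile == GST_VAAPI_PROFILE_H264_BASELINE)
      profile = GST_VAAPI_PROFILE_H264_CONSTRAINED_BASELINE;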
And, if downstream requests a specific level, the caps are not
negotiated, because there is no mechanism right now to specify a
custom level in the internal encoder.
When the available caps doesn't intersect with the allowed caps in the
pipeline, a new caps is proposed rather than just expecting another
negotiation iteration.
Later, the intersected caps (profile_caps) is fixated in order to
extract the configuration.
The formats array is always created, in order to keep the logic simple
and to avoid broken caps. If this formats array doesn't contain any
elements, it has to be unreffed and the function should return NULL.
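In code, that last point looks roughly like this sketch:

    GArray *formats;

    formats = g_array_new (FALSE, FALSE, sizeof (GstVideoFormat));
    /* fill 'formats' from the fixated profile_caps */
    if (formats->len == 0) {
      /* nothing usable: don't hand back an empty array */
      g_array_unref (formats);
      return NULL;
    }
    return formats;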
meson.build in gstreamer-vaapi requires hooks/pre-commit.hook
Copied and pasted pre-commit.hook from other gstreamer modules to make
sure gstreamer-vaapi follows the same code style
When the tune is NONE, we can now choose the entrypoint freely, so the
GST_VAAPI_ENCODER_TUNE macro may not return the correct current
entrypoint.
We also delay the CTU size calculation until after the entrypoint has
been decided.
FEI encoders are neither actively maintained nor tested, and they use
infrastructure that is changing, an effort that FEI is holding back.
Also, how FEI can be used in GStreamer needs to be rethought.
Use the gst_video_aggregator_pad_has_current_buffer API
to check if the current sinkpad has a queued buffer before
attempting to obtain an input buffer from the base plugin.
If the sinkpad does not have a current buffer, then it is
either not producing them yet (e.g. current time < sinkpad
start time) or it has reached EOS.
Previously, we only handled the EOS case.
Example:
gst-launch-1.0 videotestsrc num-buffers=100 \
! vaapipostproc ! vaapioverlay name=overlay \
! vaapisink videotestsrc timestamp-offset=1000000000 \
num-buffers=100 ! video/x-raw,width=160,height=120 \
! overlay.
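The per-pad check is the stock GstVideoAggregator pad API; the
surrounding loop below is only a sketch:

    if (!gst_video_aggregator_pad_has_current_buffer (vagg_pad)) {
      /* no queued buffer: the pad is not producing yet
       * (current time < pad start time) or it is EOS; skip it */
      continue;
    }
    inbuf = gst_video_aggregator_pad_get_current_buffer (vagg_pad);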
Recursive functions are elegant but dangerous since they might
overflow the stack. It is better to turn them into a list traversal
if possible, as in this case.
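A generic illustration of the transformation, not the actual function
touched by this patch:

    /* recursive version, O(n) stack frames:
     *   static void visit (Item *item) { handle_item (item); visit (item->next); }
     * iterative list traversal, constant stack usage: */
    static void
    visit_all (GList * items)
    {
      GList *l;
      for (l = items; l != NULL; l = l->next)
        handle_item (l->data);    /* hypothetical per-item work */
    }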
Instead of using a parent structure that has to be derived by API
consumers, this change proposes a simplification by using the common
GTK pattern of passing a function pointer and user data, where the
user data is passed to the function as its parameter. That user data
contains the state, and the function will be called to update that
state.
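The shape of the pattern, with illustrative names only (no real
gstreamer-vaapi identifiers):

    typedef struct {
      guint index;                /* consumer-owned state */
    } MyState;

    /* callback type: receives the user data on every invocation */
    typedef gpointer (*NextFunc) (gpointer user_data);

    static gpointer
    my_next_cb (gpointer user_data)
    {
      MyState *state = user_data;
      state->index++;             /* the callback updates the state */
      return NULL;                /* NULL signals "nothing more" */
    }

    /* call site, instead of deriving a library structure:
     *   some_process_function (obj, my_next_cb, &state);
     */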
This new API allows the user to call a single method (process)
which handles the [display] lock/unlock logic internally for
them.
This API supersedes the risky begin, render, end API.
It eliminates the need for the user to call a lock method
(process_begin) before processing the input buffers
(process_render) and calling an unlock method (process_end)
afterwards.
See #219
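A sketch of what the single entry point hides from the caller; the
function and callback names are illustrative, only
gst_vaapi_display_lock()/unlock() are the existing display helpers:

    typedef gpointer (*NextFunc) (gpointer user_data);

    static gboolean
    process (GstVaapiDisplay * display, NextFunc next, gpointer user_data)
    {
      gpointer input;

      gst_vaapi_display_lock (display);      /* was process_begin */
      while ((input = next (user_data)) != NULL) {
        /* render 'input' here: was process_render */
      }
      gst_vaapi_display_unlock (display);    /* was process_end */
      return TRUE;
    }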
The current get_profile just returns one possible profile for the
encoder, which is not enough. For example, if we want to support the
HEVC 4:4:4 profile, the encoder's input should be VUYA rather than the
NV12 used for HEVC main profile. So the command line:
gst-launch-1.0 videotestsrc num-buffers=200 ! capsfilter \
caps=video/x-raw,format=VUYA,width=800,height=600 ! vaapih265enc \
tune=low-power init-qp=30 ! fakesink
cannot work, because vaapih265enc just reports NV12 in its sink caps;
we need to specify the profile explicitly, like:
gst-launch-1.0 videotestsrc num-buffers=200 ! capsfilter \
caps=video/x-raw,format=VUYA,width=800,height=600 ! vaapih265enc \
tune=low-power init-qp=30 ! capsfilter caps=video/x-h265, \
profile=main-444 ! fakesink
The encoder should have the ability to choose the profile based on the
input format automatically. If the input video format is VUYA, the
main-444 profile should be chosen automatically.
We modify get_allowed_profiles of each encoder subclass to return an
array of all supported profiles based on downstream's allowed caps, or
NULL if no valid profiles are specified by downstream.
If no allowed profiles are found, all profiles belonging to the current
encoder's codec become the candidates.
The function gst_vaapi_encoder_get_surface_attributes collects the
surface attributes for the profile list we just obtained.
So, for this case, both NV12 and VUYA should be reported.
TODO: some codecs, like VP9, need to implement the get_profile()
function.
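The automatic choice described above can be pictured as the following
sketch (illustrative mapping, H.265 only, assuming the
GST_VAAPI_PROFILE_H265_MAIN_444 enum value):

    static GstVaapiProfile
    profile_from_input_format (GstVideoFormat format)
    {
      switch (format) {
        case GST_VIDEO_FORMAT_VUYA:
          return GST_VAAPI_PROFILE_H265_MAIN_444;
        case GST_VIDEO_FORMAT_NV12:
        default:
          return GST_VAAPI_PROFILE_H265_MAIN;
      }
    }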
vaapioverlay is only registered if the VA driver supports the blend
operation.
This patch only executes the test if vaapioverlay is available;
otherwise the test bails out without raising an error.
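The guard in the test is essentially this sketch:

    GstElement *overlay = gst_element_factory_make ("vaapioverlay", NULL);
    if (overlay == NULL)
      return;                     /* driver can't blend: skip, don't fail */
    gst_object_unref (overlay);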