When creating surfaces it is possible to pass usage hints to VA, so
the driver may apply some optimizations.
This commit adds the handling of encoding/decoding hints.
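For reference, a minimal sketch of how such a usage hint is attached
with plain libva (the gstreamer-vaapi wrappers differ):

#include <glib.h>
#include <va/va.h>

/* Fill a surface attribute that hints the intended usage to the driver. */
static void
fill_usage_hint (VASurfaceAttrib * attrib, gboolean for_encoding)
{
  attrib->type = VASurfaceAttribUsageHint;
  attrib->flags = VA_SURFACE_ATTRIB_SETTABLE;
  attrib->value.type = VAGenericValueTypeInteger;
  attrib->value.value.i = for_encoding ?
      VA_SURFACE_ATTRIB_USAGE_HINT_ENCODER :
      VA_SURFACE_ATTRIB_USAGE_HINT_DECODER;
}

The attribute is then passed in the attribute list of
vaCreateSurfaces().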
I've just discovered that the iHD driver on Skylake doesn't have the
VideoProc entry point; hence, on this platform, when vaapioverlay is
registered, critical warnings are raised because blend doesn't have a
display assigned.
As it is possible to have drivers without VAEntrypointVideoProc, this
has to be handled gracefully. This patch does that: vaapioverlay is
only registered if the test display has VPP, and the finalize()
vmethods of filter and blend bail out if the display is NULL.
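A sketch of the kind of check involved, with plain libva (the plugin
itself goes through the GstVaapiDisplay helpers):

#include <stdlib.h>
#include <va/va.h>

/* Returns non-zero if the display exposes VAEntrypointVideoProc. */
static int
display_has_vpp (VADisplay dpy)
{
  int i, num = vaMaxNumEntrypoints (dpy);
  VAEntrypoint *entrypoints = malloc (num * sizeof (VAEntrypoint));
  int found = 0;

  /* VAProfileNone is the profile used for video processing. */
  if (vaQueryConfigEntrypoints (dpy, VAProfileNone, entrypoints, &num)
      == VA_STATUS_SUCCESS) {
    for (i = 0; i < num; i++) {
      if (entrypoints[i] == VAEntrypointVideoProc)
        found = 1;
    }
  }
  free (entrypoints);
  return found;
}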
Commit 1168d6d5 exposed a regression: decode_sps() stores the unit's
parser info in the sps array. If that parser info comes from decoding
codec data, it will have an undefined state, which might break
ensure_sps().
This patch sets the parser info state, when decoding codec data, from
the internal parser state. This is similar to the h264 decoder
approach.
Original-patch-by: Xu Guangxin <guangxin.xu@intel.com>
VAProcColorStandardExplicit and associated VAProcColorProperties
(primaries, transfer and matrix) are not supported until
VA-API 1.2.0.
Use VAProcColorStandardNone instead of VAProcColorStandardExplicit
if VA-API < 1.2.0.
Fixes #231
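A minimal sketch of the intended fallback, assuming the pipeline
parameter buffer is filled elsewhere as usual:

#include <va/va.h>
#include <va/va_vpp.h>

static void
set_color_standard (VAProcPipelineParameterBuffer * pipeline_param)
{
#if VA_CHECK_VERSION(1, 2, 0)
  /* Explicit colorimetry is only available since VA-API 1.2.0. */
  pipeline_param->surface_color_standard = VAProcColorStandardExplicit;
#else
  /* Older VA-API: let the driver pick its default interpretation. */
  pipeline_param->surface_color_standard = VAProcColorStandardNone;
#endif
}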
Addresses #228 on the iHD side. It seems iHD can't handle
VAProcColorStandardSRGB in all situations for vpp. But it has no
problem when we specify the sRGB parameters via
VAProcColorStandardExplicit.
We've always sent VA_SOURCE_RANGE_UNKNOWN to the driver. And the
[iHD] driver essentially computes the same color range as gstreamer
when we send VA_SOURCE_RANGE_UNKNOWN for cases where gstreamer
computes it automatically. But if the user wants to make it explicit,
we should try to honor it.
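A rough sketch of passing explicit sRGB colorimetry plus an explicit
user color range (assuming VA-API >= 1.2.0; the H.273 code points 1,
13 and 0 stand for BT.709 primaries, sRGB transfer and identity
matrix):

#include <glib.h>
#include <va/va.h>
#include <va/va_vpp.h>

static void
set_explicit_srgb (VAProcPipelineParameterBuffer * pipeline_param,
    gboolean range_is_explicit, gboolean full_range)
{
  pipeline_param->surface_color_standard = VAProcColorStandardExplicit;
  pipeline_param->input_color_properties.colour_primaries = 1;          /* BT.709 */
  pipeline_param->input_color_properties.transfer_characteristics = 13; /* IEC 61966-2-1 (sRGB) */
  pipeline_param->input_color_properties.matrix_coefficients = 0;       /* identity (RGB) */

  /* Honor an explicit range from the user; otherwise keep UNKNOWN. */
  if (range_is_explicit)
    pipeline_param->input_color_properties.color_range =
        full_range ? VA_SOURCE_RANGE_FULL : VA_SOURCE_RANGE_REDUCED;
  else
    pipeline_param->input_color_properties.color_range =
        VA_SOURCE_RANGE_UNKNOWN;
}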
This mechanism comes from the ffmpeg vaapi implementation, where they
have their own quirks.
A specific driver is identified by a substring present in the vendor
string. If that substring is found, a set of bitwise flags is stored.
These flags can be accessed through the function
gst_vaapi_display_has_driver_quirks().
The purpose of this first quirk is to disable trying vaPutImage for
the AMD Gallium driver (see [1]).
1. https://gitlab.freedesktop.org/gstreamer/gstreamer-vaapi/merge_requests/72
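A rough sketch of such a quirk table (the struct, flag name and match
string here are illustrative, not the actual gstreamer-vaapi ones):

#include <string.h>
#include <va/va.h>

enum
{
  QUIRK_NO_PUT_IMAGE = 1 << 0,  /* illustrative flag: skip the vaPutImage try */
};

typedef struct
{
  const char *match;            /* substring searched in the vendor string */
  unsigned int quirks;          /* flags stored when the substring matches */
} QuirkDescriptor;

static const QuirkDescriptor quirk_table[] = {
  { "AMD Radeon", QUIRK_NO_PUT_IMAGE },
};

static unsigned int
detect_driver_quirks (VADisplay dpy)
{
  const char *vendor = vaQueryVendorString (dpy);
  unsigned int flags = 0;
  size_t i;

  for (i = 0; vendor && i < sizeof (quirk_table) / sizeof (quirk_table[0]); i++) {
    if (strstr (vendor, quirk_table[i].match))
      flags |= quirk_table[i].quirks;
  }
  return flags;
}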
This commit tries to centralize the selection of the vaCreateSurfaces
version, instead of having fallbacks everywhere.
These fallbacks are hacks, added because new drivers use the latest
version of vaCreateSurfaces (with surface attributes) [1], while old
drivers (or profiles, such as the JPEG decoder in i965) might rather
use the old version.
In order to select which version to use, there's a detection hack:
each config context has a list of valid formats; in the case of the
JPEG decoder, the list only contains "rare" formats (IMC3, GRAY8)
which aren't handled correctly by the current gstreamer-vaapi
code [2].
The hack consists in checking whether the format list contains an
arbitrary preferred format (one supposedly well supported by
gstreamer-vaapi, mostly NV12). If no preferred colour format is
found, the old version of vaCreateSurfaces is used, and the surfaces
will be mapped into an image with their own color format.
1. https://bugzilla.gnome.org/show_bug.cgi?id=797143
2. https://bugzilla.gnome.org/show_bug.cgi?id=797222
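A condensed sketch of the selection; in current libva both cases go
through the same vaCreateSurfaces entry point, so the "old version"
amounts to passing no surface attributes:

#include <va/va.h>

static VAStatus
create_surfaces (VADisplay dpy, unsigned int width, unsigned int height,
    VASurfaceID * surfaces, unsigned int num, int has_preferred_format)
{
  VASurfaceAttrib attrib = {
    .type = VASurfaceAttribPixelFormat,
    .flags = VA_SURFACE_ATTRIB_SETTABLE,
    .value = { .type = VAGenericValueTypeInteger,
               .value = { .i = VA_FOURCC_NV12 } },
  };

  if (has_preferred_format)
    /* New style: request the preferred format explicitly. */
    return vaCreateSurfaces (dpy, VA_RT_FORMAT_YUV420, width, height,
        surfaces, num, &attrib, 1);

  /* Old style: no attributes; the driver picks its own format and the
   * surface is later mapped into an image with that format. */
  return vaCreateSurfaces (dpy, VA_RT_FORMAT_YUV420, width, height,
      surfaces, num, NULL, 0);
}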
When baseline-as-constrained is set, the decoder will expose support
for baseline decoding and assume that the baseline content is
constrained-baseline. This can be handy for decoding streams in
hardware that it would otherwise not be possible to decode. A lot of
baseline content is in fact constrained.
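The remapping itself is small; a sketch assuming the gstreamer-vaapi
profile enums (the exposed property name may differ):

#include <glib.h>
#include <gst/vaapi/gstvaapiprofile.h>

/* Treat plain baseline content as constrained baseline when requested. */
static GstVaapiProfile
maybe_remap_baseline (GstVaapiProfile profile,
    gboolean baseline_as_constrained)
{
  if (baseline_as_constrained && profile == GST_VAAPI_PROFILE_H264_BASELINE)
    return GST_VAAPI_PROFILE_H264_CONSTRAINED_BASELINE;
  return profile;
}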
The formats array is always created in order to keep the logic simple
and to avoid broken caps. If this formats array doesn't contain any
elements, it has to be unreffed and the function should return NULL.
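In pattern form, a minimal sketch with a GArray as the commit
describes:

#include <glib.h>

/* Return the collected formats, or NULL if nothing was added, so callers
 * never build broken/empty caps. */
static GArray *
finish_formats (GArray * formats)
{
  if (formats->len == 0) {
    g_array_unref (formats);
    return NULL;
  }
  return formats;
}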
When the tune is NONE, we can now choose the entrypoint freely, so
the GST_VAAPI_ENCODER_TUNE macro may not return the correct current
entrypoint.
We also delay the CTU size calculation until after the entrypoint has
been decided.
FEI encoders are neither actively maintained nor tested, and they use
infrastructure that is changing; FEI is holding back this effort.
Also, it is necessary to rethink how FEI can be used in GStreamer.
Instead of using a parent structure that has to be derived by API
consumers, this change proposes a simplification using the common GTK
pattern of passing a function pointer and user data, which will be
passed as its parameter. That user data contains the state, and the
function will be called to update that state.
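A minimal sketch of the pattern (all names illustrative):

#include <glib.h>

typedef gboolean (*UpdateStateFunc) (gpointer user_data);

typedef struct
{
  UpdateStateFunc update;   /* supplied by the API consumer */
  gpointer user_data;       /* caller-owned state handed back to the callback */
} Consumer;

/* The library calls back into the consumer instead of requiring it to
 * derive a parent structure. */
static gboolean
consumer_update_state (Consumer * self)
{
  return self->update (self->user_data);
}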
This new API allows the user to call a single method (process)
which handles the [display] lock/unlock logic internally for
them.
This API supersedes the risky begin, render, end API.
It eliminates the need for the user to call a lock method
(process_begin) before processing the input buffers
(process_render) and calling an unlock method (process_end)
afterwards.
See #219
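A minimal sketch of the idea, with illustrative names:

#include <glib.h>

/* The lock/unlock pair the old begin/render/end triplet exposed to
 * users is now handled internally by a single call. */
typedef struct
{
  GMutex display_lock;
} Processor;

static gboolean
processor_process (Processor * self,
    gboolean (*render) (gpointer data), gpointer data)
{
  gboolean ok;

  g_mutex_lock (&self->display_lock);     /* was: process_begin() */
  ok = render (data);                     /* was: process_render() */
  g_mutex_unlock (&self->display_lock);   /* was: process_end() */

  return ok;
}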
The current get_profile just returns one possible profile for the
encoder, which is not enough. For example, if we want to support the
HEVC 4:4:4 profile, the input of the encoder should be VUYA rather
than NV12 as in the HEVC main profile. So the command line:
gst-launch-1.0 videotestsrc num-buffers=200 ! capsfilter \
caps=video/x-raw,format=VUYA,width=800,height=600 ! vaapih265enc \
tune=low-power init-qp=30 ! fakesink
can not work, because vaapih265enc just reports NV12 in its sink caps;
we need to specify the profile explicitly, like:
gst-launch-1.0 videotestsrc num-buffers=200 ! capsfilter \
caps=video/x-raw,format=VUYA,width=800,height=600 ! vaapih265enc \
tune=low-power init-qp=30 ! capsfilter caps=video/x-h265, \
profile=main-444 ! fakesink
The encoder should have the ability to choose the profile based on
the input format automatically. If the input video format is VUYA,
the main-444 profile should be chosen automatically.
We modify get_allowed_profiles of each encoder subclass to return an
array of all supported profiles based on downstream's allowed caps,
or to return NULL if no valid profiles are specified by downstream.
If no allowed profiles are found, all profiles which belong to the
current encoder's codec will be the candidates.
The function gst_vaapi_encoder_get_surface_attributes collects the
surface attributes for the profile list we just got.
So for this case, both NV12 and VUYA should be returned.
TODO: some codecs, like VP9, need to implement the get_profile()
function.
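For illustration, the VUYA -> main-444 mapping could look like this
(the helper name and the exact format set are assumptions, not the
actual implementation):

#include <gst/video/video.h>

/* Map an input video format to an H.265 caps profile string. */
static const gchar *
h265_profile_for_format (GstVideoFormat format)
{
  switch (format) {
    case GST_VIDEO_FORMAT_VUYA:
      return "main-444";
    case GST_VIDEO_FORMAT_P010_10LE:
      return "main-10";
    default:
      return "main";
  }
}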
We can get all the information about the video format in one shot
when we create the test context for getting the supported formats.
The current way of getting the width and height ranges is
inefficient, since it calls the function
gst_vaapi_profile_caps_append_encoder() and creates another temporary
context to detect the resolution information.
Signed-off-by: Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
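A sketch of the one-shot query on the test context, with plain libva
(gstreamer-vaapi wraps this):

#include <stdlib.h>
#include <va/va.h>

/* Query formats and resolution limits with a single call on the config. */
static void
query_surface_info (VADisplay dpy, VAConfigID config,
    unsigned int *max_width, unsigned int *max_height)
{
  unsigned int i, num = 0;
  VASurfaceAttrib *attribs;

  vaQuerySurfaceAttributes (dpy, config, NULL, &num);   /* query count */
  attribs = malloc (num * sizeof (VASurfaceAttrib));
  vaQuerySurfaceAttributes (dpy, config, attribs, &num);

  for (i = 0; i < num; i++) {
    if (attribs[i].type == VASurfaceAttribMaxWidth)
      *max_width = attribs[i].value.value.i;
    else if (attribs[i].type == VASurfaceAttribMaxHeight)
      *max_height = attribs[i].value.value.i;
    /* VASurfaceAttribPixelFormat entries carry one supported fourcc each,
     * so the format list comes out of the very same call. */
  }
  free (attribs);
}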