Since bufferproxy and surface are not referenced circularly, there's
no need to keep, in the buffer proxy, a reference to the GstMemory
where it is held. This patch removes that handling.
The bufferproxy may reference the surface and the surface may also
reference the bufferproxy, producing a circular reference, which might
lead to serious resource leak problems.
Now, to make the relationship clearer, the bufferproxy's reference is
transferred to the surface, while the bufferproxy just keeps the surface's
address without increasing its reference count.
The surface can be created through a bufferproxy like in
gst_vaapi_surface_new_with_dma_buf_handle(), and the surface might
get its bufferproxy via gst_vaapi_surface_get_dma_buf_handle(). In
both cases the surface holds a reference to the bufferproxy.
This fixes a segfault when running the pipeline below with the iHD driver
(commit efe5e9a) on ICL:
gst-launch-1.0 videotestsrc ! vaapivp9enc tune=low-power ! vaapivp9dec ! \
fakesink
In HEVC, we can consider the -intra profiles a subset of the non-intra
profiles. The -intra profiles just contain I frames, so we can
definitely use the non-intra profile's context to decode them.
Signed-off-by: Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
This is a workaround for intel-media-driver bug
https://github.com/intel/media-driver/issues/865
The driver will force the RC method to CBR for HEVCe
when it parses the HRD param. Thus, any RC method
param submitted "prior" to the HRD param will be lost.
Therefore, VBR, ICQ and QVBR for HEVCe can't be
effectively enabled if the RC method param "precedes"
the HRD param.
To work around this issue, set the HRD param before
the RC method param so the driver will parse the RC
method param "after" the HRD param.
Afaict, other codecs in the driver (and other drivers)
do not appear to be dependent on the order of HRD and
RC param submission.
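A minimal sketch of the intended ordering, using plain VA-API calls rather
than the actual gstreamer-vaapi helpers (function names below are
illustrative):

  #include <string.h>
  #include <va/va.h>

  /* Pack one misc parameter into a VA buffer and submit it immediately. */
  static VAStatus
  submit_misc_param (VADisplay dpy, VAContextID ctx,
      VAEncMiscParameterType type, const void *param, size_t size)
  {
    VABufferID buf_id;
    VAEncMiscParameterBuffer *misc;
    VAStatus status;

    status = vaCreateBuffer (dpy, ctx, VAEncMiscParameterBufferType,
        sizeof (VAEncMiscParameterBuffer) + size, 1, NULL, &buf_id);
    if (status != VA_STATUS_SUCCESS)
      return status;
    vaMapBuffer (dpy, buf_id, (void **) &misc);
    misc->type = type;
    memcpy (misc->data, param, size);
    vaUnmapBuffer (dpy, buf_id);
    return vaRenderPicture (dpy, ctx, &buf_id, 1);
  }

  /* Workaround order: HRD first, then the RC method, so iHD does not reset
   * the RC method to CBR after it parses the HRD parameter. */
  static void
  submit_rate_control_params (VADisplay dpy, VAContextID ctx,
      const VAEncMiscParameterHRD * hrd,
      const VAEncMiscParameterRateControl * rc)
  {
    submit_misc_param (dpy, ctx, VAEncMiscParameterTypeHRD, hrd, sizeof (*hrd));
    submit_misc_param (dpy, ctx, VAEncMiscParameterTypeRateControl,
        rc, sizeof (*rc));
  }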
GstVaapiPixmap is an abstract base class whose only implementation
was GstVaapiPixmapX11. This class was used for a special type of
rendering in the test apps, completely unrelated to GStreamer.
Since gstreamer-vaapi is no longer a general-purpose wrapper for VA-API,
we should remove this unused API.
This removal drops the libxrender dependency.
If dependent_slice_segment_flag is true, most of the slice info is derived
from the last slice. So we need to check the slice type after calling
populate_dependent_slice_hdr().
Since commit 32bf6f1e, GLTextureUpload is broken because i965
doesn't properly report RGBA support. It would be possible to use RGBx,
but GLTextureUpload only negotiates RGBA.
The simplest fix for this regression is to add the RGBA format
synthetically to the internal format map.
Instead of breaking at the first found quirk in the table, iterate over
the whole table, so it is feasible to add several quirks for one driver,
one per array element.
The intel-media-driver (iHD) can't convert output color
primaries when doing YUV to/from RGB CSC. Thus, we must
keep the output color primaries the same as the input
color primaries for this case.
Fixes #238
When creating surfaces it is possible to pass usage hints to VA,
so the driver may do some optimizations.
This commit adds the handling of encoding/decoding hints.
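As a rough sketch (plain VA-API, encoder case; the helper name is
hypothetical), the hint is passed as a surface attribute at creation time:

  #include <va/va.h>

  /* Create one NV12-capable surface, hinting the driver that it will be
   * used by an encoder. */
  static VAStatus
  create_surface_with_usage_hint (VADisplay dpy, unsigned int width,
      unsigned int height, VASurfaceID * surface)
  {
    VASurfaceAttrib attrib = {
      .type = VASurfaceAttribUsageHint,
      .flags = VA_SURFACE_ATTRIB_SETTABLE,
      .value.type = VAGenericValueTypeInteger,
      .value.value.i = VA_SURFACE_ATTRIB_USAGE_HINT_ENCODER,
    };

    return vaCreateSurfaces (dpy, VA_RT_FORMAT_YUV420, width, height,
        surface, 1, &attrib, 1);
  }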
I've just discovered that the iHD driver on Skylake doesn't have the
VideoProc entry point; hence, on this platform, when vaapioverlay is
registered, critical warnings are raised because blend doesn't have a
display assigned.
As it is possible to have drivers without EntryPointVideoProc, it is
required to handle this gracefully. This patch does that: it only tries to
register vaapioverlay if the testing display has VPP, and the finalize()
vmethods, in filter and blend, bail out if the display is NULL.
Commit 1168d6d5 introduced a regression: decode_sps() stores the unit's
parser info in the SPS array. If that parser info comes from decoding
codec data, it will have an undefined state, which might
break ensure_sps().
This patch sets the parser info state, when decoding codec data, to
the internal parser state. This is similar to the h264 decoder approach.
Original-patch-by: Xu Guangxin <guangxin.xu@intel.com>
VAProcColorStandardExplicit and associated VAProcColorProperties
(primaries, transfer and matrix) are not supported until
VA-API 1.2.0.
Use VAProcColorStandardNone instead of VAProcColorStandardExplicit
if VA-API < 1.2.0.
Fixes#231
Addresses #228 on iHD side. It seems iHD can't handle
VAProcColorStandardSRGB in all situations for vpp. But
it has no problem when we specify the sRGB parameters
via VAProcColorStandardExplicit parameters.
We've always sent VA_SOURCE_RANGE_UNKNOWN to the driver.
And, the [iHD] driver essentially computes the same color
range as gstreamer when we send VA_SOURCE_RANGE_UNKNOWN for
cases where gstreamer computes it automatically. But,
if the user wants to make it explicit, we should try
to honor it.
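A sketch of the mapping this implies (the helper name is hypothetical):
GStreamer's explicit ranges map to the VA ranges, and everything else stays
unknown, as before:

  #include <gst/video/video.h>
  #include <va/va_vpp.h>

  static guint8
  to_va_color_range (GstVideoColorRange range)
  {
    switch (range) {
      case GST_VIDEO_COLOR_RANGE_0_255:
        return VA_SOURCE_RANGE_FULL;
      case GST_VIDEO_COLOR_RANGE_16_235:
        return VA_SOURCE_RANGE_REDUCED;
      default:
        return VA_SOURCE_RANGE_UNKNOWN;
    }
  }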
This mechanism comes from ffmpeg vaapi implementation, where they have
their own quirks.
A specific driver is identified by a substring present in the vendor
string. If that substring is found, a set of bitwise flags is stored.
These flags can be accessed through the function
gst_vaapi_display_has_driver_quirks().
The purpose of this first quirk is to disable trying put image for the
AMD Gallium driver (see [1]).
1. https://gitlab.freedesktop.org/gstreamer/gstreamer-vaapi/merge_requests/72
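Roughly, the mechanism looks like the sketch below (the names and the
vendor substring are illustrative, not the exact ones in the patch); note
that every entry is checked, so one driver can accumulate several quirks:

  #include <string.h>

  typedef struct {
    const char *match_string;   /* substring of vaQueryVendorString () */
    unsigned int quirks;        /* bitwise OR of quirk flags */
  } DriverQuirkEntry;

  enum {
    QUIRK_NO_PUT_IMAGE = 1 << 0,  /* e.g. AMD Gallium, see [1] */
  };

  static const DriverQuirkEntry quirk_table[] = {
    { "Mesa Gallium driver", QUIRK_NO_PUT_IMAGE },
  };

  static unsigned int
  lookup_driver_quirks (const char *vendor_string)
  {
    unsigned int quirks = 0;
    size_t i;

    for (i = 0; i < sizeof (quirk_table) / sizeof (quirk_table[0]); i++) {
      if (strstr (vendor_string, quirk_table[i].match_string))
        quirks |= quirk_table[i].quirks;  /* keep iterating: no break */
    }
    return quirks;
  }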
This commit tries to centralize the selection of vaCreateSurfaces
version, instead of having fallbacks everywhere.
These fallbacks are hacks, added because new drivers use the latest
version of vaCreateSurfaces (with surface attributes) [1], while
old drivers (or profiles, such as the JPEG decoder in i965) might rather
use the old version.
In order to select which method to use, there's a detection hack: each
config context has a list of valid formats; in the case of the JPEG decoder
the list only contains "rare" 4:2:2 formats (IMC3, GRAY8) which aren't
handled correctly by the current gstreamer-vaapi code [2].
The hack consists in identifying whether the format list contains an
arbitrary preferred format (which is supposedly well supported by
gstreamer-vaapi, mostly NV12). If no preferred colour format is found,
the old version of vaCreateSurfaces is used, and the surfaces will
be mapped into an image with their own color format.
1. https://bugzilla.gnome.org/show_bug.cgi?id=797143
2. https://bugzilla.gnome.org/show_bug.cgi?id=797222
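A sketch of the centralized selection (illustrative names, plain VA-API):
when a preferred format such as NV12 is available, the attribute-based
vaCreateSurfaces is used, otherwise the attribute-less call is kept:

  #include <va/va.h>

  static VAStatus
  create_surfaces_for_config (VADisplay dpy, unsigned int rt_format,
      unsigned int fourcc, int has_preferred_format,
      unsigned int width, unsigned int height,
      VASurfaceID * surfaces, unsigned int num_surfaces)
  {
    if (has_preferred_format) {
      /* New style: request an explicit pixel format, e.g. VA_FOURCC_NV12. */
      VASurfaceAttrib attrib = {
        .type = VASurfaceAttribPixelFormat,
        .flags = VA_SURFACE_ATTRIB_SETTABLE,
        .value.type = VAGenericValueTypeInteger,
        .value.value.i = fourcc,
      };
      return vaCreateSurfaces (dpy, rt_format, width, height,
          surfaces, num_surfaces, &attrib, 1);
    }
    /* Old style: no attributes; the surfaces will later be mapped into an
     * image with their own color format. */
    return vaCreateSurfaces (dpy, rt_format, width, height,
        surfaces, num_surfaces, NULL, 0);
  }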
When baseline-as-constrained is set, the decoder will expose support
for baseline decoding and assume that the baseline content is
constrained-baseline. This can be handy to decode streams in hardware
that would otherwise not be possible to decode. A lot of baseline
content is in fact constrained.
The formats array is always created, in order to keep the logic and
to avoid broken caps. If this formats array doesn't contain any
elements, it has to be unreffed and the function should return NULL.
When the tune is NONE, we can now choose the entrypoint freely, so the
GST_VAAPI_ENCODER_TUNE macro may not return the correct current
entrypoint.
We also delay the CTU size calculation until after the entrypoint has
been decided.
The FEI encoders are not actively maintained nor tested, and they are
using infrastructure that is changing, while FEI is holding this
effort back.
Also, it is necessary to rethink how FEI can be used in GStreamer.
Instead of using a parent structure that has to be derived by API
consumers, this change proposes a simplification by using the common
GTK pattern of passing a function pointer and user data, which will
be passed as its parameter. The user data contains the state, and the
function will be called to update that state.
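A minimal sketch of the pattern (the names are hypothetical):

  #include <glib.h>

  /* The consumer passes a callback and a user-data pointer; the user data
   * holds the state and the callback is invoked to update it. */
  typedef gboolean (*UpdateStateFunc) (gpointer user_data);

  typedef struct {
    UpdateStateFunc func;
    gpointer user_data;
  } StateUpdater;

  static gboolean
  state_updater_run (StateUpdater * updater)
  {
    return updater->func (updater->user_data);
  }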
This new API allows the user to call a single method (process)
which handles the [display] lock/unlock logic internally for
them.
This API supersedes the risky begin, render, end API.
It eliminates the need for the user to call a lock method
(process_begin) before processing the input buffers
(process_render) and calling an unlock method (process_end)
afterwards.
See #219
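Conceptually (hypothetical names, not the exact API), the single method does
this internally:

  #include <gst/gst.h>
  #include <gst/vaapi/gstvaapidisplay.h>

  /* process() takes the display lock, renders the buffer, and releases the
   * lock, replacing the old process_begin/process_render/process_end trio. */
  typedef gboolean (*RenderFunc) (GstVaapiDisplay * display, GstBuffer * buf);

  static gboolean
  process (GstVaapiDisplay * display, GstBuffer * buf, RenderFunc render)
  {
    gboolean ret;

    gst_vaapi_display_lock (display);
    ret = render (display, buf);
    gst_vaapi_display_unlock (display);
    return ret;
  }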
The current get_profile just returns one possible profile for the encoder,
which is not enough. For example, if we want to support the HEVC 4:4:4
profile, the input of the encoder should be VUYA rather than the NV12 used
in the HEVC main profile. So the command line:
gst-launch-1.0 videotestsrc num-buffers=200 ! capsfilter \
caps=video/x-raw,format=VUYA,width=800,height=600 ! vaapih265enc \
tune=low-power init-qp=30 ! fakesink
can not work because vaapih265enc just reports NV12 in its sink caps; we
need to specify the profile explicitly, like:
gst-launch-1.0 videotestsrc num-buffers=200 ! capsfilter \
caps=video/x-raw,format=VUYA,width=800,height=600 ! vaapih265enc \
tune=low-power init-qp=30 ! capsfilter caps=video/x-h265, \
profile=main-444 ! fakesink
The encoder should have the ability to choose the profile based on the
input format automatically. If the input video format is VUYA, the main-444
profile should be chosen automatically.
We modify get_allowed_profiles of each encoder subclass to return
an array of all supported profiles based on downstream's allowed caps, or
NULL if no valid profiles are specified by downstream.
If no allowed profiles are found, all profiles which belong to the current
encoder's codec will be the candidates.
The function gst_vaapi_encoder_get_surface_attributes collects the surface
attributes for the profile list we just got.
So, for this case, both NV12 and VUYA should be returned.
TODO: some codecs, like VP9, need to implement the get_profile() function.
We can get all the information about the video format in one shot
when we create the test context for getting the supported formats.
The current way of getting the width and height ranges is inefficient,
since it calls the function gst_vaapi_profile_caps_append_encoder(),
which creates another temporary context to detect the resolution
information.
Signed-off-by: Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
Some profiles, such as H265_MAIN_444 on new Intel platforms, may only
support the ENTRYPOINT_SLICE_ENCODE_LP entrypoint. This leads to two
problems:
1. We need to specify the tune mode, like `vaapih265enc tune=low-power`,
every time we need to use this kind of profile; otherwise we can not
create the encoder context successfully.
2. More seriously, we set the entrypoint to a fixed value in
init_context_info(), so create_test_context_config() can not
create the test context for these profiles and can not get the
supported video formats, either.
We now change the entrypoint setting based on the tune option of the
encoder. If no tune property is provided, we just choose the first
available entrypoint.
Instead of init_context_info() setting the passed profile, it is
assumed that it has to be set by each encoder.
Signed-off-by: Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
The symbol GST_VAAPI_ENTRYPOINT_INVALID is just a representation of
zero, which was already used as an invalid value tacitly. This patch
only makes it explicit.
Signed-off-by: Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
We use GST_VAAPI_OBJECT_NATIVE_DISPLAY with the wrong parameter for X11
pixmap creation, which causes a crash if we run the internal test case
of:
test-decode --pixmap
The old way makes one config for each profile/entrypoint pair,
which is not very convenient for describing the relationship
between them. One profile may contain more than one entrypoint
within it, so a set-like data structure should be more suitable.
GstVaapiMiniObject and GstVaapiObject are deprecated.
This is the first step to remove them by porting GstVaapiSurface as
a GstMiniObject descendant.
Signed-off-by: Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
There are several internal functions with 'create' in their name, but they
don't create any new structure; rather, they initialize that
structure. Rename those functions to better reflect their purpose.
GstVaapiMiniObject and GstVaapiObject are deprecated.
This is the first step to remove them by porting GstVaapiCodedBuffer
as a GstMiniObject descendant.
Signed-off-by: Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
GstVaapiMiniObject and GstVaapiObject are deprecated. This is the
first step to remove them, by porting GstVaapiImage as a
GstMiniObject.
Signed-off-by: Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
The GstVaapiMiniObject is obsolete and we need to replace it. This
patch turns GstVaapiContext into a plain C structure with its own
reference counting mechanism.
Also this patch removes the unused overlay attributes.
Signed-off-by: Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
Issue detected by Coverity
`info_to_pack.h264_slice_header` is always allocated by
gst_vaapi_feipak_h264_encode(), thus checking it before freeing it
afterwards doesn't make much sense. But it needs to be freed on the error
path.
There may be a null pointer dereference, or else the comparison
against null is unnecessary.
In gst_vaapi_encoder_h264_fei_encode: All paths that lead to this null
pointer comparison already dereference the pointer earlier
Issue detected by Coverity
An unsigned value can never be negative, so this test will always
evaluate the same way.
In add_slice_headers: An unsigned value can never be less than 0
Issue detected by Coverity
There may be a null pointer dereference, or else the comparison
against null is unnecessary.
In gst_vaapi_encoder_h264_fei_encode: All paths that lead to this null
pointer comparison already dereference the pointer earlier
Issue detected by Coverity
The `info_to_pak` variable in gst_vaapi_encoder_h264_fei_encode() is
declared on the stack, but it is freed in
gst_vaapi_feienc_h264_encode() as if it were declared on the heap.
This patch initializes the structure and removes the free.
A non-heap pointer is placed on the free list, likely causing a crash
later.
In gst_vaapi_encoder_h264_fei_encode: Free of an address-of
expression, which can never be heap allocated.
Issue detected by Coverity
If the FEI mode is not handled, the created resources should be
released and an error code returned.
The system resource will not be reclaimed and reused, reducing the
future availability of the resource.
In gst_vaapi_encoder_h264_fei_encode: Leak of memory or pointers to
system resources
Don't try to decode until the first I-frame is received within the
currently active sequence. The i965 H265 decoder doesn't show any
artifacts, but it crashes.
Fixes: #98
GST_VAAPI_FORMATS_ALL collects all the formats declared in video-format
as a caps template string, and makes them available in caps with the
memory:VASurface feature.
Fixes: #199
The code is essentially the same for getting all op default
values. Thus, use a macro to help minimize code duplication
and [hopefully] encourage using the same mechanism for all
default getters.
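A rough sketch of the kind of macro meant here (all the names below are
hypothetical, not the actual ones in the patch):

  #include <glib-object.h>

  /* Hypothetical lookup from op name to its GParamSpec; the real code would
   * consult the filter's op registry. */
  static GParamSpec *
  find_op_param_spec (const char *op_name)
  {
    (void) op_name;
    return NULL;
  }

  /* One macro expands to a default-value getter per filter op, so every
   * getter goes through the same GParamSpec-based mechanism. */
  #define DEFINE_OP_DEFAULT_GETTER(op)                                  \
    static gfloat                                                       \
    op_get_default_##op (void)                                          \
    {                                                                   \
      GParamSpec *pspec = find_op_param_spec (#op);                     \
      return pspec ? G_PARAM_SPEC_FLOAT (pspec)->default_value : 0.0f;  \
    }

  DEFINE_OP_DEFAULT_GETTER (brightness)
  DEFINE_OP_DEFAULT_GETTER (contrast)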
Currently the parameter of the skin-tone-enhancement filter is forced
to zero. In fact, it could be set to a different value by the user.
So create a new property named "skin-tone-enhancement-level"
to accept the user-defined parameter value.
At the same time, skin-tone-enhancement is marked as deprecated.
When skin-tone-enhancement-level is set, skin-tone-enhancement
will be ignored.
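For example (assuming the property is exposed on vaapipostproc; the level
value here is arbitrary):
gst-launch-1.0 videotestsrc ! vaapipostproc skin-tone-enhancement-level=3 ! \
    vaapisink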
The g_return_val_if_fail() documentation says:
If expr evaluates to FALSE, the current function should be
considered to have undefined behaviour (a programmer error).
The only correct solution to such an error is to change the
module that is calling the current function, so that it avoids
this incorrect call.
So it was misused in a couple of parts of the H264 and H265 internal
decoders. This patch changes those to plain conditionals.
Also, a couple of code-style fixes are included.
Found by static analysis. encoder->mb_width * encoder->mb_height
is evaluated using 32-bit arithmetic before being widened. Thus, cast
at least one of these to guint64 to avoid overflow.
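A minimal sketch of the fix:

  #include <glib.h>

  /* Widen one operand first so the product is computed in 64-bit
   * arithmetic instead of overflowing 32-bit math before being widened. */
  static guint64
  num_macroblocks (guint32 mb_width, guint32 mb_height)
  {
    return (guint64) mb_width * mb_height;
  }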
The YUV formats have no ambiguity for drivers, so we can add them all.
Some old drivers (i965) do not implement the full get/put image functions
but can use the derive image functions for YUV formats. Such a driver does
not report those formats correctly in the image query, but will derive a
YUV format image from the surface. The dynamic mapping of YUV formats
would block that manner of working.
Adding more YUV format mappings has no side effect. So, considering the
legacy driver conformance, we add all YUV format mappings statically
and map the RGB formats dynamically.
Fix: #189
Fix: #190
Multiple different scenarios could break the display thread creation and
end up blocked waiting for the thread to be created. Fix them all by
correctly waiting for a new boolean to become valid.
Some streams have erroneous data introducing an unknown NAL type. There are
also kinds of NAL types we do not want to handle. The old manner set a
decoder error when encountering these, which caused a latent crash bug.
The decoder may successfully decode the picture and insert it into the DPB,
but erroneous NAL units after the AU cause the post-unit error and make
that frame be dropped. The later output of the picture still wants to ref
that frame and crashes.
There is no need to set a decoder error when we can not recognize or handle
the NAL unit; just skip it and continue.
Fix: #191
This patch makes use of GST_PARAM_USER_SHIFT to define the internal
param flag in encoders that decides which parameters to expose. Thus
gstreamer-vaapi will not interfere with any change in GStreamer in the
future.
Also, the internal symbol was changed to
GST_VAAPI_PARAM_ENCODER_EXPOSURE to keep the namespacing.
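A sketch of what such a definition can look like (the exact offset above
GST_PARAM_USER_SHIFT is an assumption):

  #include <gst/gst.h>

  /* Internal flag placed above GST_PARAM_USER_SHIFT, so it can never
   * collide with param flags GStreamer itself may add in the future. */
  #define GST_VAAPI_PARAM_ENCODER_EXPOSURE \
      ((GParamFlags) (1 << (GST_PARAM_USER_SHIFT + 0)))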
The command line:
gst-launch-1.0 filesrc location=some_name.mjpeg ! jpegparse !
vaapijpegdec ! videoconvert ! video/x-raw,format=I420 ! vaapisink
will crash on the i965 driver because of a missing pointer check.
We now generate the video format map between GST format and VA format
dynamically based on the image format returned by vaQueryImageFormats.
The i965 driver does not report the image formats for the 444P and Y800
fourccs, while the JPEG decoder context's VASurfaceAttribPixelFormat uses
them. We can not recognize these formats and pass a NULL pointer to
gst_vaapi_surface_new_from_formats.
We need to add a pointer check here and let the fallback logic handle
this case correctly.
Other drivers work well.
Improve the mapping between VA formats and GST formats. The new map
will be generated dynamically, based on the result of querying the image
formats in the VA driver. Also consider the ambiguity of RGB color
formats in LSB mode.
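A sketch of the dynamic query step (plain VA-API; building the map on top
of the returned list is omitted):

  #include <stdlib.h>
  #include <va/va.h>

  /* Ask the driver for its supported image formats; the VA<->GST format
   * map is then built from this list instead of a static table. */
  static VAImageFormat *
  query_image_formats (VADisplay dpy, int *num_formats)
  {
    int max_formats = vaMaxNumImageFormats (dpy);
    VAImageFormat *formats = malloc (max_formats * sizeof (VAImageFormat));

    if (!formats)
      return NULL;
    if (vaQueryImageFormats (dpy, formats, num_formats) != VA_STATUS_SUCCESS) {
      free (formats);
      return NULL;
    }
    return formats;
  }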
We no longer need this obsolete set_property function after
switching to the standard GObject property manner.
Also delete the old encoder property enum in the header file.