Introduce gst_vaapi_surface_proxy_new_from_pool() to allocate a new surface
proxy from the context surface pool. This change also makes sure to retain
the parent surface pool in the proxy.
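For illustration, a rough sketch of the shape this could take; the proxy
internals, surface_proxy_alloc() and the g_object_ref() on the pool are
placeholders, not necessarily the actual tree code:

    GstVaapiSurfaceProxy *
    gst_vaapi_surface_proxy_new_from_pool (GstVaapiSurfacePool * pool)
    {
        GstVaapiSurfaceProxy *proxy;
        GstVaapiSurface *surface;

        g_return_val_if_fail (pool != NULL, NULL);

        /* Acquire a free surface from the context surface pool */
        surface = gst_vaapi_video_pool_get_object (GST_VAAPI_VIDEO_POOL (pool));
        if (!surface)
            return NULL;

        /* surface_proxy_alloc() stands in for the actual allocation */
        proxy = surface_proxy_alloc ();
        proxy->pool = g_object_ref (pool);   /* retain the parent pool */
        proxy->surface = surface;
        return proxy;
    }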
Besides, it was also totally useless to attach/detach the parent context
to the VA surface each time we acquire/release it. Since the whole context
owns all associated VA surfaces, we can mark this as such once and for all.
Make gst_vaapi_reply_to_query() first check whether the query argument
is actually a video-context query, i.e. with type GST_QUERY_TYPE_CUSTOM.
Then, make sure vaapisink propagates the query to the parent class if
it is not a video-context query.
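A simplified sketch of the vaapisink side of this change; the GstVaapiSink
field and parent-class names are assumptions:

    static gboolean
    gst_vaapisink_query (GstBaseSink * base_sink, GstQuery * query)
    {
        GstVaapiSink *const sink = GST_VAAPISINK (base_sink);

        /* Video-context (custom) queries are answered directly... */
        if (gst_vaapi_reply_to_query (query, sink->display))
            return TRUE;

        /* ...anything else is propagated to the parent class */
        return GST_BASE_SINK_CLASS (gst_vaapisink_parent_class)->query (base_sink,
            query);
    }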
Add new gst_vaapi_video_buffer_new() helper function that allocates a video
buffer from a GstVaapiVideoMeta. Also remove obsolete and useless function
gst_vaapi_video_buffer_get_meta().
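A minimal sketch of such a helper, assuming a gst_buffer_set_vaapi_video_meta()
style setter; the exact buffer allocation path differs between 0.10 and 1.0:

    GstBuffer *
    gst_vaapi_video_buffer_new (GstVaapiVideoMeta * meta)
    {
        GstBuffer *buffer;

        g_return_val_if_fail (meta != NULL, NULL);

        buffer = gst_buffer_new ();
        if (buffer)
            gst_buffer_set_vaapi_video_meta (buffer, meta);
        return buffer;
    }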
Move GstVaapiVideoMeta from core libgstvaapi decoding library to the
actual plugin elements. That's only useful there. Also inline reference
counting code from GstVaapiMiniObject.
Make sure the libgstvaapi core decoding library doesn't include unneeded
dependencies. So, move GstVaapiVideoConverterGLX out to the plugins instead.
Besides, even if the vaapisink element is not used, we are bound to have
a correctly populated GstSurfaceBuffer from vaapidecode.
Also clean up the file along the way.
In decode_codec_data(), force initialization of format to zero so that
we can catch cases where the codec-data has neither "format" nor
"wmvversion" fields, thus making it possible to fail gracefully in this
case.
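Sketch of the idea; map_wmv_version_to_format() is a hypothetical fallback
helper and the error status is just one plausible choice:

    guint32 format = 0;   /* stays zero if neither field is usable */
    gint version;

    if (!gst_structure_get_fourcc (structure, "format", &format) &&
        gst_structure_get_int (structure, "wmvversion", &version))
        format = map_wmv_version_to_format (version);

    if (!format)
        return GST_VAAPI_DECODER_STATUS_ERROR_UNSUPPORTED_CODEC;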
Add a new GstVaapiDecoder::decode_codec_data() hook to actually decode
codec-data in the decoder sub-class. Provide a common shared helper
function that does the actual work and delegates further to the sub-class.
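Illustrative shape of the shared helper; the codec_data storage and the
exact hook signature in GstVaapiDecoderClass are assumptions:

    static GstVaapiDecoderStatus
    gst_vaapi_decoder_decode_codec_data (GstVaapiDecoder * decoder)
    {
        GstVaapiDecoderClass *const klass = GST_VAAPI_DECODER_GET_CLASS (decoder);
        GstBuffer *const codec_data = decoder->priv->codec_data;

        if (!codec_data || !klass->decode_codec_data)
            return GST_VAAPI_DECODER_STATUS_SUCCESS;

        /* Delegate the actual parsing to the codec-specific sub-class */
        return klass->decode_codec_data (decoder,
            GST_BUFFER_DATA (codec_data), GST_BUFFER_SIZE (codec_data));
    }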
Drop the GstVaapiDecoderUnit buffer field (GstBuffer) since it's totally
useless nowadays, as creating sub-buffers doesn't bring any value and
actually means more memory allocations. We can't do without it in the
JPEG and MPEG-4:2 decoders, though.
If the raw YUV buffer was created from vaapisink, through the buffer_alloc()
hook, then it will have a valid GstVaapiVideoMeta object attached to it.
However, we previously assumed in that case that it was a "native" VA buffer,
thus not calling into GstVaapiUploader::process().
Use gst_element_class_set_static_metadata() from GStreamer 1.0, which is
basically the same as gst_element_class_set_details_simple() in a
GStreamer 0.10 context.
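In a gstcompat.h style header, this boils down to a simple fallback
define for 0.10 builds, e.g.:

    #if !GST_CHECK_VERSION(1,0,0)
    # define gst_element_class_set_static_metadata \
        gst_element_class_set_details_simple
    #endif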
Move the GstImplementsInterface and GstVideoContext support functions up
so as to keep a clear separation between the plugin element and its
interface hooks.
Use GstVideoInfo and gst_video_info_from_caps() helper wherever possible.
Also use the newly added gst_vaapi_image_format_from_structure() helper
in GstVaapiUploader::ensure_allowed_caps().
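For reference, the usual GstVideoInfo pattern (the local width/height/format
variables are just for illustration):

    GstVideoInfo vi;

    if (!gst_video_info_from_caps (&vi, caps))
        return FALSE;

    width  = GST_VIDEO_INFO_WIDTH (&vi);
    height = GST_VIDEO_INFO_HEIGHT (&vi);
    format = GST_VIDEO_INFO_FORMAT (&vi);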
gst_vaapi_video_buffer_new_from_buffer() needs to reference the source
buffer video meta since it would be unreferenced from the get_buffer()
helper function. For other cases, we still use (steal) the newly created
video meta.
Use gst_vaapi_image_format_from_structure() helper in test-display and
then extract a VAImageFormat from it instead of relying on GstCaps for
YUV and RGB formats.
Use the newer gst_video_overlay_rectangle_get_pixels_unscaled_raw()
helper function with GStreamer 0.10 compatible semantics, or at least
semantics that approximate the current meaning. Basically, this just
amounts to moving the helper to gstcompat.h.
Fix ensure_image() to only zero-initialize the first line of each plane.
Properly initializing each plane to its full vertical resolution would
require actually computing it based on the image format.
In particular, for NV12 images, the UV plane has half the vertical
resolution of the Y plane. So using the full image height to initialize
the UV plane would obviously lead to a buffer overflow. Likewise for
other YUV formats.
Since ensure_image() is only a helper function to initialize something,
and not necessarily the whole thing, it is fine to initialize the first
line only. Besides, the target surface is not rendered either.
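A sketch of the resulting initialization, assuming the image is already
mapped and using the per-plane pitch so only the first line is touched:

    guint i;

    for (i = 0; i < gst_vaapi_image_get_plane_count (image); i++) {
        guchar *const plane = gst_vaapi_image_get_plane (image, i);

        /* First line only: the plane's true height depends on the chroma
         * subsampling, e.g. height/2 for the UV plane of NV12 images */
        memset (plane, 0, gst_vaapi_image_get_pitch (image, i));
    }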
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Generated source files were missing a dependency on the complete set of
generated header files. e.g. gstvideodecoder.c requires gstvideoutils.h
to build and almost every codec parser source depends on parserutils.h.
https://bugs.freedesktop.org/show_bug.cgi?id=59575
Signed-off-by: Xiang, Haihao <haihao.xiang@intel.com>
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Force luma_log2_weight_denom and chroma_log2_weight_denom to zero if
there is no pred_weight_table() that was parsed.
This is a workaround for the VA intel-driver on Ivy Bridge.
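Sketch of the workaround; have_pred_weight_table stands for whatever the
parsed slice header actually exposes, it is not the real field name:

    if (!have_pred_weight_table) {
        /* No pred_weight_table() parsed: don't leave stale values behind */
        slice_param->luma_log2_weight_denom   = 0;
        slice_param->chroma_log2_weight_denom = 0;
    }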
g_async_queue_timeout_pop() appeared in GLib 2.31.18. Implement it as
g_async_queue_timed_pop() with a GTimeVal as the final time to wait for
new data to arrive in the queue.
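A compat fallback along these lines, with the timeout expressed in
microseconds as in the GLib API:

    #if !GLIB_CHECK_VERSION(2,31,18)
    static inline gpointer
    g_async_queue_timeout_pop (GAsyncQueue * queue, guint64 timeout)
    {
        GTimeVal end_time;

        g_get_current_time (&end_time);
        g_time_val_add (&end_time, timeout);
        return g_async_queue_timed_pop (queue, &end_time);
    }
    #endif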
There shall be only one place to call decode_current_picture(), and this
is in the end_frame() hook. The EOS unit is processed after end_frame()
so this means we cannot have a valid picture to decode/output at this
point.
Improve robustness when some expected packets were not received yet or
were not correctly decoded. For example, don't try to decode a picture
if there is no valid sequence or picture header.
Fix gst_vaapi_decoder_get_surface() to only return frames with a valid
surface proxy, i.e. with a valid VA surface. This means that any frame
marked as decode-only is simply skipped.
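Simplified sketch of the selection loop; pop_output_frame() stands in for
the real frame queue handling, the surface proxy is assumed to be stored
as the frame's user data, and the ref call is whichever helper the proxy
type provides:

    while ((frame = pop_output_frame (decoder)) != NULL) {
        if (GST_VIDEO_CODEC_FRAME_IS_DECODE_ONLY (frame)) {
            /* No VA surface attached: skip the frame altogether */
            gst_video_codec_frame_unref (frame);
            continue;
        }
        *out_proxy_ptr =
            gst_vaapi_surface_proxy_ref (gst_video_codec_frame_get_user_data (frame));
        gst_video_codec_frame_unref (frame);
        return GST_VAAPI_DECODER_STATUS_SUCCESS;
    }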
If the decoder was not able to decode a frame because insufficient
information was available, e.g. missing sequence or picture header,
then allow the frame to be gracefully dropped without generating
any error.
It is also possible that a frame is not meant to be displayed but only
used as a reference, so dropping that frame is also a valid operation,
since the GstVideoDecoder base class holds extra references to that
GstVideoCodecFrame that need to be released.
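Sketch of the drop path in vaapidecode; the status name is hypothetical,
only gst_video_decoder_drop_frame() is the actual base-class call:

    if (status == GST_VAAPI_DECODER_STATUS_DROP_FRAME)
        return gst_video_decoder_drop_frame (GST_VIDEO_DECODER (decode), frame);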
Decode-only frames may not have a valid surface proxy. So, simply discard
them gracefully, i.e. don't create any meta information for them. The
GstVideoDecoder base class will properly handle this case and won't try
to push any buffer to downstream elements.
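Sketch of the corresponding check when building the output buffer; the
surface proxy is assumed to be carried as the frame's user data:

    GstVaapiSurfaceProxy *proxy;

    proxy = gst_video_codec_frame_get_user_data (frame);
    if (!proxy) {
        /* Decode-only frame: no VA surface, so don't create any meta;
         * the GstVideoDecoder base class then won't push a buffer */
        return GST_FLOW_OK;
    }
    /* ...otherwise attach the usual GstVaapiVideoMeta to the output buffer */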