Fix creation of the GLX texture so that it no longer depends on the
GstCaps video size, which could be wrong, especially in the presence of
frame cropping. Instead, use the size from the source VA surfaces.
An optimization could be to reduce the texture size to the actual visible
size on screen, i.e. scale down the texture size to match the screen
dimensions while preserving the VA surface aspect ratio. However, some
VA drivers don't honour that.
Add support for GstVideoCropMeta in GStreamer >= 1.0.x builds and gst-vaapi
specific meta information to hold video cropping details. Make the sink
support video cropping in X11 and GLX modes.
Some video clips may have a clipping region that needs to be propagated
to the renderer. These helper functions make it possible to attach that
clipping region, as a GstVaapiRectangle, to the video meta associated
with the buffer.
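As an illustration only, here is a minimal sketch of attaching such a
region with the standard GstVideoCropMeta API (GStreamer 1.0); the
gst-vaapi specific helpers mentioned above follow the same idea but
their exact names are not shown here:

    #include <gst/video/gstvideometa.h>

    /* Attach the visible region to a decoded buffer so that the sink
     * only renders the cropped area. */
    static void
    attach_crop_rect (GstBuffer * buffer, guint x, guint y,
        guint width, guint height)
    {
      GstVideoCropMeta *crop = gst_buffer_add_video_crop_meta (buffer);

      crop->x = x;
      crop->y = y;
      crop->width = width;
      crop->height = height;
    }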
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Expose all raw video formats in the static caps template since
vaapisink supports raw data. We will get the exact set of formats
supported by the driver dynamically through the _get_caps() routine.
This also fixes an inconsistency wrt. GStreamer 0.10 builds.
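For illustration, such a template could look as follows with the
GStreamer 1.0 macros (the variable name is made up):

    /* Sketch: expose all raw formats in the sink pad template; the
     * exact driver-supported subset is computed at runtime in the
     * _get_caps() routine. */
    static GstStaticPadTemplate gst_vaapisink_sink_factory =
        GST_STATIC_PAD_TEMPLATE ("sink",
            GST_PAD_SINK,
            GST_PAD_ALWAYS,
            GST_STATIC_CAPS (GST_VIDEO_CAPS_MAKE (GST_VIDEO_FORMATS_ALL)));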
https://bugzilla.gnome.org/show_bug.cgi?id=702178
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Now that VA/GLX capable buffers are generated by default on X11, and
thus depend on a VA/GLX display, we still want to use vaPutSurface()
for rendering since it is faster.
Anyway, OpenGL rendering in vaapisink was only meant for testing and
enabling "fancy" effects to play with. This has no real value. So,
disable OpenGL rendering by default.
If the gstreamer-vaapi plug-in elements are built with GLX support, then
try to allocate a GstVaapiDisplayGLX first before resorting to a VA/X11
display next.
https://bugzilla.gnome.org/show_bug.cgi?id=701742
Allow the plain gst_buffer_map() interface to work with gstreamer-vaapi
video buffers, i.e. expose the underlying GstVaapiSurfaceProxy to the
caller. This is the only sensible thing to do in this mode, as the
underlying surface pixels need to be extracted through an explicit
call to gst_video_frame_map() instead.
A possible use-case for this is to implement a "handoff" signal handler
on a fakesink or identity element for further processing.
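For example, a hypothetical "handoff" handler could look like the
following sketch, assuming a plain read map is accepted in this mode
and that info.data then carries the GstVaapiSurfaceProxy pointer
rather than raw pixels:

    /* Hypothetical fakesink "handoff" callback: a plain buffer map
     * yields the surface proxy; raw pixels would instead require
     * gst_video_frame_map(). */
    static void
    on_handoff (GstElement * fakesink, GstBuffer * buffer, GstPad * pad,
        gpointer user_data)
    {
      GstMapInfo info;

      if (gst_buffer_map (buffer, &info, GST_MAP_READ)) {
        gpointer surface_proxy = info.data;  /* GstVaapiSurfaceProxy */

        /* ... further processing of the VA surface ... */
        gst_buffer_unmap (buffer, &info);
      }
    }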
Fix gst_vaapi_video_allocator_new() to silently check for direct-rendering
mode support, and not trigger fatal-criticals if either test surface or
image could not be created. Typical case: pixel format mismatch, e.g. NV12
supported by most hardware vs. I420 supported by most software decoders.
On map, ensure the GST_MAP_WRITE flag is set since this is all we
support for now. Likewise, on unmap, make sure the VA image is
unmapped for either read or write, while still committing it to the
VA surface if a write was requested.
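From the caller's side, this boils down to the following sketch:

    GstMapInfo info;

    /* Only write mappings are supported for now; whatever is written
     * to info.data is committed to the VA surface on unmap. */
    if (gst_buffer_map (buffer, &info, GST_MAP_WRITE)) {
      /* ... fill info.data with raw video data ... */
      gst_buffer_unmap (buffer, &info);
    }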
In GStreamer 0.10 builds, gst_vaapi_uploader_get_buffer() was used,
but it exhibited a memory leak because the surface generated for the
GstVaapiVideoMeta completely lost its parent video pool. So, it was
not possible to release that surface back to the parent pool when the
meta was released, and memory consumption kept growing.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Since GST_VAAPI_IS_xxx_VIDEO_POOL() only tested for NULL and not the
underlying object type, gst_vaapi_video_meta_new_from_pool() was
completely broken. Fix this regression by using the newly provided
gst_vaapi_video_pool_get_object_type() function.
Add a gst_vaapi_decoder_get_frame_with_timeout() helper function that
waits for a frame to be decoded, up to the specified timeout in
microseconds, before returning to the caller.
This fixes a performance regression from 851cc0, whereby the vaapidecode
loop executed on the srcpad task was called too often, thus starving all
CPU resources.
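A sketch of the intended use from the srcpad task, assuming the
signature gst_vaapi_decoder_get_frame_with_timeout (decoder, &frame,
timeout_in_microseconds) and the usual decoder status codes:

    GstVideoCodecFrame *frame = NULL;
    GstVaapiDecoderStatus status;

    /* Wait up to 10 ms for the next decoded frame instead of
     * spinning on the srcpad task. */
    status = gst_vaapi_decoder_get_frame_with_timeout (decoder,
        &frame, 10000);
    if (status == GST_VAAPI_DECODER_STATUS_SUCCESS) {
      /* ... finish and push |frame| downstream ... */
    }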
Rework heuristics to detect when downstream element ran into errors,
and thus failing to release any VA surface in due time for the current
frame to get decoded. In particular, recalibrate the render time base
when the first frame gets submitted downstream, or when there is no
timestamp that could be inferred.
Rework GstVideoDecoder::handle_frame() to decode the current frame,
while possibly waiting for a free surface, and separately submit all
decoded frames from a task. This makes it possible to pop and render
decoded frames as soon as possible.
Fix a reference counting bug in passthrough mode, whereby the input
buffer was propagated as-is downstream through gst_pad_push() without
first increasing its reference count. This was a problem when
gst_pad_push() returned an error and we further decreased the reference
count of the input buffer.
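Essentially, the pattern is now (variable names are illustrative):

    /* gst_pad_push() always consumes one reference, so pass an extra
     * one; our own reference to the input buffer then stays valid
     * even if the push fails and the error path unrefs it again. */
    ret = gst_pad_push (srcpad, gst_buffer_ref (inbuf));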
Add support for interlaced streams with GStreamer 1.0 too. Basically,
this enables vaapipostproc, though it is not auto-plugged yet. We also
make sure to reply to CAPS queries, and happily handle CAPS events.
Make gst_vaapi_decoder_get_codec_state() return the original codec
state, i.e. make the GstVaapiDecoder object own the returned state, so
that callers that want an extra reference to it simply
gst_video_codec_state_ref() it before usage. This aligns the behaviour
with what we had before with gst_vaapi_decoder_get_caps().
This is an ABI-incompatible change; the library major version was
bumped from the previous release (0.5.2).
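Callers that need to keep the state around are expected to do
something like the following sketch (exact signature assumed from the
description above):

    /* The decoder owns the returned state; take an extra reference
     * if it is to be kept beyond this call. */
    GstVideoCodecState *state =
        gst_vaapi_decoder_get_codec_state (decoder);

    if (state)
      state = gst_video_codec_state_ref (state);
    /* ... use state, then gst_video_codec_state_unref() when done ... */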
Mark the following functions as internal, i.e. private to the vaapi plug-in:
- gst_vaapi_video_buffer_pool_get_type()
- gst_vaapi_video_converter_glx_get_type()
- gst_vaapi_video_converter_glx_new()
Implement GstSurfaceMeta API for GStreamer 1.0.x. Even though this is
an unstable/deprecated API, this makes it possible to support Clutter
sink with minimal changes. Tested against clutter-gst 1.9.92.
When render-mode is "overlay", it is not really useful to peek into
GstBaseSink::last_buffer, since we already have our own video_buffer
recorded and maintained in GstVaapiSink.
Fix a memory leak of GstSample objects in GstVideoOverlayInterface::expose().
This also fixes extra unreferencing of the underlying GstBuffer in the
common path afterwards (for both 0.10 and 1.0).
Fix the name of the plug-in element reported to gst-inspect-1.0, i.e. we
need an explicit definition for GStreamer >= 1.0 because GST_PLUGIN_DEFINE
incorrectly uses #name to create the plug-in name, instead of going
through macro expansion (which allows further expansion of macros) with
e.g. G_STRINGIFY().
Fix make dist to allow builds for either GStreamer 0.10 or 1.0, i.e.
make sure to include all source files in either case when generating
source tarballs.
Implement GstVideoMeta::{,un}map() to support raw YUV buffer uploads when
the last component is unmapped. Downloads are not supported yet. The aim
was to first support SW decoding + HW-accelerated rendering (vaapisink),
e.g. for Wayland.
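From a software producer's point of view, the upload path looks like
this sketch, where vinfo is assumed to be a GstVideoInfo matching the
negotiated caps:

    GstVideoFrame frame;

    /* Mapping goes through the GstVideoMeta map/unmap hooks; the
     * actual upload to the VA surface happens when the last plane is
     * unmapped. */
    if (gst_video_frame_map (&frame, &vinfo, buffer, GST_MAP_WRITE)) {
      /* ... write planes via GST_VIDEO_FRAME_PLANE_DATA (&frame, i) ... */
      gst_video_frame_unmap (&frame);
    }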
Handle GST_QUERY_CAPS, which is the GStreamer 1.0 mechanism to retrieve
the set of allowed caps, i.e. it works similarly to GstPad::get_caps().
This fixes fallback to SW decoding if no HW decoder is available.
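In the pad query handler, this amounts to something along the lines of
the sketch below, where get_allowed_caps() stands for the element's own
caps computation and is not a real function name:

    case GST_QUERY_CAPS: {
      GstCaps *filter = NULL;
      GstCaps *caps, *result;

      gst_query_parse_caps (query, &filter);
      caps = get_allowed_caps (pad);        /* hypothetical helper */
      result = filter ?
          gst_caps_intersect_full (filter, caps, GST_CAPS_INTERSECT_FIRST) :
          gst_caps_ref (caps);
      gst_query_set_caps_result (query, result);
      gst_caps_unref (result);
      gst_caps_unref (caps);
      res = TRUE;
      break;
    }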
Introduce a new configure option --with-gstreamer-api that determines
the desired GStreamer API to use. By default, GStreamer 1.0 is selected.
Also integrate more compatibility glue into gstcompat.h and plugins.
Use new GstVaapiVideoBufferPool to maintain video buffers. Implement
GstBaseSink::propose_allocation() to expose that pool to upstream
elements; and also implement GstVideoDecoder::decide_allocation() to
actually use that pool (from downstream), if any, or create one.
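A rough sketch of the propose_allocation() side, with the pool creation
details elided and function names made up for illustration:

    static gboolean
    gst_vaapisink_propose_allocation (GstBaseSink * base_sink,
        GstQuery * query)
    {
      GstCaps *caps = NULL;
      gboolean need_pool;
      GstVideoInfo vinfo;

      gst_query_parse_allocation (query, &caps, &need_pool);
      if (!caps || !gst_video_info_from_caps (&vinfo, caps))
        return FALSE;

      if (need_pool) {
        /* create_vaapi_buffer_pool() stands for the element-specific
         * GstVaapiVideoBufferPool setup, not shown here. */
        GstBufferPool *pool = create_vaapi_buffer_pool (base_sink, caps);

        gst_query_add_allocation_pool (query, pool,
            GST_VIDEO_INFO_SIZE (&vinfo), 0, 0);
        gst_object_unref (pool);
      }
      gst_query_add_allocation_meta (query, GST_VIDEO_META_API_TYPE, NULL);
      return TRUE;
    }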
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Add initial support for GstVaapiVideoMemory backed buffer pool. The memory
object currently holds a reference to GstVaapiVideoMeta.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Make it possible to copy GstVaapiVideoMeta objects, unless they contain
VA objects created from a GstVaapiVideoPool. This is mostly useful to
clone a GstVaapiVideoMeta object containing a VA surface proxy, so as to
alter its rendering flags.
Fix GstVaapiVideoMeta to allow VA objects to be destroyed when they are
reset to NULL. i.e. make gst_vaapi_video_meta_set_{image,surface}() and
gst_vaapi_video_meta_set_surface_proxy() actually clear VA objects when
argument is NULL.
Port the vaapidecode and vaapisink plugins to GStreamer API >= 1.0. This
is rather minimalistic, so as to test the basic functionality.
Disable the vaapiupload, vaapidownload and vaapipostproc plugins. The
latter needs polishing wrt. GStreamer 1.x functionality, and the former
two are completely phased out in favor of the GstVaapiVideoMemory
map/unmap facilities, which are yet to be implemented.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Improve check for raw YUV format modes by avoiding checks against strings
("video/x-raw-yuv") for each new GstBuffer allocation. In the usual case,
GstBaseSink::set_caps() is called first and if VA surface format mode is
used, then GstBaseSink::buffer_alloc() is not called. If the latter is
called before set_caps(), then we just make a full check. This one is
pretty rare though, e.g. it usually happens once for custom pipelines.
Fix gst_vaapi_apply_composition() to not fail if no overlay composition
was found, i.e. return success (TRUE). This was harmless, though the
extra debug messages were not nice.
This is a regression introduced by commit 95b8659.
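The fix amounts to treating a missing composition as a no-op, as in
this sketch of the 1.0 path:

    /* No overlay composition attached to the buffer: nothing to do,
     * but this is not a failure. */
    GstVideoOverlayCompositionMeta *cmeta =
        gst_buffer_get_video_overlay_composition_meta (buffer);

    if (!cmeta)
      return TRUE;
    /* ... otherwise apply cmeta->overlay through VA subpictures ... */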
Don't return static caps that don't mean anything for the underlying codecs
that are actually supported for decoding. i.e. always allocate a VA display
and retrieve the exact set of HW decoders available. That VA display may be
re-used later on during negotiation through GstVideoContext "prepare-context".
This fixes fallback to SW decoding if no HW decoder is available.
Make gst_vaapi_reply_to_query() first check whether the query argument
is actually a video-context query, i.e. with type GST_QUERY_TYPE_CUSTOM.
Then, make sure vaapisink propagates the query to the parent class if
it is not a video-context query.
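In 1.0-style terms, the sink-side handling boils down to the following
sketch; the cast macro, field name and the argument order of
gst_vaapi_reply_to_query() are assumptions:

    static gboolean
    gst_vaapisink_query (GstBaseSink * base_sink, GstQuery * query)
    {
      GstVaapiSink *sink = GST_VAAPISINK (base_sink);

      /* Only custom (video-context) queries are answered locally. */
      if (GST_QUERY_TYPE (query) == GST_QUERY_CUSTOM &&
          gst_vaapi_reply_to_query (query, sink->display))
        return TRUE;

      /* Everything else goes to the parent class. */
      return GST_BASE_SINK_CLASS (parent_class)->query (base_sink, query);
    }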
Add new gst_vaapi_video_buffer_new() helper function that allocates a video
buffer from a GstVaapiVideoMeta. Also remove obsolete and useless function
gst_vaapi_video_buffer_get_meta().
Move GstVaapiVideoMeta from core libgstvaapi decoding library to the
actual plugin elements. That's only useful there. Also inline reference
counting code from GstVaapiMiniObject.
Make sure the libgstvaapi core decoding library doesn't include unneeded
dependencies. So, move GstVaapiVideoConverterGLX out to the plugins
instead. Besides, even if the vaapisink element is not used, we are
bound to have a correctly populated GstSurfaceBuffer from vaapidecode.
Also clean up the file along the way.