Make sure to create the GLX context only once the window object has completed
its creation. Since gl_resize() relies on the newly created window size,
we cannot simply overload the GstVaapiWindowClass::create() hook.
So, we just call into gst_vaapi_window_glx_ensure_context() once the
window object is created, in the gst_vaapi_window_glx_new*() functions.
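For illustration, a minimal sketch of the intended call order (the constructor
body, the cleanup call and the gst_vaapi_window_glx_ensure_context() signature
shown below are assumptions, not the actual implementation):

    /* Hypothetical sketch: the GLX context is created only after the
     * window object itself, so gl_resize() can rely on the real window
     * size. Names and signatures are assumptions for illustration. */
    GstVaapiWindow *
    gst_vaapi_window_glx_new (GstVaapiDisplay *display, guint width, guint height)
    {
        GstVaapiWindow *window;

        /* First, let the base window object complete its creation. */
        window = create_base_window (display, width, height); /* hypothetical */
        if (!window)
            return NULL;

        /* Only now set up the GLX context; the actual function may also
         * accept a foreign GLX context. */
        if (!gst_vaapi_window_glx_ensure_context (window)) {
            gst_vaapi_window_unref (window); /* assumed cleanup call */
            return NULL;
        }
        return window;
    }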
Make it possible to add an extra filter to most of the display cache
lookup functions, so that the GstVaapiDisplay instance can really match
a compatible, existing display by type instead of relying on extra
string tags (e.g. an "X11:" prefix, etc.).
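As a hypothetical illustration of such a typed filter (names and signatures
below are made up for this example and do not reflect the actual cache API):

    /* Hypothetical filter predicate: match a cached display by its
     * concrete display type rather than by a string tag such as an
     * "X11:" prefix. The callback signature and the display-type
     * getter shown here are assumptions for illustration. */
    static gboolean
    match_display_by_type (GstVaapiDisplay *display, gpointer user_data)
    {
        GstVaapiDisplayType wanted = GPOINTER_TO_UINT (user_data);

        return gst_vaapi_display_get_display_type (display) == wanted;
    }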
Improve the checks for the display cache infrastructure. In particular, for
the X11 and GLX backends, we need to make sure that a GstVaapiDisplayX11 can
be created from another GstVaapiDisplayGLX, i.e. that the underlying X11 and
VA displays can be shared. Besides, allocating a GstVaapiDisplayGLX while a
GstVaapiDisplayX11 already exists has to generate distinct VA displays.
In GStreamer 0.10 builds, gst_vaapi_uploader_get_buffer() was used,
but it exhibited a memory leak: the surface generated for the
GstVaapiVideoMeta completely lost track of its parent video pool. As a
result, that surface could not be released back to the parent pool when
the meta was released, and memory consumption kept growing.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Since GST_VAAPI_IS_xxx_VIDEO_POOL() only tested for NULL and not for
the underlying object type, gst_vaapi_video_meta_new_from_pool() was
thereby completely broken. Fix this regression by using the newly
provided gst_vaapi_video_pool_get_object_type() function.
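For illustration, the check now boils down to something along these lines
(the object-type constant below is an assumption):

    /* Sketch: validate the pool by the type of objects it manages,
     * not merely by a NULL check. The constant name below is an
     * assumption for illustration. */
    static gboolean
    is_surface_pool (GstVaapiVideoPool *pool)
    {
        return pool &&
            gst_vaapi_video_pool_get_object_type (pool) ==
            GST_VAAPI_VIDEO_POOL_OBJECT_TYPE_SURFACE;
    }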
Drop the obsolete GST_VAAPI_IS_xxx() helper macros: since we no longer
derive from GObject, those macros were only checking whether the
argument was NULL or not. This is now irrelevant, and even confusing
to some extent, because we no longer have run-time type checking.
Note: this incurs more manual type checking (review), but libgstvaapi is
rather small, so this is manageable.
The whole set of libgstvaapi libraries got a major refresh to get rid of
GObject. This is a fundamental change that requires a new SONAME. More changes
are underway to streamline the core libraries.
So far, the net result is a reduction of the .text size (code) by 32KB, i.e. -10%.
On one particular test (the sintel HD trailer), the total number of executed
instructions was reduced by 8%.
Make GstVaapiID a gsize instead of guessing an underlying integer type large
enough to hold all bits of a pointer. Also drop GST_VAAPI_ID_NONE since
it is plain zero and is no longer passed as a vararg.
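In essence (the helper macro below is shown as an assumption):

    /* Sketch of the revised definition: an ID is a gsize, which is
     * guaranteed to be wide enough to hold all bits of a pointer.
     * The helper macro is an assumption for illustration. */
    typedef gsize GstVaapiID;

    #define GST_VAAPI_ID(id) ((GstVaapiID) (id))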
Port GstVaapiDecoder and GstVaapiDecoder{MPEG2,MPEG4,JPEG,H264,VC1} to
GstVaapiMiniObject. Add the gst_vaapi_decoder_set_codec_state_changed_func()
helper function to let the user register a callback that is triggered
whenever the codec state (e.g. caps) changes.
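A hedged usage sketch follows; the callback signature is an assumption
derived from the description above:

    /* Register a callback fired whenever the codec state (e.g. caps)
     * changes. The callback signature is an assumption for illustration. */
    static void
    on_codec_state_changed (GstVaapiDecoder *decoder,
        const GstVideoCodecState *codec_state, gpointer user_data)
    {
        /* e.g. renegotiate downstream caps from codec_state->caps */
    }

    static void
    watch_codec_state (GstVaapiDecoder *decoder, gpointer user_data)
    {
        gst_vaapi_decoder_set_codec_state_changed_func (decoder,
            on_codec_state_changed, user_data);
    }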
Port GstVaapiVideoPool, GstVaapiSurfacePool and GstVaapiImagePool to
GstVaapiMiniObject. Drop gst_vaapi_video_pool_get_caps() since it had
not been used for a long time. Make the object allocators static, i.e.
local to the shared library.
Drop support for user-defined data since this capability was not used
so far and GstVaapiMiniObject is meant to be the smallest reference-counted
object type. Add the missing GST_VAAPI_MINI_OBJECT_CLASS() helper macro.
Besides, since GstVaapiMiniObject is a libgstvaapi-internal object, it
is also possible to further simplify the layout of the object, i.e. merge
GstVaapiMiniObjectBase into GstVaapiMiniObject.
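A simplified illustration of what the merged layout could look like (field
names are assumptions, not the actual structure):

    /* Simplified, illustrative layout: a single object struct carrying
     * the class pointer and reference count, with no separate
     * GstVaapiMiniObjectBase and no user-data field. */
    typedef struct _GstVaapiMiniObject      GstVaapiMiniObject;
    typedef struct _GstVaapiMiniObjectClass GstVaapiMiniObjectClass;

    struct _GstVaapiMiniObject {
        const GstVaapiMiniObjectClass *object_class;
        volatile gint                  ref_count;
        guint                          flags;
    };

    struct _GstVaapiMiniObjectClass {
        guint size;                                    /* instance size */
        void (*finalize) (GstVaapiMiniObject *object); /* cleanup hook  */
    };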
Propagate the picture size from the bitstream to the GstVaapiDecoder,
and to any subsequent user who installed a handler on the notify::caps
signal. This fixes decoding of TS streams where the demuxer failed to
extract the required information.
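For reference, a minimal sketch of how such a user could watch for the change,
assuming the decoder exposes a "caps" GObject property as the notify::caps
signal implies:

    #include <glib-object.h>

    /* Watch for picture-size (caps) updates propagated from the
     * bitstream; plain GObject notification machinery. */
    static void
    on_decoder_caps_notify (GObject *object, GParamSpec *pspec,
        gpointer user_data)
    {
        /* Query the updated caps and reconfigure downstream accordingly. */
    }

    static void
    watch_decoder_caps (gpointer decoder)
    {
        g_signal_connect (decoder, "notify::caps",
            G_CALLBACK (on_decoder_caps_notify), NULL);
    }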
Add the gst_vaapi_decoder_get_frame_with_timeout() helper function that waits
for a frame to be decoded, up to the specified timeout in microseconds,
prior to returning to the caller.
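A hedged usage sketch (the return type and status code are assumptions):

    /* Wait up to 10 ms (timeout in microseconds) for the next decoded
     * frame. The return type and status code are assumptions for
     * illustration. */
    static gboolean
    try_fetch_decoded_frame (GstVaapiDecoder *decoder)
    {
        GstVideoCodecFrame *frame = NULL;
        GstVaapiDecoderStatus status;

        status = gst_vaapi_decoder_get_frame_with_timeout (decoder, &frame, 10000);
        if (status != GST_VAAPI_DECODER_STATUS_SUCCESS)
            return FALSE;

        /* ... push the frame downstream ... */
        gst_video_codec_frame_unref (frame);
        return TRUE;
    }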
This is a fix for the performance regression from 851cc0, whereby the vaapidecode
loop executed on the srcpad task was called too often, thus starving all CPU
resources.
Rework the heuristics to detect when a downstream element ran into errors,
thus failing to release any VA surface in due time for the current
frame to get decoded. In particular, recalibrate the render time base
when the first frame gets submitted downstream, or when there is no
timestamp that could be inferred.
Rework GstVideoDecoder::handle_frame() to decode the current frame,
while possibly waiting for a free surface, and separately submit all
decoded frames from a task. This makes it possible to pop and render
decoded frames as soon as possible.
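A rough sketch of the split; element accessors and helper names here are
hypothetical:

    /* Sketch of the reworked flow: handle_frame() only decodes, possibly
     * blocking until a free surface is available, while a separate
     * srcpad task pops and pushes decoded frames as soon as they are
     * ready. Names and signatures are hypothetical. */
    static GstFlowReturn
    handle_frame (GstVideoDecoder *vdec, GstVideoCodecFrame *frame)
    {
        GstVaapiDecoder *decoder = get_vaapi_decoder (vdec); /* hypothetical */

        /* May wait for a free VA surface before returning. */
        if (gst_vaapi_decoder_decode (decoder, frame) !=
                GST_VAAPI_DECODER_STATUS_SUCCESS)
            return GST_FLOW_ERROR;
        return GST_FLOW_OK;
    }

    static void
    decode_loop (GstVideoDecoder *vdec)
    {
        GstVaapiDecoder *decoder = get_vaapi_decoder (vdec); /* hypothetical */
        GstVideoCodecFrame *out_frame = NULL;

        /* Runs on the srcpad task: pop decoded frames as soon as possible. */
        if (gst_vaapi_decoder_get_frame_with_timeout (decoder, &out_frame,
                10000) == GST_VAAPI_DECODER_STATUS_SUCCESS)
            submit_decoded_frame (vdec, out_frame); /* hypothetical */
    }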
Fix a reference counting bug in passthrough mode, whereby the input buffer
was propagated as-is downstream through gst_pad_push() without increasing
its reference count beforehand. This was a problem when gst_pad_push() returned
an error and we further decreased the reference count of the input buffer.
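The fix boils down to handing an extra reference to gst_pad_push(), which takes
ownership of the buffer it is given; a minimal sketch:

    #include <gst/gst.h>

    /* Passthrough path: gst_pad_push() takes ownership of the buffer it
     * receives, so push an extra reference. The caller keeps its own
     * reference and can still unref the input buffer safely when the
     * push returns an error. */
    static GstFlowReturn
    push_passthrough_buffer (GstPad *srcpad, GstBuffer *inbuf)
    {
        return gst_pad_push (srcpad, gst_buffer_ref (inbuf));
    }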
Add support for interlaced streams with GStreamer 1.0 too. Basically,
this enables vaapipostproc, though it is not auto-plugged yet. We also
make sure to reply to CAPS queries, and happily handle CAPS events.
Fix support for interlaced contents with GStreamer 0.10. In particular,
propagate GstVaapiSurfaceProxy frame flags to GstVideoCodecFrame flags
correctly.
This is a regression from commit 87e5717.
Rename GstVaapiDecoderFrame to GstVaapiParserFrame because this data
structure was only useful for parsing, and a proper GstVaapiDecoderFrame
instance will be created instead.
Fix regression from 0.4-branch whereby GstVaapiSurfaceProxy no longer
held any information about the expected presentation timestamp, frame
duration or additional flags like interlaced or top-field-first.
Use the new GstVaapiSurfaceProxy internal helper functions to propagate the
necessary GstVideoCodecFrame flags to vaapidecode (GStreamer 0.10).
Also make GstVaapiDecoder push_frame() operate similarly to drop_frame(),
i.e. increase the GstVideoCodecFrame reference count in push_frame() rather
than in gst_vaapi_picture_output().
Add more attributes for raw decoding modes, i.e. directly through the
libgstvaapi helper library. In particular, add presentation timestamp,
duration and a couple of flags (interlaced, TFF, RFF, one-field).