Use g_object_class_install_properties() to install GstVaapiDisplay properties.
It is useful to maintain properties as GParamSpec pointers so that "notify"
signals can be raised by pspec, instead of by name, in the future.
A rendering mode can be either "overlay" or "textured blit".
The former mode implies that a VA surface used for rendering can't be
re-used right away for decoding, so the sink shall make provisions to
retain the associated surface proxy until the next surface is to be
displayed.
The latter mode implies that the VA surface is implicitly copied to an
intermediate backing store, or back buffer of a frame buffer, so the
associated surface proxy can be disposed right away.
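The two modes could be captured in an enumeration along these lines; the type
and value names below are illustrative, not necessarily the exact API:

    typedef enum {
        /* Direct scan-out of the VA surface: the sink must keep the
         * surface proxy alive until the next frame is displayed. */
        GST_VAAPI_RENDER_MODE_OVERLAY = 1,
        /* Blit to an intermediate backing store: the surface proxy
         * can be released as soon as the copy has completed. */
        GST_VAAPI_RENDER_MODE_TEXTURE
    } GstVaapiRenderMode;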
The VA display attributes are mapped to properties so as to maintain the
GStreamer terminology. Properties are to be identified by name, but internal
functions are available to lookup the property by the actual VA display
attribute type.
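Such a lookup could be sketched as follows; the table contents and the helper
name are assumptions made for illustration:

    #include <glib.h>
    #include <va/va.h>

    typedef struct {
        VADisplayAttribType va_type;
        const gchar        *prop_name;
    } DisplayAttribMap;

    static const DisplayAttribMap g_attrib_map[] = {
        { VADisplayAttribBrightness, "brightness" },
        { VADisplayAttribContrast,   "contrast"   },
        { VADisplayAttribHue,        "hue"        },
        { VADisplayAttribSaturation, "saturation" },
    };

    static const gchar *
    find_property_name (VADisplayAttribType va_type)
    {
        guint i;

        for (i = 0; i < G_N_ELEMENTS (g_attrib_map); i++) {
            if (g_attrib_map[i].va_type == va_type)
                return g_attrib_map[i].prop_name;
        }
        return NULL;
    }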
decode_current_picture() had been converted to return a gboolean instead
of a GstVaapiDecoderStatus, so we were not getting out of the decode
loop as expected, or an error could be triggered instead.
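The pitfall can be illustrated with a reduced example, using assumed names:
since SUCCESS is conventionally 0 in such status enums, a gboolean TRUE
return value reads as an error code.

    #include <glib.h>

    typedef enum {
        GST_VAAPI_DECODER_STATUS_SUCCESS = 0,          /* assumed values */
        GST_VAAPI_DECODER_STATUS_ERROR_UNKNOWN = 1
    } GstVaapiDecoderStatus;

    static gboolean
    decode_current_picture (void)
    {
        return TRUE;    /* gboolean convention: TRUE (1) on success */
    }

    static GstVaapiDecoderStatus
    decode_step (void)
    {
        /* Buggy mix of conventions: TRUE (1) stored into a status
         * variable means ERROR_UNKNOWN, not SUCCESS, so the check
         * below treats a perfectly successful decode as a failure. */
        GstVaapiDecoderStatus status = decode_current_picture ();

        if (status != GST_VAAPI_DECODER_STATUS_SUCCESS)
            return status;
        return GST_VAAPI_DECODER_STATUS_SUCCESS;
    }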
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Integrate the start code prefix in the slice data buffer that is submitted
to the hardware. VA-API specifies that slice_data_offset is the offset to
the first byte of slice data. And, for MPEG-2, slice() data begins with
the slice_start_code. Some VA driver implementations (EMGD) expect this.
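A sketch of the resulting submission, with assumed surrounding code; the point
is that the buffer handed to the driver begins at the 00 00 01 start-code
prefix, so slice_data_offset can simply be 0:

    #include <string.h>
    #include <va/va.h>

    static VAStatus
    submit_mpeg2_slice (VADisplay dpy, VAContextID ctx,
        const unsigned char *slice_with_start_code, unsigned int size)
    {
        VASliceParameterBufferMPEG2 slice_param;
        VABufferID param_buf, data_buf;
        VAStatus status;

        memset (&slice_param, 0, sizeof (slice_param));
        slice_param.slice_data_size   = size; /* includes the start code */
        slice_param.slice_data_offset = 0;    /* slice() starts right here */
        slice_param.slice_data_flag   = VA_SLICE_DATA_FLAG_ALL;

        status = vaCreateBuffer (dpy, ctx, VASliceParameterBufferType,
            sizeof (slice_param), 1, &slice_param, &param_buf);
        if (status != VA_STATUS_SUCCESS)
            return status;

        return vaCreateBuffer (dpy, ctx, VASliceDataBufferType,
            size, 1, (void *) slice_with_start_code, &data_buf);
    }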
This patch allows for regenerating the configure script from a build
directory that is not the actual source directory.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Use g_object_notify_by_pspec() instead of g_object_notify() so as to
avoid a property name lookup, i.e. this makes notifications to the
`vaapidecode' element faster.
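Reusing the hypothetical g_properties[] array from the earlier sketch, a
setter would then notify as follows:

    static void
    gst_vaapi_display_set_display_name (GstVaapiDisplay *display,
        const gchar *name)
    {
        /* ... store the new name into the private structure ... */

        /* No name-to-pspec lookup here, unlike g_object_notify() */
        g_object_notify_by_pspec (G_OBJECT (display),
            g_properties[PROP_DISPLAY_NAME]);
    }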
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Two elements in the luminance quantization table were wrong. So,
gst_jpeg_get_default_quantization_tables() now reconstructs tables
in zig-zag order from the standard ones (Tables K.1 and K.2).
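A sketch of the reconstruction, with assumed table and function names; only
the natural-order Table K.1 data is stored, and the zig-zag ordered table is
derived from it at run time:

    #include <glib.h>

    static const guint8 default_luminance_quant_table[64] = {
        /* Table K.1, natural (raster) order */
        16, 11, 10, 16,  24,  40,  51,  61,
        12, 12, 14, 19,  26,  58,  60,  55,
        14, 13, 16, 24,  40,  57,  69,  56,
        14, 17, 22, 29,  51,  87,  80,  62,
        18, 22, 37, 56,  68, 109, 103,  77,
        24, 35, 55, 64,  81, 104, 113,  92,
        49, 64, 78, 87, 103, 121, 120, 101,
        72, 92, 95, 98, 112, 100, 103,  99
    };

    /* The i-th zig-zag coefficient lives at this natural-order index */
    static const guint8 zigzag_index[64] = {
         0,  1,  8, 16,  9,  2,  3, 10,
        17, 24, 32, 25, 18, 11,  4,  5,
        12, 19, 26, 33, 40, 48, 41, 34,
        27, 20, 13,  6,  7, 14, 21, 28,
        35, 42, 49, 56, 57, 50, 43, 36,
        29, 22, 15, 23, 30, 37, 44, 51,
        58, 59, 52, 45, 38, 31, 39, 46,
        53, 60, 61, 54, 47, 55, 62, 63
    };

    static void
    build_zigzag_quant_table (guint8 quant_table[64])
    {
        guint i;

        for (i = 0; i < 64; i++)
            quant_table[i] = default_luminance_quant_table[zigzag_index[i]];
    }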
... instead of having them pre-calculated. This saves around 1.5 KB
of data in the DSO but requires gst_jpeg_get_default_huffman_tables()
to do more work. However, the client application would typically have to
call that function at most once.
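One way to sketch the run-time construction, with assumed struct and array
names; only the standard BITS/HUFFVAL data (here Table K.3, luminance DC) is
kept in the DSO, and the full table is filled on demand:

    #include <glib.h>
    #include <string.h>

    typedef struct {
        guint8 num_codes[16];  /* BITS: code count per code length 1..16 */
        guint8 values[162];    /* HUFFVAL: symbols, in code order */
    } HuffmanTable;

    static const guint8 dc_luminance_bits[16] =
        { 0, 1, 5, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0 };
    static const guint8 dc_luminance_values[12] =
        { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 };

    static void
    build_dc_luminance_table (HuffmanTable *table)
    {
        memset (table, 0, sizeof (*table));
        memcpy (table->num_codes, dc_luminance_bits,
            sizeof (dc_luminance_bits));
        memcpy (table->values, dc_luminance_values,
            sizeof (dc_luminance_values));
    }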
This is not useful in practice, except for raw performance evaluation when
the sink is invoked with display=drm sync=false. fakesink could also be
used, though.
If vaapisink is in the GStreamer pipeline, then we shall allocate a
unique GstVaapiDisplay and propagate it upstream, i.e. subsequent
queries from vaapidecode shall get a valid answer from vaapisink.
Move display types from gstvaapipluginutil.* to gstvaapidisplay.* so that
the characterization of a GstVaapiDisplay can be simplified. Also rename the "auto"
type to "any", and add a "display-type" attribute.
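The renamed enumeration could then look as follows; the exact values are
illustrative assumptions:

    typedef enum {
        GST_VAAPI_DISPLAY_TYPE_ANY = 0,   /* was "auto" */
        GST_VAAPI_DISPLAY_TYPE_X11,
        GST_VAAPI_DISPLAY_TYPE_GLX
    } GstVaapiDisplayType;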
This improves display name comparisons by always allocating a valid display
name. This also helps to disambiguate lookups by name in the global display
cache, should a new backend be implemented.
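The idea can be sketched with an assumed helper name: a NULL display name is
replaced by an allocated default, so comparisons and cache lookups never have
to special-case NULL.

    #include <glib.h>

    static gchar *
    canonicalize_display_name (const gchar *name)
    {
        /* NULL means "default display"; always hand back an
         * allocated string so every display carries a valid name. */
        return g_strdup (name ? name : "");
    }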
The video buffer creation routines shall actually be internal to gstreamer-vaapi
plugin elements. So deprecate any explicit creation routines that are not the
new *_typed_new*() variants.
Introduce new typed constructors internal to gstreamer-vaapi plugin elements.
This avoids duplication of code, and makes it possible to further implement
generic video buffer creation routines that automatically map to base or GLX
variants.
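What such a typed constructor could look like, assuming the GStreamer
0.10-era API; the function name matches the pattern above, but the body is an
assumption:

    #include <gst/gst.h>

    typedef struct _GstVaapiDisplay GstVaapiDisplay;  /* opaque here */

    static GstBuffer *
    gst_vaapi_video_buffer_typed_new (GType type, GstVaapiDisplay *display)
    {
        GstBuffer *buffer;

        /* The GType argument selects the concrete class (base or
         * GLX video buffer); GstBuffer is a GstMiniObject in 0.10,
         * hence gst_mini_object_new(). */
        buffer = GST_BUFFER (gst_mini_object_new (type));
        if (!buffer)
            return NULL;

        /* ... bind the buffer to @display here (omitted) ... */
        return buffer;
    }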
If the GLX window was created from a foreign Display, then that same Display
shall be used for subsequent glXMakeCurrent() calls. This means that
gl_create_context() will now use the same Display as the parent, if available.
This fixes cluttersink with the Intel GenX VA driver.
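A sketch of the fix's idea, with assumed function and parameter names:

    #include <GL/glx.h>

    static GLXContext
    create_context_on_parent_display (Display *parent_dpy,
        XVisualInfo *vi, GLXContext share_list)
    {
        /* Reuse the parent's (possibly foreign) Display connection;
         * the later glXMakeCurrent() call must use this very same
         * Display, or context operations will fail. */
        return glXCreateContext (parent_dpy, vi, share_list, True);
    }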
vaapisink is now built with support for multiple display types, whenever
they are enabled. The new "display" attribute is used to select a particular
renderer.
This flag is obsolete. It was meant to explicitly enable/disable VA/GLX API
support, or fall back to TFP+FBO if this API is not found. Now, we check for
the VA/GLX API by default if --enable-glx is set. If this API is not found,
we now default to use TFP+FBO.
Note: TFP+FBO, i.e. using vaPutSurface(), is now also a deprecated usage and
will be removed in the future. If GLX rendering is requested, then the VA/GLX
API shall be used, as it covers most usages; e.g. the AMD driver can't render
to an X pixmap yet.
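For reference, a sketch of the VA/GLX path that supersedes the TFP+FBO usage;
the helper name and error handling are assumptions:

    #include <GL/gl.h>
    #include <va/va_glx.h>

    static VAStatus
    copy_surface_to_texture (VADisplay dpy, VASurfaceID surface,
        GLuint texture)
    {
        void *gl_surface;
        VAStatus status;

        /* Associate a VA/GLX surface with the GL texture... */
        status = vaCreateSurfaceGLX (dpy, GL_TEXTURE_2D, texture,
            &gl_surface);
        if (status != VA_STATUS_SUCCESS)
            return status;

        /* ...and copy the decoded VA surface into it. */
        status = vaCopySurfaceGLX (dpy, gl_surface, surface,
            VA_FRAME_PICTURE);
        vaDestroySurfaceGLX (dpy, gl_surface);
        return status;
    }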