Improve the semantics of gst_vaapi_decoder_put_buffer() when an empty
buffer is passed in. An empty buffer is a buffer with a NULL data pointer
or a size equal to zero. In this case, the buffer is simply
skipped and the function returns TRUE. A NULL buffer argument still
marks the end-of-stream.
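For illustration, the resulting contract from the caller's side
(`empty_buffer' is a hypothetical zero-sized GstBuffer):

    /* empty buffer (NULL data or zero size): skipped, returns TRUE */
    success = gst_vaapi_decoder_put_buffer (decoder, empty_buffer);

    /* NULL buffer argument: still marks the end-of-stream */
    gst_vaapi_decoder_put_buffer (decoder, NULL);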
Rather than always making the surface fullscreen, implement the
set_fullscreen vfunc on GstVaapiWindow and then set the shell surface
fullscreen or not depending on that.
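A minimal sketch of what the Wayland implementation could look like
(the accessor and the fullscreen method below are illustrative, not
the actual implementation):

    static gboolean
    gst_vaapi_window_wayland_set_fullscreen (GstVaapiWindow * window,
        gboolean fullscreen)
    {
      /* hypothetical accessor for the window's wl_shell_surface */
      struct wl_shell_surface *const shell_surface =
          get_shell_surface (window);

      if (fullscreen)
        wl_shell_surface_set_fullscreen (shell_surface,
            WL_SHELL_SURFACE_FULLSCREEN_METHOD_DEFAULT, 0, NULL);
      else
        wl_shell_surface_set_toplevel (shell_surface);
      return TRUE;
    }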
Reviewed-by: Joe Konno <joe.konno@intel.com>
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Mesa recently updated the <GL/glext.h> header version to Khronos version 85.
This caused the PFNGLMULTITEXCOORD2FPROC definition to be moved out of the
GL_VERSION_1_3_DEPRECATED block. However, since <GL/gl.h> also defines
GL_VERSION_1_3 to 1, the definitions in <GL/glext.h> are then not enabled,
thus leaving PFNGLMULTITEXCOORD2FPROC undefined as well.
Provide a PFNGLMULTITEXCOORD2FPROC replacement as an interim solution for
newer versions of the <GL/glext.h> header.
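The interim replacement can be as small as re-declaring the typedef;
the exact guard below is an assumption, not the actual patch:

    /* newer <GL/glext.h> (Khronos version 85+) hides this typedef when
     * <GL/gl.h> already defines GL_VERSION_1_3 */
    #if defined(GL_GLEXT_VERSION) && (GL_GLEXT_VERSION >= 85)
    typedef void (APIENTRYP PFNGLMULTITEXCOORD2FPROC) (GLenum target,
        GLfloat s, GLfloat t);
    #endif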
Maintaining the sub-buffer is rather suboptimal, especially since we
were also maintaining a GstAdapter. Now, we only use the GstAdapter,
thus requiring minor extra parsing when receiving avcC buffers.
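The extra parsing boils down to reading the NAL unit length prefix out
of the adapter; a sketch, assuming 4-byte length prefixes:

    while (gst_adapter_available (adapter) >= 4) {
      guint8 prefix[4];
      guint nal_size;

      gst_adapter_copy (adapter, prefix, 0, 4);
      nal_size = GST_READ_UINT32_BE (prefix);
      if (gst_adapter_available (adapter) < 4 + nal_size)
        break;                  /* wait for more data */
      gst_adapter_flush (adapter, 4);
      buffer = gst_adapter_take_buffer (adapter, nal_size);
      /* ... decode the NAL unit ... */
    }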
This allows the compositor to optimize redraws and cull away changes
obscured by the video surface.
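Presumably this is done by declaring the video area opaque to the
compositor; a sketch under that assumption:

    /* mark the whole surface opaque so the compositor can cull
     * anything it obscures */
    struct wl_region *const region =
        wl_compositor_create_region (compositor);
    wl_region_add (region, 0, 0, width, height);
    wl_surface_set_opaque_region (surface, region);
    wl_region_destroy (region);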
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Reset, i.e. destroy then create, the decoder in the _setcaps() handler
only if the underlying codec type actually changed. This makes it
possible to be more tolerant with certain MPEG-2 streams that get parsed
into caps that are compatible with the previous state except for minor
changes to "codec-data".
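A sketch of the idea, with hypothetical helper names:

    /* recreate the decoder only when the codec type really changed */
    static gboolean
    ensure_decoder (GstVaapiDecode * decode, GstCaps * caps)
    {
      GstVaapiCodec codec = codec_from_caps (caps);   /* hypothetical */

      if (decode->decoder && decode->codec == codec)
        return TRUE;              /* compatible caps: keep the decoder */

      destroy_decoder (decode);   /* reset: destroy... */
      return create_decoder (decode, caps);   /* ...then create */
    }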
Make it possible to specify the maximum number of references to use within
a single VA context. This helps reduce GPU memory allocations by limiting
them to the number of references actually needed.
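For illustration, assuming the context creation info carries a
ref_frames field (the field name is an assumption):

    /* an H.264 decode context sized for 4 reference frames only,
     * instead of a worst-case 16 */
    info.profile    = GST_VAAPI_PROFILE_H264_HIGH;
    info.width      = width;
    info.height     = height;
    info.ref_frames = 4;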
Forward declaring enums is not allowed by the C standard and aborts
compilation if the header file is included in a C++ project.
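For illustration (the enum and header names are hypothetical):

    /* not valid C, and rejected outright by C++ compilers */
    enum _GstVaapiSomeEnum;

    /* portable fix: include the header that fully defines the enum */
    #include "gstvaapisomeenum.h"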
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Some VA drivers (e.g. EMGD) can have completely random values for initial
display attributes. So, try to improve the discovery process to check that
the initial display attribute values actually fall within valid bounds. If not,
try to reset those to some sensible values like the default value reported
through vaQueryDisplayAttributes().
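A sketch of the sanity check, using the bounds that
vaQueryDisplayAttributes() fills in for each VADisplayAttribute:

    /* `attr' was returned by vaQueryDisplayAttributes(), `value' is
     * the current value read back from the driver */
    if (value < attr.min_value || value > attr.max_value)
      value = attr.value;   /* fall back to the driver-reported default */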
Use g_object_class_install_properties() to install GstVaapiDisplay properties.
It is useful to maintain properties as GParamSpec so as to be able to
raise "notify" signals by id instead of by name in the future.
A rendering mode can be "overlay" or "textured blit".
The former mode implies that a VA surface used for rendering can't be
re-used right away for decoding, so the sink shall make provisions to
retain the associated surface proxy until the next surface is to be
displayed.
The latter mode implies that the VA surface is implicitly copied to an
intermediate backing store, or back buffer of a frame buffer, so the
associated surface proxy can be disposed right away.
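In sink terms, a sketch with hypothetical field names:

    switch (render_mode) {
      case GST_VAAPI_RENDER_MODE_OVERLAY:
        /* surface is still scanned out: hold a ref on the proxy
         * until the next frame is displayed */
        if (sink->last_proxy)
          g_object_unref (sink->last_proxy);
        sink->last_proxy = g_object_ref (proxy);
        break;
      case GST_VAAPI_RENDER_MODE_TEXTURE:
        /* blit went through a back buffer: drop the proxy right away */
        g_object_unref (proxy);
        break;
    }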
The VA display attributes are mapped to properties so as to maintain the
GStreamer terminology. Properties are to be identified by name, but internal
functions are available to look up a property by the actual VA display
attribute type.
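e.g. through a hypothetical internal mapping table:

    static const struct {
      VADisplayAttribType va_type;
      const gchar        *prop_name;
    } g_attrib_map[] = {
      { VADisplayAttribBrightness, "brightness" },
      { VADisplayAttribContrast,   "contrast"   },
      { VADisplayAttribHue,        "hue"        },
      { VADisplayAttribSaturation, "saturation" },
    };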
decode_current_picture() was converted to return a gboolean instead
of a GstVaapiDecoderStatus, so we were not getting out of the decode
loop as expected, or could incorrectly report an error instead.
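The shape of the fix, sketched:

    /* before (broken): the status was collapsed into a gboolean */
    if (!decode_current_picture (decoder))
      return GST_VAAPI_DECODER_STATUS_ERROR_UNKNOWN;

    /* after: keep and propagate the actual decoder status */
    status = decode_current_picture (decoder);
    if (status != GST_VAAPI_DECODER_STATUS_SUCCESS)
      return status;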
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Integrate the start code prefix in the slice data buffer that is submitted
to the hardware. VA-API specifies that slice_data_offset is the offset to
the first byte of slice data. And, for MPEG-2, slice() data begins with
the slice_start_code. Some VA driver implementations (EMGD) expect this.
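With VA-API's VASliceParameterBufferMPEG2, this amounts to:

    /* the slice data buffer now starts at the 00 00 01 start-code
     * prefix, so offset 0 really is the first byte of slice() data */
    slice_param->slice_data_offset = 0;
    slice_param->slice_data_size   = slice_size;  /* start code included */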
Use g_object_notify_by_pspec() instead of g_object_notify() so as to
avoid a property name lookup, i.e. this makes notifications to the
`vaapidecode' element faster.
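i.e., with the GParamSpec array kept around from
g_object_class_install_properties() (property id is illustrative):

    /* name-based: costs a property name lookup on every notification */
    g_object_notify (object, "display");

    /* pspec-based: goes straight to the cached GParamSpec */
    g_object_notify_by_pspec (object, g_properties[PROP_DISPLAY]);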
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Two elements in the luminance quantization table were wrong. So,
gst_jpeg_get_default_quantization_tables() now reconstructs tables
in zig-zag order from the standard ones (Tables K.1 and K.2).
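A sketch of the reconstruction, with hypothetical table names:

    /* derive the zig-zag ordered table from the natural-order values
     * of Tables K.1/K.2 instead of hand-maintaining a scanned copy */
    for (i = 0; i < 64; i++)
      quant_table[i] = std_quant_table[zigzag_index[i]];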
... instead of having them pre-calculated. This saves around 1.5 KB
of data in the DSO but requires gst_jpeg_get_default_huffman_tables()
to do more work. However, a client application only needs to call that
function once at most.
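e.g. a client can compute the tables once and cache the result (the
struct name is an assumption):

    static GstJpegHuffmanTables huf_tables;
    static gboolean huf_tables_ready;

    if (!huf_tables_ready) {
      gst_jpeg_get_default_huffman_tables (&huf_tables);
      huf_tables_ready = TRUE;
    }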