Make GstVaapiSurfaceProxy only a thin wrapper around a VA context and a
VA surface, i.e. drop any other attributes such as timestamp, duration,
and the interlaced or top-field-first flags.
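As a rough sketch (the actual libgstvaapi layout may differ), the slimmed-down
proxy boils down to:
    /* Illustrative only: reference counting base omitted */
    struct _GstVaapiSurfaceProxy {
        GstVaapiContext *context;   /* VA context the surface was created from */
        GstVaapiSurface *surface;   /* the decoded VA surface */
    };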
Maintain decoded surfaces as GstVideoCodecFrame objects instead of
GstVaapiSurfaceProxy objects. The latter will tend to be reduced to
the strict minimum: a context and a surface.
Push all decoded frames downstream as soon as possible. This way, we no
longer need to wait for a new frame to be ready for decoding before
receiving the frames that were already decoded.
This also separates the decode process and the output process. The latter
could be moved to a specific GstTask later on.
Add new gst_vaapi_decoder_get_frame() function meant to be used with
gst_vaapi_decoder_decode(). The purpose is to return the next decoded
frame as a GstVideoCodecFrame and the associated GstVaapiSurfaceProxy
as the user-data object.
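A hedged usage sketch; the out-parameter convention of
gst_vaapi_decoder_get_frame() follows the description above, and
push_frame_downstream() is a hypothetical helper:
    GstVaapiDecoderStatus status;
    GstVideoCodecFrame *out_frame;
    status = gst_vaapi_decoder_decode(decoder, codec_frame);
    if (status != GST_VAAPI_DECODER_STATUS_SUCCESS)
        return status;
    /* drain every decoded frame that is already available */
    while (gst_vaapi_decoder_get_frame(decoder, &out_frame) ==
           GST_VAAPI_DECODER_STATUS_SUCCESS) {
        GstVaapiSurfaceProxy * const proxy =
            gst_video_codec_frame_get_user_data(out_frame);
        push_frame_downstream(out_frame, proxy);   /* hypothetical */
        gst_video_codec_frame_unref(out_frame);
    }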
Determine whether the buffer contains the top field only by checking for
the GST_VIDEO_BUFFER_TFF flag instead of relying on the GstVaapiSurfaceProxy
flag. Also trust the "interlaced" caps field to determine whether the input
frame is interleaved or not.
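In GStreamer 0.10 terms, the checks boil down to something like the following
sketch (surrounding plumbing omitted):
    GstStructure * const structure = gst_caps_get_structure(caps, 0);
    gboolean interlaced = FALSE;
    gboolean tff;
    /* the "interlaced" caps field describes the input frame layout */
    gst_structure_get_boolean(structure, "interlaced", &interlaced);
    /* top-field-first comes from the buffer flag, not the surface proxy */
    tff = GST_BUFFER_FLAG_IS_SET(buffer, GST_VIDEO_BUFFER_TFF);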
Intermediate elements may produce a sub-buffer from a valid GstVaapiVideoBuffer
for non raw YUV cases. Make sure vaapipostproc now understands those buffers.
Decoder units used to be zero-initialized, including the SPS/PPS/slice
headers. The latter don't require zero-initialization since the
codecparsers/ lib already does so for the key variables. Dropping it is
not a great win per se, but at least it makes it possible to check whether
the default initialization decisions made in the codecparsers/ lib were
right or not.
This can be reverted if it exposes too many issues.
Drop explicit initialization of most fields that are implicitly set to
zero. Drop helper macros for casting to GstVaapiPictureH264 or
GstVaapiFrameStore. Also remove some useless checks for NULL pointers.
Implement GstVaapiDecoder.start_frame() and end_frame() semantics so as
to create the new VA context earlier and submit VA pictures to the HW
for decoding as soon as possible, i.e. without waiting for the next
frame to start decoding the previous one.
Introduce new GstVaapiDecoderUnitH264 object, which holds the standard
NAL unit header (GstH264NalUnit) and additional parsed header info.
Besides, headers are now parsed as early as the _parse() function, so as
to avoid unnecessary creation of sub-buffers in _decode() for NAL units
that are not slices.
This is only a modest performance win of ~1.1%.
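A sketch of what such a unit could carry; the exact fields beyond the
GstH264NalUnit are assumptions:
    typedef struct {
        GstVaapiDecoderUnit base;       /* generic decoder unit */
        GstH264NalUnit      nalu;       /* standard NAL unit header */
        union {                         /* header info parsed in _parse() */
            GstH264SPS      sps;
            GstH264PPS      pps;
            GstH264SliceHdr slice_hdr;
        } data;
    } GstVaapiDecoderUnitH264;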
Intermediate elements may produce a sub-buffer from a valid GstVaapiVideoBuffer
for non raw YUV cases. Make sure vaapisink now understands those buffers.
Directly use the GstVideoCodecState associated with the VA decoder
instead of parsing caps again.
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Make vaapidecode derive from the standard GstVideoDecoder base element
class. This simplifies the code to the strict minimum for the decoder
element and makes it easier to port to GStreamer 1.x API.
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Use standard GstVideoCodecState throughout GstVaapiDecoder and expose
it with a new gst_vaapi_decoder_get_codec_state() function. This makes
it possible to drop picture size (width, height) information, framerate
(fps_n, fps_d) information, pixel aspect ratio (par_n, par_d) information,
and interlace mode (is_interlaced field).
This is a new API with backwards compatibility maintained. In particular,
gst_vaapi_decoder_get_caps() is still available.
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
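A hedged sketch of reading those parameters back from the codec state,
assuming the state embeds a GstVideoInfo as the standard GstVideoCodecState
does (reference handling omitted):
    GstVideoCodecState * const state =
        gst_vaapi_decoder_get_codec_state(decoder);
    const gint width  = state->info.width;
    const gint height = state->info.height;
    const gint fps_n  = state->info.fps_n;
    const gint fps_d  = state->info.fps_d;
    const gint par_n  = state->info.par_n;
    const gint par_d  = state->info.par_d;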
Align gst_vaapi_decoder_get_surface() semantics with the rest of the
API. That is, return a GstVaapiDecoderStatus, with the decoded surface
delivered as a GstVaapiSurfaceProxy handle through an output parameter.
This is an API/ABI change.
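A minimal usage sketch of the new convention (error handling and the consumer
are placeholders):
    GstVaapiSurfaceProxy *proxy = NULL;
    GstVaapiDecoderStatus status;
    status = gst_vaapi_decoder_get_surface(decoder, &proxy);
    if (status != GST_VAAPI_DECODER_STATUS_SUCCESS)
        return status;
    render_surface(proxy);                  /* hypothetical consumer */
    gst_vaapi_surface_proxy_unref(proxy);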
Introduce new decoding process whereby a GstVideoCodecFrame is created
first. Next, input stream buffers are accumulated into a GstAdapter,
that is then passed to the _parse() function. The GstVaapiDecoder object
accumulates all parsed units and when a complete frame or field is
detected, that GstVideoCodecFrame is passed to the _decode() function.
Ultimately, the caller receives a GstVaapiSurfaceProxy if decoding
process was successful.
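Roughly, the flow looks like the following sketch; the hook signatures and
helper names are assumptions derived from the description above, not the
actual libgstvaapi API:
    GstVaapiDecoderUnit *unit;
    GstVaapiDecoderStatus status;
    gst_adapter_push(priv->adapter, buffer);            /* accumulate input */
    while ((status = klass->parse(decoder, priv->adapter, &unit)) ==
           GST_VAAPI_DECODER_STATUS_SUCCESS) {
        decoder_frame_add_unit(priv->frame, unit);      /* hypothetical */
        if (unit->is_frame_end) {                       /* frame complete */
            status = klass->decode(decoder, codec_frame);
            break;
        }
    }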
The start_frame() hook is called prior to traversing all decode-units
for decoding. The unit argument represents the first slice in the frame.
Some codecs (e.g. H.264) need to wait for the first slice in order to
determine the actual VA context parameters.
Split decoding process into two steps: (i) parse incoming bitstreams
into simple decoder-units until the frame or field is complete; and
(ii) decode the whole frame or field at once.
This is an ABI change.
Introduce a new GstVaapiDecoderFrame that is just a list of decoder units
(GstVaapiDecoderUnit objects) that constitute a frame. This object is just
an extension to GstVideoCodecFrame for VA decoder purposes. It is available
through the frame's user-data member.
This is a libgstvaapi internal object.
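Illustratively (actual layout and helper names may differ), the frame is
attached as user-data like so:
    typedef struct {
        GSList *units;    /* list of GstVaapiDecoderUnit objects */
        /* reference counting base omitted */
    } GstVaapiDecoderFrame;

    gst_video_codec_frame_set_user_data(codec_frame, frame,
        (GDestroyNotify) gst_vaapi_decoder_frame_unref);  /* assumed name */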
Introduce GstVaapiDecoderUnit which represents a fragment of the source
stream to be decoded. For instance, a decode-unit will be a NAL unit for
H.264 streams, an EBDU for VC-1 streams, and a video packet for MPEG-2
streams.
This is a libgstvaapi internal object.
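A sketch of the fields such a unit would minimally need (layout is
illustrative):
    typedef struct {
        guint    size;          /* size in bytes of the unit in the bitstream */
        guint    offset;        /* offset of the unit into the frame data */
        gpointer parsed_info;   /* codec-specific parsed header, if any */
        gboolean is_frame_end;  /* marks the last unit of a frame */
        /* reference counting base omitted */
    } GstVaapiDecoderUnit;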
GstVaapiSurfaceProxy does not use any particular functionality from
GObject. Actually, it only needs a basic object type with reference
counting.
This is an API and ABI change.
Introduce a new reference counted object that is very lightweight and
also provides flags and user-data functionalities. Initialization and
finalization times are reduced by up to a factor of 5 compared with
GstMiniObject from the GStreamer 0.10 stack.
This is a libgstvaapi internal object.
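The shape of such an object is roughly as follows; a sketch only, with the
per-class vtable reduced to an opaque pointer:
    typedef struct {
        gconstpointer object_class;   /* per-type info: size, finalize hook */
        gint          ref_count;      /* atomic reference count */
        guint         flags;          /* user-visible flags */
    } GstVaapiMiniObject;

    static inline gpointer
    mini_object_ref(GstVaapiMiniObject *object)
    {
        g_atomic_int_inc(&object->ref_count);
        return object;
    }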
Fix decode_slice() to ensure a VA context exists prior to creating a
new GstVaapiSliceH264, which invokes vaCreateBuffer() with some VA
context ID. Previously, that ID was not initialized, thus causing failures
on Cedar Trail, for example.
If GST_PLUGIN_PATH environment variable exists and points to a valid
directory, then use it as the system installation path for gst-vaapi
plugin elements.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Use glib >= 2.32 semantics for GMutex and GRecMutex wrt. initialization
and termination. Basically, the new mutex objects can be used as static
mutex objects from the deprecated APIs, e.g. GStaticMutex and GStaticRecMutex.
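With GLib >= 2.32 this reduces to plain static GMutex usage, e.g.:
    static GMutex lock;   /* no g_mutex_init() needed when statically allocated */

    static void
    do_locked_work(void)
    {
        g_mutex_lock(&lock);
        /* ... critical section ... */
        g_mutex_unlock(&lock);
    }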
This fixes symbol clashes between the gst-vaapi built-in codecparsers/
library and the system-provided one, mainly used by videoparsers/. Now,
only symbols with the gst_vaapi_* prefix will be exported, if they are
not marked as "hidden" to libgstvaapi.
Try to allocate the GstVaapiUploader helper object prior to listing the
supported image formats. Otherwise, only a single generic caps is output
with no particular pixel format referenced in it.
Use GstVaapiUploader helper that automatically handles direct rendering
mode, thus making the "direct-rendering" property obsolete; it has now
been removed.
The "direct-rendering" level 2, i.e. exposing VA surface buffers, was never
really well supported and it could actually trigger degraded performance.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Make vaapisink expose only the set of supported caps for raw YUV buffers.
Add gst_vaapi_uploader_get_caps() helper function to determine the set
of supported YUV caps as source (for images). This function actually
tries to zero and upload each image to a 64x64 test surface. Of course,
this relies on VA drivers to not claim success if vaPutImage() is not
correctly supported.
Add new GstVaapiUploader helper to upload raw YUV buffers to VA surfaces.
It is up to the caller to negotiate source caps (for images) and output
caps (for surfaces). gst_vaapi_uploader_has_direct_rendering() is available
to help decide between the creation of a GstVaapiVideoBuffer or a regular
GstBuffer on sink pads.
Signed-off-by: Zhao Halley <halley.zhao@intel.com>
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
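A hedged sketch of the sink-pad allocation decision; gst_vaapi_video_buffer_new()
is assumed here as the VA video buffer constructor:
    GstBuffer *outbuf;
    if (gst_vaapi_uploader_has_direct_rendering(uploader)) {
        /* images are directly usable as surfaces: hand out a VA video buffer */
        outbuf = gst_vaapi_video_buffer_new(display);
    }
    else {
        /* plain buffer; the uploader copies it into a VA surface later on */
        outbuf = gst_buffer_new_and_alloc(size);
    }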