Split GstVideoDecoder::handle_frame() implementation into two functions:
(i) one for decoding the provided GstVideoCodecFrame and (ii) another
for purging all decoded frames and submitting them downstream.
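For illustration, the split could look as follows; the helper names
gst_vaapidecode_decode() and gst_vaapidecode_push_decoded_frames() are
made up for this sketch:

    static GstFlowReturn
    gst_vaapidecode_handle_frame (GstVideoDecoder * vdec,
        GstVideoCodecFrame * frame)
    {
        GstFlowReturn ret;

        /* (i) feed the frame to the VA decoder */
        ret = gst_vaapidecode_decode (vdec, frame);
        if (ret != GST_FLOW_OK)
            return ret;

        /* (ii) drain every frame decoded so far and push it downstream */
        return gst_vaapidecode_push_decoded_frames (vdec);
    }
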
Update plugin elements with the new GstVaapiVideoMeta API.
This also fixes support for subpictures/overlay because GstVideoDecoder
generates a sub-buffer from the GstVaapiVideoBuffer, and that sub-buffer
is marked as read-only. When it reaches the textoverlay element, for
example, that element checks whether the input buffer is writable. Since
the buffer is read-only, a new GstBuffer is created. And since
gst_buffer_copy() does not preserve the parent field, the buffer
generated in textoverlay is unusable because all VA specific information
was lost.
Now, with GstVaapiVideoMeta information attached to a standard GstBuffer,
all information is preserved through gst_buffer_copy() since the latter
does copy metadata (qdata in this case).
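As a sketch of the idea, assuming the gst_buffer_set_vaapi_video_meta()
and gst_buffer_get_vaapi_video_meta() accessors from the new API:

    GstBuffer *copy;

    /* attach the VA information to a standard GstBuffer as qdata */
    gst_buffer_set_vaapi_video_meta (buffer, meta);

    /* gst_buffer_copy() duplicates buffer metadata, so the VA
     * information now survives elements that copy the buffer */
    copy = gst_buffer_copy (buffer);
    g_assert (gst_buffer_get_vaapi_video_meta (copy) != NULL);
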
Fix calculation of the time-out value for cases where no VA surface is
available for decoding. In this case, we need to wait until the downstream
sink has consumed at least one surface. The time-out was miscalculated as
it was always set to <current-time> + one second, which is not suitable
for streams with larger gaps.
Don't call gst_video_decoder_drop_frame() if gst_video_decoder_finish_frame()
was already called and returned an error. In that case, we were releasing
the frame again, thus leading to a "double-free" condition.
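gst_video_decoder_finish_frame() gives away the frame reference even when
it fails, so the error path must not release the frame a second time. A
minimal sketch:

    ret = gst_video_decoder_finish_frame (vdec, frame);
    if (ret != GST_FLOW_OK)
        goto done;  /* frame already released: no drop_frame() here */
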
Maintain decoded surfaces as GstVideoCodecFrame objects instead of
GstVaapiSurfaceProxy objects. The latter will tend to be reduced to
the strict minimum: a context and a surface.
Make sure to push all decoded frames downstream as soon as possible.
This makes sure we don't have to wait for a new frame to be submitted
for decoding before receiving the already decoded frames.
This also separates the decode process and the output process. The latter
could be moved to a specific GstTask later on.
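A sketch of the output phase, assuming a hypothetical decoded_frames
queue maintained by the element:

    static GstFlowReturn
    gst_vaapidecode_push_decoded_frames (GstVideoDecoder * vdec)
    {
        GstVaapiDecode *const decode = GST_VAAPIDECODE (vdec);
        GstVideoCodecFrame *frame;
        GstFlowReturn ret = GST_FLOW_OK;

        /* push every frame decoded so far, without waiting for the
         * next handle_frame() call */
        while ((frame = g_queue_pop_head (decode->decoded_frames))) {
            ret = gst_video_decoder_finish_frame (vdec, frame);
            if (ret != GST_FLOW_OK)
                break;
        }
        return ret;
    }
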
Determine whether the buffer represents the top-field only by checking for
the GST_VIDEO_BUFFER_TFF flag instead of relying on the GstVaapiSurfaceProxy
flag. Also trust "interlaced" caps to determine whether the input frame
is interlaced or not.
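For example:

    /* top-field-first comes from the buffer flag... */
    gboolean tff = GST_BUFFER_FLAG_IS_SET (buffer, GST_VIDEO_BUFFER_TFF);

    /* ...and interlacing from the negotiated caps */
    GstStructure *structure = gst_caps_get_structure (caps, 0);
    gboolean interlaced = FALSE;
    gst_structure_get_boolean (structure, "interlaced", &interlaced);
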
Intermediate elements may produce a sub-buffer from a valid GstVaapiVideoBuffer
for non-raw YUV cases. Make sure vaapipostproc now understands those buffers.
Intermediate elements may produce a sub-buffer from a valid GstVaapiVideoBuffer
for non-raw YUV cases. Make sure vaapisink now understands those buffers.
Directly use the GstVideoCodecState associated with the VA decoder
instead of parsing caps again.
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Make vaapidecode derive from the standard GstVideoDecoder base element
class. This simplifies the code to the strict minimum for the decoder
element and makes it easier to port to GStreamer 1.x API.
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
GstVaapiSurfaceProxy does not use any particular functionality from
GObject. Actually, it only needs a basic object type with reference
counting.
This is an API and ABI change.
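A minimal sketch of such a plain reference-counted type; the exact
fields and function names are illustrative:

    struct _GstVaapiSurfaceProxy {
        volatile gint ref_count;
        GstVaapiContext *context;
        GstVaapiSurface *surface;
    };

    GstVaapiSurfaceProxy *
    gst_vaapi_surface_proxy_ref (GstVaapiSurfaceProxy * proxy)
    {
        g_atomic_int_inc (&proxy->ref_count);
        return proxy;
    }

    void
    gst_vaapi_surface_proxy_unref (GstVaapiSurfaceProxy * proxy)
    {
        if (g_atomic_int_dec_and_test (&proxy->ref_count)) {
            /* release the context and surface references here */
            g_slice_free (GstVaapiSurfaceProxy, proxy);
        }
    }
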
Try to allocate the GstVaapiUploader helper object prior to listing the
supported image formats. Otherwise, only a single generic caps structure
is output, with no particular pixel format referenced in it.
Use GstVaapiUploader helper that automatically handles direct rendering
mode, thus making the "direct-rendering" property obsolete; it is now
removed.
The "direct-rendering" level 2, i.e. exposing VA surface buffers, was never
really well supported and it could actually trigger degraded performance.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Make vaapisink expose only the set of supported caps for raw YUV buffers.
Add gst_vaapi_uploader_get_caps() helper function to determine the set
of supported YUV caps as source (for images). This function actually
tries to zero and upload each image to a 64x64 test surface. Of course,
this relies on VA drivers to not claim success if vaPutImage() is not
correctly supported.
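The probing loop could look roughly like this; the libgstvaapi calls
shown (gst_vaapi_surface_new(), gst_vaapi_image_new(),
gst_vaapi_surface_put_image()) and the format list are assumptions for
this sketch:

    static const GstVaapiImageFormat formats[] = {
        GST_VAAPI_IMAGE_NV12, GST_VAAPI_IMAGE_YV12, GST_VAAPI_IMAGE_I420
    };
    GstVaapiSurface *surface;
    guint i;

    /* 64x64 test surface used as the vaPutImage() target */
    surface = gst_vaapi_surface_new (display,
        GST_VAAPI_CHROMA_TYPE_YUV420, 64, 64);

    for (i = 0; i < G_N_ELEMENTS (formats); i++) {
        GstVaapiImage *image =
            gst_vaapi_image_new (display, formats[i], 64, 64);
        if (!image)
            continue;
        /* zero the image, then keep the format only if the upload
         * actually succeeds */
        if (gst_vaapi_surface_put_image (surface, image))
            append_format_to_caps (caps, formats[i]); /* hypothetical */
        g_object_unref (image);
    }
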
Add new GstVaapiUploader helper to upload raw YUV buffers to VA surfaces.
It is up to the caller to negotiate source caps (for images) and output
caps (for surfaces). gst_vaapi_uploader_has_direct_rendering() is available
to help decide between the creation of a GstVaapiVideoBuffer or a regular
GstBuffer on sink pads.
Signed-off-by: Zhao Halley <halley.zhao@intel.com>
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
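On the sink pad, the decision could be sketched as follows (the video
buffer constructor arguments are illustrative):

    GstBuffer *buffer;

    if (gst_vaapi_uploader_has_direct_rendering (uploader))
        /* image and surface alias each other: upload is a no-op */
        buffer = gst_vaapi_video_buffer_new (display);
    else
        buffer = gst_buffer_new_and_alloc (size);
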
The use of heap-allocated GMutex/GCond is deprecated. Instead, place them
inside the structure they are locking.
These changes switch to g_mutex_init()/g_cond_init() rather than the heap
allocation functions.
Because we cannot test a GMutex/GCond for a NULL pointer, we must
initialise them inside the GObject _init function and clear them inside
_finalize, which is guaranteed to be called only once, after the object
is no longer in use.
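A sketch of the pattern, with illustrative structure and field names:

    struct _GstVaapiDecoderPrivate {
        GMutex mutex;   /* embedded, not heap-allocated */
        GCond cond;
        /* ... */
    };

    static void
    gst_vaapi_decoder_init (GstVaapiDecoder * decoder)
    {
        g_mutex_init (&decoder->priv->mutex);
        g_cond_init (&decoder->priv->cond);
    }

    static void
    gst_vaapi_decoder_finalize (GObject * object)
    {
        GstVaapiDecoder *const decoder = GST_VAAPI_DECODER (object);

        g_mutex_clear (&decoder->priv->mutex);
        g_cond_clear (&decoder->priv->cond);

        G_OBJECT_CLASS (parent_class)->finalize (object);
    }
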
Ignore the return value of gst_vaapi_decoder_put_buffer() during
destruction of the element. Don't print out an (uninitialised) error code
when allocation of the video buffer failed.
Reset, i.e. destroy then create, the decoder in _setcaps() handler only
if the underlying codec type actually changed. This makes it possible
to be more tolerant with certain MPEG-2 streams that get parsed to
form caps that are compatible with the previous state, except for minor
changes to "codec-data".
Add new gst_vaapi_codec_from_caps() helper to determine codec type from
the specified caps. Don't globally expose this function since this is
really trivial and only used in the vaapidecode element.
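The helper boils down to a name check on the caps structure; the mapping
below is an illustrative sketch, not the exhaustive list:

    static GstVaapiCodec
    gst_vaapi_codec_from_caps (GstCaps * caps)
    {
        GstStructure *const structure = gst_caps_get_structure (caps, 0);
        const gchar *const name = gst_structure_get_name (structure);

        if (g_strcmp0 (name, "video/mpeg") == 0)
            return GST_VAAPI_CODEC_MPEG2; /* mpegversion also checked */
        if (g_strcmp0 (name, "video/x-h264") == 0)
            return GST_VAAPI_CODEC_H264;
        if (g_strcmp0 (name, "video/x-wmv") == 0)
            return GST_VAAPI_CODEC_VC1;
        return 0;
    }
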
Previously, vaapidecode would wait up to one second for a free surface
to become available, or abort decoding. Now, vaapidecode waits until the
time the last decoded surface was due to be presented, plus one second.
Besides, end
times are now expressed relative to the monotonic clock.
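A sketch of the wait with GLib's monotonic-clock API; the field names
are made up:

    gint64 end_time;

    /* deadline: presentation time of the last decoded surface,
     * plus one second of slack, on the monotonic clock */
    end_time = decode->render_time_base + decode->last_buffer_time
        + G_TIME_SPAN_SECOND;

    g_mutex_lock (&decode->mutex);
    while (!decode->surface_available) {
        if (!g_cond_wait_until (&decode->surface_ready, &decode->mutex,
                end_time))
            break;  /* timed out: no surface was released downstream */
    }
    g_mutex_unlock (&decode->mutex);
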
When playback stops, the GstVaapiDecode object is reset into a clean
state. However, surfaces may still be referenced by library users and
unreferencing them after the reset triggers an access to an unset mutex.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Forward declaring enums is not allowed by the C standard and aborts
compilation if the header file is included in a C++ project.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
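For example, a construct like the following (the type name is made up)
must be replaced by including the header that fully defines the enum:

    /* invalid: the size of an incomplete enum type is unknown */
    typedef enum _GstVaapiSomeEnum GstVaapiSomeEnum;
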
If either dimension is out-of-bounds, then scale window to fit the
display size, even if the output is to be rotated. Use the standard
gst_video_sink_center_rect() function to center and scale the window
with respect to the outer (display) bounds.
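For example:

    GstVideoRectangle src, dst, result;

    src.x = src.y = 0;
    src.w = video_width;
    src.h = video_height;

    dst.x = dst.y = 0;
    dst.w = display_width;
    dst.h = display_height;

    /* scale (last argument TRUE) and center src within dst */
    gst_video_sink_center_rect (src, dst, &result, TRUE);
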
Keep VA surface proxy associated with the surface that is currently
being displayed. This makes sure the surface is not released back
to the pool of surfaces available for decoding. This is necessary
with VA driver implementations that support rendering to an overlay
pipe. Otherwise, we could end up decoding into a surface that is
still being displayed, thus causing flickering.
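A sketch of the idea, with an illustrative field name:

    /* keep the proxy alive while its surface is on screen, so the
     * surface cannot be recycled for decoding */
    if (sink->displayed_proxy)
        gst_vaapi_surface_proxy_unref (sink->displayed_proxy);
    sink->displayed_proxy = gst_vaapi_surface_proxy_ref (proxy);
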
This is not useful in practice, except for raw performance evaluation
when the sink is invoked with display=drm sync=false. fakesink could
also be used, though.