Review all interactions between the main video decoder stream thread
and the decode task to derive a correct sequence of operations for
decoding. Also avoid atomic operations that are redundant while the
GstVideoDecoder stream lock is already held.
vaapidecode used to wait up to one second past the expected
presentation time of the last decoded frame. This is not realistic in
practice when it comes to video pause/resume. The behaviour is changed
to unconditionally wait for a free VA surface before continuing
decoding. The decode task keeps pushing output frames to the
downstream element while reporting any error back to the main thread.
https://bugzilla.gnome.org/show_bug.cgi?id=707108
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
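A minimal sketch of the intended interaction, with illustrative names
only (DecodeSync and its fields are not the element's actual code):
the streaming thread blocks without a timeout until a free surface is
signalled, and picks up whatever status the decode task recorded while
it kept pushing frames downstream.

    #include <gst/gst.h>

    typedef struct {
      GMutex lock;
      GCond surface_ready;
      gboolean has_free_surface;
      GstFlowReturn decode_task_ret;   /* written by the decode task */
    } DecodeSync;

    static GstFlowReturn
    wait_for_free_surface (DecodeSync * sync)
    {
      GstFlowReturn ret;

      g_mutex_lock (&sync->lock);
      /* Unconditional wait: only a freed surface or a decode task
       * error can wake us up. */
      while (!sync->has_free_surface && sync->decode_task_ret == GST_FLOW_OK)
        g_cond_wait (&sync->surface_ready, &sync->lock);
      ret = sync->decode_task_ret;
      g_mutex_unlock (&sync->lock);

      /* Propagate the decode task's status to the main thread. */
      return ret;
    }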
If the allocation meta GST_VIDEO_GL_TEXTURE_UPLOAD_META_API_TYPE is
requested, and more specifically under a GLX configuration, then add
the GstVideoGLTextureUploadMeta to the output buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=703236
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
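A sketch of the idea, assuming the GStreamer 1.x allocation query API;
maybe_add_texture_upload_meta() and the upload callback are
illustrative helpers, not the element's actual code:

    #include <gst/video/gstvideometa.h>

    static gboolean
    upload_surface_to_texture (GstVideoGLTextureUploadMeta * meta,
        guint texture_id[4])
    {
      /* A vaCopySurfaceGLX()-style copy of the VA surface would go here. */
      return TRUE;
    }

    static void
    maybe_add_texture_upload_meta (GstQuery * allocation_query,
        GstBuffer * outbuf)
    {
      GstVideoGLTextureType tex_type[4] = { GST_VIDEO_GL_TEXTURE_TYPE_RGBA, };

      /* Only attach the meta if downstream requested it. */
      if (!gst_query_find_allocation_meta (allocation_query,
              GST_VIDEO_GL_TEXTURE_UPLOAD_META_API_TYPE, NULL))
        return;

      gst_buffer_add_video_gl_texture_upload_meta (outbuf,
          GST_VIDEO_GL_TEXTURE_ORIENTATION_X_NORMAL_Y_NORMAL, 1, tex_type,
          upload_surface_to_texture, NULL, NULL, NULL);
    }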
If there is no frame delimiter at the end of the stream, e.g. no
end-of-stream or end-of-sequence marker, and the current frame was
fully parsed correctly, then assume that the last frame is complete
and submit it to the decoder.
https://bugzilla.gnome.org/show_bug.cgi?id=705123
Signed-off-by: Guangxin.Xu <Guangxin.Xu@intel.com>
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
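An illustrative sketch of the decision; PendingFrame and the helpers
are hypothetical stand-ins for the decoder's internal units:

    #include <glib.h>

    typedef struct {
      gboolean have_frame;        /* a frame has been accumulated */
      gboolean frame_parsed_ok;   /* all its units parsed without error */
    } PendingFrame;

    static gboolean
    flush_pending_frame_at_eos (PendingFrame * frame,
        gboolean (*submit_frame) (PendingFrame * frame))
    {
      if (!frame->have_frame)
        return TRUE;              /* nothing buffered, nothing to do */
      if (!frame->frame_parsed_ok)
        return FALSE;             /* truncated data, cannot decode */
      /* No end-of-stream/sequence marker followed, but the frame
       * parsed completely: assume it is the last frame and submit it. */
      return submit_frame (frame);
    }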
Don't return static caps that bear no relation to the codecs actually
supported by the hardware for decoding. Instead, always allocate a VA
display and retrieve the exact set of HW decoders available. That VA
display may be re-used later during negotiation through the
GstVideoContext "prepare-context" interface.
This fixes fallback to SW decoding when no HW decoder is available.
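The probing idea, sketched with plain libva for illustration (the
element itself goes through its GstVaapiDisplay wrapper; error handling
and display teardown are omitted):

    #include <stdlib.h>
    #include <va/va.h>
    #include <va/va_x11.h>
    #include <X11/Xlib.h>

    /* Ask the driver which profiles it can actually decode, so the pad
     * caps can be built from this list instead of a static template. */
    static VAProfile *
    probe_hw_decode_profiles (int *num_profiles)
    {
      Display *x11_display = XOpenDisplay (NULL);
      VADisplay va_display = vaGetDisplay (x11_display);
      VAProfile *profiles = NULL;
      int major, minor, n = 0;

      if (x11_display != NULL &&
          vaInitialize (va_display, &major, &minor) == VA_STATUS_SUCCESS) {
        profiles = malloc (vaMaxNumProfiles (va_display) * sizeof (VAProfile));
        if (vaQueryConfigProfiles (va_display, profiles, &n) != VA_STATUS_SUCCESS)
          n = 0;
      }
      *num_profiles = n;
      return profiles;   /* caller frees; display teardown omitted */
    }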
Make vaapidecode derive from the standard GstVideoDecoder base element
class. This simplifies the code to the strict minimum for the decoder
element and makes it easier to port to GStreamer 1.x API.
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
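A compact sketch of the resulting element shape; the struct contents
and handler bodies are placeholders, not the actual implementation.
The point is that the element subclasses GstVideoDecoder and only wires
up the decoder-specific virtual methods, leaving stream-thread and
frame bookkeeping to the base class:

    #include <gst/video/gstvideodecoder.h>

    typedef struct {
      GstVideoDecoder parent_instance;
      /* VA display, decoder, surface pool, ... */
    } GstVaapiDecode;

    typedef struct {
      GstVideoDecoderClass parent_class;
    } GstVaapiDecodeClass;

    G_DEFINE_TYPE (GstVaapiDecode, gst_vaapidecode, GST_TYPE_VIDEO_DECODER);

    static gboolean
    gst_vaapidecode_start (GstVideoDecoder * vdec)
    {
      /* Acquire the VA display and create the decoder here. */
      return TRUE;
    }

    static GstFlowReturn
    gst_vaapidecode_handle_frame (GstVideoDecoder * vdec,
        GstVideoCodecFrame * frame)
    {
      /* Placeholder: the real element submits the frame to libva and
       * later calls gst_video_decoder_finish_frame() on the output. */
      return gst_video_decoder_drop_frame (vdec, frame);
    }

    static void
    gst_vaapidecode_init (GstVaapiDecode * decode)
    {
    }

    static void
    gst_vaapidecode_class_init (GstVaapiDecodeClass * klass)
    {
      GstVideoDecoderClass *vdec_class = GST_VIDEO_DECODER_CLASS (klass);

      vdec_class->start = gst_vaapidecode_start;
      vdec_class->handle_frame = gst_vaapidecode_handle_frame;
    }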
The use of heap-allocated GMutex/GCond is deprecated. Instead, place
them inside the structure they are locking.
These changes switch to g_mutex_init()/g_cond_init() rather than the
heap allocation functions.
Because we cannot test the GMutex/GCond for a NULL pointer, we must
initialise them inside the GObject _init function and clear them inside
_finalize, which is guaranteed to be called only once and after the
object is no longer in use.
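A self-contained sketch of the pattern on an illustrative GObject (not
the actual gstreamer-vaapi types):

    #include <glib-object.h>

    typedef struct {
      GObject parent_instance;
      GMutex lock;        /* embedded, not heap-allocated */
      GCond signal;
    } ExampleObject;

    typedef struct {
      GObjectClass parent_class;
    } ExampleObjectClass;

    G_DEFINE_TYPE (ExampleObject, example_object, G_TYPE_OBJECT);

    static void
    example_object_init (ExampleObject * self)
    {
      g_mutex_init (&self->lock);
      g_cond_init (&self->signal);
    }

    static void
    example_object_finalize (GObject * object)
    {
      ExampleObject *self = (ExampleObject *) object;

      /* _finalize runs once, when the object is no longer in use. */
      g_mutex_clear (&self->lock);
      g_cond_clear (&self->signal);
      G_OBJECT_CLASS (example_object_parent_class)->finalize (object);
    }

    static void
    example_object_class_init (ExampleObjectClass * klass)
    {
      G_OBJECT_CLASS (klass)->finalize = example_object_finalize;
    }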
Previously, vaapidecode would wait up to one second for a free surface
to become available, or else abort decoding. Now, vaapidecode waits
until the last decoded surface is due to be presented, plus one second.
In addition, end times are now expressed relative to the monotonic
clock.
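A sketch of the timeout computation with illustrative parameters: the
deadline is derived from the time remaining until the last decoded
surface's presentation, plus one second, expressed on the monotonic
clock as g_cond_wait_until() expects.

    #include <glib.h>

    static gboolean
    wait_for_free_surface (GMutex * lock, GCond * cond,
        gboolean * has_free_surface, gint64 last_pts_remaining_us)
    {
      gint64 end_time = g_get_monotonic_time ()
          + last_pts_remaining_us + G_TIME_SPAN_SECOND;
      gboolean got_surface = TRUE;

      g_mutex_lock (lock);
      while (!*has_free_surface) {
        if (!g_cond_wait_until (cond, lock, end_time)) {
          got_surface = FALSE;   /* timed out: abort decoding */
          break;
        }
      }
      g_mutex_unlock (lock);
      return got_surface;
    }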
GStreamer codecparsers-based decoders are the only supported decoders
now. However, FFmpeg-based decoders are still available in the
gstreamer-vaapi 0.3.x series.
Rationale: playbin2 links all elements at run-time. Once vaapidecode
is created and a NEWSEGMENT event arrives, the downstream element may
not be ready yet. So, delay this event until the next element is
chained in; otherwise basesink could output "Received buffer without a
new-segment. Assuming timestamps start from 0".
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
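A sketch of the delayed-event idea with illustrative types, assuming
the GStreamer 0.10 API of the playbin2 era: the NEWSEGMENT event is
held back and pushed on the src pad right before the first buffer.

    #include <gst/gst.h>

    typedef struct {
      GstPad *srcpad;
      GstEvent *pending_newsegment;   /* held back until downstream is linked */
    } DelayedSegment;

    static GstFlowReturn
    push_buffer_with_pending_segment (DelayedSegment * state, GstBuffer * buf)
    {
      /* Forward the stored NEWSEGMENT only now, once the next element
       * has been chained in and can actually receive it. */
      if (state->pending_newsegment) {
        gst_pad_push_event (state->srcpad, state->pending_newsegment);
        state->pending_newsegment = NULL;
      }
      return gst_pad_push (state->srcpad, buf);
    }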
Otherwise, the decoder would always create its own X display instead
of probing it from the downstream element, which is not reliable,
e.g. when DISPLAY is not :0 or when running on Wayland.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>