Since GStreamer 1.2, the reset() vmethod in GstVideoDecoderClass has been
deprecated and flush() was added.
This patch sets the flush() vmethod if the installed GStreamer version is 1.2
or newer. Otherwise, reset() is set.
v2: 1) In order to avoid a symbol collision, the old method gst_vaapidecode_flush()
was renamed to gst_vaapidecode_internal_flush().
2) The new flush() vmethod always does a hard, full reset.
v3: 1) Call gst_vaapidecode_internal_flush() first in the flush() vmethod, in order
to gather all collected data with gst_video_decoder_have_frame()
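A minimal sketch of the version-guarded vmethod wiring, assuming the usual
class_init structure in vaapidecode (the handler names come from this patch):

    static void
    gst_vaapidecode_class_init (GstVaapiDecodeClass * klass)
    {
      GstVideoDecoderClass *const vdec_class = GST_VIDEO_DECODER_CLASS (klass);

      /* flush() replaces the deprecated reset() starting with GStreamer 1.2 */
    #if GST_CHECK_VERSION(1,2,0)
      vdec_class->flush = GST_DEBUG_FUNCPTR (gst_vaapidecode_flush);
    #else
      vdec_class->reset = GST_DEBUG_FUNCPTR (gst_vaapidecode_reset);
    #endif
    }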
https://bugzilla.gnome.org/show_bug.cgi?id=742922
Signed-off-by: Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
In commit 2f8c115 (vaapidecode: use the query virtual methods in 1.4)
a bug was introduced: when calling the parent's query function for the
src pad, the sink pad's one was called instead. This patch fixes that
issue.
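A minimal sketch of the corrected chain-up, assuming the 1.4 query vmethods
and parent_class as set up by G_DEFINE_TYPE:

    static gboolean
    gst_vaapidecode_src_query (GstVideoDecoder * vdec, GstQuery * query)
    {
      /* ... vaapidecode-specific handling first ... */

      /* chain up to the parent's src_query(), not to sink_query() */
      return GST_VIDEO_DECODER_CLASS (parent_class)->src_query (vdec, query);
    }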
https://bugzilla.gnome.org/show_bug.cgi?id=746248
Sometimes, for example, when switching video streams but keeping
the same sink, the surface will be released after the decoder is
stopped and replaced. This caused a crash because the release
callback was called on an invalid pointer.
The patch adds an additional reference to the decoder object in the buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=745189
Signed-off-by: Olivier Crete <olivier.crete@collabora.com>
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
GstVideoDecoder, the base class of vaapidecode, added support for
pad queries as virtual methods. This patch enables the use of that
support, while keeping compatibility with older GStreamer versions.
This patch is important because GstVideoDecoder takes care of other
queries that might matter for pipeline management.
v2: 1) rebase to current master
2) fix indentation with gst-indent
3) simplify the patch layout
4) fix the context query
5) initialise the filter to NULL
6) improve the query log message for gst-1.2
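A minimal sketch of how the vmethods could be hooked up only when the
installed GStreamer provides them; the handler names are illustrative:

    #if GST_CHECK_VERSION(1,4,0)
      /* GstVideoDecoder gained src_query()/sink_query() vmethods in 1.4 */
      vdec_class->src_query = GST_DEBUG_FUNCPTR (gst_vaapidecode_src_query);
      vdec_class->sink_query = GST_DEBUG_FUNCPTR (gst_vaapidecode_sink_query);
    #endif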
https://bugzilla.gnome.org/show_bug.cgi?id=744406
Because the decoder uses the thread from handle_frame() to decode a frame,
the src pad task creates an unsolvable AB-BA deadlock between
handle_frame() waiting for a free surface and decode_loop() pushing
decoded frames out.
Instead, have handle_frame() take responsibility for pushing surfaces,
and remove the deadlock completely. If you need a separate thread
downstream, you can insert a queue between vaapidecode and its downstream
to get one.
Another justification for the single-thread implementation is that
there are too many points of locking in gstreamer-vaapi's current
implementation, which can lead to deadlocks.
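A minimal sketch of the resulting single-thread flow; the helper names are
hypothetical and the real handle_frame() does considerably more:

    static GstFlowReturn
    gst_vaapidecode_handle_frame (GstVideoDecoder * vdec,
        GstVideoCodecFrame * frame)
    {
      GstFlowReturn ret;

      /* decode in the streaming thread, waiting for a free VA surface if
       * necessary (no separate src pad task anymore) */
      ret = gst_vaapidecode_decode (vdec, frame);
      if (ret != GST_FLOW_OK)
        return ret;

      /* push whatever frames came out, from this very same thread */
      return gst_vaapidecode_push_decoded_frames (vdec);
    }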
https://bugzilla.gnome.org/show_bug.cgi?id=742605
Signed-off-by: Simon Farnsworth <simon.farnsworth@onelan.co.uk>
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
vaapidecode keeps an output state that uses the format
GST_VIDEO_FORMAT_ENCODED, while it crafts different src caps
for a correct negotiation.
I don't see the rationale behind this decoupling; it looks like
unnecessary complexity. This patch simplifies this logic, keeping
the output state and the src caps in sync.
This patch improves the readability of the function
gst_vaapidecode_update_src_caps() and simplifies its logic. Also,
the patch checks whether the buffer pool has the configuration for
the GL texture upload meta, in order to set the caps feature
meta:GLTextureUpload. Otherwise, the I420 format is set back.
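A minimal sketch of that check, assuming the GL texture upload bufferpool
option and caps feature macros from gst-plugins-base; pool, state and format
come from the surrounding update_src_caps() code:

    GstStructure *config = gst_buffer_pool_get_config (pool);

    if (gst_buffer_pool_config_has_option (config,
            GST_BUFFER_POOL_OPTION_VIDEO_GL_TEXTURE_UPLOAD_META)) {
      /* the pool can upload to GL textures: advertise the caps feature */
      gst_caps_set_features (state->caps, 0,
          gst_caps_features_new
          (GST_CAPS_FEATURE_META_GST_VIDEO_GL_TEXTURE_UPLOAD_META, NULL));
    } else {
      /* otherwise fall back to plain, mappable I420 output */
      format = GST_VIDEO_FORMAT_I420;
    }
    gst_structure_free (config);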
https://bugzilla.gnome.org/show_bug.cgi?id=744618
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
When vaapidecode finishes the decoding of a frame and pushes it,
and it was determined in the decide_allocation() method that the
next element supports the GL texture upload meta feature, the
decoder adds the meta to the buffer.
Nonetheless, in the same spirit as commit 71d3ce4d, the
determination of whether the next element supports the GL texture
upload meta needs to check both the preferred caps feature *and*
whether the allocation query requests the API type.
This patch first removes the unused variable need_pool, and
determines the attribute has_texture_upload_meta using the
preferred caps feature *and* the allocation query.
Also, the feature passed to GstVaapiPluginBase is no longer
determined by has_texture_upload_meta, but by the computed
preferred one.
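A minimal sketch of the combined check; GST_VAAPI_CAPS_FEATURE_GL_TEXTURE_UPLOAD_META
is the plugin's enum value for the preferred caps feature, and feature/query
come from decide_allocation():

    /* both the preferred caps feature and the ALLOCATION query must agree */
    has_texture_upload_meta =
        (feature == GST_VAAPI_CAPS_FEATURE_GL_TEXTURE_UPLOAD_META) &&
        gst_query_find_allocation_meta (query,
            GST_VIDEO_GL_TEXTURE_UPLOAD_META_API_TYPE, NULL);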
https://bugzilla.gnome.org/show_bug.cgi?id=744618
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
Currently the src caps are set immediately after the sink caps are set, but at
that moment the pipeline might not be fully constructed and the video sink has
not negotiated its supported caps and features. As a consequence, in many
playback cases, the least optimized caps feature is forced. This is partially
responsible for bug #744039.
Also, vaapidecode doesn't handle reconfigure events from downstream,
which is a problem too, since the video sink can be replaced by one with
different caps features.
This patch delays setting the src caps until the first frame arrives at
the decoder, assuming that by that very moment the whole pipeline is already
negotiated. In particular, it checks whether the src pad needs to be
reconfigured, as a consequence of a reconfigure event from downstream.
A key part of this patch is the new GstVaapiCapsFeature
GST_VAAPI_CAPS_FEATURE_NOT_NEGOTIATED, which is returned when the src pad
doesn't have a peer yet. Also, for a better report of the caps allowed
through the src pad and its peer, this patch uses gst_pad_get_allowed_caps()
instead of gst_pad_peer_query_caps() when looking for the preferred feature.
v3: move the input_state unref to close(), since videodecoder resets at
some events such as navigation.
v4: a) the state_changed() callback replaces the input_state if the media
changed, so this case is also handled.
b) since the parameter ref_state in gst_vaapidecode_update_src_caps() is
always the input_state, the parameter was removed.
c) there was a lot of repeated code handling the input_state, so I
refactored it into the function gst_vaapi_decode_input_state_replace().
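A minimal sketch of the deferred negotiation at the top of frame handling,
assuming the parameter-less update helper described in v4:

    /* negotiate the src caps lazily, i.e. once the whole pipeline exists,
     * or whenever downstream requested a reconfiguration */
    if (gst_pad_check_reconfigure (GST_VIDEO_DECODER_SRC_PAD (vdec))) {
      if (!gst_vaapidecode_update_src_caps (decode))
        return GST_FLOW_NOT_NEGOTIATED;
    }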
https://bugzilla.gnome.org/show_bug.cgi?id=744618
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
Some frameworks (EFL) expect BGRA textures for storage. However,
adding support for that broadly into GStreamer framework implies
two kinds of hacks: (i) libgstgl helpers currently do not support
BGRA textures correctly, (ii) we need to better parse downstream
suggested caps and intersect them with what the VA plugin elements
can offer to them for GL texturing.
Add initial support for EGL through GstVideoGLTextureUploadMeta.
Fix gst_vaapi_ensure_display() to allocate a GstVaapiDisplay off the
downstream supplied GstGLContext configuration, i.e. use its native
display handle to create a GstVaapiDisplay of type X11 or Wayland ;
and use the desired OpenGL API to allocate the GstVaapiDisplayEGL
wrapper.
https://bugzilla.gnome.org/show_bug.cgi?id=741079
Reset the VA decoder after updating the base plugin caps, and most
importantly, after GstVideoDecoder negotiation. The reason behind
this is that the negotiation could trigger a last decide_allocation()
where we could actually derive a new GstVaapiDisplay to use from the
downstream element. e.g. GLX backend.
Query caps filtering should always be done on top of the allowed caps instead
of the existing fixed caps on a particular pad.
This fixes MVC stream decoding when there is a base view (high profile)
and a non-base view (stereo-high profile).
According to the documentation[1], when receiving a GST_QUERY_CAPS
the return value should be all formats that this element supports,
taking into account limitations of peer elements further downstream
or upstream, sorted by order of preference, highest preference first.
This patch adds those limitations by intersecting with the filter
received in the query. It also takes into account the already
negotiated caps, and adds the processing of the query on the SRC pad.
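A minimal sketch of the filtered caps query, using the pad template caps as
the base set; the real implementation also folds in the peer limitations and
the negotiated caps:

    static GstCaps *
    gst_vaapidecode_get_caps (GstPad * pad, GstCaps * filter)
    {
      GstCaps *caps, *result;

      caps = gst_pad_get_pad_template_caps (pad);
      if (filter) {
        result = gst_caps_intersect_full (filter, caps,
            GST_CAPS_INTERSECT_FIRST);
        gst_caps_unref (caps);
      } else {
        result = caps;
      }
      return result;
    }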
1. http://gstreamer.freedesktop.org/data/doc/gstreamer/head/pwg/html/section-nego-getcaps.html
https://bugzilla.gnome.org/show_bug.cgi?id=744406
Otherwise the condition could become true before the lock is taken,
and g_cond_signal() could be called before g_cond_wait(), so
g_cond_wait() would never be woken up.
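The general pattern, sketched with generic names: the predicate is only ever
tested and changed while the mutex is held.

    /* waiting side */
    g_mutex_lock (&mutex);
    while (!condition)
      g_cond_wait (&cond, &mutex);
    g_mutex_unlock (&mutex);

    /* signalling side */
    g_mutex_lock (&mutex);
    condition = TRUE;
    g_cond_signal (&cond);
    g_mutex_unlock (&mutex);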
https://bugzilla.gnome.org/show_bug.cgi?id=740645
Make sure that the GstVaapiPluginBase instance receives the new src
pad caps whenever they get updated from within the GstVaapiDecoder
decode routines.
This also ensures that downstream elements receive correctly sized
SW decoded buffers if needed.
https://bugs.tizen.org/jira/browse/TC-114
Split GstVideoDecoder::set_format() handler to first update the sink
pad caps and reset the active VA decoder instance based on those, and
then update the src pad caps whenever possible, e.g. when the caps
specify a valid video resolution.
In the current implementation, gst_video_info_set_format() would reset
the whole GstVideoInfo structure first, prior to setting the video format
and size. So, collateral information like the framerate or pixel-aspect-
ratio is lost.
Provide and use a unique gst_video_info_change_format() to overcome
this issue, i.e. only have it change the format and video size, and
copy over the rest of the fields.
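A minimal sketch of such a helper; the fields are public GstVideoInfo members,
though the exact set copied back is an assumption:

    static void
    gst_video_info_change_format (GstVideoInfo * vip, GstVideoFormat format,
        guint width, guint height)
    {
      GstVideoInfo vi = *vip;       /* keep a copy of the original fields */

      gst_video_info_set_format (vip, format, width, height);

      /* restore what set_format() wiped out */
      vip->interlace_mode = vi.interlace_mode;
      vip->flags = vi.flags;
      vip->views = vi.views;
      vip->par_n = vi.par_n;
      vip->par_d = vi.par_d;
      vip->fps_n = vi.fps_n;
      vip->fps_d = vi.fps_d;
    }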
https://bugzilla.gnome.org/show_bug.cgi?id=734665
Add a default decide_allocation() hook to GstVaapiPluginBase. The caps
feature argument can be used to force a bufferpool with a specific kind
of memory.
The gst_vaapidecode_decode_loop() function is called within a separate
task to fetch and output all frames that were decoded so far. So, if
decoder_loop_status is forcibly set to EOS when _finish() is called,
then we are bound to exit the task without submitting the pending
frames.
If the downstream element errored out, then gst_pad_push() would
propagate the error up, so we still get it right and cut off
_finish() early in that case.
This is a regression from 6003596.
https://bugzilla.gnome.org/show_bug.cgi?id=733897
Fixes a hang/race on shutdown where _decode_loop() had already completed
its execution and _finish() was waiting on a GCond for _decode_loop()
to complete. Also fixes the possible race where _finish() is called
but _decode_loop() endlessly returns before signalling completion
if the decoder instance returns GST_FLOW_OK.
Found with: ... ! vaapidecode ! {glimagesink,cluttersink}
https://bugzilla.gnome.org/show_bug.cgi?id=733897
[factored out GST_VIDEO_DECODER_STREAM_UNLOCK() call]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Rework the logic behind the configuration of an adequate bufferpool,
especially when the OpenGL meta or additional capsfeatures are needed.
Besides, for GStreamer >= 1.4, the first capsfeature that gets matched,
and that is not system memory, is now selected by default.
Make sure to propagate memory:VASurface capsfeature to srcpad caps
only for GStreamer >= 1.5 as the plug-in elements in GStreamer 1.4
core currently miss additional patches available in 1.5-git (1.6).
This is a temporary workaround.
Always add the VideoAlignment bufferpool option if the downstream element
expects its own pool to be used but does not offer it through a proper
propose_allocation() implementation, for instance, and the ALLOCATION
query does not expose the availability of the Video Meta API.
This fixes propagation of video buffer stride information to Firefox.
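A minimal sketch of that fallback in decide_allocation(); pool and query come
from the surrounding vmethod:

    /* downstream did not advertise GstVideoMeta support: force the
     * VideoAlignment option so stride/padding can still be honoured */
    if (!gst_query_find_allocation_meta (query, GST_VIDEO_META_API_TYPE, NULL)) {
      GstStructure *config = gst_buffer_pool_get_config (pool);

      gst_buffer_pool_config_add_option (config,
          GST_BUFFER_POOL_OPTION_VIDEO_ALIGNMENT);
      gst_buffer_pool_set_config (pool, config);
    }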
Always expose the I420 format by default when the VA surface could be
mapped for interoperability with non hardware-accelerated elements.
However, the default behaviour remains the auto-plugging of vaapi
elements, down to the sink.
Side effect: "direct-rendering" mode is also disabled most of the
time, as a plain memcpy() from uncached speculative write-combining
memory is not going to be efficient enough.
Fix support for the VA surface download capability in the vaapidecode
element for GStreamer >= 1.2. This is a fix for supporting libva-vdpau-driver,
but also libva-intel-driver, while performing hardware accelerated
conversions from the native VA surface format (NV12) to the desired
output VA image format.
For instance, this fixes pipelines involving vaapidecode ! xvimagesink.
https://bugzilla.gnome.org/show_bug.cgi?id=733243
When playbin/decodebin builds the pipeline, it puts decoders and sinks
into different bins and forwards the queries from bin to bin. So in
the initial steps the pipeline is built iteratively by playbin and
looks like this:
[filesrc]
[filesrc] -> [typefind]
[filesrc] -> [typefind] -> [demuxer]
[filesrc] -> [typefind] -> [demuxer] -> [decoder]
At this point the decoder is asked for its SRC caps and it will make a
choice based on what gst_pad_peer_query_caps() returns. The problem is
that the caps returned at that point include caps features like ANY,
essentially because playbin can plug in additional elements like
videoscale, videoconv or deinterlace.
This patch adds another call to
gst_vaapi_find_preferred_caps_feature() when the decoder decides its
allocation, to make sure we ask the downstream elements once the
entire pipeline has been built.
https://bugzilla.gnome.org/show_bug.cgi?id=731645
We can avoid scanning for start codes again if the bitstream is fed
in NALU chunks. Currently, we always scan for start codes, and keep
track of remaining bits in a GstAdapter, even if, in practice, we
are likely receiving one GstBuffer per NAL unit. i.e. h264parse with
"nal" alignment.
https://bugzilla.gnome.org/show_bug.cgi?id=723284
[use gst_adapter_available_fast() to determine the top buffer size]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
gst_video_info_set_format() does not preserve video info properties. In
order to keep important information in the caps such as interlace mode,
framerate, pixel aspect ratio, ... we need to manually copy back those
properties after setting the new video format.
https://bugzilla.gnome.org/show_bug.cgi?id=722276
It can happen that the provided pool does not advertise the
GstVaapiVideoMeta. We should unref that pool before using our own.
Discovered with vaapidecode ! {glimagesink,cluttersink}
https://bugzilla.gnome.org/show_bug.cgi?id=724957
[fixed compilation by adding the missing semi-colon]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Parse any pending data until a complete frame is obtained. This is a
memory optimization to avoid expansion of video packets stuffed into
the GstAdapter, and a fix to EOS condition to detect there is actually
pending data that needs to be decoded, and subsequently output.
https://bugzilla.gnome.org/show_bug.cgi?id=731831
Fix vaapidecode to correctly report caps features downstream, when
a custom pipeline is built manually.
https://bugzilla.gnome.org/show_bug.cgi?id=719372
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Since vaapidecode provides buffer that can be mapped as regular memory,
those caps should be added to the template caps. That only applies to
GStreamer >= 1.2.
https://bugzilla.gnome.org/show_bug.cgi?id=720608
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
vaapidecode hangs when pipeline is stopped without any EOS, e.g. when
<Ctrl>+C is pressed, thus causing the srcpad task to keep running and
locked. This fixes a deadlock on state change from PAUSED to READY.
https://bugzilla.gnome.org/show_bug.cgi?id=720584
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Replace the gst_vaapi_display_get_{decode,encode}_caps() APIs with
more convenient APIs that return an array of GstVaapiProfile instead
of GstCaps: gst_vaapi_display_get_{decode,encode}_profiles().
Move common VA display creation code to GstVaapiPluginBase, with the
default display type remaining "any". Also add a "display-changed"
hook so that subclasses could perform additional tasks when/if the
VA display changed, due to a new display type request for instance.
All plug-ins are updated to cope with the new internal APIs.
Introduce a new GstVaapiPluginBase object that will contain all common
data structures and perform all common tasks. First step is to have a
single place to hold VA displays.
While we are at it, also make sure to store and subsequently release
the appropriate debug category for the subclasses.
Requesting the GLTextureUpload meta on buffers in the bufferpool
prevents such metas from being de-allocated when buffers are released
in the sink.
This is particularly useful in terms of performance when using the
GLTextureUploadMeta API, since the GstVaapiTexture associated with
the target texture is stored in the meta.
https://bugzilla.gnome.org/show_bug.cgi?id=712558
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Fix GstElement::set_context() implementation for all plug-in elements
to avoid leaking an extra reference to the VA display, thus preventing
correct cleanup of VA resources in GStreamer 1.2 builds.
There are situations where gst_video_decoder_flush() is called, and
this subsequently produces a gst_video_decoder_reset() that kills the
currently active GstVideoCodecFrame. This means that it no longer
exists by the time we reach GstVideoDecoder::finish() callback, thus
possibly resulting in a crash if we assumed spare data was still
available for decode (current_frame_size > 0).
Try to honour the GstVideoDecoder::reset() behaviour from GStreamer 1.0,
which means a flush, thus performing the actual operations there, like
calling gst_video_decoder_have_frame() if pending data is available.
Review all interactions between the main video decoder stream thread
and the decode task to derive a correct sequence of operations for
decoding. Also avoid extra atomic operations that become implicit under
the GstVideoDecoder stream lock.
Fix hard reset for seek cases by flushing the GstVaapiDecoder queue
and completely purge any decoded output frame that may come out from
it. At this stage, the GstVaapiDecoder shall be in a complete clean
state to start decoding over new buffers.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
vaapidecode used to wait up to one second past the expected time of
presentation for the last decoded frame. This is not realistic in
practice when it comes to video pause/resume. Changed behaviour to
unconditionally wait for a free VA surface prior to continuing the
decoding. The decode task will continue pushing the output frames to
the downstream element while also reporting errors at the same time
to the main thread.
https://bugzilla.gnome.org/show_bug.cgi?id=707108
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
The srcpad caps exposed for GStreamer 1.2 were missing any useful info
like framerate, pixel-aspect-ratio, interlace-mode et al. Not to mention
that it relied on possibly un-initialized data. Fix srcpad caps to be
initialized from a sanitized copy of GstVideoDecoder output state caps.
Note: the correct way to expose the srcpad caps triggers an additional
issue in core GStreamer auto-plugging capabilities as the correct caps
to be exposed should be format=ENCODED with memory:VASurface caps feature
at the minimum. In some situations, we could determine the underlying
VA surface format, but this is not always possible. e.g. cases where it
is not allowed to expose the underlying VA surface data, or when the
VA driver implementation cannot actually provide such information.
Currently, the decoder only supports YUV 4:2:0 output. So, expose the
output formats for GStreamer 1.2 in caps to a realistic subset. This
means NV12, I420 or YV12 but also ENCODED if we cannot determine the
underlying VA surface format, or if it is actually not allowed to get
access to the surface contents.
Fix vaapidecode srcpad caps to only expose RGBA video format for the
meta:GstVideoGLTextureUploadMeta feature. That's only what is supported
so far. Besides, drop this meta from the vaapisink sinkpad caps since
we really don't support that for rendering.
https://bugzilla.gnome.org/show_bug.cgi?id=711828
Add gst_caps_set_interlaced() helper function that would reset the
interlace-mode field to "progressive" for GStreamer >= 1.0, or the
interlaced field to "false" for GStreamer 0.10.
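A minimal sketch of such a helper, assuming it only needs the caps; the
in-tree signature may differ:

    static void
    gst_caps_set_interlaced (GstCaps * caps)
    {
    #if GST_CHECK_VERSION(1,0,0)
      gst_caps_set_simple (caps, "interlace-mode", G_TYPE_STRING,
          "progressive", NULL);
    #else
      gst_caps_set_simple (caps, "interlaced", G_TYPE_BOOLEAN, FALSE, NULL);
    #endif
    }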
If the allocation meta GST_VIDEO_GL_TEXTURE_UPLOAD_META_API_TYPE is
requested, and more specifically under a GLX configuration, then add
the GstVideoGLTextureUploadMeta to the output buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=703236
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Move VA video buffer memory from "video/x-surface,type=vaapi" format,
as expressed in caps, to the more standard use of caps features. i.e.
add "memory:VASurface" feature attribute to the associated caps.
https://bugzilla.gnome.org/show_bug.cgi?id=703271
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Fix gst_vaapidecode_query() to correctly display the query type name,
instead of randomly displaying that we shared the underlying display.
Also add debug info for the GstVaapiSink::query() handler, i.e. the
supplied query type name actually.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Add support for the new GstContext API from GStreamer 1.2.x.
- implement the GstElement::set_context() hook ;
- reply to the `context' query from downstream elements.
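A minimal sketch of the query reply, assuming the display is exposed under a
"gst.vaapi.Display" context type (the type name is an assumption) and that
context/res come from the surrounding query handler:

    case GST_QUERY_CONTEXT:{
      const gchar *context_type = NULL;

      if (gst_query_parse_context_type (query, &context_type) &&
          g_strcmp0 (context_type, "gst.vaapi.Display") == 0 && context) {
        /* hand out the GstContext wrapping our GstVaapiDisplay */
        gst_query_set_context (query, context);
        res = TRUE;
      }
      break;
    }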
https://bugzilla.gnome.org/show_bug.cgi?id=703235
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Port vaapidecode and vaapisink plugins to GStreamer API >= 1.2. This
is rather minimalistic, just enough to test the basic functionality.
Disable vaapipostproc plugin for now as further polishing is needed.
Also disable GstVideoContext interface support since this API is now
gone in 1.2.x. This is preparatory work for GstContext support.
https://bugzilla.gnome.org/show_bug.cgi?id=703235
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Fix _getcaps() implementation to not report codecs with size information
filled in the returned caps. That's totally useless nowadays. Ideally,
this is a hint to insert a video parser element, thus allowing future
optimizations, but this is not a strict requirement for gstreamer-vaapi,
which is able to parse the elementary bitstreams itself.
https://bugzilla.gnome.org/show_bug.cgi?id=704734
If there is no frame delimiter at the end of the stream, e.g. no
end-of-stream or end-of-sequence marker, and the current frame was
fully and correctly parsed, then assume that last frame is complete
and submit it to the decoder.
https://bugzilla.gnome.org/show_bug.cgi?id=705123
Signed-off-by: Guangxin.Xu <Guangxin.Xu@intel.com>
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Add support for GstVideoCropMeta in GStreamer >= 1.0.x builds and gst-vaapi
specific meta information to hold video cropping details. Make the sink
support video cropping in X11 and GLX modes.
Add gst_vaapi_decoder_get_frame_with_timeout() helper function that will
wait for a frame to be decoded, until the specified timeout in microseconds,
prior to returning to the caller.
This is a fix for a performance regression from 851cc0, whereby the vaapidecode
loop executed on the srcpad task was called too often, thus starving all CPU
resources.
Rework heuristics to detect when downstream element ran into errors,
and thus failing to release any VA surface in due time for the current
frame to get decoded. In particular, recalibrate the render time base
when the first frame gets submitted downstream, or when there is no
timestamp that could be inferred.
Rework GstVideoDecoder::handle_frame() to decode the current frame,
while possibly waiting for a free surface, and separately submit all
decoded frames from a task. This makes it possible to pop and render
decoded frames as soon as possible.
Add support for interlaced streams with GStreamer 1.0 too. Basically,
this enables vaapipostproc, though it is not auto-plugged yet. We also
make sure to reply to CAPS queries, and happily handle CAPS events.
Make gst_vaapi_decoder_get_codec_state() return the original codec state,
i.e. make the GstVaapiDecoder object own the returned state so that callers
that want an extra reference to it would just gst_video_codec_state_ref()
it before usage. This aligns the behaviour with what we had before with
gst_vaapi_decoder_get_caps().
This is an ABI incompatible change, library major version was bumped from
previous release (0.5.2).
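A minimal usage sketch of the new ownership rule: the decoder keeps its
reference, so a caller refs the state only if it needs to keep it around.

    /* the decoder owns the returned state; take a reference to keep it */
    GstVideoCodecState *state =
        gst_video_codec_state_ref (gst_vaapi_decoder_get_codec_state (decoder));

    /* ... use the codec state ... */

    gst_video_codec_state_unref (state);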
Handle GST_QUERY_CAPS, which is the GStreamer 1.0 mechanism to retrieve
the set of allowed caps, i.e. it works similar to GstPad::get_caps().
This fixes fallback to SW decoding if no HW decoder is available.
Introduce a new configure option --with-gstreamer-api that determines
the desired GStreamer API to use. By default, GStreamer 1.0 is selected.
Also integrate more compatibility glue into gstcompat.h and plugins.
Use new GstVaapiVideoBufferPool to maintain video buffers. Implement
GstBaseSink::propose_allocation() to expose that pool to upstream
elements; and also implement GstVideoDecoder::decide_allocation() to
actually use that pool (from downstream), if any, or create one.
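A minimal sketch of the propose_allocation() side, exposing the pool and
GstVideoMeta support to upstream; the sink's pool and video_info fields are
assumptions:

    static gboolean
    gst_vaapisink_propose_allocation (GstBaseSink * base_sink, GstQuery * query)
    {
      GstVaapiSink *const sink = GST_VAAPISINK (base_sink);

      /* advertise our VA-backed bufferpool and the GstVideoMeta API */
      gst_query_add_allocation_pool (query, sink->pool,
          GST_VIDEO_INFO_SIZE (&sink->video_info), 0, 0);
      gst_query_add_allocation_meta (query, GST_VIDEO_META_API_TYPE, NULL);
      return TRUE;
    }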
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Port vaapidecode and vaapisink plugins to GStreamer API >= 1.0. This
is rather minimalistic, just enough to test the basic functionality.
Disable vaapiupload, vaapidownload and vaapipostproc plugins. The latter
needs polishing with respect to GStreamer 1.x functionality, and the former
are totally phased out in favor of GstVaapiVideoMemory map/unmap facilities,
which are yet to be implemented.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Don't return static caps that don't mean anything for the underlying codecs
that are actually supported for decoding. i.e. always allocate a VA display
and retrieve the exact set of HW decoders available. That VA display may be
re-used later on during negotiation through GstVideoContext "prepare-context".
This fixes fallback to SW decoding if no HW decoder is available.
Move GstVaapiVideoMeta from core libgstvaapi decoding library to the
actual plugin elements. That's only useful there. Also inline reference
counting code from GstVaapiMiniObject.
Use gst_element_class_set_static_metadata() from GStreamer 1.0, which
basically is the same as gst_element_class_set_details_simple() in
GStreamer 0.10 context.
Move GstImplementsInterface and GstVideoContext support functions up
so that to keep a clear separation between the plugin element and its
interface hooks.
Decode-only frames may not have a valid surface proxy. So, simply discard
them gracefully, i.e. don't create meta data information. GstVideoDecoder
base class will properly handle this case and won't try to push any buffer
to downstream elements.
Implement GstVideoDecoder::reset() as a destruction of the VA decoder
and the creation of a new VA decoder.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Split GstVideoDecoder::handle_frame() implementation into two functions:
(i) one for decoding the provided GstVideoCodecFrame and (ii) another one
for purging all decoded frames and submit them downstream.
Update plugin elements with the new GstVaapiVideoMeta API.
This also fixes support for subpictures/overlay, because GstVideoDecoder
generates a sub-buffer from the GstVaapiVideoBuffer, and that sub-buffer
is marked as read-only. However, when the textoverlay element, for
example, comes in, it checks whether the input buffer is writable. Since
that buffer is read-only, a new GstBuffer is created. Since gst_buffer_copy()
does not preserve the parent field, the generated buffer in textoverlay
is not usable because we lost all VA specific information.
Now, with the GstVaapiVideoMeta information attached to a standard GstBuffer,
all information is preserved through gst_buffer_copy(), since the latter
does copy metadata (qdata in this case).
Fix calculation of the time-out value for cases where no VA surface is
available for decoding. In this case, we need to wait until downstream
sink consumed at least one surface. The time-out was miscalculated as
it was always set to <current-time> + one second, which is not suitable
for streams with larger gaps.
Don't call gst_video_decoder_drop_frame() if gst_video_decoder_finish_frame()
was already called before and it returned an error. In that case, we were
releasing the frame again, thus leading to a "double-free" condition.
Maintain decoded surfaces as GstVideoCodecFrame objects instead of
GstVaapiSurfaceProxy objects. The latter will tend to be reduced to
the strict minimum: a context and a surface.
Make sure to push all decoded frames downstream as soon as possible.
This makes sure we don't need to wait for a new frame to be ready to
be decoded before receiving new decoded frames.
This also separates the decode process and the output process. The latter
could be moved to a specific GstTask later on.
Directly use the GstVideoCodecState associated with the VA decoder
instead of parsing caps again.
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Make vaapidecode derive from the standard GstVideoDecoder base element
class. This simplifies the code to the strict minimum for the decoder
element and makes it easier to port to GStreamer 1.x API.
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
GstVaapiSurfaceProxy does not use any particular functionality from
GObject. Actually, it only needs a basic object type with reference
counting.
This is an API and ABI change.
The use of heap-allocated GMutex/GCond is deprecated. Instead, place them
inside the structure they are locking.
These changes switch to g_mutex_init()/g_cond_init() rather than the heap
allocation functions.
Because we cannot test the GMutex/GCond for a NULL pointer, we must
initialise them inside the GObject _init function and clear them inside
_finalize, which is guaranteed to be called only once, and after the
object is no longer in use.
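A minimal sketch of the embedded primitives; the structure and field names
are illustrative:

    struct _GstVaapiDecode {
      /* ... parent instance and other fields ... */
      GMutex decoder_mutex;     /* embedded, not heap-allocated */
      GCond decoder_ready;
    };

    static void
    gst_vaapidecode_init (GstVaapiDecode * decode)
    {
      g_mutex_init (&decode->decoder_mutex);
      g_cond_init (&decode->decoder_ready);
    }

    static void
    gst_vaapidecode_finalize (GObject * object)
    {
      GstVaapiDecode *const decode = GST_VAAPIDECODE (object);

      g_cond_clear (&decode->decoder_ready);
      g_mutex_clear (&decode->decoder_mutex);
      G_OBJECT_CLASS (parent_class)->finalize (object);
    }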
Don't care of the return value for gst_vaapi_decoder_put_buffer()
during destruction of the element. Don't print out (uninitialised)
error code when allocation of video buffer failed.
Reset, i.e. destroy then create, the decoder in the _setcaps() handler only
if the underlying codec type actually changed. This makes it possible
to be more tolerant with certain MPEG-2 streams that get parsed to
form caps that are compatible with the previous state but with minor
changes to "codec-data".
Add new gst_vaapi_codec_from_caps() helper to determine codec type from
the specified caps. Don't globally expose this function since this is
really trivial and only used in the vaapidecode element.
Previously, vaapidecode would wait up to one second until a free surface
is available, or it aborts decoding. Now, vaapidecode waits until the
last decoded surface was to be presented, plus one second. Besides, end
times are now expressed relative to the monotonic clock.
When playback stops the GstVaapiDecode object is reset into a clean
state. However, surfaces may still be referenced by library users and
unreferencing them after the reset triggers an access to an unset mutex.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
If vaapisink is in the GStreamer pipeline, then we shall allocate a
unique GstVaapiDisplay and propagate it upstream. i.e. subsequent
queries from vaapidecode shall get a valid answer from vaapisink.
This flag is obsolete. It was meant to explicitly enable/disable VA/GLX API
support, or fallback to TFP+FBO if this API is not found. Now, we check for
the VA/GLX API by default if --enable-glx is set. If this API is not found,
we now default to use TFP+FBO.
Note: TFP+FBO, i.e. using vaPutSurface() is now also a deprecated usage and
will be removed in the future. If GLX rendering is requested, then the VA/GLX
API shall be used as it covers most usages. e.g. AMD driver can't render to
an X pixmap yet.
GStreamer codecparsers-based decoders are the only supported decoders now.
Though, FFmpeg decoders are still available in gstreamer-vaapi 0.3.x series.
Bump GStreamer required version to 0.10.14, needed for
gst_element_class_set_details_simple().
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Fix typo whereby plain VADisplay type was used instead of the GstVaapiDisplay
wrapper.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Try to gracefully abort when the HW does not support the requested
profile. There is no fallback unless profiles are correctly parsed
and matched through caps beforehand.
Rationale: playbin2 links all elements at run-time. Once vaapidecode
is created and a NEWSEGMENT event arrives, the downstream element may not
be ready yet. So, delay this event until the next element is chained in,
otherwise basesink could output "Received buffer without a new-segment.
Assuming timestamps start from 0".
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Propagate "interlaced" caps downstream and set "tff" buffer flag
appropriately to output buffers for interlaced pictures.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Otherwise, the decoder would always create its own X display instead
of probing it from the downstream element, which is not reliable.
e.g. DISPLAY is not :0 or when running on Wayland.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
With the new video/x-surface abstraction, we can't rely on having a VA
specific sink downstream. Also, there was no particular reason to do that.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
This new interface allows for upstream and downstream display sharing
that works in both static and dynamic pipelines.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>