Improve the check for an upstream element that requires a DMABUF buffer
pool, e.g. the v4l2src element. In particular, make sure to traverse
through any additional capsfilter element.
Note: the traversal to the top-most upstream element could be made
more generic, but so far we are only interested in supporting pipelines
similar to v4l2src or v4l2src ! capsfilter, e.g. with an explicit
specification of a desired video camera format or resolution.
The dmabuf allocator closes the DMABUF handle passed to the init
function gst_dmabuf_allocator_alloc(). So, we need to dup() it in
order to avoid a double close, ultimately in the underlying driver
that owns the DMABUF handle.
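A minimal sketch of the resulting pattern, assuming the caller still
owns the original fd (the wrapper function is illustrative, not part
of the patch):

    /* dup() the DMABUF fd before handing it over: the allocator takes
     * ownership of its copy and close()s it on free, while the driver
     * keeps its own fd intact. Error handling trimmed for brevity. */
    #include <unistd.h>
    #include <gst/allocators/gstdmabuf.h>

    static GstMemory *
    wrap_dmabuf_handle (GstAllocator * allocator, gint fd, gsize size)
    {
      gint dup_fd = dup (fd);

      if (dup_fd < 0)
        return NULL;
      return gst_dmabuf_allocator_alloc (allocator, dup_fd, size);
    }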
vaapidecode keeps an output state that uses the format
GST_VIDEO_FORMAT_ENCODED, while it crafts different src caps
for a correct negotiation.
I don't see the rationale behind this decoupling; it looks like
unnecessary complexity. This patch simplifies this logic by keeping
the output state and the src caps in sync.
This patch improves the readability of the function
gst_vaapidecode_update_src_caps() and simplifies its logic. Also,
the patch validates whether the buffer pool has the configuration for
the GL texture upload meta, in order to set the caps feature
meta:GstVideoGLTextureUploadMeta. Otherwise, the I420 format is set back.
https://bugzilla.gnome.org/show_bug.cgi?id=744618
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
When vaapidecode finishes the decoding of a frame and pushes it, if
the decide_allocation() method determined that the next element
supports the GL texture upload meta feature, the decoder adds that
meta to the buffer.
Nonetheless, in the same spirit as commit 71d3ce4d, the
determination of whether the next element supports the GL texture
upload meta needs to check both the preferred caps feature *and*
whether the allocation query requests the API type.
This patch first removes the unused variable need_pool, and
determines the attribute has_texture_upload_meta using the
preferred caps feature *and* the allocation query.
Also, the feature passed to GstVaapiPluginBase is no longer
determined by has_texture_upload_meta, but by the computed
preferred one.
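A hedged sketch of the combined check; `feature` stands for the
preferred GstVaapiCapsFeature computed during negotiation, and the
surrounding code is simplified:

    gboolean has_texture_upload_meta;

    /* set only when the preferred caps feature is the GL texture
     * upload meta *and* the allocation query requests that meta API */
    has_texture_upload_meta =
        (feature == GST_VAAPI_CAPS_FEATURE_GL_TEXTURE_UPLOAD_META) &&
        gst_query_find_allocation_meta (query,
            GST_VIDEO_GL_TEXTURE_UPLOAD_META_API_TYPE, NULL);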
https://bugzilla.gnome.org/show_bug.cgi?id=744618
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
Currently the src caps are set immediately after the sink caps are set,
but at that moment the pipeline might not be fully constructed and the
video sink has not negotiated its supported caps and features. As a
consequence, in many playback cases, the least optimized caps feature is
forced. This is partially responsible for bug #744039.
Also, vaapidecode doesn't handle the reconfigure events from downstream,
which is a problem too, since the video sink can be changed to one with
different caps features.
This patch delays setting the src caps until the first frame arrives at
the decoder, assuming that by that very moment the whole pipeline is
already negotiated. In particular, it checks whether the src pad needs to
be reconfigured as a consequence of a reconfigure event from downstream.
A key part of this patch is the new GstVaapiCapsFeature
GST_VAAPI_CAPS_FEATURE_NOT_NEGOTIATED, which is returned when the src pad
doesn't have a peer yet. Also, for a better report of the caps allowed
through the src pad and its peer, this patch uses gst_pad_get_allowed_caps()
instead of gst_pad_peer_query_caps() when looking for the preferred feature.
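A sketch of the per-frame check, where vdec is the GstVideoDecoder and
the srcpad_caps_set flag is an illustrative name, not the literal patch:

    /* Negotiate the src caps lazily, on the first decoded frame, and
     * again whenever downstream requested a reconfiguration. */
    GstPad *const srcpad = GST_VIDEO_DECODER_SRC_PAD (vdec);

    if (!decode->srcpad_caps_set || gst_pad_check_reconfigure (srcpad)) {
      if (!gst_vaapidecode_update_src_caps (decode))
        return GST_FLOW_NOT_NEGOTIATED;
    }

Note that gst_pad_check_reconfigure() also clears the NEED_RECONFIGURE
flag, so the reconfigure event is consumed exactly once.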
v3: move the input_state unref to close(), since the video decoder
resets it on some events such as navigation.
v4: a) the state_changed() callback replaces the input_state if the media
changed, so this case is also handled.
b) since the parameter ref_state in gst_vaapidecode_update_src_caps() was
always the input_state, the parameter was removed.
c) there was a lot of repeated code handling the input_state, so I
refactored it into the function gst_vaapi_decode_input_state_replace().
https://bugzilla.gnome.org/show_bug.cgi?id=744618
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
Working on bug #743687, I realized that vaapidecode always adds to its buffer
pool the config option GST_BUFFER_POOL_OPTION_VIDEO_GL_TEXTURE_UPLOAD_META if
the decide_allocation()'s query has GST_VIDEO_GL_TEXTURE_UPLOAD_META_API_TYPE.
Nevertheless, there are occasions where the query has the API type, but the
last negotiated caps don't have the feature meta:GstVideoGLTextureUploadMeta.
Under this contradiction, vaapidecode adds the GLTextureUploadMeta API to its
buffer pool configuration, and adds its buffer's meta to each output buffer,
even if the negotiated caps feature is memory:SystemMemory with I420 color
format.
This kind of output buffer chokes ClutterAutoVideoSink, since it uses a
map that relates caps <-> GL upload method. If it receives a buffer with
the I420 color format, it assumes that it doesn't have a texture upload
meta, because only those with RGB color formats have it. Our buffers,
with the I420 format, say that they have the upload meta too. In that
case the mapped method is a dummy one which does nothing. I reported
this issue in bug #744039 (the patch, obviously, was rejected).
This patch works around the problem: the buffer pool's configuration
option GST_BUFFER_POOL_OPTION_VIDEO_GL_TEXTURE_UPLOAD_META is set if and
only if the query has the GST_VIDEO_GL_TEXTURE_UPLOAD_META_API_TYPE
*and* the negotiated caps feature is meta:GstVideoGLTextureUploadMeta.
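A sketch of the resulting condition in decide_allocation(); `caps`
stands for the negotiated srcpad caps:

    GstCapsFeatures *const features = gst_caps_get_features (caps, 0);

    if (gst_query_find_allocation_meta (query,
            GST_VIDEO_GL_TEXTURE_UPLOAD_META_API_TYPE, NULL) &&
        gst_caps_features_contains (features,
            GST_CAPS_FEATURE_META_GST_VIDEO_GL_TEXTURE_UPLOAD_META)) {
      gst_buffer_pool_config_add_option (config,
          GST_BUFFER_POOL_OPTION_VIDEO_GL_TEXTURE_UPLOAD_META);
    }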
I have tested these patches with gst-master (1.5), gst-1.4 and gst-1.2,
and they seem to work correctly in all of them.
https://bugzilla.gnome.org/show_bug.cgi?id=744618
[adapted to fit current EGL changes]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Add support for GstVideoGLTextureOrientation modes. In particular,
add orientation flags to the GstVaapiTexture wrapper and the GLX
implementations. Default mode is that texture memory is laid out
with top lines first, left row first. Flags indicate whether the
X or Y axis needs to be inverted.
Some frameworks (EFL) expect BGRA textures for storage. However,
adding support for that broadly into GStreamer framework implies
two kinds of hacks: (i) libgstgl helpers currently do not support
BGRA textures correctly, (ii) we need to better parse downstream
suggested caps and intersect them with what the VA plugin elements
can offer to them for GL texturing.
When multiple display servers are available, the glimagesink element
(from GStreamer 1.4) may not be able to derive a global display in
Wayland. Rather, a "window"-specific display is created. In this case,
the GstGLDisplay handle available through GstGLContext is invalid.
So, try to improve heuristics for display server characterisation in
those particular situations.
Add initial support for EGL through GstVideoGLTextureUploadMeta.
Fix gst_vaapi_ensure_display() to allocate a GstVaapiDisplay off the
downstream supplied GstGLContext configuration, i.e. use its native
display handle to create a GstVaapiDisplay of type X11 or Wayland;
and use the desired OpenGL API to allocate the GstVaapiDisplayEGL
wrapper.
https://bugzilla.gnome.org/show_bug.cgi?id=741079
Sync video texture sizes to the GstVideoGLTextureUploadMeta private data,
i.e. GstVaapiVideoMetaTexture, on a regular basis. In particular, we
now update the texture size from the GstVideoMeta, if any, or reset
to some defaults otherwise.
If a GstGLContext is supplied by the downstream element, then make
sure that the VA plugin element gets a display compatible with what
is requested by the GL context, e.g. re-allocate a VA/GLX display
when a GLX context is provided by the downstream element.
Record the GL context supplied by downstream elements. This can be
useful, and further needed, to enforce a run-time check that the GL
context is compatible for use by libgstvaapi, e.g. to check that we
don't create a VA/GLX display for EGL/X11 contexts.
https://bugzilla.gnome.org/show_bug.cgi?id=725643
Original-patch-by: Matthew Waters <ystreet00@gmail.com>
Reset the VA decoder after updating the base plugin caps, and most
importantly, after GstVideoDecoder negotiation. The reason behind
this is that the negotiation could trigger a last decide_allocation()
where we could actually derive a new GstVaapiDisplay to use from the
downstream element. e.g. GLX backend.
Query caps filtering should always be done on top of the allowed caps
instead of the existing fixed caps on a particular pad.
This fixes MVC stream decoding when there is a base view (high profile)
and a non-base view (stereo-high profile).
According to the documentation [1], when receiving a GST_QUERY_CAPS
the return value should be all formats that this element supports,
taking into account limitations of peer elements further downstream
or upstream, sorted by order of preference, highest preference first.
This patch adds those limitations, intersecting them with the filter
received in the query. It also takes into account the already negotiated
caps, and adds the processing of the query on the SRC pad.
1. http://gstreamer.freedesktop.org/data/doc/gstreamer/head/pwg/html/section-nego-getcaps.html
https://bugzilla.gnome.org/show_bug.cgi?id=744406
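A sketch of the intersection step only; `caps` is assumed to hold the
element's supported caps, already restricted by the negotiated and
peer caps:

    GstCaps *filter = NULL, *result;

    gst_query_parse_caps (query, &filter);
    if (filter) {
      result = gst_caps_intersect_full (filter, caps,
          GST_CAPS_INTERSECT_FIRST);
      gst_caps_unref (caps);
    } else {
      result = caps;
    }
    gst_query_set_caps_result (query, result);
    gst_caps_unref (result);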
Otherwise the condition could become true before the lock is taken,
and g_cond_signal() could be called before g_cond_wait(), so
g_cond_wait() would never be woken up.
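The canonical GLib pattern this refers to, with illustrative variable
names:

    /* waiting side: evaluate the predicate under the same mutex the
     * signalling side holds, so no signal can be lost */
    g_mutex_lock (&mutex);
    while (!done)               /* loop also guards spurious wakeups */
      g_cond_wait (&cond, &mutex);
    g_mutex_unlock (&mutex);

    /* signalling side */
    g_mutex_lock (&mutex);
    done = TRUE;
    g_cond_signal (&cond);
    g_mutex_unlock (&mutex);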
https://bugzilla.gnome.org/show_bug.cgi?id=740645
The GstVaapiVideoMemory quark is not needed any more, and the actual
implementation was already removed before the merge, i.e. this is
an oversight for a hunk that was not meant to be pushed.
Allow the v4l2src element to be connected to vaapipostproc or vaapisink
when "io-mode" is set to "dmabuf-import". In practice, this is a more
likely operational mode with uvcvideo. Supporting v4l2src with "io-mode"
set to "dmabuf" could work too, but with more demanding driver or kernel
requirements.
Note: with GStreamer 1.4, v4l2src (gst-plugins-good) needs to be built
with --without-libv4l2.
https://bugzilla.gnome.org/show_bug.cgi?id=743635
Allow imports of v4l2 buffers into VA surfaces for further operation
with vaapi plugins, e.g. vaapipostproc or vaapiencode_* elements.
https://bugzilla.gnome.org/show_bug.cgi?id=735362
[fixed memory leaks, ported to new dma_buf infrastructure, cleanups]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Rework the surface pool allocation helpers so as to allow for a simple
form, e.g. gst_vaapi_surface_pool_new(format, width, height); and a
somewhat more elaborate/flexible form with optional allocation flags
and a precise GstVideoInfo specification.
This is an API/ABI change, and SONAME version needs to be bumped.
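Illustrative usage of the two forms; the argument lists follow the
commit description above, and the _full() variant name is an
assumption:

    /* simple form: format plus dimensions */
    pool = gst_vaapi_surface_pool_new (display, GST_VIDEO_FORMAT_NV12,
        width, height);

    /* elaborate form: full GstVideoInfo plus allocation flags */
    pool = gst_vaapi_surface_pool_new_full (display, &vinfo, flags);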
Add a gst_vaapi_display_has_opengl() helper function to help determine
whether the display can support an OpenGL context bound to it, i.e.
whether the class is of type GST_VAAPI_DISPLAY_TYPE_GLX.
Make gst_vaapi_display_get_display_type() return the actual VA display
type. Conversely, add a gst_vaapi_display_get_class_type() function to
return the type of the GstVaapiDisplay instance. The former is used to
identify the display server onto which the application is running, and
the latter to identify the original object class.
GstVaapiTexture is a generic abstraction that could be moved to the
core libgstvaapi library. While doing this, no extra dependency needs
to be added. This means that a GstVaapiTextureClass is now available
for any specific code that needs to be added, e.g. creation of the
underlying GL texture objects, or backend dependent ways to upload
a surface to the texture object.
Generic OpenGL data types (GLuint, GLenum) are also replaced with a
plain guint.
https://bugzilla.gnome.org/show_bug.cgi?id=736715
The gst_vaapi_texture_put_surface() function is missing a crop_rect
argument that would be used during transfer for cropping the source
surface to the desired dimensions.
Note: from a user's point of view, the GstVaapiTexture object should be
created with the cropped size. That's the default behaviour in software
decoding pipelines that we need to cope with.
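The updated prototype would look as follows; the exact parameter types
are assumed from the description:

    gboolean
    gst_vaapi_texture_put_surface (GstVaapiTexture * texture,
        GstVaapiSurface * surface, const GstVaapiRectangle * crop_rect,
        guint flags);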
This is an API/ABI change, and SONAME version needs to be bumped.
https://bugzilla.gnome.org/show_bug.cgi?id=736712
dvbsuboverlay signals no subtitles present by not setting
GstVideoOverlayCompositionMeta on a buffer.
Detect this, and remove subtitles whenever we have no overlay composition to
hand.
Signed-off-by: Simon Farnsworth <simon.farnsworth@onelan.co.uk>
Otherwise restarting may fail because the states of vaapipluginbase and
vaapipostproc don't match, e.g. gst_vaapipostproc_set_caps() will skip
initialization and not call gst_vaapi_plugin_base_set_caps().
Make sure that the GstVaapiPluginBase instance receives the new src
pad caps whenever they get updated from within the GstVaapiDecoder
decode routines.
This also ensures that downstream elements receive correctly sized
SW decoded buffers if needed.
https://bugs.tizen.org/jira/browse/TC-114
Split GstVideoDecoder::set_format() handler to first update the sink
pad caps and reset the active VA decoder instance based on those, and
then update the src pad caps whenever possible, e.g. when the caps
specify a valid video resolution.
On some occasions, a buffer pool is created for pre-initialization
purposes regardless of whether a valid image size is available or
not. However, during actual decode stage, the vaapidecode element
is expected to update the srcpad caps with the new dimensions, thus
also triggering a reset of the underlying bufferpool.
If the best downstream capsfeature turns out to be GLMemory, then make
sure to propagate RGBA video format in caps to that element. This fixes
the following pipeline: ... ! vaapipostproc ! glimagesink.
When an explicit output video format is selected, from a src pad
capsfilter, make sure that the downstream element actually supports
that format. In particular, fix crash with the following pipelines:
... ! vaapipostproc ! video/x-raw,format=XXX ! xvimagesink ; where
XXX is a format not supported by xvimagesink.
While doing so, also reduce the set of src pad filter caps to the
actual set of allowed src pad caps.
The ensure_surface() and ensure_image() functions shall only relate
to the underlying backing store. The actual current flags are to be
updated only through ensure_{surface,image}_is_current(), or in other
very particular cases in the GstMemory hooks.
If the surface proxy is updated into the GstVaapiVideoMemory, then
it is assumed it is the most current representation of the current
video frame. Likewise, make a few more arrangements to have the
"current" flags set more consistently.
When interacting with SW elements, the buffers and underlying video
memory could be mapped as read/write. However, when we need to use those
buffers again as plain VA surfaces, we have to make sure the VA image
is committed back to the VA surface memory.
This fixes pipelines involving avdec_* and vaapi{postproc,sink}.
Hence vaapisink can display buffers decoded by gst-libav, or HW decoded
buffers can be further processed in-place, e.g. with a textoverlay.
https://bugzilla.gnome.org/show_bug.cgi?id=704078
[ported to current git master branch, amended commit message]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
In the current implementation, gst_video_info_set_format() resets
the whole GstVideoInfo structure first, prior to setting the video
format and size. So, collateral information like the framerate or
pixel-aspect-ratio is lost.
Provide and use a unique gst_video_info_change_format() to overcome
this issue, i.e. only have it change the format and video size, and
copy over the rest of the fields.
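A sketch of such a helper, assuming only the format and dimensions
need to change and that the listed fields are the ones worth
preserving:

    static void
    gst_video_info_change_format (GstVideoInfo * vip,
        GstVideoFormat format, guint width, guint height)
    {
      GstVideoInfo vi = *vip;     /* save fields that would be reset */

      gst_video_info_set_format (vip, format, width, height);
      vip->interlace_mode = vi.interlace_mode;
      vip->flags = vi.flags;
      vip->views = vi.views;
      vip->par_n = vi.par_n;      /* pixel-aspect-ratio */
      vip->par_d = vi.par_d;
      vip->fps_n = vi.fps_n;      /* framerate */
      vip->fps_d = vi.fps_d;
    }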
https://bugzilla.gnome.org/show_bug.cgi?id=734665
This is for helping decodebin to autoplug the vaapidecode element.
Decodebin selects decoder elements based only on rank and caps.
Without overriding the autoplug-* signals there is no way to autoplug
HW decoders inside decodebin. An easier solution is to raise the
rank of vaapidecode, so that it gets selected first.
Added the same Klass specifications used in other upstream
video postprocessing elements like videoconvert, videoscale,
videobalance and deinterlace.
An example use case for this is to help playsink
to autoplug the hardware accelerated deinterlacer.
This is a workaround until auto-plugging is fixed when
format=ENCODED + the memory:VASurface caps feature are provided.
Use the downstream negotiated video format as the output video format
if the user didn't explicitly ask for a colorspace conversion.
Use case: this helps to connect elements like videoscale, videorate,
etc. to vaapipostproc.
https://bugzilla.gnome.org/show_bug.cgi?id=739443
Add new "scale-method" property to expose the scaling mode to use during
video processing. Note that this is only a hint, and the actual behaviour
may differ from implementation (VA driver) to implementation.
Attach the codec_data to out_caps only if downstream needs it.
E.g. the h264 encoder doesn't need to stuff codec_data into the
src caps if the negotiated caps have a stream format of byte-stream.
https://bugzilla.gnome.org/show_bug.cgi?id=734902
Fix arguments to XkbKeycodeToKeysym() for converting an X11 keycode
to a KeySym. In particular, there is no such Window argument. Also
make sure to check for, and use, the correct <X11/XKBlib.h> header
where that new function is defined. Otherwise, default to the older
XKeycodeToKeysym() function.
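A sketch of the resulting fallback; HAVE_XKBLIB_H stands for the
assumed configure-time check:

    #ifdef HAVE_XKBLIB_H
    # include <X11/XKBlib.h>
    #else
    # include <X11/Xlib.h>
    #endif

    static KeySym
    keycode_to_keysym (Display * dpy, unsigned int keycode)
    {
    #ifdef HAVE_XKBLIB_H
      return XkbKeycodeToKeysym (dpy, keycode, 0, 0); /* group, level */
    #else
      return XKeycodeToKeysym (dpy, keycode, 0);      /* deprecated */
    #endif
    }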
Really use the motion event coordinates to propagate the "mouse-move"
event to upper layer, instead of those from a button event. Those are
technically the same though.
When we copy a buffer because we're moving it into VA-API memory, we
need to copy flags. Otherwise, interlaced YUV buffers from a capture
source (e.g. V4L2) don't get flagged as interlaced.
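A minimal sketch of the copy; outbuf/inbuf are illustrative names:

    /* copy buffer flags (e.g. interlacing) and timestamps into the
     * VA-API backed buffer, over the whole buffer (offset 0, size -1) */
    gst_buffer_copy_into (outbuf, inbuf,
        GST_BUFFER_COPY_FLAGS | GST_BUFFER_COPY_TIMESTAMPS, 0, -1);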
https://bugzilla.gnome.org/show_bug.cgi?id=726270
Signed-off-by: Simon Farnsworth <simon.farnsworth@onelan.co.uk>
[reversed order of gst_buffer_copy_into() flags to match <1.0 code]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Allow implicit conversions to raw video formats, while still keeping
VA surfaces underneath. This allows for chaining the vaapipostproc
element to a software-only element that takes care of maps/unmaps.
e.g. xvimagesink.
https://bugzilla.gnome.org/show_bug.cgi?id=720174
Use pooled GstVaapiVideoMeta information, i.e. always allocate that on
video buffer allocation. Also optimize copy of additional metadata info
into the resulting video buffer: only copy the video cropping info and
the source surface proxy.
https://bugzilla.gnome.org/show_bug.cgi?id=720311
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
[fixed proxy leak, fixed double free on error, optimized meta copy]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
If no explicit output surface format is supplied try to keep the one
supplied through the sink pad caps. This avoids a useless copy, even
if things are kept in GPU memory.
This is a performance regression from git commit dfa70b9.
If only advanced deinterlacing is requested, i.e. deinterlacing is
the only active algorithm to apply with source and output surface
formats being the same, then make sure to enable VPP processing.
Otherwise, allow fallback to bob-deinterlacing with simple rendering
flags alteration.
Add a default decide_allocation() hook to GstVaapiPluginBase. The caps
feature argument can be used to force a bufferpool with a specific kind
of memory.
Add a GST_VAAPI_VIDEO_BUFFER_POOL_ACQUIRE_FLAG_NO_ALLOC params flag that
can be used to disable early allocations of vaapi video metas on buffers,
thus delegating that to the bufferpool user.
Default to the I420 format for output surfaces so as to match the usual
GStreamer pipelines. Though, internally, we could still opt for NV12
surface formats, i.e. the default format=ENCODED is a hint for that,
thus delegating the decision to the VA driver.
Use the new gst_caps_has_vaapi_surface() helper function to detect
whether the sink pad caps contain native VA surfaces, or not, i.e.
no raw video caps.
Also rename is_raw_yuv to get_va_surfaces to make the variable more
explicit, as we actually just want a way to differentiate raw video
caps from VA surfaces.
The "discontinuity" tracking code, whereby lost frames are tentatively
detected, is inoperant if the sink pad buffer timestamps are not right
to begin with.
This is a temporary workaround until the following bug is fixed:
https://bugzilla.gnome.org/show_bug.cgi?id=734386
In order to make the discontinuity detection code useful, we need to
detect the lost frames in the history as early as the previous frame.
This is because some VA implementations only support one reference
frame for advanced deinterlacing.
In practice, turn the condition for detecting a new frame that is beyond
the previous frame from field_duration*2 to field_duration*3, i.e.
nothing received for the past frame and a half, because of possible
rounding errors when calculating the field duration either in this
element (vaapipostproc), or in the upstream element (parser element).
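A sketch of the relaxed test; variable names are illustrative:

    /* more than a frame and a half elapsed since the previous frame:
     * treat the intermediate fields as lost and reset the history */
    if (timestamp >= prev_timestamp + 3 * field_duration)
      reset_deinterlace_history (postproc);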
This is a regression introduced with commit faefd62.
https://bugzilla.gnome.org/show_bug.cgi?id=734135
Introduce a new gst_caps_has_vaapi_surface() helper function to detect
whether the supplied caps have VA surfaces. With GStreamer >= 1.2, this
implies a check for the memory:VASurface caps feature, and a check for
format=ENCODED for earlier versions of GStreamer.
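A minimal sketch of the helper for the GStreamer >= 1.2 case:

    static gboolean
    gst_caps_has_vaapi_surface (GstCaps * caps)
    {
      guint i;

      for (i = 0; i < gst_caps_get_size (caps); i++) {
        if (gst_caps_features_contains (gst_caps_get_features (caps, i),
                "memory:VASurface"))
          return TRUE;
      }
      return FALSE; /* GStreamer < 1.2: check format=ENCODED instead */
    }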
Simplify the creation and installation of properties by first
accumulating them into a g_properties[] array, and then calling
g_object_class_install_properties().
Also add missing docs and flags to some properties.
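The standard GObject pattern this refers to, shown with a single
illustrative property:

    enum { PROP_0, PROP_FULLSCREEN, N_PROPERTIES };

    static GParamSpec *g_properties[N_PROPERTIES] = { NULL, };

    static void
    gst_vaapisink_class_init (GstVaapiSinkClass * klass)
    {
      GObjectClass *const object_class = G_OBJECT_CLASS (klass);

      /* accumulate the GParamSpecs first... */
      g_properties[PROP_FULLSCREEN] =
          g_param_spec_boolean ("fullscreen", "Fullscreen",
          "Requests window in fullscreen state", FALSE,
          G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS);

      /* ...then install them all in one call */
      g_object_class_install_properties (object_class, N_PROPERTIES,
          g_properties);
    }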
Move code around in a more logical way. Introduce GST_VAAPISINK_CAST()
helper macro and use it wherever we know the object is a GstBaseSink or
any base class. Drop explicit initializers for values that have defaults
set to zero.
Introduce a new backends vtable so as to have a clean separation between
display dependent code and the common base code. That's a "soft"
separation; we don't really need dedicated objects.
https://bugzilla.gnome.org/show_bug.cgi?id=722248
Support for X11 "synchronous" mode was never implemented, and was only
to be useful for debugging. Drop that altogether, that's not going to
be useful in practice.
https://bugzilla.gnome.org/show_bug.cgi?id=733985
Rendering with GLX in vaapisink is kind of useless nowadays, including
OpenGL related fancy effects. Plain VA/GLX interfaces are also getting
deprecated in favor of EGL, or more direct buffer sharing with actual
GL textures.
Should testing of interop with GLX be needed, one could still be using
the modern cluttersink or glimagesink elements.
https://bugzilla.gnome.org/show_bug.cgi?id=733984
Allow dynamic changes to the window, e.g. performed by the user, and
make sure to refresh its contents, while preserving aspect ratio.
In practice, Expose and ConfigureNotify events are tracked in X11
display mode by default. This occurs in a separate event thread, and
is similar to what xvimagesink does. Any of those events will
trigger a reconfiguration of the window "soft" size, subsequently of
the render-rect when necessary, and finally an _expose() of the result.
The default of handle_events=true can be changed programmatically via
gst_x_overlay_handle_events().
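A simplified sketch of the event selection and dispatch; the real code
runs this in the event thread and polls with XPending():

    XSelectInput (dpy, window, ExposureMask | StructureNotifyMask);

    while (running) {
      XEvent e;

      XNextEvent (dpy, &e);
      switch (e.type) {
        case Expose:          /* window damaged: redraw last frame */
          break;
        case ConfigureNotify: /* size changed: recompute render-rect */
          break;
      }
    }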
Thanks to Fabrice Bellet for rebasing the patch.
https://bugzilla.gnome.org/show_bug.cgi?id=711478
[dropped XInitThreads(), cleaned up the code a little]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
The gst_vaapidecode_decode_loop() function is called within a separate
task to fetch and output all frames that were decoded so far. So, if
the decoder_loop_status is forcibly set to EOS when _finish() is called,
then we are bound to exit the task without submitting the pending
frames.
If the downstream element errored out, then gst_pad_push() would
propagate the error up, so we would still correctly cut off _finish()
early in that case.
This is a regression from 6003596.
https://bugzilla.gnome.org/show_bug.cgi?id=733897
Fixes a hang/race on shutdown where _decode_loop() had already completed
its execution and _finish() was waiting on a GCond for decode_loop()
to complete. Also fixes the possible race where _finish() is called
but _decode_loop() endlessly returns before signalling completion
iff the decoder instance returns GST_FLOW_OK.
Found with: ... ! vaapidecode ! {glimagesink,cluttersink}
https://bugzilla.gnome.org/show_bug.cgi?id=733897
[factored out GST_VIDEO_DECODER_STREAM_UNLOCK() call]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Now that we always track the currently active video buffer, it is
not necessary to automatically increase its reference count, since this
is implicitly performed in ::show_frame() through the get_input_buffer()
helper from the GstVaapiPluginBase class.
This is a regression from a26df80.
Rework the logic behind the configuration of an adequate bufferpool,
especially when OpenGL meta or additional capsfeatures are needed.
Besides, for GStreamer >= 1.4, the first capsfeatures that gets matched,
and that is not system memory, is now selected by default.
Make sure to propagate memory:VASurface capsfeature to srcpad caps
only for GStreamer >= 1.5 as the plug-in elements in GStreamer 1.4
core currently miss additional patches available in 1.5-git (1.6).
This is a temporary workaround.
If a multiview stream is decoded, multiple view components are submitted
as-is downstream. It is the responsibility of the sink element to display
the required view components. By default, always select the frame buffer
that matches the view-id of the very first frame to be displayed.
However, introduce a "view-id" property to allow the selection of a
specific view component of interest to display.