Sync video texture sizes to the GstVideoGLTextureUploadMeta private data,
i.e. GstVaapiVideoMetaTexture, on a regular basis. In particular, we
now update the texture size from the GstVideoMeta, if any, or reset
to some defaults otherwise.
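A minimal sketch of the intended sync, assuming the private meta struct
exposes width/height fields (field and helper names are assumptions):

    static void
    meta_texture_ensure_size (GstVaapiVideoMetaTexture * meta,
        GstBuffer * buffer)
    {
      GstVideoMeta *const vmeta = gst_buffer_get_video_meta (buffer);

      if (vmeta) {
        /* Follow the real buffer dimensions from the GstVideoMeta */
        meta->width = vmeta->width;
        meta->height = vmeta->height;
      } else {
        /* No GstVideoMeta attached: reset to some defaults */
        meta->width = 0;
        meta->height = 0;
      }
    }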
If a GstGLContext is supplied by the downstream element, then make
sure that the VA plugin element gets a display compatible with what
the GL context requires, e.g. re-allocate a VA/GLX display when a
GLX context is provided by the downstream element.
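A sketch of the intended mapping from GL platform to VA display type
(the EGL display type is an assumption):

    /* Pick a VA display type matching the downstream GL context */
    switch (gst_gl_context_get_gl_platform (gl_context)) {
      case GST_GL_PLATFORM_GLX:
        display_type = GST_VAAPI_DISPLAY_TYPE_GLX;
        break;
      case GST_GL_PLATFORM_EGL:
        display_type = GST_VAAPI_DISPLAY_TYPE_EGL;
        break;
      default:
        display_type = GST_VAAPI_DISPLAY_TYPE_ANY;
        break;
    }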
Record the GL context supplied by downstream elements. This can be
useful, and further needed, to enforce run-time checks that the GL
context is compatible for use by libgstvaapi, e.g. check that we
don't create a VA/GLX display for EGL/X11 contexts.
https://bugzilla.gnome.org/show_bug.cgi?id=725643
Original-patch-by: Matthew Waters <ystreet00@gmail.com>
Reset the VA decoder after updating the base plugin caps, and most
importantly, after GstVideoDecoder negotiation. The reason behind
this is that the negotiation could trigger a last decide_allocation()
where we could actually derive a new GstVaapiDisplay to use from the
downstream element, e.g. for the GLX backend.
Query caps filtering should always be done on top of the allowed caps
instead of the existing fixed caps on a particular pad.
This fixes MVC stream decoding when there is a base view (high profile)
and a non-base view (stereo-high profile).
According to the documentation [1], when receiving a GST_QUERY_CAPS,
the return value should be all formats that this element supports,
taking into account limitations of peer elements further downstream
or upstream, sorted by order of preference, highest preference first.
This patch adds those limitations by intersecting them with the filter
received in the query. It also takes into account the already
negotiated caps, and adds processing of the query on the SRC pad.
1. http://gstreamer.freedesktop.org/data/doc/gstreamer/head/pwg/html/section-nego-getcaps.html
https://bugzilla.gnome.org/show_bug.cgi?id=744406
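A simplified sketch of the intended handler (helper name is hypothetical;
the real code additionally folds in the negotiated and peer caps):

    static gboolean
    handle_caps_query (GstPad * pad, GstQuery * query)
    {
      GstCaps *filter, *caps, *result;

      gst_query_parse_caps (query, &filter);

      /* All formats this element supports on this pad */
      caps = gst_pad_get_pad_template_caps (pad);

      /* Restrict them by the filter received in the query */
      if (filter) {
        result = gst_caps_intersect_full (filter, caps,
            GST_CAPS_INTERSECT_FIRST);
        gst_caps_unref (caps);
      } else
        result = caps;

      gst_query_set_caps_result (query, result);
      gst_caps_unref (result);
      return TRUE;
    }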
Otherwise the condition could become true before the lock is taken,
and g_cond_signal() could be called before g_cond_wait(), so the
g_cond_wait() would never be woken up.
https://bugzilla.gnome.org/show_bug.cgi?id=740645
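For reference, the safe pattern with GLib primitives: the predicate is
updated and signalled with the mutex held, so the signal cannot be lost:

    static GMutex lock;
    static GCond cond;
    static gboolean ready;

    static void
    producer (void)
    {
      g_mutex_lock (&lock);
      ready = TRUE;
      g_cond_signal (&cond);      /* signalled while the lock is held */
      g_mutex_unlock (&lock);
    }

    static void
    consumer (void)
    {
      g_mutex_lock (&lock);
      while (!ready)              /* re-check to cope with spurious wakeups */
        g_cond_wait (&cond, &lock);
      g_mutex_unlock (&lock);
    }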
The GstVaapiVideoMemory quark is not needed any more, and the actual
implementation was already removed before the merge, i.e. this is
an oversight for a hunk that was not meant to be pushed.
Allow the v4l2src element to be connected to vaapipostproc or vaapisink
when "io-mode" is set to "dmabuf-import". In practice, this is a more
likely operational mode with uvcvideo. Supporting v4l2src with "io-mode"
set to "dmabuf" could also work, but with more demanding driver or
kernel requirements.
Note: with GStreamer 1.4, v4l2src (gst-plugins-good) needs to be built
with --without-libv4l2.
https://bugzilla.gnome.org/show_bug.cgi?id=743635
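A usage sketch (the device path is an assumption):

    gst-launch-1.0 v4l2src device=/dev/video0 io-mode=dmabuf-import ! \
        vaapipostproc ! vaapisink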
Allow imports of v4l2 buffers into VA surfaces for further operation
with vaapi plugins, e.g. vaapipostproc or vaapiencode_* elements.
https://bugzilla.gnome.org/show_bug.cgi?id=735362
[fixed memory leaks, ported to new dma_buf infrastructure, cleanups]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Rework the surface pool allocation helpers to allow for a simple
form, e.g. gst_vaapi_surface_pool_new(format, width, height), and a
somewhat more elaborate/flexible form with optional allocation flags
and a precise GstVideoInfo specification.
This is an API/ABI change, and SONAME version needs to be bumped.
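A sketch of the two forms, with signatures assumed from the description
(the zero flags value is a placeholder):

    GstVaapiVideoPool *pool;
    GstVideoInfo vi;

    /* Simple form */
    pool = gst_vaapi_surface_pool_new (display, GST_VIDEO_FORMAT_NV12,
        width, height);

    /* Elaborate form: precise GstVideoInfo plus optional allocation flags */
    gst_video_info_set_format (&vi, GST_VIDEO_FORMAT_NV12, width, height);
    pool = gst_vaapi_surface_pool_new_full (display, &vi, 0 /* flags */);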
Add a gst_vaapi_display_has_opengl() helper function to help determine
whether the display can support an OpenGL context bound to it, i.e.
whether the class is of type GST_VAAPI_DISPLAY_TYPE_GLX.
Make gst_vaapi_display_get_display_type() return the actual VA display
type. Conversely, add a gst_vaapi_display_get_class_type() function to
return the type of the GstVaapiDisplay instance. The former is used to
identify the display server on which the application is running, and
the latter to identify the original object class.
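A usage sketch combining these helpers (enum values assumed from the
existing GST_VAAPI_DISPLAY_TYPE_* naming):

    if (gst_vaapi_display_has_opengl (display)) {
      /* an OpenGL context can be bound to this display */
    }

    /* Display server the application is running on */
    if (gst_vaapi_display_get_display_type (display) ==
        GST_VAAPI_DISPLAY_TYPE_X11) {
      /* plain VA/X11 display server connection */
    }

    /* Original object class of this very instance */
    if (gst_vaapi_display_get_class_type (display) ==
        GST_VAAPI_DISPLAY_TYPE_GLX) {
      /* instance was created as a VA/GLX display */
    }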
GstVaapiTexture is a generic abstraction that could be moved to the
core libgstvaapi library. While doing this, no extra dependency needs
to be added. This means that a GstVaapiTextureClass is now available
for any specific code that needs to be added, e.g. creation of the
underlying GL texture objects, or backend dependent ways to upload
a surface to the texture object.
Generic OpenGL data types (GLuint, GLenum) are also replaced with a
plain guint.
https://bugzilla.gnome.org/show_bug.cgi?id=736715
The gst_vaapi_texture_put_surface() function is missing a crop_rect
argument that would be used during transfer for cropping the source
surface to the desired dimensions.
Note: from a user point of view, the GstVaapiTexture object should be
created with the cropped size. That's the default behaviour in software
decoding pipelines that we need to cope with.
This is an API/ABI change, and SONAME version needs to be bumped.
https://bugzilla.gnome.org/show_bug.cgi?id=736712
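A sketch of the extended call, with the signature assumed from the
description above:

    GstVaapiRectangle crop_rect = { 0, 0, cropped_width, cropped_height };

    /* Crop the source surface to the desired dimensions on transfer */
    gst_vaapi_texture_put_surface (texture, surface, &crop_rect, flags);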
dvbsuboverlay signals that no subtitles are present by not setting a
GstVideoOverlayCompositionMeta on the buffer.
Detect this, and remove subtitles whenever we have no overlay
composition to hand.
Signed-off-by: Simon Farnsworth <simon.farnsworth@onelan.co.uk>
Otherwise restarting may fail because the states of vaapipluginbase and
vaapipostproc don't match, e.g. gst_vaapipostproc_set_caps() would skip
initialization and not call gst_vaapi_plugin_base_set_caps().
Make sure that the GstVaapiPluginBase instance receives the new src
pad caps whenever they get updated from within the GstVaapiDecoder
decode routines.
This also ensures that downstream elements receive correctly sized
SW decoded buffers if needed.
https://bugs.tizen.org/jira/browse/TC-114
Split GstVideoDecoder::set_format() handler to first update the sink
pad caps and reset the active VA decoder instance based on those, and
then update the src pad caps whenever possible, e.g. when the caps
specify a valid video resolution.
On some occasions, a buffer pool is created for pre-initialization
purposes regardless of whether a valid image size is available or
not. However, during actual decode stage, the vaapidecode element
is expected to update the srcpad caps with the new dimensions, thus
also triggering a reset of the underlying bufferpool.
If the best downstream capsfeature turns out to be GLMemory, then make
sure to propagate RGBA video format in caps to that element. This fixes
the following pipeline: ... ! vaapipostproc ! glimagesink.
When an explicit output video format is selected, from a src pad
capsfilter, make sure that the downstream element actually supports
that format. In particular, fix crash with the following pipelines:
... ! vaapipostproc ! video/x-raw,format=XXX ! xvimagesink ; where
XXX is a format not supported by xvimagesink.
While doing so, also reduce the set of src pad filter caps to the
actual set of allowed src pad caps.
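A sketch of the added check, assuming a srcpad variable (the failure
path is simplified):

    /* Verify the explicitly selected format is supported downstream */
    GstCaps *const allowed = gst_pad_get_allowed_caps (srcpad);

    if (allowed) {
      const gboolean supported = gst_caps_can_intersect (out_caps, allowed);
      gst_caps_unref (allowed);
      if (!supported)
        return FALSE;             /* fail negotiation instead of crashing */
    }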
The ensure_surface() and ensure_image() functions shall only relate
to the underlying backing store. The actual current flags are to be
updated only through ensure_{surface,image}_is_current(), or in other
very particular cases in the GstMemory hooks.
If the surface proxy is updated into the GstVaapiVideoMemory, then
it is assumed to be the most current representation of the current
video frame. Likewise, make a few more arrangements to have the
"current" flags set more consistently.
When interacting with SW elements, the buffers and underlying video
memory could be mapped as read/write. However, when we need to use
those buffers again as plain VA surfaces, we have to make sure the
VA image is committed back to the VA surface memory.
This fixes pipelines involving avdec_* and vaapi{postproc,sink}.
Hence vaapisink can display buffers decoded by gst-libav, or HW decoded
buffers can be further processed in-place, e.g. with a textoverlay.
https://bugzilla.gnome.org/show_bug.cgi?id=704078
[ported to current git master branch, amended commit message]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
In the current implementation, gst_video_info_set_format() would reset
the whole GstVideoInfo structure first, prior to setting the video
format and size, so collateral information like framerate or pixel-
aspect-ratio is lost.
Provide and use a unique gst_video_info_change_format() to overcome
this issue, i.e. only have it change the format and video size, and
copy over the rest of the fields.
https://bugzilla.gnome.org/show_bug.cgi?id=734665
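A sketch of the helper, assuming GstVideoInfo public fields; this mirrors
the save/set/restore approach described above:

    static void
    video_info_change_format (GstVideoInfo * vip, GstVideoFormat format,
        guint width, guint height)
    {
      GstVideoInfo vi = *vip;   /* save collateral fields */

      gst_video_info_set_format (vip, format, width, height);

      /* restore the fields that set_format() reset */
      vip->interlace_mode = vi.interlace_mode;
      vip->flags = vi.flags;
      vip->views = vi.views;
      vip->par_n = vi.par_n;
      vip->par_d = vi.par_d;
      vip->fps_n = vi.fps_n;
      vip->fps_d = vi.fps_d;
    }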
This is for helping decodebin to autoplug the vaapidecode element.
Decodebin is selecting decoder elements only based on rank and caps.
Without overriding the autoplug-* signals there is no way to autoplug
HW decoders inside decodebin. An easier solution is to raise the
rank of vaapidecode, so that it gets selected first.
https://bugzilla.gnome.org/show_bug.cgi?id=739332
Added the same Klass specifications used in other upstream
video postprocessing elements like videoconvert, videoscale,
videobalance and deinterlace.
An example use case for this is to help the playsink
to autoplug the hardware accelerated deinterlacer.
This is a workaround until auto-plugging is fixed when
format=ENCODED + memory:VASurface caps feature are provided.
Use the downstream negotiated video format as the output video format
if the user didn't ask for the colorspace conversion explicitly.
Use case: this will help to connect elements like videoscale, videorate,
etc. to vaapipostproc.
https://bugzilla.gnome.org/show_bug.cgi?id=739443
Add new "scale-method" property to expose the scaling mode to use during
video processing. Note that this is only a hint, and the actual behaviour
may differ from implementation (VA driver) to implementation.
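A usage sketch (the "hq" value is an assumption about the enum nicks):

    gst-launch-1.0 ... ! vaapipostproc scale-method=hq ! vaapisink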
Attach the codec_data to out_caps only if downstream needs it.
E.g. the H.264 encoder doesn't need to stuff codec_data into the
src caps if the negotiated caps have a stream format of byte-stream.
https://bugzilla.gnome.org/show_bug.cgi?id=734902
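A sketch of the check, where get_codec_data() is a hypothetical helper:

    GstStructure *const structure = gst_caps_get_structure (out_caps, 0);
    const gchar *const stream_format =
        gst_structure_get_string (structure, "stream-format");

    /* byte-stream carries in-band headers: no codec_data needed */
    if (!stream_format || g_strcmp0 (stream_format, "byte-stream") != 0) {
      GstBuffer *const codec_data = get_codec_data (encoder);
      if (codec_data)
        gst_caps_set_simple (out_caps, "codec_data", GST_TYPE_BUFFER,
            codec_data, NULL);
    }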
Fix arguments to XkbKeycodeToKeysym() for converting an X11 keycode
to a KeySym. In particular, there is no such Window argument. Also
make sure to check for, and use, the correct <X11/XKBlib.h> header
where that new function is defined. Otherwise, default to the older
XKeycodeToKeysym() function.
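A sketch of the fixed conversion, where HAVE_XKBLIB_H stands for the
configure-time header check (the macro name is an assumption):

    #ifdef HAVE_XKBLIB_H
    # include <X11/XKBlib.h>
    #endif

    static KeySym
    x11_keycode_to_keysym (Display * dpy, unsigned int kc)
    {
    #ifdef HAVE_XKBLIB_H
      /* (dpy, keycode, group, level): there is no Window argument */
      return XkbKeycodeToKeysym (dpy, kc, 0, 0);
    #else
      return XKeycodeToKeysym (dpy, kc, 0);
    #endif
    }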
Really use the motion event coordinates to propagate the "mouse-move"
event to the upper layer, instead of those from a button event, even
though they are technically the same.