Use the new gst_caps_has_vaapi_surface() helper function to detect
whether the sink pad caps contain native VA surfaces, as opposed to
raw video caps.
Also rename is_raw_yuv to get_va_surfaces to make the variable more
explicit, since all we need is a way to differentiate raw video caps
from VA surfaces.
The "discontinuity" tracking code, whereby lost frames are tentatively
detected, is inoperative if the sink pad buffer timestamps are not right
to begin with.
This is a temporary workaround until the following bug is fixed:
https://bugzilla.gnome.org/show_bug.cgi?id=734386
In order to make the discontinuity detection code useful, we need to
detect the lost frames in the history as early as the previous frame.
This is because some VA implementations only support one reference
frame for advanced deinterlacing.
In practice, relax the condition for detecting a new frame that lies
beyond the previous frame from field_duration*2 to field_duration*3,
i.e. nothing received for the past frame and a half, so as to account
for possible rounding errors when calculating the field duration either
in this element (vaapipostproc) or in the upstream element (parser
element).
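As a rough sketch of the intent, with assumed variable and field names
rather than the actual vaapipostproc code:

    if (GST_CLOCK_TIME_IS_VALID (timestamp) &&
        timestamp >= postproc->last_timestamp + 3 * postproc->field_duration) {
      /* more than a frame and a half elapsed: assume frames were lost
         and advance the deinterlacing history accordingly */
    }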
This is a regression introduced with commit faefd62.
https://bugzilla.gnome.org/show_bug.cgi?id=734135
Introduce new gst_caps_has_vaapi_surface() helper function to detect
whether the supplied caps contain VA surfaces. With GStreamer >= 1.2,
this implies a check for the memory:VASurface caps feature; for earlier
versions of GStreamer, a check for format=ENCODED.
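For illustration, the GStreamer >= 1.2 path could look roughly like the
following sketch (not necessarily the exact implementation):

    static gboolean
    gst_caps_has_vaapi_surface (GstCaps * caps)
    {
      guint i, num_structures;

      num_structures = gst_caps_get_size (caps);
      for (i = 0; i < num_structures; i++) {
        GstCapsFeatures *const features = gst_caps_get_features (caps, i);

        /* check for the memory:VASurface caps feature */
        if (gst_caps_features_contains (features, "memory:VASurface"))
          return TRUE;
      }
      return FALSE;
    }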
Simplify the creation and installation of properties by first
accumulating them into a g_properties[] array and then calling
g_object_class_install_properties().
Also add missing docs and flags to some properties.
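This is the usual GObject pattern, sketched below with an illustrative
property:

    enum
    {
      PROP_0,
      PROP_DISPLAY_NAME,
      N_PROPERTIES
    };

    static GParamSpec *g_properties[N_PROPERTIES] = { NULL, };

    static void
    gst_vaapisink_class_init (GstVaapiSinkClass * klass)
    {
      GObjectClass *const object_class = G_OBJECT_CLASS (klass);

      g_properties[PROP_DISPLAY_NAME] =
          g_param_spec_string ("display-name", "Display name",
          "The native display name", NULL,
          G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS);

      g_object_class_install_properties (object_class, N_PROPERTIES,
          g_properties);
    }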
Move code around in a more logical way. Introduce the GST_VAAPISINK_CAST()
helper macro and use it wherever the object is already known to be a
GstVaapiSink, e.g. when it comes in typed as GstBaseSink or any other
base class. Drop explicit initializers for values that have defaults
set to zero.
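The macro is presumably a plain cast with no run-time type check, along
the lines of:

    #define GST_VAAPISINK_CAST(obj) ((GstVaapiSink *) (obj))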
Introduce a new backends vtable so as to have a clean separation between
display-dependent code and common base code. That's a "soft" separation:
we don't really need dedicated objects.
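A sketch of the idea, with assumed member names rather than the actual
vtable layout:

    typedef struct _GstVaapiSinkBackend GstVaapiSinkBackend;
    struct _GstVaapiSinkBackend
    {
      gboolean (*create_window)  (GstVaapiSink * sink, guint width, guint height);
      gboolean (*handle_events)  (GstVaapiSink * sink);
      gboolean (*render_surface) (GstVaapiSink * sink);
    };

    /* each display backend (X11, Wayland, DRM, ...) fills in its own
       table, e.g. sink->backend = &vaapisink_backend_x11; (name assumed) */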
https://bugzilla.gnome.org/show_bug.cgi?id=722248
Support for X11 "synchronous" mode was never implemented and would only
have been useful for debugging. Drop it altogether; it is not going to
be useful in practice.
https://bugzilla.gnome.org/show_bug.cgi?id=733985
Rendering with GLX in vaapisink is of little use nowadays, including
the OpenGL-related fancy effects. Plain VA/GLX interfaces are also getting
deprecated in favor of EGL, or more direct buffer sharing with actual
GL textures.
Should testing of interop with GLX be needed, one can still use the
modern cluttersink or glimagesink elements.
https://bugzilla.gnome.org/show_bug.cgi?id=733984
Allow dynamic changes to the window, e.g. performed by the user, and
make sure to refresh its contents, while preserving aspect ratio.
In practice, Expose and ConfigureNotify events are tracked in X11
display mode by default. This occurs in a separate event thread, similar
to what xvimagesink does. Any of those events triggers a reconfiguration
of the window "soft" size, then of the render-rect when necessary, and
finally an _expose() of the result.
The default of handle_events=true can be changed programmatically via
gst_x_overlay_handle_events().
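Roughly, the event thread handles those events as follows (helper names
here are assumed):

    switch (xevent.type) {
      case Expose:
      case ConfigureNotify:
        /* resync the window "soft" size and the render-rect if needed,
           then redraw the current buffer, preserving the aspect ratio */
        gst_vaapisink_reconfigure_window (sink);    /* assumed helper */
        gst_vaapisink_video_overlay_expose (sink);  /* assumed helper */
        break;
      default:
        break;
    }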
Thanks to Fabrice Bellet for rebasing the patch.
https://bugzilla.gnome.org/show_bug.cgi?id=711478
[dropped XInitThreads(), cleaned up the code a little]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
The gst_vaapidecode_decode_loop() function is called within a separate
task to fetch and output all frames that were decoded so far. So, if
the decoder_loop_status is forcibly set to EOS when _finish() is called,
then we are bound to exit the task without submitting the pending
frames.
If the downstream element errored out, then gst_pad_push() would
propagate the error back up, so cutting off _finish() early remains
correct in that case.
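In other words, the loop should only stop on a status coming back from
the push itself. A minimal sketch, with assumed helper and field names:

    static void
    gst_vaapidecode_decode_loop (GstVaapiDecode * decode)
    {
      GstFlowReturn ret;

      ret = gst_vaapidecode_push_decoded_frame (decode);  /* assumed helper */
      if (ret == GST_FLOW_OK)
        return;               /* keep fetching/pushing pending frames */

      /* EOS, FLUSHING or a downstream error: record it so _finish() can
         cut off early, then pause the task */
      decode->decoder_loop_status = ret;
      gst_pad_pause_task (GST_VIDEO_DECODER_SRC_PAD (decode));
    }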
This is a regression from 6003596.
https://bugzilla.gnome.org/show_bug.cgi?id=733897
Fixes a hang/race on shutdown where _decode_loop() had already completed
its execution while _finish() was still waiting on a GCond for it to
complete. Also fixes the possible race where _finish() is called but
_decode_loop() keeps returning without signalling completion as long as
the decoder instance returns GST_FLOW_OK.
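A minimal sketch of the safer wait in _finish(), with assumed field
names, is a predicate-guarded GCond wait:

    g_mutex_lock (&decode->decoder_loop_lock);
    while (decode->decoder_loop_status == GST_FLOW_OK &&
        !decode->decoder_loop_done)
      g_cond_wait (&decode->decoder_loop_done_cond,
          &decode->decoder_loop_lock);
    g_mutex_unlock (&decode->decoder_loop_lock);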
Found with: ... ! vaapidecode ! {glimagesink,cluttersink}
https://bugzilla.gnome.org/show_bug.cgi?id=733897
[factored out GST_VIDEO_DECODER_STREAM_UNLOCK() call]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Now that we always track the currently active video buffer, it is
no longer necessary to automatically increase its reference count, since
this is implicitly performed in ::show_frame() through the
get_input_buffer() helper from the GstVaapiPluginBase class.
This is a regression from a26df80.
Rework the logic behind the configuration of an adequate bufferpool,
especially when OpenGL meta or additional capsfeatures are needed.
Besides, for GStreamer >= 1.4, the first capsfeatures entry that matches
and that is not system memory is now selected by default.
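The capsfeatures selection amounts to something like the following
sketch (function name illustrative):

    static guint
    find_preferred_caps_feature_index (GstCaps * caps)
    {
      guint i, num_structures;

      num_structures = gst_caps_get_size (caps);
      for (i = 0; i < num_structures; i++) {
        GstCapsFeatures *const features = gst_caps_get_features (caps, i);

        /* first entry that is not plain system memory wins */
        if (!gst_caps_features_contains (features,
                GST_CAPS_FEATURE_MEMORY_SYSTEM_MEMORY))
          return i;
      }
      return 0;
    }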
Make sure to propagate the memory:VASurface capsfeature to srcpad caps
only for GStreamer >= 1.5, as the plug-in elements in GStreamer 1.4
core currently lack additional patches available in 1.5-git (1.6).
This is a temporary workaround.
Supporting anything below GStreamer 1.2 is asking for trouble when it
comes to keeping up with the facilities required to build efficient
pipelines. Users are invited to upgrade to the very latest GStreamer
1.2.x release, at the minimum.
Support for GStreamer 0.10 is obsolete, i.e. it is no longer supported
and may actually be removed altogether in a future release. There is
no real point in maintaining a build for such an ancient GStreamer
version that is not even supported upstream.
If a multiview stream is decoded, multiple view components are submitted
as is downstream. It is the responsibility of the sink element to display
the required view components. By default, always select the frame buffer
that matches the view-id of the very first frame to be displayed.
However, introduce a "view-id" property to allow the selection of a
specific view component of interest to display.
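The default selection boils down to the following sketch (field names
and accessor assumed):

    view_id = gst_vaapi_surface_proxy_get_view_id (proxy); /* assumed accessor */
    if (sink->view_id == -1)     /* "view-id" property left at its default */
      sink->view_id = view_id;   /* latch onto the first displayed view */
    else if (view_id != sink->view_id)
      return GST_FLOW_OK;        /* skip non-matching view components */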
Always record the VA surface that is currently being rendered, regardless
of whether we are using texturedblit or overlay. That's because, on some
occasions, we need to refresh or resize the displayed contents based on
new events, e.g. a user-resized window.
Besides, it's simpler to track the last video buffer in GstVaapiSink than
through the base sink "last-sample".
Add a "display-name" property to vaapisink so that the end user can
select the desired output. Keep "display-name" in line with the existing
"display" property (GstVaapiDisplayXXX type).
So, for X11 or GLX, the "display-name" is the usual display name as known
to XOpenDisplay(); for Wayland, the "display-name" is the name used
for wl_display_connect(); and for DRM, the "display-name" is actually the
DRI device name.
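For illustration, the mapping per backend could be expressed as follows
(libgstvaapi display constructor names; exact usage assumed):

    switch (display_type) {
      case GST_VAAPI_DISPLAY_TYPE_X11:
        display = gst_vaapi_display_x11_new (display_name);     /* XOpenDisplay() name */
        break;
      case GST_VAAPI_DISPLAY_TYPE_WAYLAND:
        display = gst_vaapi_display_wayland_new (display_name); /* wl_display_connect() name */
        break;
      case GST_VAAPI_DISPLAY_TYPE_DRM:
        display = gst_vaapi_display_drm_new (display_name);     /* DRI device path */
        break;
      default:
        break;
    }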
https://bugzilla.gnome.org/show_bug.cgi?id=722247
Ensure the X11 implementation for GstVaapiWindow::get_geometry() is
thread-safe by default, so that upper layer users don't need to handle
that explicitly.
Add gst_vaapi_window_reconfigure() interface to force an update of
the GstVaapiWindow "soft" size, based on the current geometry of the
underlying native window.
This can be useful, for instance, to synchronize the window size after
the user has changed it.
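Typical usage, e.g. from the sink's event handling, sketched below:

    /* resync the GstVaapiWindow "soft" size with the underlying native
       window, then query the updated size */
    gst_vaapi_window_reconfigure (window);
    gst_vaapi_window_get_size (window, &width, &height);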
Thanks to Fabrice Bellet for rebasing the patch.
[changed interface to gst_vaapi_window_reconfigure()]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Add gst_vaapi_display_get_display_name() helper function to determine
the name associated with the underlying native display. Note that for
raw DRM backends, the display name is actually the device path.
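Usage sketch:

    const gchar *name = gst_vaapi_display_get_display_name (display);
    /* e.g. ":0" for X11, or a device path like "/dev/dri/card0" for a
       raw DRM backend */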
Always add the VideoAlignment bufferpool option if the downstream element
expects its own pool to be used but does not offer it, e.g. through a
proper propose_allocation() implementation, and the ALLOCATION query does
not expose the availability of the Video Meta API.
This fixes propagation of video buffer stride information to Firefox.
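A sketch of the condition, expressed with the ALLOCATION query API
(surrounding variables assumed to be in scope):

    has_video_meta = gst_query_find_allocation_meta (query,
        GST_VIDEO_META_API_TYPE, NULL);
    if (!has_video_meta) {
      /* downstream cannot convey per-plane strides/offsets through
         GstVideoMeta, so enforce the layout via the VideoAlignment option */
      gst_buffer_pool_config_add_option (config,
          GST_BUFFER_POOL_OPTION_VIDEO_ALIGNMENT);
    }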
Make sure to always prefer native internal formats for the VA surfaces
that get allocated. Also disable "direct-rendering" mode in this case.
This is needed to ensure that anything that comes out of the decoder,
or goes into the encoder, is in the native format for the hardware, so
that the driver doesn't need to perform implicit conversions in there.
Interop with SW elements is still available through fast implementations
of the VA imaging APIs.
Forbid shares of GstMemory instances and rather make a copy. This
effectively copies the GstMemory structure and enclosed metadata, but
does not copy the VA surface contents itself. It should, though.
This fixes preroll and makes sure to not download garbage for the first
frame when a SW rendering sink is used.
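One way to express this at the allocator level, sketched with
illustrative function names:

    static GstMemory *
    gst_vaapi_video_memory_share (GstMemory * mem, gssize offset, gssize size)
    {
      return NULL;              /* sharing not supported: force a copy */
    }

    /* in the allocator setup; the copy hook duplicates the GstMemory and
       its metadata, not the surface contents */
    allocator->mem_share = gst_vaapi_video_memory_share;
    allocator->mem_copy = gst_vaapi_video_memory_copy;  /* illustrative name */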
Use an image pool to hold VA images to be used for downloads/uploads
of contents for the associated surface.
This is an optimization for size. So, instead of creating as many VA
images as there are buffers (and thus VA surfaces) allocated, we only
maintain a minimal set of live VA images, thus preserving memory
resources.
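The transfer path then becomes, roughly (pool accessors as in
GstVaapiVideoPool; exact usage assumed):

    image = gst_vaapi_video_pool_get_object (image_pool);
    if (image) {
      /* ... download the surface into the image, or upload from it ... */
      gst_vaapi_video_pool_put_object (image_pool, image);
    }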
Disable read-write mappings if "direct-rendering" is not supported.
Since the ordering of read and write operations is not specified,
this would require always downloading the VA surface on _map(), then
committing the temporary VA image back to the VA surface on _unmap().
Some SW decoding plug-in elements still use R/W mappings though.
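The map-time check is then along these lines (field name assumed):

    if ((flags & GST_MAP_READWRITE) == GST_MAP_READWRITE &&
        !mem->use_direct_rendering)
      return NULL;    /* would need a download on _map() and an upload
                         back to the surface on _unmap() */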
https://bugzilla.gnome.org/show_bug.cgi?id=733242
Fix error messages introduced in the previous commit for the _map()
implementation. Also use the new get_image_data() helper function
to determine the base pixel data buffer from a GstVaapiImage when
updating the video info structure from it.