Refactor gst_vaapi_plugin_base_create_gl_context() in order to check
the return value of gst_gl_ensure_element_data(). The result is slightly
cleaner code.
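A minimal sketch of the early-return pattern, assuming a 'plugin' element
pointer and illustrative local names for the GL display and the external
GL context:

    GstGLDisplay *gl_display = NULL;
    GstGLContext *gl_other_context = NULL;

    /* Bail out early if the GL element data cannot be ensured,
     * instead of ignoring the return value. */
    if (!gst_gl_ensure_element_data (GST_ELEMENT (plugin), &gl_display,
            &gl_other_context))
      return NULL;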
There is a regression in 7a206923, since the buffer pool ditches all
the buffers it generates because the pool config size is different
from the buffer's size.
Test pipeline:
gst-launch-1.0 filesrc location=big_buck_bunny_1080p_h264.mov \
! qtdemux ! vaapih264dec ! vaapipostproc ! xvimagesink \
--gst-debug=GST_PERFORMANCE:5
The allocator may update the buffer size according to the VA surface
properties. To handle this, the video info is updated when the
allocator is created; the allocator reports the updated size through
the allocation info, and that size is set in the pool config.
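A rough sketch of the idea; gst_allocator_get_vaapi_video_info() is the
project helper mentioned below, but its exact signature is assumed here:

    GstVideoInfo alloc_info;
    guint flags;

    /* The allocator may have enlarged the image to match the VA surface,
     * so use its allocation info to size the pool config. */
    if (gst_allocator_get_vaapi_video_info (allocator, &alloc_info, &flags)) {
      gst_buffer_pool_config_set_params (config, caps,
          GST_VIDEO_INFO_SIZE (&alloc_info), 0, 0);
    }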
The vaapi video decoders might have different allocation caps from
the negotiation caps, thus the GstVideoMeta shall use the negotiation
caps, not the allocation caps.
Previously this was done by reusing gst_allocator_get_vaapi_video_info(),
storing there the negotiation caps if they differed from the allocation
ones, but this strategy fell short when the allocator had to be reset
in the vaapi buffer pool, since we need both.
This patch adds gst_allocator_set_vaapi_negotiated_video_info() and
gst_allocator_get_vaapi_negotiated_video_info() to store the
negotiated video info in the allocator, and distinguish it from
the allocation video info.
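A hedged sketch of how the two infos could be used together; the exact
signatures of the new helpers are assumed from their names:

    /* Keep both video infos in the allocator: the allocation info drives
     * the buffer size, the negotiated info drives GstVideoMeta. */
    if (!gst_video_info_is_equal (&negotiated_vinfo, &alloc_vinfo))
      gst_allocator_set_vaapi_negotiated_video_info (allocator,
          &negotiated_vinfo);

    /* later, when attaching the video meta to a buffer */
    if (gst_allocator_get_vaapi_negotiated_video_info (allocator, &vmeta_vinfo))
      gst_buffer_add_video_meta_full (buffer, GST_VIDEO_FRAME_FLAG_NONE,
          GST_VIDEO_INFO_FORMAT (&vmeta_vinfo),
          GST_VIDEO_INFO_WIDTH (&vmeta_vinfo),
          GST_VIDEO_INFO_HEIGHT (&vmeta_vinfo),
          GST_VIDEO_INFO_N_PLANES (&vmeta_vinfo),
          vmeta_vinfo.offset, vmeta_vinfo.stride);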
https://bugzilla.gnome.org/show_bug.cgi?id=783599
Direct rendering (using vaDeriveImage rather than vaPutImage) performs
better on some Intel platforms (Haswell, for example), but on others
(Skylake) it is the opposite.
In order to have some control, the patch enables the direct rendering
through the environment variable GST_VAAPI_ENABLE_DIRECT_RENDERING.
It also seems to generate some problems with the gallium/radeon backend.
See bug #779642.
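A minimal sketch of the run-time switch, assuming the decision is made
where the allocator is instantiated; how the boolean is propagated to the
allocator is not shown:

    /* Decide direct rendering at run time from the environment;
     * g_strcmp0() tolerates an unset variable. */
    gboolean direct_rendering =
        g_strcmp0 (g_getenv ("GST_VAAPI_ENABLE_DIRECT_RENDERING"), "1") == 0;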
https://bugzilla.gnome.org/show_bug.cgi?id=775848
When dmabuf is negotiated downstream and decoding and rendering are
not done on the same device, the layout has to be linear in order for
the memory to be shared across devices, since each device has its
own way to do tiling.
Right now this is just a to-do comment, since we are not fetching
the device ids.
https://bugzilla.gnome.org/show_bug.cgi?id=755072
If the negotiated caps are raw caps and downstream supports the
EGL_EXT_image_dma_buf_import extension, then the created allocator
is the DMAbuf one, configured for downstream.
At this moment, the only element which can push dmabuf-based buffers
downstream is vaapipostproc.
In order to enable dmabuf-based buffers in the future, the vaapi base
plugin needs to check whether downstream can import dmabuf buffers.
This patch checks whether downstream can handle dmabuf by introspecting
the shared GL context. If the GL context is EGL/GLES2 and has the
extension EGL_EXT_image_dma_buf_import, then dmabuf can be negotiated.
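A hedged sketch of that introspection with the GstGL API; the helper name
and its placement in the plugin base are illustrative:

    /* dmabuf can be negotiated only if the shared GL context is EGL,
     * uses GLES2 and exposes EGL_EXT_image_dma_buf_import. */
    static gboolean
    gl_context_can_import_dmabuf (GstGLContext * gl_context)
    {
      return gst_gl_context_get_gl_platform (gl_context) == GST_GL_PLATFORM_EGL
          && (gst_gl_context_get_gl_api (gl_context) & GST_GL_API_GLES2)
          && gst_gl_context_check_feature (gl_context,
          "EGL_EXT_image_dma_buf_import");
    }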
Original-patch-by: Julien Isorce <j.isorce@samsung.com>
Add a GstPadDirection parameter to gst_vaapi_dmabuf_allocator_new(), so
that later we can do different things depending on whether the allocated
memory is for upstream or downstream, as required by VA-API.
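For illustration, a call site might look like the following; the parameter
order and the zero flags value are assumptions:

    /* The GstPadDirection tells the allocator whether the memory is for
     * the sink pad (upstream) or the src pad (downstream). */
    allocator = gst_vaapi_dmabuf_allocator_new (display, &vinfo,
        0 /* allocation flags */ , GST_PAD_SRC);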
https://bugzilla.gnome.org/show_bug.cgi?id=755072
If a GstVaapiDisplay is not found through the GStreamer context sharing
mechanism, then the VAAPI elements look for a local GstGLContext through
the same mechanism ('gst.gl.local.context').
If this GstGLContext is not found either, then only the VAAPI decoders
and the VAAPI post-processor will try to instantiate a new
GstGLContext.
If a valid GstGLContext is received, then a new GstVaapiDisplay will
be instantiated with the platform, API and windowing specified by the
instantiated GstGLContext.
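A sketch of the local GstGLContext lookup through a context query; the pad
used here and the structure field name "context" are assumptions:

    GstGLContext *gl_context = NULL;
    GstQuery *query = gst_query_new_context ("gst.gl.local.context");

    /* Ask the downstream peer whether it already has a local GL context. */
    if (gst_pad_peer_query (srcpad, query)) {
      GstContext *context = NULL;

      gst_query_parse_context (query, &context);
      if (context)
        gst_structure_get (gst_context_get_structure (context), "context",
            GST_TYPE_GL_CONTEXT, &gl_context, NULL);
    }
    gst_query_unref (query);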
Original-Patch-By: Matt Fischer <matt.fischer@garmin.com>
https://bugzilla.gnome.org/show_bug.cgi?id=777409
Every time a new buffer is allocated, the pool is activated. This
doesn't impact performance since gst_buffer_pool_set_active() checks
the current state of the pool. Nonetheless it logs a message if the
state is the same, and that floods the logging subsystem when it is
enabled.
To avoid this log flooding, the pool state is now checked before
changing it.
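The fix boils down to a guard like this sketch:

    /* Only flip the pool state when it actually differs, so
     * gst_buffer_pool_set_active() does not log on every buffer. */
    if (!gst_buffer_pool_is_active (pool)) {
      if (!gst_buffer_pool_set_active (pool, TRUE))
        return GST_FLOW_ERROR;
    }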
Adds two buffers as the default value for the minimum number of buffers.
This is used when creating and proposing the vaapi bufferpool for the
sink pad, so the upstream element will keep at least these two
buffers.
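A sketch of how the minimum could be applied when proposing the pool; caps
and size come from the negotiated configuration:

    /* Request a minimum of 2 buffers so upstream keeps at least that many. */
    gst_buffer_pool_config_set_params (config, caps, size, 2, 0);
    gst_query_add_allocation_pool (query, pool, size, 2, 0);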
https://bugzilla.gnome.org/show_bug.cgi?id=775203
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
First, deactivate the source pad pool when the output caps change; then
destroy the texture map, the source pad allocator and the pool, but only
if the new caps are different from the ones already set.
This patch adds the created allocator to the allocation query in both
the decide_allocation() and propose_allocation() vmethods.
With it, there's no need to set the modified allocator's size in the
pool configuration.
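In both vmethods the addition is essentially a one-liner, sketched here:

    /* Expose the allocator itself in the allocation query, so its
     * (possibly enlarged) memory size travels with it. */
    gst_query_add_allocation_param (query, allocator, NULL);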
https://bugzilla.gnome.org/show_bug.cgi?id=774782
When the allocator is created, it stores the allocation caps. But sometimes
the "allocation caps" may be different from the "negotiation caps".
In this case, the allocator should store the negotiation caps since they
are the ones used for frame mapping with GstVideoMeta.
When vaapipostproc is used, this is not a problem since the element is
assumed to resize. But when using a vaapi decoder only, with a software
renderer, it fails in this case.
https://bugzilla.gnome.org/show_bug.cgi?id=773323
Since it is already decided, when the sink pad allocator is created,
whether the required one is the vaapi allocator or the dmabuf allocator,
there is no need to force setting it again.
When running gst-discoverer-1.0 on certain media, vaapipostproc is
stopped while it is transforming caps. The problem is that stop() calls
gst_vaapi_plugin_base_close(), which nullifies the element's va display,
but the va display is used in transform_caps() when it is extracting the
possible format conversions. The disappearing display generates warning
messages.
This patch holds a local reference to the va display in
ensure_allowed_raw_caps(), so it doesn't go away while it is in use, even
if gst_vaapi_plugin_base_close() is called from another thread.
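A sketch of the pattern, assuming the project's gst_vaapi_display_ref()/
gst_vaapi_display_unref() helpers and a 'plugin' pointer to the element's
base structure:

    GstVaapiDisplay *display;

    if (!plugin->display)
      return FALSE;
    /* Hold a local reference so the display cannot vanish while the
     * allowed raw caps are computed, even if close() runs in parallel. */
    display = gst_vaapi_display_ref (plugin->display);

    /* ... extract the possible format conversions using 'display' ... */

    gst_vaapi_display_unref (display);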
https://bugzilla.gnome.org/show_bug.cgi?id=773593
Enable the direct rendering with linear surfaces if the negotiated src caps
are video/x-raw without features.
Also pass the caps, since they are needed to know the requested caps features.
Just as we did for ensure_sinkpad_allocator(), move the error message
from the caller, gst_vaapi_plugin_base_decide_allocation(), into
ensure_srcpad_allocator().
Instead of receiving the GstVideoInfo structure as a parameter, get the
original GstCaps from ensure_sinkpad_buffer_pool(); in this way we can
better decide which allocator to instantiate.
The way to experiment with direct rendering used to be through an
internal pre-processor flag.
The current change set enables a way to specify it at run-time, as a
flag passed to the allocator at instantiation time.
When caps reconfiguration is called, the new downstream frame size might be
different. Thus, if the downstream caps change, the display's texture
map is reset.
In addition, during pipeline shutdown, the textures in the texture map
have to be released, since each one holds a reference to the
GstVaapiDisplay object, which is a dangerous circular reference.
https://bugzilla.gnome.org/show_bug.cgi?id=769293
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
Check earlier whether the upstream video source has activated the
dmabuf-import io-mode (a hack that will disappear soon), so we can avoid
re-assigning a new allocator.
Add the method gst_allocator_get_vaapi_image_size() for the
GstVaapiVideoAllocator, which gets the size of the allocated images with the
current video info.
This method replaces the direct call to the allocator's image info when the
pool is configured.
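A sketch of the pool configuration path using the new helper; its exact
signature is assumed from the description above:

    guint size;

    /* Let the allocator report the image size instead of peeking at its
     * internal image info. */
    if (gst_allocator_get_vaapi_image_size (allocator, &size))
      gst_buffer_pool_config_set_params (config, caps, size, 0, 0);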
Depending on the media, the video size is sometimes updated with a new
allocator. This leads to a mismatch between the bufferpool's configured
size and the real allocated buffer size. In that case, every buffer that
should be reused is instead freed when released back to the bufferpool.
This affects performance.
https://bugzilla.gnome.org/show_bug.cgi?id=769248
Using the GstContext mechanism, it is possible to find out whether the
pipeline shares a GstGLContext, even if we are not negotiating the
GLTextureUpload meta. This is interesting because we could negotiate the
system memory caps feature, but enable DMABuf if the GstGLContext is EGL
with certain extensions.
https://bugzilla.gnome.org/show_bug.cgi?id=766206
Move some of the logic in gst_vaapi_plugin_base_driver_is_whitelisted()
to a new function, gst_vaapi_driver_is_whitelisted(); in this way it can
be used when registering the plugin's feature set with the test VA
display.
https://bugzilla.gnome.org/show_bug.cgi?id=724352
If gst_buffer_pool_set_config() returns FALSE, check the modified
config and retry set_config() if the config is still acceptable.
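A sketch of the retry logic; caps, size, min and max are the values used
for the original configuration:

    if (!gst_buffer_pool_set_config (pool, config)) {
      /* The pool may have corrected the config: accept it as long as it
       * still satisfies our requirements. */
      config = gst_buffer_pool_get_config (pool);
      if (!gst_buffer_pool_config_validate_params (config, caps, size, min, max)) {
        gst_structure_free (config);
        return FALSE;
      }
      if (!gst_buffer_pool_set_config (pool, config))
        return FALSE;
    }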
Signed-off-by: Scott D Phillips <scott.d.phillips@intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=766184
There's no need to check for the display in the plugin object when the
decide_allocation() vmethod is called, because the display will be
created or re-created during the method's execution.
Get the pool config just before using it, to avoid a memory leak if the
allocator cannot be instantiated. Similarly, return FALSE if the
configuration cannot be set, to avoid keeping an unused allocator in the
pool.
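A sketch of the intended ordering; the allocator-creation helper shown is
illustrative:

    /* Create the allocator first; only then take the pool config, so
     * nothing is leaked if the allocator cannot be instantiated. */
    allocator = create_srcpad_allocator (plugin, caps);   /* illustrative */
    if (!allocator)
      return FALSE;

    config = gst_buffer_pool_get_config (pool);
    gst_buffer_pool_config_set_allocator (config, allocator, NULL);
    if (!gst_buffer_pool_set_config (pool, config))
      return FALSE;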
Instead of instantiating an allocator per vaapivideobufferpool, only one
allocator is instantiated per element's pad and shared among future pools.
If the pad's caps change, the allocator is reset.
https://bugzilla.gnome.org/show_bug.cgi?id=765435