Prefer the same format for the image and the surface in the gstreamer-vaapi allocator. The old logic could choose different formats for the image and the surface; for example, an RGBA image could end up paired with an NV12 surface, which forces a format conversion on every put/get image operation. Some drivers, such as iHD, cannot do that conversion and always fail with a data flow error, and the conversion also has a performance cost.
So prefer the same format for the image and the surface in the allocator. If the surface cannot use that format, fall back to the best supported surface format.
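A minimal sketch of the intended selection logic follows; the helper names are illustrative, not the actual gstreamer-vaapi functions:
  GstVideoFormat fmt = GST_VIDEO_INFO_FORMAT (&vinfo);
  if (surface_supports_format (display, fmt))
    surface_format = fmt;   /* same format: no conversion on put/get image */
  else
    surface_format = find_best_surface_format (display, fmt);   /* fallback */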
Co-authored-by: Víctor Jáquez <vjaquez@igalia.com>
[wl_shell] is officially [deprecated], so support for the XDG-shell protocol should be provided; all desktop-like compositors are expected to implement it. (In case they don't, we can of course fall back to wl_shell.)
Note that the XML file is provided directly by the `wayland-protocols` dependency and is used to generate the protocol marshalling code.
[wl_shell]: https://people.freedesktop.org/~whot/wayland-doxygen/wayland/Client/group__iface__wl__shell.html
[deprecated]: 698dde1958
Fix the black frames produced when deinterlacing and playing with glimagesink:
gst-launch-1.0 filesrc location=test.264 ! h264parse ! vaapih264dec \
! vaapipostproc deinterlace-mode=1 deinterlace-method=1 ! glimagesink
gst_vaapi_plugin_base_get_allowed_raw_caps() is used for both the sink pad and the src pad, which causes some bugs. For the sink pad we need to verify vaPutImage(), while for the src pad we need to verify vaGetImage().
For the vaapidecoderXXX kind of plugins the case is more complex: we need to verify whether the decoded result (held in a surface, NV12 format most of the time) can be read back with vaGetImage() into a given raw image format. Add more checks to fix all these problems.
https://gitlab.freedesktop.org/gstreamer/gstreamer-vaapi/issues/123
Signed-off-by: He Junyan <junyan.he@hotmail.com>
Add the 4:4:4 10-bit YUV format Y410, which can be used to decode main-444 10-bit streams. Currently, this feature is only supported by the media-driver on Icelake.
The extract_allowed_surface_formats() function just checks whether a given surface/image format pair is supported: create a surface, create an image with the same video format, and putImage from the image to the surface. If all these operations succeed, that video format is supported.
The old logic did not work for some video formats. For example, for an RGBA-like format it created an NV12 surface and an RGBA image, and the putImage failed because the formats were not the same; RGBA was therefore reported as unsupported even though it actually is supported.
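A rough sketch of that probe using plain libva calls (the probe_format() helper, the fixed 64x64 size and the error handling are illustrative only):
  #include <glib.h>
  #include <va/va.h>

  static gboolean
  probe_format (VADisplay dpy, guint rt_format, VAImageFormat *img_fmt)
  {
    VASurfaceID surface;
    VAImage image;
    gboolean ok = FALSE;

    /* create surface and image with the same video format */
    if (vaCreateSurfaces (dpy, rt_format, 64, 64, &surface, 1, NULL, 0) != VA_STATUS_SUCCESS)
      return FALSE;
    if (vaCreateImage (dpy, img_fmt, 64, 64, &image) == VA_STATUS_SUCCESS) {
      /* the format is reported as supported only if the copy itself succeeds */
      ok = (vaPutImage (dpy, surface, image.image_id,
              0, 0, 64, 64, 0, 0, 64, 64) == VA_STATUS_SUCCESS);
      vaDestroyImage (dpy, image.image_id);
    }
    vaDestroySurfaces (dpy, &surface, 1);
    return ok;
  }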
gst_vaapi_plugin_base_close() removed the raw caps that are used indirectly
in gst_vaapipostproc_transform_caps(). The usage is already protected by
the mutex.
This is needed when the pipeline is stopped during startup.
gl_platform_type in gst_vaapi_get_display_type_from_gl_env() generates an unused-variable warning and may break the build when -Werror is enabled. Several functions, such as gst_vaapi_display_egl_new_with_native_display(), trigger missing-prototype warnings and link errors when GL is enabled but EGL is disabled. Fix all these warnings and link errors.
https://bugzilla.gnome.org/show_bug.cgi?id=797358
Signed-off-by: Junyan He <junyan.he@intel.com>
Add the 4:2:2 10-bit YUV format Y210, which can be used to decode main-10-422 10-bit streams. Currently, this feature is only supported by the media-driver on Icelake.
https://bugzilla.gnome.org/show_bug.cgi?id=797264
Currently vaapipostproc calls the driver's video processing pipeline for deinterlacing only when advanced deinterlacing is requested. Modify it so that it always tries the driver's video processing pipeline first, and falls back to the software method of appending picture structure metadata only if the driver's method fails.
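A minimal sketch of the new order, with illustrative function names (not the exact vaapipostproc symbols):
  /* try the driver's VPP path first; fall back to the software
   * picture-structure-meta path only if the driver fails */
  ret = process_with_vpp (postproc, inbuf, outbuf);
  if (ret != GST_FLOW_SUCCESS)
    ret = process_with_picture_structure_meta (postproc, inbuf, outbuf);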
https://bugzilla.gnome.org/show_bug.cgi?id=797095
Remove the macros GST_VAAPI_OBJECT_DISPLAY() and GST_VAAPI_OBJECT_ID() from the API exposed to plugins, keeping them only for internal library usage.
The purpose is readability.
https://bugzilla.gnome.org/show_bug.cgi?id=797139
If a unit could not be parsed, just skip this NAL and keep parsing what is left in the adapter. We need to flush the broken unit in the decoder-specific parser because the generic code does not know about unit boundaries. This increases error resilience.
Before this, the broken unit would stay in the adapter and EOS would be returned, which stopped the streaming. Just removing the EOS would have led to the adapter size growing indefinitely.
https://bugzilla.gnome.org/show_bug.cgi?id=796863
On systems with an Nvidia card, this error is output each time
the registry is rebuilt, which happens pretty often when
using gst-build as a development environment.
https://bugzilla.gnome.org/show_bug.cgi?id=796663
gst_vaapi_display_egl_new_with_native_display() has been broken since it was never used.
Now it is needed: calling this API creates a display from the provided EGL display, which avoids duplicated calls on the native display (e.g. eglTerminate).
Signed-off-by: Victor Jaquez <vjaquez@igalia.com>
https://bugzilla.gnome.org/show_bug.cgi?id=795391
The commit 67e33d3de2 ("vaapiencode: h264:
find best profile in those available") changed the code to pick a profile
that is actually supported by the hardware. Unfortunately it dropped the
downstream constraints. This can cause negotiation failures under certain
circumstances.
The fix is split in two cases (a rough sketch follows below):
1. The available VA-API caps don't intersect with the pipeline's allowed caps:
   * The best allowed profile (from the pipeline's caps) is set as the encoding target profile (it will be adjusted later by the available profiles and properties).
2. The available VA-API caps do intersect with the pipeline's allowed caps:
   * The intersected caps are fixated, and their profile is set as the encoding target profile. In this case it is not the best profile but the minimal one (if VA-API reports the profiles in order). Setting the minimal profile of the intersected caps is better for compatibility.
This patch also fixes other tests related to caps negotiation; for example, it handles the baseline profile even when VA only supports constrained-baseline.
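A rough sketch of the two cases (the gst_caps_* calls are standard GStreamer API; the surrounding variable and helper names are illustrative):
  GstCaps *intersection = gst_caps_intersect (allowed_caps, va_caps);
  if (gst_caps_is_empty (intersection)) {
    /* case 1: take the best profile from the pipeline's allowed caps;
     * it will be adjusted later against the available profiles */
    profile = find_best_allowed_profile (allowed_caps);
  } else {
    /* case 2: fixate the intersection and use its profile */
    intersection = gst_caps_fixate (intersection);
    profile_str = gst_structure_get_string (
        gst_caps_get_structure (intersection, 0), "profile");
  }
  gst_caps_unref (intersection);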
Original-patch-by: Michael Olbrich <m.olbrich@pengutronix.de>
https://bugzilla.gnome.org/show_bug.cgi?id=794306
Instead of using our own context handling for looking for GstGL
parameters (display, context and other context), this patch changes
the logic to use the utility function offered by GstGL.
https://bugzilla.gnome.org/show_bug.cgi?id=793643
The parameters of gst_gl_ensure_element_data() must not be local variables, since they are going to be used to check whether they were set by gst_element_set_context() inside the API.
This is basically a revert of commit 3d56306c.
https://bugzilla.gnome.org/show_bug.cgi?id=793643
Through "gst.vaapi.app.Display" context, users can set their own
VADisplay and native display of their backend.
So far we support only X11 display, from now we also support Wayland
display.
Attributes:
- wl-display : pointer of struct wl_display .
https://bugzilla.gnome.org/show_bug.cgi?id=705821
Instead of looking for the best profile among the profiles allowed by downstream, the encoder should look for the base profile among the profiles available in VA-API.
https://bugzilla.gnome.org/show_bug.cgi?id=794306
Instead of copying the metadata in the prepare_output_buffer() vmethod, it is done in append_output_buffer_metadata(), so deinterlaced buffers can also get the proper metas.
GstVideoCropMeta is now copied internally, and whether to copy it is decided via the transform_meta() vmethod.
A new internal method, copy_metadata(), was added to handle VPP transformations where non-GstVideoVaapiMeta metas were lost.
When importing buffers into a VA-based buffer, the metas of the original buffer must be copied, otherwise information such as GstVideoRegionOfInterestMeta is lost.
https://bugzilla.gnome.org/show_bug.cgi?id=768248
Generate system allocated output buffers when downstream doesn't
support GstVideoMeta.
The VA buffer content is copied to the new output buffer, and it
replaces the VA buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=785054
When downstream can't handle GstVideoMeta, system-allocated buffers must be sent.
The system-allocated buffers are produced in the prepare_output_buffer() vmethod if downstream can't handle GstVideoMeta.
In the transform() vmethod, if the output buffer is a system-allocated buffer, a VA buffer is instantiated and used in its place. Later the VA buffer is copied into the system-allocated buffer, which is pushed as the output buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=785054
This patch adds the member copy_output_frame and sets it to TRUE when downstream didn't request the GstVideoMeta API, the caps are raw and the internal allocator is the VA-API one.
https://bugzilla.gnome.org/show_bug.cgi?id=785054
This function informs the element whether it shall copy the buffer generated by the pool into a system-allocated buffer before pushing it downstream.
https://bugzilla.gnome.org/show_bug.cgi?id=785054
A VA-API based buffer might need a video meta because of its different strides. But when downstream doesn't support video meta, we still need to force the usage of the video meta internally.
Before, we changed the buffer pool configuration, but that is a hack and we cannot rely on it for downstream.
This patch adds a check: if the caps are raw video and the allocator is the VA-API one, the option is enabled without changing the pool configuration. In this case the element is responsible for copying the frame into a plain buffer with the expected strides.
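A minimal sketch of such a copy using the public GstVideo API (variable names illustrative, error handling omitted):
  GstVideoFrame src_frame, dst_frame;

  gst_video_frame_map (&src_frame, &va_vinfo, va_buffer, GST_MAP_READ);
  gst_video_frame_map (&dst_frame, &sys_vinfo, sys_buffer, GST_MAP_WRITE);
  gst_video_frame_copy (&dst_frame, &src_frame);   /* handles differing strides */
  gst_video_frame_unmap (&dst_frame);
  gst_video_frame_unmap (&src_frame);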
https://bugzilla.gnome.org/show_bug.cgi?id=785054
vaapisink, when used with the Intel VA-API driver, tries to display surfaces in NV12 format, which are handled correctly by Weston. Nonetheless, COGL cannot display YUV surfaces, making pipelines fail on mutter.
This shall be solved either in COGL or by making the driver paint RGB surfaces. In the meantime, let's just demote vaapisink to marginal rank when a Wayland environment is detected, no matter whether it is Weston.
https://bugzilla.gnome.org/show_bug.cgi?id=775698
In propose_allocation(), if the number of allocation params is zero, the system allocator is added first, and lastly the native VA-API allocator.
In decide_allocation(), the allocation params in the query are traversed, looking for a native VA-API allocator. If one is found, it is reused as the src pad allocator. Otherwise, a new allocator is instantiated and appended to the query.
https://bugzilla.gnome.org/show_bug.cgi?id=789476
GST_VAAPI_VIDEO_ALLOCATOR_NAME was added in commit 5b11b8332 but it
was never used, since the native VA-API allocator name has been
GST_VAAPI_VIDEO_MEMORY_NAME.
This patch removes GST_VAAPI_VIDEO_ALLOCATOR_NAME macro.
https://bugzilla.gnome.org/show_bug.cgi?id=789476
Don't subscribe to button press events when using a foreign window, because the user-created window would trap those events, preventing frames from being shown.
https://bugzilla.gnome.org/show_bug.cgi?id=791615
Check which of the display's color-balance properties are made available by the VA-API driver before setting them.
Also log an info message for each unavailable property.
https://bugzilla.gnome.org/show_bug.cgi?id=792638
gst_vaapipostproc_ensure_filter might free the allowed_srcpad_caps
and allowed_sinkpad_caps. This can race with copying these caps in
gst_vaapipostproc_transform_caps and lead to segfaults.
The gst_vaapipostproc_transform_caps function already locks
postproc_lock before copying the caps. Make sure that calls to
gst_vaapipostproc_ensure_filter also acquire this lock.
https://bugzilla.gnome.org/show_bug.cgi?id=791404
The gst.vaapi.app.Display context is meant for applications that provide the VA display and the native display to be used by the pipeline when vaapisink is used as an overlay. There is no use case for encoders, decoders, or the postprocessor.
vaapisink shall query upstream for gst.vaapi.Display first, and then, if there is no reply, a gst.vaapi.app.Display context will be posted on the bus for the application. If the application replies, a GstVaapiDisplay object is instantiated from the context info; otherwise a GstVaapiDisplay is created with the normal algorithm that guesses the graphics platform. Either way, the instantiated GstVaapiDisplay is propagated along the pipeline with the have-context bus message.
Also, only vaapisink will process the gst.vaapi.app.Display context, and only if it doesn't have a display already set. This is because, if vaapisink is inside a bin (playsink, for example), the need-context message is posted twice, leading to an error state.
https://bugzilla.gnome.org/show_bug.cgi?id=790999
As buffers negotiated with the memory:VASurface caps feature can also be mapped, they can also be configured to use VA derived images, in other words "direct rendering".
Also, because of the changes that made the dmabuf allocator the default allocator, the code configuring direct rendering was not clear.
This patch cleans up the code and enables direct rendering when the environment variable GST_VAAPI_ENABLE_DIRECT_RENDERING is defined, even when the memory:VASurface caps feature is negotiated.
https://bugzilla.gnome.org/show_bug.cgi?id=786054
Mark the source pad as dmabuf-capable only if the GstGL context comes from downstream.
It is possible to know that at two moments:
1. When the GstGLTextureUpload caps feature is negotiated and the downstream pool reports back gst.gl.GstGLContext.
2. When a GstGLContext is found as a GstContext from downstream.
https://bugzilla.gnome.org/show_bug.cgi?id=788503
In function gst_vaapi_find_gl_context() add a direction parameter to
return back the direction where the GstGL context was found.
This is going to be useful when checking if downstream can import
dmabuf-based buffers.
https://bugzilla.gnome.org/show_bug.cgi?id=788503
This patch refactors the code by adding the function vaapi_plugin_base_set_srcpad_can_dmabuf(), which determines whether the passed GstGLContext can handle dmabuf-based buffers.
The function is exposed publicly since it is intended to be used later
at GstVaapiDisplay instantiation.
https://bugzilla.gnome.org/show_bug.cgi?id=788503
This patch allows some properties to be set at runtime (e.g. bitrate).
Note that all properties are kept under control by num_codedbuf_queued.
https://bugzilla.gnome.org/show_bug.cgi?id=786321
VPP support is only needed for advanced deinterlacing, which is not
enabled by default either. Error out if it is selected but VPP is not
supported, and otherwise just work without VPP support.
https://bugzilla.gnome.org/show_bug.cgi?id=788758
When creating the test display for querying capabilities, try in a certain order: DRM, Wayland and finally X11. Neither GLX nor EGL is tried, since they are composited on top of either X11 or Wayland.
The reason for this is to reduce the possibility of a failure that could blacklist the plugin.
https://bugzilla.gnome.org/show_bug.cgi?id=782212
We returned FALSE from ::start() if VPP support is not available, but it
is only really needed for complex filters and during transform we check
for that. For simple deinterlacing it is not needed.
Instead of reusing a parameter variable for the return value of
gst_vaapipostproc_transform_caps(), this patch uses the function
scoped pointer. Thus, the code is cleaner.
https://bugzilla.gnome.org/show_bug.cgi?id=785706
Instead of reusing a parameter variable for the return value of
gst_vaapipostproc_fixate_caps(), this patch uses the function scoped
pointer. Thus, the code is cleaner.
https://bugzilla.gnome.org/show_bug.cgi?id=785706
When glimagesink negotiates the memory:DMABuf caps feature, the exported dmabuf buffers in NV12 format are not rendered correctly, so only planar formats are set.
https://bugzilla.gnome.org/show_bug.cgi?id=788229
Instead of using gst_vaapi_display_set_property() or gst_vaapi_display_get_property(), this patch replaces their usage with g_object_set() or g_object_get().
Also, the internal helper cb_set_value() is removed since it is not used anymore.
https://bugzilla.gnome.org/show_bug.cgi?id=788058
Make sure vaapisink creates a VA-surface-backed buffer pool and that all required attributes get assigned correctly for the drm display type.
This is needed to make the pipeline below work:
gst-launch-1.0 filesrc location=raw_video.mov ! videoparse format=uyvy
width=320 height=240 framerate=30/1 ! vaapisink display=drm
https://bugzilla.gnome.org/show_bug.cgi?id=786954
A new FEI-based encoder element for H.264 is added: vaapih264feienc.
FEI is an extension to VA-API which provides low-level, advanced control over different stages of encoding.
Extending vaapih264enc with FEI support is possible, but it would make the code much more complicated and difficult to debug. So the new encoder element is added, but with its rank kept at 0; vaapih264enc will stay the primary encoder for normal use cases.
vaapih264feienc is mainly useful for users who want to play with motion vectors and macroblock predictions. A user can also do one stage of encoding (e.g. only the motion vector calculation) in software and offload transformation/entropy-coding etc. to hardware (which is what the PAK module does) using the FEI element.
vaapih264feienc can work in different modes using the fei-mode property, e.g.:
gst-launch-1.0 videotestsrc ! vaapih264feienc fei-mode=ENC+PAK ! filesink location=sample.264
Important note: ENC-only mode won't produce any encoded data, which is expected. But ENC always requires the output of PAK in order to do inter-prediction over reconstructed frames.
Similarly, PAK-only mode always requires MV and MBCode as input, so unless there is an upstream element providing those buffers, PAK-only won't work as expected.
In a nutshell, ENC_PAK and ENC+PAK are the only modes we can verify with vaapih264feienc. But ideally, verifying the ENC+PAK mode is enough to make sure that ENC and PAK are working as expected, since ENC+PAK mode always invokes ENC and PAK separately in vaapih264feienc.
People contributed:
Wang, Yi <yi.a.wang@intel.com>
Leilei <leilei.shang@intel.com>
Zhong, Xiaoxia <xiaoxia.zhong@intel.com>
xiaominc <xiaomin.chen@intel.com>
Li, Jing B <jing.b.li@intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=785712
https://bugzilla.gnome.org/show_bug.cgi?id=784667
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
When vaapih264dec's base-only property is set to TRUE, fake SVC profile support in the caps.
https://bugzilla.gnome.org/show_bug.cgi?id=732266
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
Instead of including particular GstGL header files in a header file that doesn't export any GstGL symbol, the main GstGL header file is included in gstvaapipluginutil.c, where the symbols are used.
https://bugzilla.gnome.org/show_bug.cgi?id=786597
Coverity scan bug: an unsigned value can never be negative, so this test will always evaluate the same way.
As len is a guint32, there is no need to check whether it is greater than or equal to zero.
Coverity scan bug: the variable will contain an arbitrary value left over from earlier computations.
The variable base_only is fetched from the base-only property and may not be assigned, so it needs to be initialized.
Refactor gst_vaapi_plugin_base_create_gl_context() in order to check the return value of gst_gl_ensure_element_data(). The resulting code is a bit cleaner.
The function g_bit_nth_lsf() may return -1 if the requested bit position is not available. Thus, this patch checks that the return value is not -1 before continuing.
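For illustration, the resulting pattern looks roughly like this (variable names are illustrative):
  gint bit = g_bit_nth_lsf (mask, -1);
  if (bit == -1)
    return FALSE;   /* no bit set: do not use the invalid position */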
Replacing the GstVaapiDisplay during rendering might hide problems in some cases, even though it is currently safe since we use a cache of GstVaapiDisplay.
Play it safe by failing if this happens.
https://bugzilla.gnome.org/show_bug.cgi?id=766704
Through "gst.vaapi.app.Display" context, users can set their own VADisplay
and native display of their backend.
Attributes:
- display : pointer of VADisplay
- x11-display : pointer of X11 display (Display *), if they're using.
This patch creates GstVaapidisplayX11 if information provided through
"gst.vaapi.app.Display"
https://bugzilla.gnome.org/show_bug.cgi?id=766704
So far the vaapi encoder does not set the profile on the src caps. This patch makes it set the profile, determined by the encoder itself, on the src caps.
In addition, if the encoder chose a different profile that was not negotiated with downstream, we should set a compatible profile to make the negotiation work.
https://bugzilla.gnome.org/show_bug.cgi?id=757941
Check whether the profile requested in the source caps is supported by the VA driver. If it is not, an info log message is emitted saying that another (compatible?) profile will be used.
https://bugzilla.gnome.org/show_bug.cgi?id=757941
First check whether downstream requests ANY caps. If so, byte-stream is used and the profile will be chosen by the encoder. If downstream requests EMPTY caps, the negotiation will fail.
Lastly, byte-stream and profile are looked up in the allowed caps.
https://bugzilla.gnome.org/show_bug.cgi?id=757941
The check for the avc stream format was done in the vaapi encoder's get_caps() vmethod, but that is wrong since it has to be checked in the encoder's set_format().
https://bugzilla.gnome.org/show_bug.cgi?id=757941
vaapipostproc didn't negotiate the proper multiview caps, losing downstream information.
This patch enables playback of MVC-encoded streams by setting the proper multiview mode/flags and number of views on the src caps, according to the sink caps.
https://bugzilla.gnome.org/show_bug.cgi?id=784320
There is another regression from 7a206923: when setting the video info for the video meta, it should be the one from the image's allocator rather than the one from the allocation caps.
Test pipeline:
gst-launch-1.0 filesrc location=bug766184.flv ! decodebin \
! tee ! videoconvert ! videoscale \
! video/x-raw, width=1920, height=1080 ! xvimagesink
There is a regression in 7a206923: the buffer pool ditches all the buffers it generates because the pool config size is different from the buffer's size.
Test pipeline:
gst-launch-1.0 filesrc location=big_buck_bunny_1080p_h264.mov \
! qtdemux ! vaapih264dec ! vaapipostproc ! xvimagesink \
--gst-debug=GST_PERFORMANCE:5
The allocator may update the buffer size according to the VA surface properties. In order to do this, the video info is modified when the allocator is created; the updated size is reported through the allocation info and set in the pool config.
Refactor the set_config() virtual method with a cleaner approach to allocator instantiation when the allocator is not set or is not valid for the pool.
https://bugzilla.gnome.org/show_bug.cgi?id=783599
The vaapi video decoders might have different allocation caps from
the negotiation caps, thus the GstVideoMeta shall use the negotiation
caps, not the allocation caps.
This was done before by reusing gst_allocator_get_vaapi_video_info(), storing there the negotiation caps if they differ from the allocation ones, but this strategy fell short when the allocator had to be reset in the vaapi buffer pool, since we need both.
This patch adds gst_allocator_set_vaapi_negotiated_video_info() and
gst_allocator_get_vaapi_negotiated_video_info() to store the
negotiated video info in the allocator, and distinguish it from
the allocation video info.
https://bugzilla.gnome.org/show_bug.cgi?id=783599
Renamed the local video info structures in the set_config() virtual method. The purpose of the renaming is to clarify the origin of those structures: whether they come from the passed caps parameter (new_allocation_vinfo) or from the configured allocator (allocator_vinfo).
https://bugzilla.gnome.org/show_bug.cgi?id=783599
Renamed private GstVideoInfo structure video_info to allocation_vinfo
and alloc_info to negotiated_vinfo.
The purpose of this renaming is to clarify the origin and purpose of these private variables:
video_info (now allocation_vinfo) comes from the bufferpool
configuration. It describes the physical video resolution to be
allocated by the allocator, which may be different from the
negotiated one.
alloc_info (now vmeta_vinfo) comes from the negotiated caps in
the pipeline. It represents how the frame is going to be mapped
using the video meta.
In Intel's VA-API backend, the allocation_vinfo resolution is
bigger than the negotiated_info.
https://bugzilla.gnome.org/show_bug.cgi?id=783599
When the state of vaapisink changes from PLAYING to NULL, the handle_events flag is set to FALSE and never restored, and then the event thread never runs again.
So we should only allow the flag to be set when the user explicitly requests it.
https://bugzilla.gnome.org/show_bug.cgi?id=782543
Handle the new custom event GstVaapiEncoderRegionOfInterest to enable/disable an ROI region.
Document how to use the new event.
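A sketch of how an application could send such an event; the structure name comes from the commit, while the field names and values are assumptions for illustration only:
  GstStructure *s = gst_structure_new ("GstVaapiEncoderRegionOfInterest",
      "roi-x", G_TYPE_UINT, 0,
      "roi-y", G_TYPE_UINT, 0,
      "roi-width", G_TYPE_UINT, 320,
      "roi-height", G_TYPE_UINT, 240,
      NULL);
  GstEvent *event = gst_event_new_custom (GST_EVENT_CUSTOM_UPSTREAM, s);
  gst_element_send_event (encoder, event);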
https://bugzilla.gnome.org/show_bug.cgi?id=768248
Signed-off-by: Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
This bin should have a classification similar to decodebin, which is "Generic/Bin/Decoder"; otherwise it will wrongly appear as a video decoder.
Signed-off-by: Victor Toso <victortoso@redhat.com>
https://bugzilla.gnome.org/show_bug.cgi?id=782063
When downstream negotiates a pixel-aspect-ratio of 0/1, the calculations for resizing and formatting in vaapipostproc and vaapisink, respectively, failed, and thus the pipeline failed too.
This patch handles the situation by converting a pixel-aspect-ratio of 0/1 to 1/1. This is how other sinks, such as glimagesink, work.
https://bugzilla.gnome.org/show_bug.cgi?id=781759
Since the allocator is created by the pool itself, it should be unreffed after the gst_buffer_pool_config_set_allocator() call. Afterwards, this allocator will be reffed again when assigned to priv->allocator.
https://bugzilla.gnome.org/show_bug.cgi?id=781577
Sometimes a video decoder may set different buffer pool configurations because its frame size changes. In this case we did not reconfigure the allocator.
This patch enables this use case by creating a new allocator inside the VAAPI buffer pool when the caps change, if the allocator is not dmabuf-based. If it is, it is just reconfigured, since it doesn't have a surface pool.
https://bugzilla.gnome.org/show_bug.cgi?id=781577
Direct rendering (using vaDeriveImage rather than vaPutImage) has better performance on some Intel platforms (Haswell, for example), but on others (Skylake) it is the opposite.
In order to have some control, the patch enables direct rendering through the environment variable GST_VAAPI_ENABLE_DIRECT_RENDERING.
Also, it seems to generate some problems with the gallium/radeon backend. See bug #779642.
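A possible usage example, assuming the variable only needs to be defined:
  GST_VAAPI_ENABLE_DIRECT_RENDERING=1 gst-launch-1.0 filesrc location=test.mp4 \
    ! qtdemux ! h264parse ! vaapih264dec ! videoconvert ! xvimagesink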
https://bugzilla.gnome.org/show_bug.cgi?id=775848
Clear decoders on a flush but keep the same instance, rather than completely recreating them. That avoids unnecessarily freeing and recreating surface pools and contexts, which can be quite expensive.
https://bugzilla.gnome.org/show_bug.cgi?id=781142
When these definitions are false, they are left undefined by the preprocessor rather than defined to 0. When they are unset the compilation fails with:
'GST_GL_HAVE_WINDOW_WAYLAND' undeclared (first use in this function)
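For illustration (not the exact hunk), an undefined macro breaks a C-level check but is harmless inside a preprocessor conditional, where unknown identifiers evaluate to 0; use_wayland() is a placeholder:
  /* fails to compile when GST_GL_HAVE_WINDOW_WAYLAND is undefined */
  if (GST_GL_HAVE_WINDOW_WAYLAND)
    use_wayland ();

  /* compiles either way */
  #if GST_GL_HAVE_WINDOW_WAYLAND
  use_wayland ();
  #endif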
https://bugzilla.gnome.org/show_bug.cgi?id=780948
This new virtual method, get_profile(), if implemented by specific
encoders, will return the VA profile potentially determined by the
source caps.
Also it is implemented by h264 and h265 encoders, which are the main
users of this vmethod.
https://bugzilla.gnome.org/show_bug.cgi?id=771291
In order to get the surface formats supported by a specific profile, this patch adds a GstVaapiProfile parameter to gst_vaapi_encoder_get_surface_formats().
Currently the extracted formats are only those related to the default profile of the element's codec.
https://bugzilla.gnome.org/show_bug.cgi?id=771291
Since gstreamer-vaapi uses a common base class, the element-specific default debug category is passed to the base-plugin initializer, so the log messages are categorized per plugin.
Nonetheless, when gst-debug is disabled at compile time, NULL has to be passed to the base-plugin initializer. This patch does that.
https://bugzilla.gnome.org/show_bug.cgi?id=780302
Particularly in GNOME on Wayland, the negotiated or created GL context defines a GLX environment, but VAAPI fails to create a GLX VA display because there is no DRI2 connection.
This patch retries the VA display creation, using the old list of display types, if VA cannot create one with the GL context parameters.
This should also work on systems with two GPUs, where the non-VAAPI one owns the graphics environment and the VAAPI-enabled one works headless.
https://bugzilla.gnome.org/show_bug.cgi?id=772838
Removes GstVideoGLTextureUploadMeta caps feature if the driver
doesn't support opengl.
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=772838
Rename to be consistent with H.264 and also H.265 encoder. The
meson build assumed this was already consistently named, and so
previously was not able to actually build the H.265 decoder.
https://bugzilla.gnome.org/show_bug.cgi?id=778576
When dmabuf is negotiated downstream and decoding and rendering are
not done on the same device, the layout has to be linear in order for
the memory to be shared across devices, since each device has its
own way to do tiling.
Right now this code is rather just a to-do comment, since we are not
fetching the device ids.
https://bugzilla.gnome.org/show_bug.cgi?id=755072
Use the new -base API gst_video_decoder_allocate_output_frame_full() to pass the current proxy/surface to the pool.
The pool will then export this given surface instead of exporting a brand new surface that would never be filled with meaningful data.
https://bugzilla.gnome.org/show_bug.cgi?id=755072
By overriding the acquire_buffer() vmethod it is possible to attach the right GstMemory to the acquired buffer.
As a matter of fact, the acquired buffer may contain any instantiated GstFdMemory, since the buffer has been popped out of the buffer pool, which is a FIFO queue. So there is no guarantee that this buffer matches the currently processed surface. Moreover, the VA driver might not use a FIFO queue, so there is no way to guess the ordering.
In short, acquire_buffer() on the VA driver and on the buffer pool return data that do not match, so we have to manually attach the right GstFdMemory to the acquired GstBuffer. The right GstMemory is the one associated with the current surface.
https://bugzilla.gnome.org/show_bug.cgi?id=755072
gst_vaapi_dmabuf_memory_new() always exports a surface. Previously, it
had to create that surface. Now it can also export an already provided
surface. It is useful to export decoder's surfaces (from VA context).
https://bugzilla.gnome.org/show_bug.cgi?id=755072
Do not add the meta:GstVideoGLTextureUploadMeta feature if the render
element can handle dmabuf-based buffers, avoiding its negotiation.
Similar as "vaapidecode: do not add meta:GstVideoGLTextureUploadMeta
feature if can dmabuf"
https://bugzilla.gnome.org/show_bug.cgi?id=755072
If the negotiated caps are raw and downstream supports the EGL_EXT_image_dma_buf_import extension, then the created allocator is the dmabuf one, configured for downstream.
At this moment, the only element that can push dmabuf-based buffers downstream is vaapipostproc.
In order to enable dmabuf-based buffers in the future, the vaapi base plugin needs to check whether downstream can import dmabuf buffers.
This patch checks whether downstream can handle dmabuf by introspecting the shared GL context. If the GL context is EGL/GLES2 and has the EGL_EXT_image_dma_buf_import extension, then dmabuf can be negotiated.
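A minimal sketch of that introspection using the GstGL API (the helper name is illustrative):
  static gboolean
  context_can_import_dmabuf (GstGLContext * context)
  {
    return gst_gl_context_get_gl_platform (context) == GST_GL_PLATFORM_EGL
        && (gst_gl_context_get_gl_api (context) & GST_GL_API_GLES2)
        && gst_gl_context_check_feature (context, "EGL_EXT_image_dma_buf_import");
  }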
Original-patch-by: Julien Isorce <j.isorce@samsung.com>
The surface created for downstream is going to be filled by VAAPI
elements. So, the driver needs write access on that surface.
This patch releases the derived image held by the proxy, thus the
surface is unmarked as busy.
This is how it has to be done as discussed on libva mailing list.
https://bugzilla.gnome.org/show_bug.cgi?id=755072
Add a GstPadDirection parameter to gst_vaapi_dmabuf_allocator_new(), so that later we can do different things depending on whether the allocated memory is for upstream or downstream, as required by VA-API.
https://bugzilla.gnome.org/show_bug.cgi?id=755072
The value of the width/height property should be set on the out caps if negotiation has gone properly. So we can use srcpad_info when deciding whether to scale.
https://bugzilla.gnome.org/show_bug.cgi?id=778010
If a GstVaapiDisplay is not found through GStreamer context sharing, then the VAAPI elements look for a local GstGLContext via the context sharing mechanism ('gst.gl.local.context').
If this GstGLContext is not found either, only the VAAPI decoders and the VAAPI post-processor will try to instantiate a new GstGLContext.
If a valid GstGLContext is obtained, a new GstVaapiDisplay will be instantiated with the platform, API and windowing specified by that GstGLContext.
Original-Patch-By: Matt Fischer <matt.fischer@garmin.com>
https://bugzilla.gnome.org/show_bug.cgi?id=777409
Instead of calling g_return_val_if_fail() to check the context type, we
should use a normal conditional, since it is possible that other context types
can arrive and try to be assigned. Otherwise a critical log message is
printed.
This happens when we use playbin3 with vaapipostproc as video-filter.
https://bugzilla.gnome.org/show_bug.cgi?id=777409
Every time a new buffer is allocated, the pool is activated. This doesn't impact performance, since gst_buffer_pool_set_active() checks the current state of the pool. Nonetheless, it logs a message when the state is already the same, and that floods the logging subsystem when it is enabled.
To avoid this log flooding, the pool state is now checked before changing it.
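A minimal sketch of the added check (illustrative, not the exact hunk):
  if (!gst_buffer_pool_is_active (pool)) {
    if (!gst_buffer_pool_set_active (pool, TRUE))
      return GST_FLOW_ERROR;
  }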
When new sink caps arrive, the internal decoder state is updated and, if needed, a downstream renegotiation is requested.
Previously, when new caps arrived, the whole decoder was destroyed and recreated. Now, if the caps are compatible or have the same codec, the internal decoder is kept, but a downstream renegotiation is requested.
https://bugzilla.gnome.org/show_bug.cgi?id=776979
Add two buffers as the default value for the minimum number of buffers.
This is used when creating and proposing the vaapi bufferpool for the sink pad, so the upstream element will keep at least these two buffers.
https://bugzilla.gnome.org/show_bug.cgi?id=775203
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
Rename the parameters 'vip' and 'flags' to 'alloc_info' and 'surface_alloc_flags' respectively. The purpose of this change is to make those parameters self-documenting.
Also, in line with this patch, the local 'alloc_info' variable was renamed to 'surface_info', because it stores the possible surface's video info, not the allocation one.
In order to auto-document the code, this patch renames the 'vip'
parameter in the functions related to gst_vaapi_video_allocator_new ()
to 'alloc_info', since it declares the allocation video info from
the vaapi buffer pool.
gst_vaapi_surface_new_with_format() is a wrapper for
gst_vaapi_surface_new_full (). In this case, the former is simpler
than the first. This patch changes that.
If the src pad caps have changed, downstream needs to be notified. In addition, do not set passthrough if they have changed; otherwise transform sometimes starts processing before the caps change. The passthrough value will be set later in fixate in that case.
https://bugzilla.gnome.org/show_bug.cgi?id=775204
Add a capsfilter forcing the caps
"video/x-raw(memory:VASurface), format=(string)NV12" between the
queue and the vaapipostproc so no renegotiation is required.
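For illustration, such a capsfilter is typically configured like this (plain GStreamer API, not the exact hunk):
  GstElement *filter = gst_element_factory_make ("capsfilter", NULL);
  GstCaps *caps =
      gst_caps_from_string ("video/x-raw(memory:VASurface), format=(string)NV12");

  g_object_set (filter, "caps", caps, NULL);
  gst_caps_unref (caps);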
https://bugzilla.gnome.org/show_bug.cgi?id=776175
To detect and handle errors during allocator_configure_surface_info() and allocator_configure_image_info().
https://bugzilla.gnome.org/show_bug.cgi?id=776084
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
When an instance of GstVaapiVideoAllocator fails to initialize, the log message should not include the allocator object, because it is going to be unreffed.
This reverts commit 3285121181.
The video decoder's negotiate() vmethod is also called when events arrive, which means the proper configuration of the sink pad might not be complete yet, so we should not update the src pad there.
Let's keep the old non-vmethod negotiate().
Add the inlined function allocator_configure_pools(), moving code out of gst_vaapi_video_allocator_new() to make it clear that it is a post-initialization of the object.
Since video_info stores the GstVideoInfo of the allocation caps, it is clearer to rename it to allocation_info, to distinguish it later from negotiation_info.
Instead of defining GstVaapiDmaBufAllocator as a hackish decorator of GstDmaBufAllocator, now that GstDmaBufAllocator's GType is exposed, GstVaapiDmaBufAllocator is a full-featured GstAllocator inheriting from GstDmaBufAllocator.
https://bugzilla.gnome.org/show_bug.cgi?id=755072
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
Add a helper function to initialize gst_debug_vaapivideomemory, so it can be used either by GstVaapiVideoAllocatorClass or by GstVaapiDmabufAllocator (which is a decorator of GstDmaBufAllocator).
Later, log possible errors when calling gst_vaapi_dmabuf_allocator_new().
https://bugzilla.gnome.org/show_bug.cgi?id=755072
As the internal encoder is created at start(), let's release it at
stop() vmethod, to be consistent.
gst_vaapiencode_destroy() is called since it also resets the input and
output states, which is something that the base class does internally
after calling stop() vmethod.
https://bugzilla.gnome.org/show_bug.cgi?id=769266
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
Currently, the specific encoder is created during set_format(). This might lead to a race condition when creating profiles with multiple encoders.
This patch moves the ensure_encoder() call to the start() vmethod to avoid the race condition.
https://bugzilla.gnome.org/show_bug.cgi?id=773546
In commit ca0c3fd6 we removed the dynamic configuration of the bin because we assumed the bin would always be static as registered.
Nonetheless we were wrong, because it is possible to request, with a property, that the post-processor not be used.
Since we want to add a way to disable the post-processor through an environment variable, this removed feature is required again.
If the environment variable GST_VAAPI_DISABLE_VPP is defined, the postprocessor inside vaapidecodebin is disabled, and then vaapidecodebin behaves as an alias of the old vaapidecode.
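A possible usage example, assuming the variable only needs to be defined:
  GST_VAAPI_DISABLE_VPP=1 gst-launch-1.0 filesrc location=test.mp4 \
    ! qtdemux ! h264parse ! vaapidecodebin ! glimagesink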
https://bugzilla.gnome.org/show_bug.cgi?id=775041
Instead of creating the VA display before the bus is set on the element, it is created when the element is opened.
Basically, this commit is a revert of
5e5d62cac7
which was done when GStreamer's context sharing was not as mature as it is now. There is no reason to keep this hack.
Since negotiation caps and allocation caps are now differentiated, there is no need to add a video crop meta based on the negotiation caps. Hence, remove it.
https://bugzilla.gnome.org/show_bug.cgi?id=773948
Since all the video converters were deprecated in GStreamer 1.2, we don't need to handle them anymore in vaapi's buffer meta.
This patch removes their usage and the corresponding buffer meta API.
https://bugzilla.gnome.org/show_bug.cgi?id=745728
First, deactivate the source pad pool when the out caps change, and destroy the texture map, the source pad allocator and the pool only if the new caps are different from the ones already set.
This is related to bug 758907, when no vaapipostproc is used (no vaapidecodebin). In order to negotiate downstream we need to destroy the source pad allocator, otherwise the same allocated buffers keep being used and the mapping fails.
Remove the redundant GST_VAAPI_TYPE_VIDEO_INFO; it is a duplicate of GST_TYPE_VIDEO_INFO and was created before GStreamer 1.6, where the boxed type was introduced.
https://bugzilla.gnome.org/show_bug.cgi?id=774782
This patch adds the created allocator to the allocation query in both the decide_allocation() and propose_allocation() vmethods.
With it, there's no need to set the modified allocator's size in the pool configuration.
https://bugzilla.gnome.org/show_bug.cgi?id=774782
We should set the correct buffer size when we are configuring the pool,
otherwise the buffer will be discarded when it returns to the pool.
Indeed when the ref-count of a buffer reaches zero, its pool will queue
it back (and ref it) if, and only if, the buffer size matches the
configured buffer size on the pool.
This issue can be debugged with GST_DEBUG=*PERF*:6, see gstbufferpool.c
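A minimal sketch of keeping the configured size in sync (standard GstBufferPool API, variable names illustrative):
  config = gst_buffer_pool_get_config (pool);
  gst_buffer_pool_config_set_params (config, caps,
      GST_VIDEO_INFO_SIZE (&vinfo), min_buffers, max_buffers);
  gst_buffer_pool_set_config (pool, config);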
https://bugzilla.gnome.org/show_bug.cgi?id=774782
When calling gst_vaapi_video_memory_copy(), the memory to copy should have been allocated by the vaapi allocator. This patch adds that verification.
Instead of having a gst_vaapi_video_memory_free() that is only ever called by gst_vaapi_video_allocator_free(), let's just remove the former and merge it into the latter.
There was duplicated code in gst_video_meta_unmap_vaapi_memory() and gst_vaapi_video_memory_unmap() when unmapping.
This patch refactors both methods by adding the common function unmap_vaapi_memory(). This also ensures that, if direct rendering is enabled, it is correctly reset.
Additionally, the image is set as current only when the mapping flags have the WRITE bit, which was done in gst_video_meta_map_vaapi_memory() but not in gst_vaapi_video_memory_map().
In order to do this, the mapping flags were required, so instead of overriding the mem_unmap() virtual function, mem_unmap_full() is overridden.
https://bugzilla.gnome.org/show_bug.cgi?id=774213
There was duplicated code in gst_video_meta_map_vaapi_memory() and gst_vaapi_video_memory_map() when doing READ and WRITE mappings.
This patch refactors both methods by adding the common function map_vaapi_memory().
Additionally, ensure_images_is_current() is called only when the flags have the READ bit, which was done in gst_video_meta_map_vaapi_memory() but not in gst_vaapi_video_memory_map().
https://bugzilla.gnome.org/show_bug.cgi?id=772151