The AV1 film_grain feature needs two surfaces at the same time for
decoding. One is the recon surface, which will be used as a reference
later, and the other one is for display. The GstVaapiPicture should
contain the surface for display, while vaBeginPicture() needs
the recon surface as its target.
We add a gst_vaapi_picture_decode_with_surface_id API to handle this
kind of requirement.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer-vaapi/-/merge_requests/191>
gst_vaapi_encoder_h264_get_pending_reordered() does not consider the
HIERARCHICAL_B mode case. The pipeline:
gst-launch-1.0 videotestsrc num-buffers=48 ! vaapih264enc prediction-type=2 \
keyframe-period=32 ! fakesink
triggers an assertion failure:
ERROR:../gst-libs/gst/vaapi/gstvaapiencoder_h264.c:1996:reflist1_init_hierarchical_b:
assertion failed: (count != 0)
The last few B frames are not fetched in the correct order when
HIERARCHICAL_B is enabled.
We also fix a latent bug in normal mode: popping B frames with
g_queue_pop_tail() makes the last several frames be encoded in reverse
order. The NALs of the last few frames then appear in reverse order in
the bitstream, though the decoder can still output the correct images.
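As an illustration of the queue-direction part of the fix (plain GLib, not the encoder code itself): frames queued with g_queue_push_tail() have to be drained with g_queue_pop_head() to keep encode order, while draining with g_queue_pop_tail() flushes the remaining B frames reversed:

  #include <glib.h>

  int
  main (void)
  {
    GQueue *reorder = g_queue_new ();
    gpointer frame;

    g_queue_push_tail (reorder, (gpointer) "B1");
    g_queue_push_tail (reorder, (gpointer) "B2");
    g_queue_push_tail (reorder, (gpointer) "B3");

    /* Prints B1, B2, B3 (encode order); popping from the tail instead
     * would emit B3, B2, B1, i.e. reversed NALs at the end of stream. */
    while ((frame = g_queue_pop_head (reorder)))
      g_print ("%s\n", (const gchar *) frame);

    g_queue_free (reorder);
    return 0;
  }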
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer-vaapi/-/merge_requests/405>
In SCC mode, an I frame can reference itself, which requires the L0
reference list to be enabled, so we set the I frame's slice type to
P_SLICE. We do not need to change the ref_pic_list0/1 passed to the VA
driver; we just need to enable
VAEncPictureParameterBufferHEVC->pps_curr_pic_ref_enabled_flag to tell
the driver to consider the current frame as a reference. For bitstream
conformance, NumRpsCurrTempList0 should be incremented by one to include
the current picture as a reference frame; we do that manually when
packing the slice header.
Command line like:
gst-launch-1.0 videotestsrc num-buffers=10 ! \
capsfilter caps=video/x-raw,format=NV12,framerate=30/1,width=640,height=360 ! \
vaapih265enc ! capsfilter caps=video/x-h265,profile="{ (string)screen-extended-main }" ! \
filesink location=out.265
Can be used to specify that the encoder should use SCC profiles.
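A hedged sketch of the picture/slice parameter side of this; whether pps_curr_pic_ref_enabled_flag sits under an scc_fields bitfield of VAEncPictureParameterBufferHEVC depends on the libva version, so take the field path as an assumption:

  #include <va/va.h>
  #include <va/va_enc_hevc.h>

  /* Sketch only: for SCC, the intra frame is sent as a P slice so that
   * L0 exists, and the driver is told the current picture may be used
   * as its own reference.  ref_pic_list0/1 are left untouched. */
  static void
  enable_scc_self_reference (VAEncPictureParameterBufferHEVC * pic,
      VAEncSliceParameterBufferHEVC * slice)
  {
    pic->scc_fields.bits.pps_curr_pic_ref_enabled_flag = 1;  /* assumed path */
    slice->slice_type = 1;      /* P_SLICE (HEVC slice_type: B=0, P=1, I=2) */

    /* NumRpsCurrTempList0 is only incremented when packing the slice
     * header, to count the current picture as a reference. */
  }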
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer-vaapi/-/merge_requests/379>
We need allowed_profiles to store the profiles allowed by
downstream's caps.
Command line like:
vaapivp9enc ! capsfilter caps=video/x-vp9,profile="{ (string)1, \
(string)3 }"
We need to store GST_VAAPI_PROFILE_VP9_1 and GST_VAAPI_PROFILE_VP9_3
in this list.
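A sketch of how those profile strings can be collected from the downstream caps using only core GstCaps API; mapping "1"/"3" to GST_VAAPI_PROFILE_VP9_1/GST_VAAPI_PROFILE_VP9_3 and appending them to allowed_profiles is left to the encoder:

  #include <gst/gst.h>

  /* Gather the "profile" strings permitted by the downstream caps,
   * whether the field is a single string or a list as in the example. */
  static GPtrArray *
  collect_allowed_profile_names (GstCaps * allowed_caps)
  {
    GPtrArray *names = g_ptr_array_new_with_free_func (g_free);
    guint i, j;

    for (i = 0; i < gst_caps_get_size (allowed_caps); i++) {
      GstStructure *s = gst_caps_get_structure (allowed_caps, i);
      const GValue *profile = gst_structure_get_value (s, "profile");

      if (!profile)
        continue;

      if (GST_VALUE_HOLDS_LIST (profile)) {
        for (j = 0; j < gst_value_list_get_size (profile); j++) {
          const GValue *v = gst_value_list_get_value (profile, j);
          g_ptr_array_add (names, g_strdup (g_value_get_string (v)));
        }
      } else if (G_VALUE_HOLDS_STRING (profile)) {
        g_ptr_array_add (names, g_strdup (g_value_get_string (profile)));
      }
    }

    return names;
  }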
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer-vaapi/-/merge_requests/380>
We now use gst_h265_get_profile_from_sps() instead of
gst_h265_profile_tier_level_get_profile() to get a more precise
profile. The new function considers the non-standard cases and makes
a more suitable profile decision.
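Both functions belong to the H.265 parser in GstCodecParsers; a minimal sketch of the replacement, with the old call kept as a comment for comparison:

  #include <gst/codecparsers/gsth265parser.h>

  static GstH265Profile
  get_decoded_profile (GstH265SPS * sps)
  {
    /* Old: derived the profile from the profile_tier_level alone. */
    /* return gst_h265_profile_tier_level_get_profile (&sps->profile_tier_level); */

    /* New: also copes with streams whose PTL flags are not fully standard. */
    return gst_h265_get_profile_from_sps (sps);
  }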
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer-vaapi/-/merge_requests/395>
The removed code set all the reference frames to the current frame when it
is a key frame, but later all the reference frames were overwritten with the
decoded picture buffers, or VA_INVALID_SURFACE if they were not available.
Basically, the first reference frame assignment has always been ignored, and
it is not described by the spec, so this patch removes that code.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer-vaapi/-/merge_requests/400>
This code path is used when frames are rendered as textures through
GstVideoGLTextureUploadMeta with EGL, mainly under Wayland.
Originally the EGLImage was exported as GEM, which was handled by
Intel drivers, but Gallium ones cannot create VA surfaces from
GEM buffers, only DMABuf.
This patch checks the memory types supported by the VA driver to choose
whether to render the EGLImages from GEM or DMABuf buffers, since GEM is
still preferable where supported.
DMABuf is handled well by both intel-vaapi-driver and Gallium.
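A hedged sketch of the kind of check involved, using only the libva surface-attribute query (the gstreamer-vaapi wrappers around it are omitted):

  #include <glib.h>
  #include <va/va.h>
  #include <va/va_drmcommon.h>

  /* TRUE if surfaces for this config can be created from GEM buffer
   * names; otherwise the EGLImage has to be exported as DMABuf. */
  static gboolean
  config_supports_gem_import (VADisplay dpy, VAConfigID config)
  {
    VASurfaceAttrib *attribs;
    unsigned int i, num = 0;
    gboolean gem = FALSE;

    if (vaQuerySurfaceAttributes (dpy, config, NULL, &num) != VA_STATUS_SUCCESS)
      return FALSE;

    attribs = g_new0 (VASurfaceAttrib, num);
    if (vaQuerySurfaceAttributes (dpy, config, attribs, &num) == VA_STATUS_SUCCESS) {
      for (i = 0; i < num; i++) {
        if (attribs[i].type == VASurfaceAttribMemoryType &&
            (attribs[i].value.value.i & VA_SURFACE_ATTRIB_MEM_TYPE_KERNEL_DRM))
          gem = TRUE;
      }
    }

    g_free (attribs);
    return gem;
  }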
Fixes: #137
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer-vaapi/-/merge_requests/122>
Some buffers and their associated FrameState may still be pending at
that point. If the Wayland connection is shared, then messages for those
buffers may still arrive. However, the associated event queue has already
been deleted, so the result is a crash.
With a private connection the associated memory is leaked instead.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer-vaapi/-/merge_requests/342>
gst_vaapi_window_wayland_set_render_rect() may be called from an arbitrary
thread. That thread may be responsible for making the window visible.
At that point another thread will block in gst_vaapi_window_wayland_sync()
because the frame callback will not be called until the window is visible.
If that happens, then acquiring the display lock in
gst_vaapi_window_wayland_set_render_rect() would result in a deadlock.
Cache the size of the opaque rectangle separately and create the opaque
region right before applying it to the surface.
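A sketch of the deferred path with plain wayland-client calls; the cached rectangle type and the exact place where it is applied are simplified here:

  #include <wayland-client.h>

  /* Cached from set_render_rect() without taking the display lock. */
  struct render_rect { int x, y, width, height; };

  /* Later, on the rendering thread, the opaque region is built and
   * applied right before the surface is committed. */
  static void
  apply_opaque_region (struct wl_compositor *compositor,
      struct wl_surface *surface, const struct render_rect *rect)
  {
    struct wl_region *region = wl_compositor_create_region (compositor);

    wl_region_add (region, rect->x, rect->y, rect->width, rect->height);
    wl_surface_set_opaque_region (surface, region);
    wl_region_destroy (region);
  }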
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer-vaapi/-/merge_requests/342>
Implements the new vmethod gst_vaapi_window_set_render_rectangle(),
which stores the information of the render rectangle set by the
user.
This is necessary, at least on Wayland, to get exact information
about the external surface.
vaapisink calls this when gst_video_overlay_set_render_rectangle() is
called.
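On the vaapisink side this roughly amounts to forwarding the GstVideoOverlay call to the window; a sketch with the window lookup omitted and the vmethod invocation only hinted at in a comment:

  #include <gst/video/videooverlay.h>

  static void
  sink_set_render_rectangle (GstVideoOverlay * overlay,
      gint x, gint y, gint width, gint height)
  {
    /* Fetch the sink's GstVaapiWindow, then forward, e.g.:
     * gst_vaapi_window_set_render_rectangle (window, x, y, width, height); */
  }

  static void
  video_overlay_iface_init (GstVideoOverlayInterface * iface)
  {
    iface->set_render_rectangle = sink_set_render_rectangle;
  }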
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer-vaapi/-/merge_requests/342>
The Wayland sub-surfaces API is used to embed the video into an application
window.
See Appendix A of the Wayland Protocol Specification:
"""
The aim of sub-surfaces is to offload some of the compositing work
within a window from clients to the compositor. A prime example is
a video player with decorations and video in separate wl_surface
objects.
This should allow the compositor to pass YUV video buffer processing to
dedicated overlay hardware when possible.
"""
Added new method gst_vaapi_window_wayland_new_with_surface()
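A sketch of the underlying sub-surface setup with plain wayland-client calls, assuming the application hands over its own wl_surface (which the new constructor wraps):

  #include <wayland-client.h>

  /* Embed the video surface into the application's surface so the
   * compositor may place it on a dedicated overlay plane. */
  static struct wl_subsurface *
  embed_video_surface (struct wl_subcompositor *subcompositor,
      struct wl_surface *video_surface, struct wl_surface *app_surface)
  {
    struct wl_subsurface *subsurface =
        wl_subcompositor_get_subsurface (subcompositor, video_surface,
        app_surface);

    /* Desynchronized mode lets video frames be committed independently
     * of the parent window's commits. */
    wl_subsurface_set_desync (subsurface);
    return subsurface;
  }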
Original-Patch-By: Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
Zhao Halley <halley.zhao@intel.com>
changzhix.wei@intel.com
Hyunjun Ko <zzoon@igalia.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer-vaapi/-/merge_requests/342>
Some contexts do not report any valid format that we can support.
For example, for the HEVC 4:4:4 12-bit decoder context, none of the
formats it reports are currently supported, which makes the formats
list a NULL array. We should check that pointer before we use it.
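The guard itself is just an early return before the formats array is dereferenced; a trivial sketch:

  #include <gst/gst.h>

  static GstCaps *
  build_caps_from_formats (GArray * formats)
  {
    /* The context may legitimately hand us no usable formats. */
    if (!formats)
      return NULL;

    /* ... iterate formats and build the caps ... */
    return gst_caps_new_empty ();
  }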
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer-vaapi/-/merge_requests/372>
In H.265, a bigger profile idc may not be compatible with a smaller
profile idc. More importantly, there are multiple profiles with the same
profile idc: for example, main-422-10, main-444 and main-444-10 all have
profile idc 4.
So recording the maximum profile idc is not enough; the encoder needs to
know all the allowed profiles when deciding the real profile.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer-vaapi/-/merge_requests/349>
In H.265, a higher profile idc number does not mean better compression
performance, and it may not be compatible with a lower profile idc.
So it is not suitable to pick the highest idc supported by the HW to
ensure compatibility.
On the other hand, when the entrypoint of the selected profile is valid,
the HW really supports that profile, so there is no need to check it again.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer-vaapi/-/merge_requests/349>
Instead of generating the allowed srcpad caps with generic information,
now it takes the size and format limits from the decoder's context.
This is possible since srcpad caps are generated after the internal
decoder is created.
The patch replaces gst_vaapi_decoder_get_surface_formats() with
gst_vaapi_decoder_get_surface_attributes().
From these attributes, the formats are only used for the VASurface
memory caps feature. For the system memory caps feature, the old
gst_vaapi_plugin_get_allowed_srcpad_caps() is still used, since the
i965 JPEG decoder cannot deliver a mappable format for GStreamer.
And for the other caps features (dmabuf and texture upload) the
same static lists are used.
This patch also adds the DMABuf caps feature only if the context
supports that memory type. Nonetheless, we keep the pre-defined
formats, since they are a subset of the common derived formats
supported by both the AMD/Gallium and the Intel drivers: when
exporting the fd through vaAcquireBufferHandle()/
vaReleaseBufferHandle(), the formats of the derivable image cannot
be retrieved from the driver. Later we'll use the attribute
formats for the DMABuf feature too, once the code is ported to
vaExportSurfaceHandle().
Finally, the allowed srcpad caps are removed if the internal decoder
is destroyed, since the context attributes will change.
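A rough sketch of the VASurface-feature part, with a made-up attributes struct standing in for whatever gst_vaapi_decoder_get_surface_attributes() actually returns:

  #include <gst/gst.h>

  /* Assumed layout: size limits coming from the decoder context. */
  typedef struct
  {
    gint min_width, min_height;
    gint max_width, max_height;
  } SurfaceAttributes;

  static GstCaps *
  build_vasurface_srcpad_caps (const SurfaceAttributes * attribs)
  {
    GstCaps *caps = gst_caps_new_simple ("video/x-raw",
        "width", GST_TYPE_INT_RANGE, attribs->min_width, attribs->max_width,
        "height", GST_TYPE_INT_RANGE, attribs->min_height, attribs->max_height,
        NULL);

    /* The format list from the attributes would be appended here too. */
    gst_caps_set_features_simple (caps,
        gst_caps_features_new ("memory:VASurface", NULL));
    return caps;
  }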
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer-vaapi/-/merge_requests/366>
Currently vaGetSurfaceBufferWl() is used to create Wayland buffers.
Unfortunately this is not implemented by the 'media-driver' and Mesa VA-API
drivers. And the implementation provided by 'intel-vaapi-driver' is not
compatible with a Wayland server that uses the iris Mesa driver.
So create the Wayland buffers manually with the zwp_linux_dmabuf_v1 Wayland
protocol. Formats and modifiers supported by the Wayland server are taken
into account. If necessary, VPP is enabled to convert the buffer into a
supported format.
Fall back to vaGetSurfaceBufferWl() if creating buffers via the dmabuf
protocol fails.
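A condensed, single-plane sketch of the manual path (error handling and the format/modifier matching against the server trimmed); the header name is the usual wayland-scanner output:

  #include <stdint.h>
  #include <unistd.h>
  #include "linux-dmabuf-unstable-v1-client-protocol.h"

  /* Wrap one exported dmabuf fd (single plane) into a wl_buffer. */
  static struct wl_buffer *
  create_dmabuf_buffer (struct zwp_linux_dmabuf_v1 *dmabuf, int fd,
      int width, int height, uint32_t drm_format, uint32_t offset,
      uint32_t stride, uint64_t modifier)
  {
    struct zwp_linux_buffer_params_v1 *params;
    struct wl_buffer *buffer;

    params = zwp_linux_dmabuf_v1_create_params (dmabuf);
    zwp_linux_buffer_params_v1_add (params, fd, 0 /* plane */, offset,
        stride, modifier >> 32, modifier & 0xffffffff);
    buffer = zwp_linux_buffer_params_v1_create_immed (params, width, height,
        drm_format, 0 /* flags */);
    zwp_linux_buffer_params_v1_destroy (params);
    close (fd);

    return buffer;
  }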
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer-vaapi/-/merge_requests/346>