We were doing our initial "empty" commit on the subsurface instead of the
toplevel surface. As a consequence, we should never have received a configure
event at all, not just on mutter. This fixes the following warning when using
the mutter compositor (aka gnome-shell):
waylandsink wlwindow.c:304:gst_wl_window_new_toplevel: The compositor did not send configure event.
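For reference, a minimal sketch of what the fixed path amounts to in plain
xdg-shell client code (not the actual waylandsink implementation): commit the
toplevel's wl_surface once with no buffer attached, then wait for the
compositor's configure event before rendering the first real frame.

  #include <stdint.h>
  #include <wayland-client.h>
  #include "xdg-shell-client-protocol.h"

  static void
  handle_configure (void *data, struct xdg_surface *surface, uint32_t serial)
  {
    xdg_surface_ack_configure (surface, serial);
    *(int *) data = 1;
  }

  static const struct xdg_surface_listener surface_listener = {
    handle_configure,
  };

  static void
  wait_initial_configure (struct wl_display *display,
      struct wl_surface *toplevel_surface, struct xdg_surface *xdg_surface)
  {
    int configured = 0;

    xdg_surface_add_listener (xdg_surface, &surface_listener, &configured);
    /* Empty commit on the toplevel surface, not on a subsurface. */
    wl_surface_commit (toplevel_surface);
    while (!configured)
      wl_display_roundtrip (display);
  }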
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2299>
The current DPB check for H265 is not quite correct. The current frame
is already in the DPB when we check whether the DPB is full.
For example, if the DPB max size is 16 and there are 15 reference frames
in the DPB, gst_h265_dpb_delete_unused() removes nothing, and with the
current frame added the DPB size is 16. This causes an error return even
though the stream is correct.
We now integrate the DPB full check into the need_bump() function.
We add the current frame to the DPB and then check whether the number of
pictures exceeds the DPB's max_num_pics (which means there is no room for
the current picture). If so, we bump the DPB immediately.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2291>
Some decoding APIs support delayed output for performance reasons.
One example would be to request decoding for multiple frames and
then query for the oldest frame in the output queue.
This also increases throughput for transcoding and improves seek
performance when supported by the underlying backend.
Introduce support in the vp8 base class, so that backends that
support render delays can actually implement it.
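A sketch of what a backend could do once it opts in; the
get_preferred_output_delay() vfunc name mirrors the one used by the H.264
base class, and whether the vp8 class exposes exactly the same name is an
assumption here.

  /* Report a small output delay for non-live pipelines so several decode
   * requests can be in flight before we ask the driver for the oldest
   * finished frame. */
  static guint
  gst_foo_vp8_dec_get_preferred_output_delay (GstVp8Decoder * decoder,
      gboolean is_live)
  {
    return is_live ? 0 : 2;
  }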
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2150>
This adds src format validation, which avoids registering the element for
drivers for which we don't support any of the src formats. It also
special-cases the AlphaDecodeBin wrapper, since we know the alphacombine
element only supports I420 and NV12 for now.
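Conceptually the check boils down to enumerating the decoder's CAPTURE (src)
formats and bailing out when none of them is usable; a minimal sketch using
plain V4L2 ioctls (the helper name and the hard-coded format list are
illustrative, not the actual v4l2codecs code):

  #include <glib.h>
  #include <sys/ioctl.h>
  #include <linux/videodev2.h>

  static gboolean
  has_supported_src_format (gint fd)
  {
    struct v4l2_fmtdesc fmt = { 0, };

    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
    for (fmt.index = 0; ioctl (fd, VIDIOC_ENUM_FMT, &fmt) == 0; fmt.index++) {
      if (fmt.pixelformat == V4L2_PIX_FMT_NV12 ||
          fmt.pixelformat == V4L2_PIX_FMT_YUV420)
        return TRUE;
    }
    return FALSE;
  }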
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2272>
codecalpha is a new plugin introduced to support VP8/VP9 alpha as
defined in the WebM and Matroska specifications. It splits the stream
into two streams, one for the alpha and one for the actual content,
then decodes them separately with vpxdec and finally combines the
results as A420 or AV12 (i.e. YUV + an extra alpha plane).
The workflow above is set up by means of a bin, gstcodecalphabin.
This patch simulates the same workflow in the v4l2codecs namespace,
thus using the new v4l2 stateless decoders for hardware acceleration.
This is so we can register the new alpha decode elements only if the
hardware produces formats we support, i.e. I420 or NV12 for now.
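For illustration, the equivalent topology expressed as a launch line; the
codecalphademux/alphacombine pad names used here are assumptions, so treat
this as a sketch of the workflow rather than a copy of what gstcodecalphabin
builds:

  #include <gst/gst.h>

  int
  main (int argc, char **argv)
  {
    GstElement *pipeline;

    gst_init (&argc, &argv);
    /* Split the VP8 stream and its alpha side stream, decode each with
     * vpxdec, then recombine into a YUV + alpha format. */
    pipeline = gst_parse_launch (
        "filesrc location=transparent.webm ! matroskademux ! "
        "codecalphademux name=demux "
        "demux.src ! queue ! vp8dec ! combine.sink "
        "demux.alpha ! queue ! vp8dec ! combine.alpha "
        "alphacombine name=combine ! videoconvert ! autovideosink", NULL);
    gst_element_set_state (pipeline, GST_STATE_PLAYING);
    g_main_loop_run (g_main_loop_new (NULL, FALSE));
    return 0;
  }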
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2272>
Alpha combine works by appending the GstMemory for the alpha channel
to the GstBuffer containing I420, thereby pushing A420 on its src pad.
Add support for the same workflow for NV12, thereby producing the
recently introduced AV12 format (NV12 + Alpha).
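The core of the idea in GstBuffer terms (a conceptual sketch, not the actual
alphacombine code; caps negotiation and video meta handling are left out):

  /* AV12 = the two NV12 planes plus a separate alpha plane appended as an
   * extra GstMemory taken from the decoded alpha stream. */
  static GstBuffer *
  combine_nv12_alpha (GstBuffer * nv12, GstBuffer * alpha)
  {
    GstBuffer *out = gst_buffer_copy (nv12);

    gst_buffer_append_memory (out, gst_buffer_get_memory (alpha, 0));
    return out;
  }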
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2277>
We already declare support for HEVC screen content extension profiles
in the profile mapping list, but we fail to generate the correct VA picture
parameter buffers. This may cause the GPU to hang.
We need to fill the VAPictureParameterBufferHEVCExtension buffer correctly.
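Shape of the fix, sketched against libva's va_dec_hevc.h types; which SPS/PPS
syntax elements end up in which field is only illustrated here with two flags,
not copied from the gstva code:

  #include <va/va_dec_hevc.h>

  static void
  fill_scc_fields (VAPictureParameterBufferHEVCExtension * pic_param)
  {
    /* .base carries the legacy VAPictureParameterBufferHEVC fields,
     * .rext and .scc the extension ones; all of them must be filled
     * for screen content streams. */
    pic_param->scc.screen_content_pic_fields.bits.
        pps_curr_pic_ref_enabled_flag = 1;
    pic_param->scc.screen_content_pic_fields.bits.
        palette_mode_enabled_flag = 1;
  }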
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2255>
gst_h265_get_profile_from_sps() is better than
gst_h265_profile_tier_level_get_profile() for recognizing the profile of
the stream, because it also considers the compatibility flags.
It is also what h265parse uses to recognize the profile, so it is
better to keep the same behaviour as the parser and the other decoders.
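In short (both helpers are public in gsth265parser.h; this is only a usage
sketch, not the gstva code):

  static GstH265Profile
  recognize_profile (GstH265SPS * sps)
  {
    /* gst_h265_profile_tier_level_get_profile (&sps->profile_tier_level)
     * looks at general_profile_idc only; the SPS-based helper also walks
     * the compatibility flags, like h265parse does. */
    return gst_h265_get_profile_from_sps (sps);
  }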
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2255>
We already declare support for HEVC range extension profiles in
the profile mapping list, but we fail to generate the correct VA
picture and slice parameter buffers. This may cause the GPU to hang.
We need to fill the VAPictureParameterBufferHEVCExtension
and VASliceParameterBufferHEVCExtension buffers correctly.
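Sketched as plain libva calls (types and buffer types from va.h and
va_dec_hevc.h; the gstva code organizes this differently):

  #include <va/va.h>
  #include <va/va_dec_hevc.h>

  static void
  submit_rext_params (VADisplay dpy, VAContextID context)
  {
    VAPictureParameterBufferHEVCExtension pic_param = { 0, };
    VASliceParameterBufferHEVCExtension slice_param = { 0, };
    VABufferID pic_buf, slice_buf;

    /* pic_param.base / slice_param.base hold the usual HEVC fields,
     * pic_param.rext / slice_param.rext the range extension ones; both
     * halves have to be filled before submission. */

    vaCreateBuffer (dpy, context, VAPictureParameterBufferType,
        sizeof (pic_param), 1, &pic_param, &pic_buf);
    vaCreateBuffer (dpy, context, VASliceParameterBufferType,
        sizeof (slice_param), 1, &slice_param, &slice_buf);
  }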
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2255>
VA-API HEVC decoding needs to know which is the last slice of a
picture, but slices are processed sequentially, so we don't know which
slice is the last one until all the slices have already been pushed into
the VABuffer array.
In order to mark the last slice, slices are pushed into the
VABuffer array with a delay of one slice: the first slice is
held, and when the second slice comes, the first one is pushed
while holding the second, and so on. Finally, at end_picture(),
the remaining slice is marked as the last one and pushed into the array.
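A sketch of the one-slice delay (the types and the submit helper here are
illustrative stand-ins, not the actual gstva code):

  typedef struct _Slice Slice;

  typedef struct
  {
    Slice *pending_slice;
  } Decoder;

  static void submit_slice (Decoder * self, Slice * slice, int last_slice);

  static void
  decode_slice (Decoder * self, Slice * slice)
  {
    /* Push the previously held slice; it cannot be the last one since
     * another slice just arrived. */
    if (self->pending_slice)
      submit_slice (self, self->pending_slice, 0);
    self->pending_slice = slice;
  }

  static void
  end_picture (Decoder * self)
  {
    /* Whatever is still held at end_picture() is the last slice. */
    if (self->pending_slice) {
      submit_slice (self, self->pending_slice, 1);
      self->pending_slice = NULL;
    }
  }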
Co-author: Victor Jaquez <vjaquez@igalia.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2246>
Each stream stores the `program_array_index` of its position in its
program's `streams` array. When we remove a stream from this array, we
need to correct the `program_array_index` of all streams that were
backshifted by the removal.
Also extract the removal into a new function and add some more safety
checks.
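The fix boils down to re-numbering the trailing entries after the removal;
a simplified sketch (the Stream type here is a stand-in for the real mpegts
structures):

  #include <glib.h>

  typedef struct
  {
    guint program_array_index;
    /* ... */
  } Stream;

  static void
  program_remove_stream_at (GPtrArray * streams, guint idx)
  {
    guint i;

    g_ptr_array_remove_index (streams, idx);

    /* Every stream after the removed one shifted down by one slot, so
     * its stored position has to follow. */
    for (i = idx; i < streams->len; i++) {
      Stream *stream = g_ptr_array_index (streams, i);
      stream->program_array_index = i;
    }
  }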
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2266>
We should take the d3d11 device lock via gst_d3d11_device_lock() first,
and only then use GST_D3D11_MEMORY_LOCK().
One observed deadlock case:
- Thread A takes the d3d11 device lock
- At the same time, Thread B tries a CPU map of a d3d11 memory, which
  requires the d3d11 device lock as well, but it's already held by Thread A.
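In code, the safe order looks like this (the UNLOCK macro name mirrors
GST_D3D11_MEMORY_LOCK and is an assumption here):

  static void
  access_memory_safely (GstD3D11Device * device, GstD3D11Memory * dmem)
  {
    /* Device lock first, per-memory lock second; never the other way
     * around. */
    gst_d3d11_device_lock (device);
    GST_D3D11_MEMORY_LOCK (dmem);

    /* ... access the underlying ID3D11 resources ... */

    GST_D3D11_MEMORY_UNLOCK (dmem);
    gst_d3d11_device_unlock (device);
  }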
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2267>