In the H265 decoder we keep getting warnings such as:
h265decoder gsth265decoder.c:1432:gst_h265_decoder_do_output_picture: \
<vah265dec0> Outputting out of order 255 -> 0, likely a broken stream
The problem is that we fail to reset last_output_poc when an IDR or BLA
picture arrives. The incoming IDR or BLA already bumps all the frames in
the DPB, but since last_output_poc is not reset, the POC appears to go
out of order and the warning is generated all the time.
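A standalone illustration of the mechanism (simplified, not the decoder's
actual code):

  /* last_output_poc tracks the POC of the last picture we output and
   * must be forgotten when an IDR/BLA restarts the POC from 0. */
  #include <stdio.h>
  #include <limits.h>

  static int last_output_poc = INT_MIN;     /* nothing output yet */

  static void
  output_picture (int poc)
  {
    if (last_output_poc != INT_MIN && poc < last_output_poc)
      printf ("Outputting out of order %d -> %d, likely a broken stream\n",
          last_output_poc, poc);
    last_output_poc = poc;
  }

  int
  main (void)
  {
    output_picture (254);
    output_picture (255);

    /* IDR/BLA: all remaining DPB frames are bumped and the POC restarts
     * from 0. Without this reset, the next call would warn "255 -> 0". */
    last_output_poc = INT_MIN;

    output_picture (0);
    return 0;
  }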
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2294>
There are a few places which require a deep copy
(basesink on drain, for example). This implementation can also be
useful for future use cases.
One probable future use case is copying a DPB texture to
another texture for an in-place transform, since our DPB textures are
never writable and therefore a copy is unavoidable.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2308>
... based on MediaFoundation work queue API.
With this commit, the wasapi2 plugin makes use of pull-mode scheduling
with an audioringbuffer subclass.
There are several drawbacks to subclassing audiosrc/audiosink
(rather than audiobasesrc/audiobasesink) for the WASAPI API:
* The audiosrc/audiosink classes try to raise the priority of the
  read/write thread via MMCSS (Multimedia Class Scheduler Service),
  but that is not allowed for UWP applications.
  To use MMCSS in UWP, an application has to go through the
  MediaFoundation work queue API instead.
  Since the audiosrc/audiosink scheduling model is not compatible with
  MediaFoundation's work queue model, audioringbuffer subclassing
  is required.
* A WASAPI capture device may report a larger packet size than expected
  (i.e., more frames available to read than the expected frame size per
  period), and in any case the application should drain all packets at
  that point.
  To handle this, the wasapi/wasapi2 plugins were using a GstAdapter,
  which is obviously sub-optimal because it requires additional memory
  allocation and copying.
  With an audioringbuffer subclass we can avoid that inefficiency.
With this commit, all device read/write operations are moved to the
newly implemented wasapi2ringbuffer class, and the existing
wasapi2client class only takes care of device enumeration and
activation.
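As a rough sketch (names and the selection of vfuncs are illustrative,
not the actual wasapi2ringbuffer code), an audioringbuffer subclass
looks like this:

  #include <gst/audio/audio.h>

  typedef struct
  {
    GstAudioRingBuffer parent;
    /* WASAPI client handles, MediaFoundation work queue state, ... */
  } SketchRingBuffer;

  typedef struct
  {
    GstAudioRingBufferClass parent_class;
  } SketchRingBufferClass;

  G_DEFINE_TYPE (SketchRingBuffer, sketch_ring_buffer,
      GST_TYPE_AUDIO_RING_BUFFER);

  static gboolean
  sketch_ring_buffer_acquire (GstAudioRingBuffer * buf,
      GstAudioRingBufferSpec * spec)
  {
    /* initialize the audio client for the negotiated format */
    return TRUE;
  }

  static gboolean
  sketch_ring_buffer_start (GstAudioRingBuffer * buf)
  {
    /* schedule the device I/O callback on the MediaFoundation work
     * queue; the callback reads/writes ring buffer segments directly,
     * so no GstAdapter is needed for odd-sized capture packets. */
    return TRUE;
  }

  static void
  sketch_ring_buffer_class_init (SketchRingBufferClass * klass)
  {
    GstAudioRingBufferClass *rb_class = GST_AUDIO_RING_BUFFER_CLASS (klass);

    rb_class->acquire = GST_DEBUG_FUNCPTR (sketch_ring_buffer_acquire);
    rb_class->start = GST_DEBUG_FUNCPTR (sketch_ring_buffer_start);
  }

  static void
  sketch_ring_buffer_init (SketchRingBuffer * self)
  {
  }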
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2306>
This is a video-specific sink used to test video CODEC conformance. It is
similar to a combination of filesink and testsink, but will skip over any
type of padding that the GStreamer Video library introduces. This is needed
in order to obtain the correct checksum or raw YUV data.
This element currently supports writing out non-padded raw I420 through the
location property and will calculate an MD5 and post it as an element message
of type conformance/checksum. More output formats or checksum types could be
added in the future as needed.
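A possible usage sketch (the element name, decoder chain and file names
below are assumptions for illustration only):

  #include <gst/gst.h>

  int
  main (int argc, char **argv)
  {
    GstElement *pipeline;
    GstBus *bus;
    GstMessage *msg;

    gst_init (&argc, &argv);

    pipeline = gst_parse_launch ("filesrc location=test.ivf ! ivfparse ! "
        "vp8dec ! videocodectestsink location=out.i420", NULL);
    gst_element_set_state (pipeline, GST_STATE_PLAYING);

    bus = gst_element_get_bus (pipeline);
    while ((msg = gst_bus_timed_pop (bus, GST_CLOCK_TIME_NONE))) {
      if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_ELEMENT &&
          gst_message_has_name (msg, "conformance/checksum")) {
        /* print the whole checksum message posted by the sink */
        gchar *s = gst_structure_to_string (gst_message_get_structure (msg));

        g_print ("%s\n", s);
        g_free (s);
      } else if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_EOS ||
          GST_MESSAGE_TYPE (msg) == GST_MESSAGE_ERROR) {
        gst_message_unref (msg);
        break;
      }
      gst_message_unref (msg);
    }

    gst_object_unref (bus);
    gst_element_set_state (pipeline, GST_STATE_NULL);
    gst_object_unref (pipeline);
    return 0;
  }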
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2287>
We were doing our initial "empty" commit on the subsurface instead of the
toplevel surface. As a consequence, we should never have received a configure
event, not just on mutter. This fixes the following warning when using
the mutter compositor (aka gnome-shell):
waylandsink wlwindow.c:304:gst_wl_window_new_toplevel: The compositor did not send configure event.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2299>
The current H265 DPB check is not quite correct: the current frame
is already in the DPB when we check whether the DPB is full.
For example, if the DPB max size is 16 and we have 15 reference frames in
the DPB, gst_h265_dpb_delete_unused() removes nothing, and with the
current frame added the DPB holds 16 pictures. This causes an error
return, even though the stream is in fact correct.
We now integrate the DPB full check into the need_bump() function.
We add the current frame to the DPB and then check whether the number of
pictures exceeds the DPB's max_num_pics (which means there is no room for
the current picture). If so, we bump the DPB immediately.
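A toy illustration of the ordering change (thresholds and names are
simplified, not the actual decoder logic):

  #include <stdio.h>
  #include <stdbool.h>

  #define MAX_NUM_PICS 16

  /* Old order: the current frame already counts towards the DPB size
   * when the "is the DPB full?" check runs, so 15 references plus the
   * current frame trips the error even though the stream is valid. */
  static bool
  old_check_errors_out (int num_refs)
  {
    return (num_refs + 1) >= MAX_NUM_PICS;
  }

  /* New order: add the current frame first, then bump only when the
   * picture count exceeds max_num_pics, i.e. there is truly no room. */
  static bool
  new_check_needs_bump (int num_refs)
  {
    return (num_refs + 1) > MAX_NUM_PICS;
  }

  int
  main (void)
  {
    printf ("old: error=%d  new: bump=%d\n",
        old_check_errors_out (15), new_check_needs_bump (15));
    return 0;
  }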
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2291>
Some decoding APIs support delayed output for performance reasons.
One example would be to request decoding for multiple frames and
then query for the oldest frame in the output queue.
This also increases throughput for transcoding and improves seek
performance when supported by the underlying backend.
Introduce support in the vp8 base class, so that backends that
support render delays can actually implement it.
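The general pattern is to queue decode requests and only output the
oldest frame once more than the render delay is pending (a generic
sketch, not the actual GstVp8Decoder API):

  #include <glib.h>

  #define RENDER_DELAY 4

  static void
  push_decoded_frame (GQueue * pending, gpointer frame)
  {
    g_queue_push_tail (pending, frame);

    /* Output the oldest frames only once enough are queued; the backend
     * can keep decoding the newer ones in the meantime. */
    while (g_queue_get_length (pending) > RENDER_DELAY)
      g_print ("output frame %d\n",
          GPOINTER_TO_INT (g_queue_pop_head (pending)));
  }

  int
  main (void)
  {
    GQueue *pending = g_queue_new ();
    gint i;

    for (i = 1; i <= 8; i++)
      push_decoded_frame (pending, GINT_TO_POINTER (i));

    /* On drain/EOS, flush whatever is still pending. */
    while (!g_queue_is_empty (pending))
      g_print ("drain frame %d\n",
          GPOINTER_TO_INT (g_queue_pop_head (pending)));

    g_queue_free (pending);
    return 0;
  }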
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2150>
This adds src format validation, to avoid registering elements for
drivers for which we don't support any of the src formats. It also
special-cases the AlphaDecodeBin wrapper, since we know that the
alphacombine element only supports I420 and NV12 for now.
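The kind of check this implies can be sketched as follows (simplified;
the actual registration code inspects the driver's enumerated formats,
and the caps strings here are only illustrative):

  #include <gst/gst.h>

  int
  main (int argc, char **argv)
  {
    GstCaps *driver_src_caps, *alpha_supported;

    gst_init (&argc, &argv);

    /* formats the driver can produce on its capture (src) side */
    driver_src_caps =
        gst_caps_from_string ("video/x-raw, format=(string)NV12");
    /* formats alphacombine can currently handle */
    alpha_supported =
        gst_caps_from_string ("video/x-raw, format=(string){ I420, NV12 }");

    if (gst_caps_can_intersect (driver_src_caps, alpha_supported))
      g_print ("register the alpha decode element\n");
    else
      g_print ("skip registration for this driver\n");

    gst_caps_unref (driver_src_caps);
    gst_caps_unref (alpha_supported);
    return 0;
  }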
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2272>
codecalpha is a new plugin introduced to support VP8/VP9 alpha as
defined in the WebM and Matroska specifications. It splits the stream
into two streams, one for the alpha and one for the actual content,
then decodes them separately with vpxdec and finally combines the
results as A420 or AV12 (i.e. YUV + an extra alpha plane).
The workflow above is set up by means of a bin, gstcodecalphabin.
This patch reproduces the same workflow in the v4l2codecs namespace,
thus using the new v4l2 stateless decoders for hardware acceleration.
This is so we can register the new alpha decode elements only if the
hardware produces formats we support, i.e. I420 or NV12 for now.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2272>
Alpha combine works by appending the GstMemory for the alpha channel
to the GstBuffer containing I420, thereby pushing A420 on its src pad.
Add support for the same workflow for NV12, thereby producing the
recently introduced AV12 format (NV12 + Alpha).
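A simplified sketch of that combine step (not the element's actual code;
it assumes the alpha plane is the first GstMemory of the decoded alpha
buffer, and the negotiated caps would be updated to AV12 alongside):

  #include <gst/gst.h>

  static GstBuffer *
  combine_nv12_with_alpha (GstBuffer * nv12_buffer, GstBuffer * alpha_buffer)
  {
    /* shallow copy: shares the NV12 planes, but gives us a writable
     * GstBuffer we can append to */
    GstBuffer *av12 = gst_buffer_copy (nv12_buffer);

    /* AV12 = NV12 planes + one extra alpha plane */
    gst_buffer_append_memory (av12, gst_buffer_get_memory (alpha_buffer, 0));

    return av12;
  }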
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2277>
We already declare support for the HEVC screen content extension profiles
in the profile mapping list, but we fail to generate the correct VA picture
parameter buffers. This may cause a GPU hang.
We need to fill the VAPictureParameterBufferHEVCExtension buffer correctly.
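Roughly, the fix amounts to populating the full extension structure
rather than only the base part (sketch below; the base/rext/scc members
come from libva's va_dec_hevc.h, the actual filling code in the VA
decoder is more involved):

  #include <string.h>
  #include <va/va.h>
  #include <va/va_dec_hevc.h>

  static void
  fill_hevc_ext_pic_params (VAPictureParameterBufferHEVCExtension * pic_param,
      const VAPictureParameterBufferHEVC * base_params)
  {
    memset (pic_param, 0, sizeof (*pic_param));

    /* same base picture parameters as for Main/Main10 ... */
    pic_param->base = *base_params;

    /* ... plus the range extension (rext) and screen content coding
     * (scc) syntax elements from the SPS/PPS. Leaving these zeroed, as
     * effectively happened before, submits wrong parameters to the GPU
     * for screen content streams. */
    /* pic_param->rext = ...; */
    /* pic_param->scc = ...; */
  }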
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2255>