Originally, if a buffer arrived with crop meta but downstream didn't
handle the crop allocation meta, vapostproc tried to reconfigure itself
into non-pass-through mode automatically. Sadly, this behavior was based
on the wrong assumption that the propose_allocation() vmethod would
bring the downstream allocation query, but it does not.
Now, if vapostproc is in pass-through mode, the cropping is passed to
downstream. Pass-through mode can be disabled via a parameter.
Finally, if pass-through mode isn't enabled, it's assumed that the
buffer is going to be processed and, if it carries cropping, that
downstream has already negotiated the cropped frame size; thus the
cropping has to be done inside vapostproc to avoid artifacts caused by
the size of the downstream-allocated buffers.
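A minimal sketch of the resulting behavior, where vpp_crop_and_convert()
is a hypothetical helper and not the actual vapostproc code:

static GstFlowReturn
process_buffer (GstBaseTransform * trans, GstBuffer * inbuf,
    GstBuffer * outbuf)
{
  GstVideoCropMeta *crop = gst_buffer_get_video_crop_meta (inbuf);

  if (gst_base_transform_is_passthrough (trans)) {
    /* nothing to do: the buffer, and its GstVideoCropMeta with it, is
     * forwarded to downstream untouched */
    return GST_FLOW_OK;
  }

  /* not in pass-through: downstream already negotiated the cropped
   * frame size, so the cropping has to happen here while processing */
  if (crop && !vpp_crop_and_convert (trans, inbuf, outbuf, crop))
    return GST_FLOW_ERROR;

  return GST_FLOW_OK;
}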
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2443>
We may need to drop slices, such as those of RASL pictures with
NoRaslOutputFlag set, so the current picture of h265decoder may be
freed. We should not assign frame->output_buffer too early, before we
really output it. Otherwise, the slices that come later will allocate
another picture and trigger the assertion:
gst_video_decoder_allocate_output_frame_with_params:
assertion 'frame->output_buffer == NULL' failed
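A minimal sketch of the idea, where picture_get_buffer() stands for a
hypothetical accessor (not the exact patch): the decoded buffer stays
attached to the picture and is only assigned to the frame once the
picture is really output.

static GstFlowReturn
output_picture (GstH265Decoder * decoder, GstVideoCodecFrame * frame,
    GstH265Picture * picture)
{
  /* assign only now, at output time, so dropped RASL pictures never
   * leave a stale frame->output_buffer behind */
  frame->output_buffer = gst_buffer_ref (picture_get_buffer (picture));

  gst_h265_picture_unref (picture);
  return gst_video_decoder_finish_frame (GST_VIDEO_DECODER (decoder),
      frame);
}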
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2421>
In H265, a stream may have an odd bit depth such as 9 or 11, and the
bit depths of luma and chroma may differ. For example, a stream with a
luma depth of 8 and a chroma depth of 9 should use the 10-bit rtformat
as the decoded picture format.
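A minimal sketch of the intended rounding, with a hypothetical helper
(the actual mapping code differs): take the maximum of the luma and
chroma depths and round it up to the next depth that has an rtformat.

static guint
get_rtformat_bit_depth (guint luma_depth, guint chroma_depth)
{
  guint depth = MAX (luma_depth, chroma_depth);

  /* e.g. luma 8 + chroma 9 -> 10-bit rtformat, depth 11 -> 12-bit */
  if (depth <= 8)
    return 8;
  if (depth <= 10)
    return 10;
  if (depth <= 12)
    return 12;

  return 0;   /* unsupported */
}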
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2420>
Renamed gst_va_decoder_set_format() to
gst_va_decoder_set_frame_size_with_surfaces(), which better reflects
the passed parameters. Internally it creates the vaContext.
Added gst_va_decoder_set_frame_size(), which is an alias of
gst_va_decoder_set_frame_size_with_surfaces() without surfaces. This
is the function that replaces gst_va_decoder_set_format() where it was
used.
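So the alias is essentially (sketch; the real prototypes may carry
additional parameters):

gboolean
gst_va_decoder_set_frame_size (GstVaDecoder * self, gint width,
    gint height)
{
  return gst_va_decoder_set_frame_size_with_surfaces (self, width,
      height, NULL);
}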
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2417>
VP9 streams can change their resolution dynamically at any point. A
stream does not send a KEY frame before changing the resolution; even
an INTER frame can change the resolution immediately. So we need to
check for a resolution change on every frame and do the re-negotiation
if needed.
Some insane streams may play at resolution A first, then dynamically
change to B, and after 1 or 2 frames use a show_existing_frame to
repeat an old frame of resolution A. So not only new_picture() but
also duplicate_picture() needs to check this.
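A minimal sketch of the per-frame check, where self->width and
self->height are hypothetical fields caching the currently negotiated
size; the same check runs in both new_picture() and
duplicate_picture():

static gboolean
resolution_changed (GstVaVp9Dec * self,
    const GstVp9FrameHeader * frame_hdr)
{
  return self->width != frame_hdr->width ||
      self->height != frame_hdr->height;
}

When it returns TRUE, the cached size is updated and the decoder
re-negotiates before decoding (or duplicating) the frame.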
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2407>
For some codecs such as VP9, the config and context can handle dynamic
resolution changes. When only the width and height change, there is no
need to re-create the config and context; the helper function can just
change the resolution without re-creating them.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2407>
The current setting of the color properties is not quite correct, and
we get "unknown Color Standard for YUV format" warnings printed by the
drivers. video-color already provides some standard APIs for this, and
we can use them directly.
We also change the logic to: find the exact match or the explicit
standard first; if none is found, continue looking for the most
similar one.
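A minimal sketch of the explicit path, assuming the
VAProcColorProperties / VAProcColorStandardExplicit route of the VPP
API (the "most similar" fallback search is omitted):

static void
fill_color_properties (const GstVideoColorimetry * cinfo,
    VAProcColorProperties * props, VAProcColorStandardType * standard)
{
  props->colour_primaries =
      gst_video_color_primaries_to_iso (cinfo->primaries);
  props->transfer_characteristics =
      gst_video_transfer_function_to_iso (cinfo->transfer);
  props->matrix_coefficients =
      gst_video_color_matrix_to_iso (cinfo->matrix);

  /* let the driver use the exact ISO/IEC 23001-8 code points */
  *standard = VAProcColorStandardExplicit;
}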
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2385>
Add the compatible profiles when we decide the final profile used for
decoding.
The final profile candidates include:
1. The profile directly specified by the SPS, which is the exact one.
2. The compatible profiles decided by the upstream element, such as
   h265parse.
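A sketch of the candidate walk; gst_va_decoder_has_profile() is
assumed here as the availability check, and the real code may differ:

/* candidates[0] is the exact profile from the SPS, the rest come
 * from the compatibility information forwarded by h265parse */
static VAProfile
select_profile (GstVaDecoder * decoder, const VAProfile * candidates,
    guint num_candidates)
{
  guint i;

  for (i = 0; i < num_candidates; i++) {
    if (gst_va_decoder_has_profile (decoder, candidates[i]))
      return candidates[i];
  }

  return VAProfileNone;
}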
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2322>
The segmentation is stateful: its information may depend on the
previous segmentation setting. For example, if
loop_filter_delta_enabled is TRUE,
filter_level[GST_VP9_REF_FRAME_INTRA][1] should inherit the previous
frame's value and cannot be calculated from the current frame's
segmentation data alone. So we need to maintain the segmentation state
inside the VP9 decoder and update it when a new frame header comes.
We also fix the CLAMP issue of lvl_seg and intra_lvl, caused by their
wrong unsigned type.
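The CLAMP problem in short: with an unsigned type the intermediate
value wraps around instead of going negative, so the lower bound never
kicks in. A minimal illustration (not the actual decoder code):

guint lvl_seg = 2;
gint delta = -5;

/* wrong: 2 + (-5) wraps to a huge unsigned value, CLAMP returns 63 */
guint bad = CLAMP (lvl_seg + delta, 0, 63);

/* right: do the arithmetic as signed, then clamp to [0, 63] */
gint good = CLAMP ((gint) lvl_seg + delta, 0, 63);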
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2369>
The current way of mapping DMA buffers simply forwards the job to the
parent's map function, which is an mmap(). That cannot handle
non-linear buffers, such as tiled or compressed ones. The incorrect
mapping of such buffers produces broken images, which get reported as
bugs. We should directly block this kind of mapping to avoid the
misunderstanding.
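A sketch of the guard, assuming the allocator can query the surface's
DRM format modifier; gst_va_dmabuf_memory_get_modifier() and
parent_mem_map() are hypothetical names:

#include <drm_fourcc.h>

static gpointer
gst_va_dmabuf_mem_map (GstMemory * mem, gsize maxsize, GstMapFlags flags)
{
  guint64 modifier = gst_va_dmabuf_memory_get_modifier (mem);

  /* only a linear layout can be mapped sensibly with the parent's
   * mmap() based map function */
  if (modifier != DRM_FORMAT_MOD_LINEAR) {
    GST_ERROR ("refusing to map a tiled/compressed DMA buffer");
    return NULL;
  }

  return parent_mem_map (mem, maxsize, flags);
}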
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2353>
If the decoder has a crop_top/left value > 0 (e.g. the conformance
window in H265), the real output picture is located in the middle of
the decoded buffer. If downstream supports VideoCropMeta, a
VideoCropMeta is added to convey the real picture's coordinates and
size. If not, we need to copy it out manually, and the other_pool is
needed for that. We always assume that the decoded picture starts at
the top-left corner, so there is no need to do this when only the
crop_bottom/right value is > 0.
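The crop-meta path is essentially (sketch; the manual copy through
other_pool is omitted):

static void
attach_crop_meta (GstBuffer * buffer, guint crop_left, guint crop_top,
    guint width, guint height)
{
  GstVideoCropMeta *crop = gst_buffer_add_video_crop_meta (buffer);

  /* where the real picture starts inside the larger decoded buffer */
  crop->x = crop_left;
  crop->y = crop_top;
  /* the visible size negotiated with downstream */
  crop->width = width;
  crop->height = height;
}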
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2298>
The base VA decoder's video_align is just used to calculate the real
decoded buffer's width and height. It has no meaning for the
VideoMeta, because it is not aligned with the real picture in the
output buffer. We will use VideoCropMeta to replace it later.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2298>
The base VA decoder's video_align is just used to calculate the real
decoded buffer's width and height, and gst_video_info_align() just
calculates the offsets and strides based on that video_align. But all
the offsets and strides are overwritten in
gst_va_dmabuf_allocator_try() or gst_va_allocator_try(), which makes
that calculation useless.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2298>
We add 12-bit entries to this default mapping. The old mapping was
also not precise: for example, NV12 should not be used as the default
mapping for VA_RT_FORMAT_YUV422 and VA_RT_FORMAT_YUV444, since it is
not even a 4:2:2 or 4:4:4 format.
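A sketch of what the corrected default mapping looks like; the format
choices here are illustrative and the actual table may differ:

static GstVideoFormat
default_format_for_rt_format (guint rt_format)
{
  switch (rt_format) {
    case VA_RT_FORMAT_YUV420:    return GST_VIDEO_FORMAT_NV12;
    case VA_RT_FORMAT_YUV422:    return GST_VIDEO_FORMAT_YUY2;
    case VA_RT_FORMAT_YUV444:    return GST_VIDEO_FORMAT_VUYA;
    case VA_RT_FORMAT_YUV420_10: return GST_VIDEO_FORMAT_P010_10LE;
    case VA_RT_FORMAT_YUV420_12: return GST_VIDEO_FORMAT_P012_LE;
    case VA_RT_FORMAT_YUV422_12: return GST_VIDEO_FORMAT_Y212_LE;
    case VA_RT_FORMAT_YUV444_12: return GST_VIDEO_FORMAT_Y412_LE;
    default:                     return GST_VIDEO_FORMAT_UNKNOWN;
  }
}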
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2332>
We already declare support for the HEVC screen content extension
profiles in the profile mapping list, but we fail to generate the
correct VA picture parameter buffers. This may cause a GPU hang.
We need to fill the VAPictureParameterBufferHEVCExtension buffer
correctly.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2255>
gst_h265_get_profile_from_sps() is better than
gst_h265_profile_tier_level_get_profile() for recognizing the profile
of a stream, because it takes the compatibility information into
account. It is also what h265parse uses to recognize the profile, so
it is better to keep the same behaviour as the parser and the other
decoders.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2255>
We already declare support for the HEVC range extension profiles in
the profile mapping list, but we fail to generate the correct VA
picture and slice parameter buffers. This may cause a GPU hang.
We need to fill the VAPictureParameterBufferHEVCExtension and
VASliceParameterBufferHEVCExtension buffers correctly.
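In practice this means filling and submitting the extension structs
with their full size when a range (or screen content) extension
profile was negotiated, roughly (sketch; fill_base_picture_parameters()
and fill_rext_picture_parameters() are hypothetical helpers):

static gboolean
submit_picture_parameters (GstVaBaseDec * base,
    GstVaDecodePicture * va_pic, const GstH265SPS * sps,
    const GstH265PPS * pps, GstH265Picture * picture)
{
  VAPictureParameterBufferHEVCExtension pic_param = { 0, };

  /* the legacy fields still go into the embedded base struct, the
   * range extension fields into the rext member */
  fill_base_picture_parameters (&pic_param.base, sps, pps, picture);
  fill_rext_picture_parameters (&pic_param.rext, sps, pps);

  /* submit the whole extension struct, not just the base one */
  return gst_va_decoder_add_param_buffer (base->decoder, va_pic,
      VAPictureParameterBufferType, &pic_param, sizeof (pic_param));
}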
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2255>
VA-API HEVC decoding needs to know which is the last slice of a
picture, but slices are processed sequentially, so we only know which
slice was the last one once all the slices have already been pushed
into the VABuffer array.
In order to mark the last slice, slices are pushed into the VABuffer
array with a delay of one slice: the first slice is held, and when the
second slice comes, the first one is pushed while the second is held,
and so on. Finally, at end_picture(), the held slice is marked as the
last one and pushed into the array.
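A sketch of the delay-by-one mechanism, with hypothetical field and
helper names:

static GstFlowReturn
decode_slice (GstVaH265Dec * self, GstH265Slice * slice)
{
  /* the previously held slice is not the last one, since another
   * slice just arrived: push it, then hold the current one */
  if (self->prev_slice.valid)
    submit_slice (self, &self->prev_slice, FALSE /* not last */);

  hold_slice (self, slice);
  return GST_FLOW_OK;
}

static GstFlowReturn
end_picture (GstVaH265Dec * self)
{
  /* whatever is still held is, by definition, the last slice */
  if (self->prev_slice.valid)
    submit_slice (self, &self->prev_slice, TRUE /* last */);

  return GST_FLOW_OK;
}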
Co-author: Victor Jaquez <vjaquez@igalia.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2246>
VA acceleration now has more users on Linux-like platforms, such as
MSDK. The different plugins based on VA acceleration need to share
some common logic and types, so we now move the display-related
functions and types into a common va library.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2196>
We have only one copy of gst_va_base_dec_parent_class inside
vabasedec, so it cannot handle the case where there are multiple VA
decoders in one pipeline. The pipeline:
gst-launch-1.0 filesrc location=xxx.h264 ! h264parse \
! vah264dec ! msdkh265enc ! vah265dec ! fakesink
triggers the assertion
"invalid cast from 'GstVaH264Dec' to 'GstH265Decoder'"
and crashes.
We should keep the parent_class for each decoder type.
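The general pattern, sketched with hypothetical type and field names
(the actual fix may store it elsewhere): each registered decoder type
peeks and keeps its own parent class instead of sharing one global
pointer.

struct _GstVaH264DecClass
{
  GstH264DecoderClass parent_class;
  /* per-type copy of the parent class, instead of the single shared
   * gst_va_base_dec_parent_class */
  gpointer parent_decoder_class;
};

static void
gst_va_h264_dec_class_init (GstVaH264DecClass * klass)
{
  klass->parent_decoder_class = g_type_class_peek_parent (klass);
}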
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2231>