- Add the missing field parameter and put the output parameter at the
end.
- Use a switch to verify valid values instead of hard-to-follow range
checks.
- Don't consider bad values a programming error, just a regular failure.
- Set all data fields at the end so we can pass a pointer to an
uninitialized structure without GCC complaining (see the sketch below).
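A minimal sketch of the intended pattern (hypothetical names and fields, not
the actual GStreamer function): the output parameter comes last, a switch
verifies the allowed values, an invalid value is a plain FALSE return rather
than an assertion, and the output structure is only written once validation
has succeeded, so an uninitialized structure can be passed in:

    #include <glib.h>

    typedef struct {
      guint profile;
      guint level;
    } ExampleInfo;

    static gboolean
    example_parse_info (guint profile, guint level, ExampleInfo * out)
    {
      g_return_val_if_fail (out != NULL, FALSE);

      /* switch over the valid values instead of range checks */
      switch (profile) {
        case 0:
        case 1:
        case 2:
          break;
        default:
          /* an invalid stream value is a regular failure, not a
           * programming error, so no assertion here */
          return FALSE;
      }

      /* set all fields only at the end, so the caller can pass a pointer
       * to an uninitialized structure without GCC complaining */
      out->profile = profile;
      out->level = level;
      return TRUE;
    }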
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5450>
- Fix skipsize on _update_backlog failure.
- Add robustness to AU completion detection by using the AUD when present. If
we've received an AUD, we override the first-VCL-NAL detection when its result
was negative: a VCL NAL following an AUD is the first VCL NAL of the next AU
(see the sketch below).
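A rough sketch of the idea, with hypothetical state and names rather than the
actual gsth264parse code: when the regular detection is inconclusive, a
previously received AUD decides the boundary, because a VCL NAL that follows
an AUD is by definition the first VCL NAL of the next AU.

    #include <glib.h>

    typedef struct {
      gboolean aud_seen;   /* an access unit delimiter was received */
    } ExampleParserState;

    /* heuristic_result: > 0 first VCL of a new AU, 0 same AU, < 0 inconclusive */
    static gboolean
    example_is_first_vcl_of_next_au (ExampleParserState * state,
        gint heuristic_result)
    {
      if (heuristic_result < 0 && state->aud_seen) {
        /* the AUD overrides the inconclusive detection */
        state->aud_seen = FALSE;
        return TRUE;
      }
      return heuristic_result > 0;
    }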
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5862>
Since the AV1 specification does not explicitly mention array size bounds,
the array sizes in the scalability structure should be defined as the maximum
sizes those arrays can possibly have (sketched below).
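As an illustration only (hypothetical struct and defines, not the GstAV1
public header), each array is dimensioned by the largest count its syntax
element can encode; for instance a 2-bit spatial_layers_cnt_minus_1 would
bound the spatial layer arrays to 4 entries and an 8-bit temporal_group_cnt
would bound the temporal group arrays to 255 entries:

    #include <glib.h>

    /* maximum values implied by the bit width of the counting fields */
    #define EXAMPLE_MAX_SPATIAL_LAYERS   4    /* spatial_layers_cnt_minus_1: 2 bits */
    #define EXAMPLE_MAX_TEMPORAL_GROUPS  255  /* temporal_group_cnt: 8 bits */

    typedef struct {
      guint16 spatial_layer_max_width[EXAMPLE_MAX_SPATIAL_LAYERS];
      guint16 spatial_layer_max_height[EXAMPLE_MAX_SPATIAL_LAYERS];
      guint8 temporal_group_temporal_id[EXAMPLE_MAX_TEMPORAL_GROUPS];
    } ExampleScalabilityStructure;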
Also, this commit removes the GST_AV1_MAX_SPATIAL_LAYERS define from the
public header. That is an API break, but the define is misleading and this
patch introduces an ABI break already.
ZDI-CAN-22300
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5823>
- AU boundary detection reworked to follow the H.264 spec more closely, more
specifically clauses 7.4.1.2.3 and 7.4.1.2.4.
- The gist of the changes is a look-ahead into the next AU, required to
identify the last VCL NAL of the current AU and the first VCL NAL of the next
AU (according to 7.4.1.2.4), followed by the identification of the first NAL
of the next AU (according to 7.4.1.2.3).
- A backlog of all NALs of the current AU and the next AU is kept up to the
point where the current AU can be identified as complete (see the sketch after
this list).
- In NAL alignment mode VCL NALs are sent immediately, but the history is kept
to allow AU boundary detection. Non-VCL NALs can be delayed up to the reception
of the next VCL NAL to allow correct AUD insertion.
- Based on this improved AU boundary detection we can avoid erroneous AUD
insertion, like the one highlighted by the test
test_parse_sliced_with_prefix_and_sei_nal_au.
- Add support for MVC AU boundary detection. (H.7.4.1.2.4)
- Explicitly report SVC as not supported. We don't have the SVC NAL parsing
required to identify the boundary (missing dependency_id and quality_id fields
from SVC, see G.7.4.1.2.4).
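A very rough sketch of the backlog mechanism (hypothetical types and a
simplified boundary rule, not the actual gsth264parse code): NALs accumulate
until a VCL NAL is identified as the first VCL NAL of the next AU (7.4.1.2.4),
then the backlog is walked backwards to the first non-VCL NAL that belongs to
that next AU (a simplification of 7.4.1.2.3, which only allows specific NAL
types to start an AU), and everything before that index forms the completed AU.

    #include <glib.h>

    typedef struct {
      guint type;                    /* NAL unit type */
      gboolean is_vcl;               /* VCL NAL unit */
      gboolean first_vcl_of_new_au;  /* result of the 7.4.1.2.4 comparison */
    } ExampleNal;

    typedef struct {
      GArray *backlog;  /* NALs of the current AU plus the start of the next */
    } ExampleParser;

    /* Returns the index of the first NAL of the next AU, or -1 if the
     * current AU is not yet known to be complete. */
    static gint
    example_find_au_boundary (ExampleParser * parser)
    {
      guint i;

      for (i = 0; i < parser->backlog->len; i++) {
        ExampleNal *nal = &g_array_index (parser->backlog, ExampleNal, i);

        if (nal->is_vcl && nal->first_vcl_of_new_au) {
          /* walk back over the non-VCL NALs (AUD, SPS, PPS, SEI, ...) that
           * precede this VCL NAL and belong to the next AU */
          gint j = (gint) i;
          while (j > 0 &&
              !g_array_index (parser->backlog, ExampleNal, j - 1).is_vcl)
            j--;
          return j;
        }
      }
      return -1;
    }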
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5741>
Adjust the log level from GST_ERROR to GST_WARNING when h264 caps have
codec_data but the stream-format is not avc, or when they have no codec_data
or stream-format. These are not real errors, so printing error logs for them
is misleading.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4675>
The framerate should only be replaced (and corrected for alternating field)
when it is parsed from the bitstream. Otherwise, the upstream framerate
from caps should be trusted and assumed correct.
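A small sketch of the rule, with hypothetical fields rather than the real
parser state, and assuming the alternating-field correction amounts to
doubling the rate when each buffer carries a single field: the caps framerate
is only replaced when timing information was actually parsed from the
bitstream.

    #include <glib.h>

    typedef struct {
      gboolean fps_from_bitstream;  /* timing info was parsed from the stream */
      gboolean alternating_fields;  /* each output buffer carries one field */
      gint fps_n, fps_d;            /* framerate taken from upstream caps */
      gint parsed_fps_n, parsed_fps_d;
    } ExampleParse;

    static void
    example_update_framerate (ExampleParse * parse)
    {
      if (!parse->fps_from_bitstream)
        return;   /* trust the upstream caps framerate as-is */

      parse->fps_n = parse->parsed_fps_n;
      parse->fps_d = parse->parsed_fps_d;
      if (parse->alternating_fields)
        parse->fps_n *= 2;   /* one buffer per field: double the rate */
    }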
Related to gst-plugins-bad!2020
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4259>
When the output alignment is smaller than the input alignment, for example
when the output alignment is "FRAME" and the parser is likely connected to a
decoder, the current PTS setting for AV1 frames inside a TU is not correct.
For example, a TU may begin with non-displayed frames and end with a
displayed frame. The current way assigns the PTS to the first non-displayed
frame, which is a decode-only frame, so the PTS is discarded in the video
decoder. Meanwhile the last, displayed frame has an invalid PTS, so the video
decoder needs to guess its PTS from the frame rate and the previous frame's
PTS. This is neither correct nor robust. More importantly, when the previous
frames provide a DTS, the video decoder will also guess the PTS based on the
previous frames' DTS and trigger a warning like:
gstvideodecoder.c:3147:gst_video_decoder_prepare_finish_frame: \
<vavp9dec0> decreasing timestame
This sets reordered_output and puts the decoder in free-run mode.
We should correct the PTS within a TU: let the non-displayed frames have no
PTS and set the correct PTS on the displayed one (see the sketch below).
Also, when the AV1 stream has multiple spatial layers, there is more than one
displayed frame inside one TU, all with the same PTS.
Note: if the input alignment is not TU aligned, we cannot know the exact PTS
of this TU, so we just clear the PTS of the decode-only frames and leave the
others unchanged.
We also correct all the PTS values if the output is OBU aligned: their PTS
and DTS are all set to the input buffer's PTS.
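A rough sketch of the fix-up (hypothetical types, not the real gstav1parse
code): within one TU, decode-only frames get no PTS and the TU's PTS goes to
the displayed frame(s); with multiple spatial layers every displayed frame of
the TU shares that PTS. When the input is not TU aligned the TU's PTS is
unknown, so only the decode-only frames are touched.

    #include <gst/gst.h>

    typedef struct {
      GstBuffer *buffer;
      gboolean shown;   /* displayed frame, not decode-only */
    } ExampleFrame;

    static void
    example_assign_tu_pts (GPtrArray * frames, GstClockTime tu_pts,
        gboolean input_tu_aligned)
    {
      guint i;

      for (i = 0; i < frames->len; i++) {
        ExampleFrame *f = g_ptr_array_index (frames, i);

        if (!f->shown) {
          /* decode-only frame: no usable PTS */
          GST_BUFFER_PTS (f->buffer) = GST_CLOCK_TIME_NONE;
        } else if (input_tu_aligned) {
          /* the TU's PTS belongs to the displayed frame(s) */
          GST_BUFFER_PTS (f->buffer) = tu_pts;
        }
        /* if the input is not TU aligned the TU's PTS is unknown, so the
         * displayed frames are left unchanged */
      }
    }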
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3182>
When the incoming data has a bigger alignment than the output, we do not need
to call finish_frame() and exit the current handle_frame() for each split
frame. We can push them all in one shot within one handle_frame(), which may
improve performance and helps us find the edge of the TU (see the sketch
below).
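A sketch of the idea with hypothetical helpers (the real code goes through
GstBaseParse): when one input buffer holds several output frames, all of them
are finished inside a single handle_frame() call instead of returning after
each one, so the parser sees the whole TU at once.

    #include <glib.h>

    typedef struct {
      gsize offset;
      gsize size;
    } ExampleSubFrame;

    /* assumed callback that outputs one parsed frame downstream */
    typedef gboolean (*ExampleFinishFunc) (gpointer parse, gsize offset,
        gsize size);

    static gboolean
    example_handle_frame (gpointer parse, GArray * subframes,
        ExampleFinishFunc finish)
    {
      guint i;

      /* push every split frame in one shot instead of one per handle_frame() */
      for (i = 0; i < subframes->len; i++) {
        ExampleSubFrame *sf = &g_array_index (subframes, ExampleSubFrame, i);

        if (!finish (parse, sf->offset, sf->size))
          return FALSE;
      }
      return TRUE;
    }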
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3182>