... and set on the caps if necessary.
Note 1) The mastering display info and content light level SEI messages
are persistent for the corresponding coded video sequence (i.e., GOP).
So for any bitstream containing those SEI messages
(where all pictures are intended to be rendered as HDR), it should be
ensured that the first slice of each coded video sequence follows those
SEI messages.
Note 2) A coded video sequence is a group consisting of an
[IRAP + NoRaslOutputFlag == 1] AU and the following AUs which are not
[IRAP + NoRaslOutputFlag == 1].
NoRaslOutputFlag is equal to 1 for each IDR AU, each BLA AU, and some CRA AUs.
For a CRA AU to have NoRaslOutputFlag equal to 1, one of the following
conditions must hold:
* the CRA AU is the first AU in the bitstream in decoding order,
* the CRA AU is the first AU that follows an end of sequence NAL unit
  in decoding order, or
* HandleCraAsBlaFlag is equal to 1.
Due to the limited context available in the parse element, this commit
does not consider a CRA AU as having NoRaslOutputFlag equal to 1.
Therefore, in the worst case, mastering-display-info and
content-light-level could be cleared one GOP after the stream changes
from HDR to SDR.
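For illustration, a minimal sketch in plain C of the NoRaslOutputFlag
rule above (all names hypothetical; not the actual h265parse code, which
lacks part of this context, as noted):

  /* Hypothetical sketch of the derivation above; nal_type values are
   * the HEVC ones (BLA_* = 16..18, IDR_* = 19..20, CRA_NUT = 21). */
  static int
  no_rasl_output_flag (int nal_type, int first_in_bitstream,
      int follows_eos, int handle_cra_as_bla)
  {
    if (nal_type >= 16 && nal_type <= 20)   /* BLA or IDR */
      return 1;
    if (nal_type == 21)                     /* CRA */
      return first_in_bitstream || follows_eos || handle_cra_as_bla;
    return 0;                               /* not an IRAP */
  }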
Expose SEI data in the H.264 bitstream parser API and
extract closed captions and other things that are not
specified in the H.264 spec itself in the videoparser.
Based on patch by: Mathieu Duponchelle <mathieu@centricular.com>
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/940
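For illustration, rough usage of the exposed SEI parsing API (a sketch;
see gsth264parser.h for the exact signatures):

  GArray *messages = NULL;
  if (gst_h264_parser_parse_sei (parser, &nalu, &messages) ==
      GST_H264_PARSER_OK) {
    guint i;
    for (i = 0; i < messages->len; i++) {
      GstH264SEIMessage *sei =
          &g_array_index (messages, GstH264SEIMessage, i);
      if (sei->payloadType == GST_H264_SEI_REGISTERED_USER_DATA) {
        /* A/53 closed captions, among others, are carried here */
      }
    }
    g_array_unref (messages);
  }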
when computing timecode metas. Depending on the value of that flag,
n_frames is to be interpreted as a number of fields or a number of
frames. As GstVideoTimeCodeMeta always deals with frames, we want
to scale that number when needed.
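A minimal sketch of the scaling, assuming the flag in question says
whether n_frames counts fields:

  /* Sketch: two fields make up one frame for interlaced content. */
  guint frames = counts_fields ? n_frames / 2 : n_frames;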
This debug code will help determine why certain instances of closed
captions that are present in the Picture User Data are not actually
processed by the pipeline.
VPS/SPS/PPS in codec_data shouldn't be considered as in-band data.
Otherwise, h26{4,5}parse never inserts them into the stream in the
transform (packetized to byte-stream) use case.
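An illustrative transform pipeline (file names are placeholders) where
the parameter sets have to be injected into the byte-stream output:

  gst-launch-1.0 filesrc location=file.mp4 ! qtdemux ! h264parse ! \
      'video/x-h264,stream-format=byte-stream' ! filesink location=out.h264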
Similar change as the one I did in h264parse. We want to process SEI
recovery points as keyframes so muxers will mark them as seek points and
decoders will be able to start decoding from them rather than waiting
for an IDR.
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/790
The spec states that "recovery point SEI message assists a decoder in
determining when the decoding process will produce acceptable
pictures for display after the decoder initiates random access or after the
encoder indicates a broken link in the coded video sequence."
Mark those as keyframes so muxers will mark them as seek points and
decoders will be able to start decoding from them rather than waiting
for an IDR.
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/790
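In GstBuffer terms a seek point is a buffer without the delta-unit flag;
a sketch of the marking (condition names hypothetical):

  /* Sketch: an AU containing a recovery point SEI is a sync point,
   * just like an IDR. */
  if (is_idr || has_recovery_point_sei)
    GST_BUFFER_FLAG_UNSET (buffer, GST_BUFFER_FLAG_DELTA_UNIT);
  else
    GST_BUFFER_FLAG_SET (buffer, GST_BUFFER_FLAG_DELTA_UNIT);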
Directly applying commit 7bb6443. This also fixes unexpected
NAL dropping when a nonzero "config-interval" is set.
(e.g., gst-launch-1.0 videotestsrc ! x265enc key-int-max=30 !
h265parse config-interval=30 ! avdec_h265 ! videoconvert ! autovideosink)
Similar to h264parse, the have_{vps,sps,pps} variables will be used
to decide when to submit updated caps or not, and rather mean
"have new SPS/PPS to be submitted?"
See also https://bugzilla.gnome.org/show_bug.cgi?id=732203
and https://bugzilla.gnome.org/show_bug.cgi?id=754124
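In other words (a sketch with hypothetical field/helper names, not the
literal parser code):

  /* Sketch: the flags now mean "new parameter set pending", and are
   * consumed when updated caps are submitted. */
  if (parse->have_vps || parse->have_sps || parse->have_pps) {
    submit_updated_caps (parse);   /* hypothetical helper */
    parse->have_vps = parse->have_sps = parse->have_pps = FALSE;
  }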
We used to have the same enum to represent H265 profiles and idc values.
Those are no longer the same, as extension profiles have been defined
since version 2 of the spec.
Split those enums so the semantics of each are clearer, and we'll be able
to add extension profiles to GstH265Profile.
Also add gst_h265_profile_tier_level_get_profile() to retrieve the
GstH265Profile from the GstH265ProfileTierLevel. It will be used to
implement the detection of extension profiles.
https://bugzilla.gnome.org/show_bug.cgi?id=793876
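Expected usage is roughly (a sketch):

  GstH265Profile profile =
      gst_h265_profile_tier_level_get_profile (&sps->profile_tier_level);
  if (profile != GST_H265_PROFILE_INVALID)
    /* map it to a caps string, pick a decoder, ... */;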
This information could be used, for example, to pick a decoder supporting
a specific chroma format and/or bit depth, such as 4:2:2 10-bit.
It can also be used to inform the decoder earlier about the format it is
about to decode.
https://bugzilla.gnome.org/show_bug.cgi?id=792039
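For example, downstream could match on caps like these (illustrative
values):

  video/x-h265, stream-format=(string)byte-stream, alignment=(string)au,
      chroma-format=(string)4:2:2, bit-depth-luma=(uint)10,
      bit-depth-chroma=(uint)10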
When input is not in byte-stream format there is no need to wait for the first
buffer before setting src caps. We already have all the information from the
input codec_data.
This allows us to already configure downstream elements, allowing them,
for example, to already allocate their internal buffers as they know
the format of the input they are about to receive.
Same change as the one I just did in h264parse.
https://bugzilla.gnome.org/show_bug.cgi?id=790709
When input is in AVC format there is no need to wait for the first buffer
before setting src caps. We already have all the information from the
input codec_data.
This allows us to already configure downstream elements, allowing them,
for example, to already allocate their internal buffers as they know
the format of the input they are about to receive.
https://bugzilla.gnome.org/show_bug.cgi?id=790709
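A sketch of the idea with hypothetical helper names: when the sink caps
carry codec_data, complete src caps can be set from set_caps() directly:

  /* Sketch: packetized (AVC) input carries SPS/PPS in codec_data,
   * so we don't need to wait for the first buffer. */
  const GValue *val = gst_structure_get_value (structure, "codec_data");
  if (val != NULL) {
    process_codec_data (h264parse, gst_value_get_buffer (val)); /* hypothetical */
    update_src_caps (h264parse, caps);                          /* hypothetical */
  }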
Try prioritizing downstream's caps over upstream's if possible, so the
parser can be configured in "passthrough" mode and saved from doing
useless conversions.
Exact same change as the one I just did in h264parse.
https://bugzilla.gnome.org/show_bug.cgi?id=790628
Try prioritizing downstream's caps over upstream's if possible, so the
parser can be configured in "passthrough" mode and saved from doing
useless conversions.
https://bugzilla.gnome.org/show_bug.cgi?id=790628
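Roughly, as a simplified sketch of the negotiation:

  /* Sketch: prefer downstream's first (preferred) structure when
   * choosing stream-format/alignment, so in and out caps can match
   * and baseparse can run in passthrough. */
  GstCaps *allowed =
      gst_pad_get_allowed_caps (GST_BASE_PARSE_SRC_PAD (parse));
  if (allowed && !gst_caps_is_any (allowed) && !gst_caps_is_empty (allowed)) {
    /* read format/alignment from gst_caps_get_structure (allowed, 0) */
  }
  if (allowed)
    gst_caps_unref (allowed);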
In this case, we assume that the format is jpc, and we infer the color
space from the number of components. This allows the parser to process a
jpc disk file coming from a filesrc element.
https://bugzilla.gnome.org/show_bug.cgi?id=783291
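A sketch of the inference (the mapping shown is an assumption; the
parser's actual defaults may differ):

  /* Sketch: no container to provide the color space, so guess it
   * from the component count in the codestream header. */
  const gchar *colorspace;
  switch (num_components) {
    case 1:  colorspace = "GRAY"; break;   /* assumption */
    case 3:  colorspace = "sYUV"; break;   /* assumption */
    default: colorspace = NULL;   break;   /* leave unset */
  }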
The RSIZ capabilities tag stores the JPEG 2000 profile. In the case of
broadcast profiles, it also stores the broadcast main level, which
specifies the bit rate.
https://bugzilla.gnome.org/show_bug.cgi?id=782337