According to the spec, the least significant bit is reserved and should
always be set to 1. However, some non-conforming files have been found
in the wild. Considering how unimportant this reserved bit is, relax the
validation.
Related to #660
1. only recalculate src codec format if sink caps change
2. use correct value for "jp2c" magic in J2C box ID
3. only parse J2K magic once, and store result
4. more sanity checks comparing caps to parsed codec
A guint32 greater than 2^31 would be interpreted as negative by
gst_util_uint64_scale_int() and trigger a critical warning. Use the
64-bit integer version of the function instead.
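A minimal sketch of the change, with illustrative variable names:

  guint32 fps_n = 3000000000u;  /* > 2^31 */
  /* Before: fps_n is implicitly truncated to a negative gint and
   * the function raises a critical warning. */
  duration = gst_util_uint64_scale_int (GST_SECOND, fps_d, fps_n);
  /* After: all arguments stay unsigned 64-bit. */
  duration = gst_util_uint64_scale (GST_SECOND, fps_d, fps_n);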
The SPS parsing functions take a parse_vui_param flag
to skip VUI parsing, but there's no indication in the output
SPS struct that the VUI was skipped.
The only caller that ever passed FALSE seems to be the
gst_h264_parser_parse_nal() function, meaning the cached SPS were
always silently invalid. That needs changing anyway, so in practice
no one ever passes FALSE.
I don't see any use for saving a few microseconds in
order to silently produce garbage, and since this is still
unstable API, let's remove the parse_vui_param.
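For reference, the signature change looks like this (abbreviated):

  /* Before: the VUI could be skipped, with no trace of that in the
   * output struct. */
  GstH264ParserResult gst_h264_parse_sps (GstH264NalUnit * nalu,
      GstH264SPS * sps, gboolean parse_vui_param);

  /* After: the VUI is always parsed. */
  GstH264ParserResult gst_h264_parse_sps (GstH264NalUnit * nalu,
      GstH264SPS * sps);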
Fix a recently introduced segfault. Don't dereference a NULL
SPS pointer when attempting to update source caps before an SPS
has been seen in the stream.
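A minimal sketch of the kind of guard this needs (assuming the parser
keeps the last seen SPS in nalparser->last_sps):

  GstH264SPS *sps = h264parse->nalparser->last_sps;

  if (sps == NULL)
    return;  /* no SPS seen yet, nothing to derive caps from */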
We generally always prefer the information from upstream for other
metadata (pixel-aspect-ratio, etc.) and should also do so here.
Other parsers (h264parse) already do the same.
When SPS parsing fails we use a fallback SPS from the caps. Since we
have got an SPS, we need to update the parser state and header just as
in the case where the SPS was successfully parsed.
Closes #503
Set more unhandled flags in the general_constraint_indicator_flags field.
The field is required for building the "Codecs" parameter as defined in
ISO/IEC 14496-15 Annex E. The resulting "Codecs" string might be used
in various places (e.g., HLS/DASH manifests, browsers, players, etc.).
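For illustration, an Annex E style codecs string for HEVC Main profile
might look like

  codecs="hev1.1.6.L93.B0"

where the trailing bytes after the level (here "B0") encode the
general_constraint_indicator_flags, with trailing zero bytes omitted.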
... and set to caps if necessary.
Note 1) The mastering display info and content light level SEI messages
are persistent for the corresponding codec video sequence (i.e., GOP),
so any bitstream containing those SEI messages (where all pictures are
intended to be rendered as HDR) should ensure that those SEI messages
precede the first slice of each codec video sequence.
Note 2) A codec video sequence is a group consisting of an
[IRAP with NoRaslOutputFlag == 1] AU and the following AUs which are not
[IRAP with NoRaslOutputFlag == 1].
The NoRaslOutputFlag is equal to 1 for each IDR AU, each BLA AU and some
CRA AUs. For a CRA AU to have NoRaslOutputFlag equal to 1, one of the
following conditions is required:
* the CRA AU is the first AU in the bitstream in decoding order,
* or the CRA AU is the first AU that follows an end of sequence NAL in decoding order,
* or the HandleCraAsBlaFlag is equal to 1.
Due to the limited context available in the parse element, in this
commit a CRA AU will not be considered as having NoRaslOutputFlag equal
to 1. Therefore, in the worst case, mastering-display-info and
content-light-level could be cleared one GOP late when the stream
changes from HDR to SDR.
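A rough sketch of the resulting rule (helper names are illustrative,
not actual API):

  /* IDR and BLA AUs always have NoRaslOutputFlag == 1. */
  gboolean new_cvs = nal_type_is_idr (nal_type) || nal_type_is_bla (nal_type);
  /* A CRA AU would also qualify under the conditions listed above,
   * but the parse element lacks that context, so CRA AUs are
   * conservatively not treated as the start of a new CVS. */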
Expose SEI data in the H.264 bitstream parser API and
extract closed captions and other things that are not
specified in the H.264 spec itself in the videoparser.
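A hedged usage sketch of the exposed parser entry point (assuming the
gst_h264_parser_parse_sei() signature that fills a GArray of
GstH264SEIMessage):

  GArray *messages = NULL;
  guint i;

  if (gst_h264_parser_parse_sei (nalparser, &nalu, &messages) ==
      GST_H264_PARSER_OK) {
    for (i = 0; i < messages->len; i++) {
      GstH264SEIMessage *sei =
          &g_array_index (messages, GstH264SEIMessage, i);

      if (sei->payloadType == GST_H264_SEI_REGISTERED_USER_DATA) {
        /* e.g. A/53 closed captions are carried here */
      }
    }
    g_array_unref (messages);
  }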
Based on patch by: Mathieu Duponchelle <mathieu@centricular.com>
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/940
Take the field_pic_flag into account when computing timecode metas.
Depending on the value of that flag,
n_frames is to be interpreted as a number of fields or a number of
frames. As GstVideoTimeCodeMeta always deals with frames, we want
to scale that number when needed.
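A minimal sketch of that scaling (with a hypothetical counts_fields
condition):

  /* Two fields make up one frame, so halve the count when n_frames
   * counts fields rather than frames. */
  if (counts_fields)
    n_frames /= 2;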
This debug code will help determine why certain instances of closed
captions that are present in the Picture User Data are not actually
processed by the pipeline.
VPS/SPS/PPS in codec_data shouldn't be considered as inband data.
Otherwise, h26{4,5}parse never inserts them into the stream in the
transform (packetized to byte-stream) use case.
Similar change to the one I did in h264parse. We want to process SEI
recovery points as keyframes so muxers will mark them as seek points and
decoders will be able to start decoding from them rather than waiting
for an IDR.
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/790
The spec states that "recovery point SEI message assists a decoder in
determining when the decoding process will produce acceptable
pictures for display after the decoder initiates random access or after the
encoder indicates a broken link in the coded video sequence."
Mark those as keyframes so muxers will mark them as seek points and
decoders will be able to start decoding from them rather than waiting
for an IDR.
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/790
Directly apply commit 7bb6443. This also fixes unexpected
NAL dropping when a nonzero "config-interval" is set.
(e.g., gst-launch-1.0 videotestsrc ! x265enc key-int-max=30 !
h265parse config-interval=30 ! avdec_h265 ! videoconvert ! autovideosink)
Similar to h264parse, the have_{vps,sps,pps} variables will be used
for deciding when to submit updated caps or not, and rather mean
"do we have a new VPS/SPS/PPS to be submitted?"
See also https://bugzilla.gnome.org/show_bug.cgi?id=732203
and https://bugzilla.gnome.org/show_bug.cgi?id=754124
We used to have the same enum to represent H265 profiles and idc values.
Those are no longer the same with extension profiles defined from
version 2 of the spec.
Split those enums so the semantics of each are clearer, and we'll be
able to add extension profiles to GstH265Profile.
Also add gst_h265_profile_tier_level_get_profile() to retrieve the
GstH265Profile from the GstH265ProfileTierLevel. It will be used to
implement the detection of extension profiles.
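A sketch of the intended usage (assuming an SPS carrying an embedded
GstH265ProfileTierLevel):

  GstH265Profile profile =
      gst_h265_profile_tier_level_get_profile (&sps->profile_tier_level);

  if (profile != GST_H265_PROFILE_INVALID) {
    /* map the profile to a caps string, set it on the src caps, etc. */
  }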
https://bugzilla.gnome.org/show_bug.cgi?id=793876
This information could be used for example to pick a decoder supporting
a specific chroma and/or bit depth, like 4:2:2 10 bits.
It can also be used to inform the decoder earlier about the format it
is about to decode.
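A sketch of how this could look on the caps (assuming the
"chroma-format" and bit-depth field names used by the parsers):

  gst_caps_set_simple (caps,
      "chroma-format", G_TYPE_STRING, "4:2:2",
      "bit-depth-luma", G_TYPE_UINT, 10,
      "bit-depth-chroma", G_TYPE_UINT, 10, NULL);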
https://bugzilla.gnome.org/show_bug.cgi?id=792039
When input is not in byte-stream format there is no need to wait for the first
buffer before setting src caps. We already have all the information from the
input codec_data.
This allows us to configure downstream elements right away, letting
them, for example, already allocate their internal buffers as they know
the format of the input they are about to receive.
Same change as the one I just did in h264parse.
https://bugzilla.gnome.org/show_bug.cgi?id=790709
When input is in AVC format there is no need to wait for the first buffer
before setting src caps. We already have all the information from the
input codec_data.
This allows us to configure downstream elements right away, letting
them, for example, already allocate their internal buffers as they know
the format of the input they are about to receive.
https://bugzilla.gnome.org/show_bug.cgi?id=790709
Try prioritizing downstream's caps over upstream's so the parser can
be configured in "passthrough" mode when possible, saving it from
doing useless conversions.
Exact same change as the one I just did in h264parse.
https://bugzilla.gnome.org/show_bug.cgi?id=790628
Try prioritizing downstream's caps over upstream's so the parser can
be configured in "passthrough" mode when possible, saving it from
doing useless conversions.
https://bugzilla.gnome.org/show_bug.cgi?id=790628
In this case, we assume that the format is jpc, and we infer the color
space from the number of components. This allows the parser to process a
jpc disk file coming from a filesrc element.
https://bugzilla.gnome.org/show_bug.cgi?id=783291
The RSIZ capabilities tag stores the JPEG 2000 profile. In the case of
broadcast profiles, it also stores the broadcast main level, which
specifies the bit rate.
https://bugzilla.gnome.org/show_bug.cgi?id=782337
Insert an AU delimiter by default if it is missing from upstream.
This should be done only in the case of byte-stream format.
Note that:
We have to compensate for the new bytes added for the AU delimiter,
otherwise insertion of PPS/SPS will use wrong offsets and overwrite the
wrong data.
Also mark the AU delimiter blob const, and use frame->out_buffer for
storing the output to keep baseparse assumptions valid.
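For reference, the inserted AU delimiter is a small constant blob along
these lines (byte-stream form):

  static const guint8 au_delim[6] = {
    0x00, 0x00, 0x00, 0x01,   /* four-byte start code */
    0x09,                     /* NAL unit type: access unit delimiter */
    0xf0                      /* primary_pic_type = any, rbsp stop bit */
  };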
Original-Patch-By: Michal Lazo <michal.lazo@mdragon.org>
Helped by Sebastian Dröge <sebastian@centricular.com>
https://bugzilla.gnome.org/show_bug.cgi?id=736213
Those are the rules:
In the SPS:
* if frame_mbs_only_flag=1 => all frames are progressive
* if frame_mbs_only_flag=0 => field_pic_flag defines whether each frame
is progressive or interlaced, thus the mode is 'mixed' in GStreamer
terms.
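As a sketch, assuming the GstH264SPS struct from the codecparsers
library:

  if (sps->frame_mbs_only_flag) {
    interlace_mode = GST_VIDEO_INTERLACE_MODE_PROGRESSIVE;
  } else {
    /* each frame's field_pic_flag decides, so report "mixed" */
    interlace_mode = GST_VIDEO_INTERLACE_MODE_MIXED;
  }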
https://bugzilla.gnome.org/show_bug.cgi?id=779309