The sequence layer and frame layer are serialized in little-endian byte
order, except for the STRUCT_C and framedata fields, as described in
SMPTE 421M Annex L.
https://bugzilla.gnome.org/show_bug.cgi?id=736750
This commit fixes several issues with sequence layer header forging in
update_caps():
- The 0x00000004 unsigned integer comes before STRUCT_C.
- Set the reserved bits of STRUCT_C to their values for the simple/main
profiles, in both the sequence layer header format and the ASF header format.
- The sequence layer shall be represented as a sequence of 32-bit unsigned
integers and shall be serialized in little-endian byte order, except
for STRUCT_C, which shall be serialized in big-endian byte order.
See SMPTE 421M Annex L for more details about the sequence layer format.
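As a rough illustration of the byte-order rule only (the field layout and
helper name are simplified, not the actual vc1parse code):

```c
#include <gst/gst.h>

/* Sketch: write the 0x00000004 marker, STRUCT_C and a STRUCT_A portion of a
 * sequence layer header.  Everything is little-endian except STRUCT_C. */
static void
write_sequence_layer_fragment (guint8 * data, guint32 struct_c,
    guint32 vert_size, guint32 horiz_size)
{
  GST_WRITE_UINT32_LE (data, 0x00000004);       /* comes before STRUCT_C */
  GST_WRITE_UINT32_BE (data + 4, struct_c);     /* the one big-endian field */
  GST_WRITE_UINT32_LE (data + 8, vert_size);    /* STRUCT_A, little-endian */
  GST_WRITE_UINT32_LE (data + 12, horiz_size);
}
```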
https://bugzilla.gnome.org/show_bug.cgi?id=736474
Do more thorough validation of the input caps: which fields
are required and/or not allowed. Don't assume that AVC3 format
input without a codec_data field is byte-stream format. Fix
up some now-unreachable code (CID 1232800).
It should try to use bytestream in the cases where the format
is set to _FORMAT_NONE, as that is what the 'else' clause
for bytestream can handle (by defaulting to _FORMAT_BYTESTREAM).
Always use a GstAdapter when collecting access units (alignment="au")
in either byte-stream or avcC format. This is required to properly
preserve config headers like SPS and PPS when invalid or broken NAL
units are subsequently parsed.
More precisely, this fixes scenarios like:
<SPS> <PPS> <invalid-NAL> <slice>
where we used to reset the output frame buffer when an invalid or
broken NAL was parsed, i.e. the SPS and PPS NAL units were lost, thus
preventing the next slice unit from being decoded, even if it
represented valid data.
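A minimal sketch of the accumulation idea (function names and the
broken-NAL flag are illustrative, not the actual h264parse code):

```c
#include <gst/base/gstadapter.h>

/* Append one NAL unit to the adapter collecting the current access unit.
 * Broken NAL units are simply skipped, so previously collected SPS/PPS
 * data stays in the adapter instead of being discarded with the frame. */
static void
collect_nal (GstAdapter * au_adapter, const guint8 * nal, gsize nal_size,
    gboolean nal_is_broken)
{
  GstBuffer *buf;

  if (nal_is_broken)
    return;

  buf = gst_buffer_new_allocate (NULL, nal_size, NULL);
  gst_buffer_fill (buf, 0, nal, nal_size);
  gst_adapter_push (au_adapter, buf);
}

/* Once the access unit is complete, take everything as one output buffer. */
static GstBuffer *
finish_au (GstAdapter * au_adapter)
{
  return gst_adapter_take_buffer (au_adapter,
      gst_adapter_available (au_adapter));
}
```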
https://bugzilla.gnome.org/show_bug.cgi?id=732203
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Carefully track the cases where skipping broken or invalid NAL units is
necessary. In particular, always allow NAL units to be processed
and let the gst_h264_parse_process_nal() function decide whether
the current NAL needs to be dropped or not.
This fixes parsing of streams with an SEI NAL buffering_period() message
inserted between SPS and PPS, or an SPS-Ext NAL following a traditional
SPS NAL unit, among other cases.
Practical examples from the H.264 AVC conformance suite include
alphaconformanceG, CVSE2_Sony_B, CVSE3_Sony_H, CVSEFDFT3_Sony_E
when parsing in stream-format=byte-stream,alignment=au mode.
https://bugzilla.gnome.org/show_bug.cgi?id=732203
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Improve parser state tracking by introducing new flags reflecting
it: "got-sps", "got-pps" and "got-slice". This is an addition for
robustness purposes.
The older have_sps and have_pps variables are kept because they have
a different meaning: they are used to decide when to submit
updated caps, and rather mean "do we have a new SPS/PPS to
be submitted?"
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Use gst_h264_parser_identify_nalu_unchecked() to identify the next
NAL unit. We don't want to parse the full NAL unit, but only the
header bytes and possibly the first RBSP byte for identifying the
first_mb_in_slice syntax element.
Also fix the check for failure on the return value of that function:
the only success condition is GST_H264_PARSER_OK, so check for it.
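A hedged sketch of the lookahead (the wrapper function is made up for
illustration):

```c
#include <gst/codecparsers/gsth264parser.h>

/* Identify the next NAL unit without parsing its whole payload; only the
 * header bytes are examined.  GST_H264_PARSER_OK is the sole success code. */
static gboolean
peek_next_nalu (GstH264NalParser * nalparser, const guint8 * data, gsize size,
    GstH264NalUnit * nalu)
{
  GstH264ParserResult res;

  res = gst_h264_parser_identify_nalu_unchecked (nalparser, data, 0, size,
      nalu);

  return res == GST_H264_PARSER_OK;
}
```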
https://bugzilla.gnome.org/show_bug.cgi?id=732154
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
The gst_h264_parse_pps() function dynamically allocates the slice
group ids map array, so it needs to be cleared before parsing a
new PPS NAL unit again, or when it is no longer needed.
Likewise, a clean copy to the internal NAL parser state needs to be
performed in order to avoid a double-free corruption.
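The idea, as a simplified sketch (not the exact parser code; the helper
names are illustrative):

```c
#include <string.h>
#include <gst/codecparsers/gsth264parser.h>

/* Free the dynamically allocated slice group id map of a PPS. */
static void
clear_pps (GstH264PPS * pps)
{
  g_free (pps->slice_group_id);
  pps->slice_group_id = NULL;
}

/* Store a freshly parsed PPS in the parser state with a deep copy of the
 * slice group id map, so both copies can be cleared without a double free. */
static void
store_pps (GstH264PPS * state_pps, const GstH264PPS * new_pps)
{
  clear_pps (state_pps);
  *state_pps = *new_pps;

  if (new_pps->slice_group_id != NULL) {
    gsize n = new_pps->pic_size_in_map_units_minus1 + 1;

    state_pps->slice_group_id = g_malloc (n);
    memcpy (state_pps->slice_group_id, new_pps->slice_group_id, n);
  }
}
```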
https://bugzilla.gnome.org/show_bug.cgi?id=707282
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
The recovery point SEI message helps a decoder in determining if the
decoding process would produce acceptable pictures for display after
the decoder initiates random access or after the encoder indicates
a broken link in the coded video sequence.
This is not used in the h264parse element, but it could help debugging.
https://bugzilla.gnome.org/show_bug.cgi?id=723380
In ISO/IEC 14496-15, the minimum size of a HEVCDecoderConfigurationRecord
(i.e. the contents of an hvcC box) is 23 bytes. However, the code in h265parse
checks that the size of this data is not less than 28 bytes and refuses to
accept the caps if that check fails. The result is that standards-conformant
streams that don't carry any parameter sets in their hvcC boxes won't play.
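A small sketch of the corrected check (the helper is hypothetical; in
h265parse the check sits inline in the caps handling):

```c
#include <glib.h>

/* A HEVCDecoderConfigurationRecord (hvcC contents) is at least 23 bytes,
 * even when it carries no parameter sets; the old code required 28 bytes
 * and thus rejected valid codec_data. */
static gboolean
hvcc_size_is_valid (gsize size)
{
  return size >= 23;
}
```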
https://bugzilla.gnome.org//show_bug.cgi?id=731783
The presence of a picture coding extension header identifies the stream
as MPEG-2. We are supposed to set mpegversion to 2 if there is a picture
extension, instead of blindly setting the version to 1.
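Roughly, the rule being applied (names simplified; not the actual
mpegvideoparse fields):

```c
#include <gst/gst.h>

/* Advertise mpegversion=2 once a picture coding extension has been seen,
 * otherwise keep reporting MPEG-1. */
static void
set_mpeg_version (GstCaps * caps, gboolean have_picture_extension)
{
  gst_caps_set_simple (caps, "mpegversion", G_TYPE_INT,
      have_picture_extension ? 2 : 1, NULL);
}
```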
https://bugzilla.gnome.org/show_bug.cgi?id=726028
It is not perfect, but it allows us to be sure that the mandatory 'framerate'
field is present in the caps.
As soon as some information is found in the stream, it will be
updated.
https://bugzilla.gnome.org/show_bug.cgi?id=723243
An SEI RBSP can contain more than one SEI message, as specified in
7.4.2.3.1.
This commit changes the parser API: the gst_h264_parser_parse_sei()
function now creates and fills a GArray containing GstH264SEIMessages.
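A sketch of the new calling convention (the handler function is
illustrative):

```c
#include <gst/codecparsers/gsth264parser.h>

/* One SEI NAL unit may carry several SEI messages, so the parser now fills
 * a GArray of GstH264SEIMessage that the caller iterates and frees. */
static void
handle_sei (GstH264NalParser * nalparser, GstH264NalUnit * nalu)
{
  GArray *messages = NULL;
  guint i;

  if (gst_h264_parser_parse_sei (nalparser, nalu, &messages) !=
      GST_H264_PARSER_OK)
    return;

  for (i = 0; i < messages->len; i++) {
    GstH264SEIMessage *sei = &g_array_index (messages, GstH264SEIMessage, i);

    GST_LOG ("SEI message with payload type %d", sei->payloadType);
  }

  g_array_free (messages, TRUE);
}
```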
https://bugzilla.gnome.org/show_bug.cgi?id=721715
mpeg4videoparse might not push buffers while parsing. If those buffers
carry the DISCONT flag, it gets lost and downstream won't get any
buffer with the flag set.
Fix this by adding the DISCONT flag to the next pushed buffer.
This makes backwards playback work.
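A simplified sketch of the fix (the discont_pending member and the helper
functions are made up for illustration):

```c
#include <gst/gst.h>

typedef struct
{
  gboolean discont_pending;
} ParseStateSketch;

/* Remember that a DISCONT input buffer was seen, even if it doesn't result
 * in an output buffer being pushed right away. */
static void
note_input_buffer (ParseStateSketch * state, GstBuffer * inbuf)
{
  if (GST_BUFFER_FLAG_IS_SET (inbuf, GST_BUFFER_FLAG_DISCONT))
    state->discont_pending = TRUE;
}

/* Transfer the pending DISCONT flag to the next buffer actually pushed. */
static GstBuffer *
prepare_output_buffer (ParseStateSketch * state, GstBuffer * outbuf)
{
  if (state->discont_pending) {
    outbuf = gst_buffer_make_writable (outbuf);
    GST_BUFFER_FLAG_SET (outbuf, GST_BUFFER_FLAG_DISCONT);
    state->discont_pending = FALSE;
  }
  return outbuf;
}
```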
Conversion to byte-stream/nal crashed without this because the
baseparse frame covering all NALUs was finished for the first NALU and
then used again for parsing the second NALU, at which point the frame's
buffer was already gone. Instead we now create temporary frames
for every NALU.
In case more data than a start code alone is needed to decide whether
it ends a frame, arrange for more input data and decide when available.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=711627
When the input buffer is empty and we need more data to determine
whether or not to terminate the previous frame, the last start code
location needs to be set to 4 bytes before the current position
(the size of a start code is 32 bits).
https://bugzilla.gnome.org/show_bug.cgi?id=711627
The initial par_n = par_d = 0; was always overwritten since the switch/case
handles all values.
Also remove the 0 case (it has the same handling as default).
When outputting in AVC3 stream format, the codec_data should not
contain any SPS or PPS, because they are embedded inside the stream.
In the case of avc->bytestream, h264parse will push the SPS and PPS from
codec_data downstream at the start of the stream, at intervals
controlled by "config-interval", and when there is a codec_data change.
In the case of avc3->bytestream, h264parse detects that there are
already SPS/PPS in the stream and sets h264parse->push_codec to FALSE.
Therefore avc3->bytestream was already supported, except for the stream
type.
In the case of bytestream->avc, h264parse will generate codec_data caps
from the parsed SPS/PPS in the stream. However, it does not remove these
SPS/PPS from the stream. bytestream->avc3 is the same as bytestream->avc,
except that the codec_data must not have any SPS/PPS in it.
|--------------+-------------+-------------------|
|stream-format | SPS in-band | SPS in codec_data |
|--------------+-------------+-------------------|
| avc | maybe | always |
|--------------+-------------+-------------------|
| avc3 | always | never |
|--------------+-------------+-------------------|
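For illustration, a minimal sketch of what an avc3 codec_data can then look
like: only the fixed leading bytes of an AVCDecoderConfigurationRecord with
zero SPS/PPS entries (the real h264parse code builds this differently):

```c
#include <gst/gst.h>

/* Build a codec_data buffer for avc3 output: the configuration record header
 * is filled in as usual, but the SPS/PPS counts are written as zero since
 * the parameter sets stay in-band. */
static GstBuffer *
make_avc3_codec_data (guint8 profile_idc, guint8 profile_compat,
    guint8 level_idc)
{
  GstBuffer *buf;
  guint8 data[7];

  data[0] = 1;                  /* configurationVersion */
  data[1] = profile_idc;        /* AVCProfileIndication */
  data[2] = profile_compat;     /* profile_compatibility */
  data[3] = level_idc;          /* AVCLevelIndication */
  data[4] = 0xfc | 3;           /* reserved | lengthSizeMinusOne = 3 */
  data[5] = 0xe0 | 0;           /* reserved | numOfSequenceParameterSets = 0 */
  data[6] = 0;                  /* numOfPictureParameterSets = 0 */

  buf = gst_buffer_new_allocate (NULL, sizeof (data), NULL);
  gst_buffer_fill (buf, 0, data, sizeof (data));

  return buf;
}
```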
Amendment 2 of ISO/IEC 14496-15 (AVC file format) defines a new
structure for fragmented MP4 called "avc3". The principal difference
between AVC1 and AVC3 is the location of the codec initialisation
data (e.g. SPS, PPS). In AVC1 this data is placed in the initial MOOV box
(moov.trak.mdia.minf.stbl.stsd.avc1), but in AVC3 it goes in the
first sample of every fragment.
https://bugzilla.gnome.org/show_bug.cgi?id=702004
The Sequence Header Data Structure STRUCT_C for Advanced Profile
has only one valid field, which is the profile indicator. Don't
use the reserved fields for the fps update like in Simple/Main profile.
https://bugzilla.gnome.org/show_bug.cgi?id=705667
The Sequence Header Data Structure STRUCT_A for Advanced Profile
may be eight consecutive zero bytes. Don't try to override the
width and height values in this case.
https://bugzilla.gnome.org/show_bug.cgi?id=705667
Updating caps results in downstream elements potentially reconfiguring themselves
(such as decoders). If we do this between keyframes, those elements end up
being reconfigured and handling garbage until the next keyframe.
Instead, only send (potentially) new codec_data when we have *both* SPS and
PPS.
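A sketch of the gating (struct and field names are stand-ins, not the real
h264parse members):

```c
#include <gst/gst.h>

typedef struct
{
  gboolean have_new_sps;
  gboolean have_new_pps;
  GstBuffer *codec_data;
  GstCaps *src_caps;
} CapsStateSketch;

/* Only push updated caps with new codec_data once *both* an SPS and a PPS
 * have been collected, so downstream isn't reconfigured on a lone
 * parameter set in the middle of a GOP. */
static void
maybe_update_src_caps (CapsStateSketch * state)
{
  if (!(state->have_new_sps && state->have_new_pps))
    return;

  gst_caps_set_simple (state->src_caps, "codec_data", GST_TYPE_BUFFER,
      state->codec_data, NULL);
  state->have_new_sps = state->have_new_pps = FALSE;
}
```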
https://bugzilla.gnome.org/show_bug.cgi?id=705333
The caps should always represent what the user is supposed to see.
So if there is a sequence_display_extension associated with the
stream, then use display_horizontal_size/display_vertical_size
to update the src caps (if they are smaller than the values provided
by the sequence header).
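Roughly the selection logic, as a sketch (parameter names invented for
illustration):

```c
#include <glib.h>

/* Prefer the sizes from a sequence_display_extension when present and
 * smaller than the coded sizes from the sequence header, since these are
 * what the user is meant to see. */
static void
pick_display_size (guint seq_width, guint seq_height,
    gboolean have_display_ext, guint disp_width, guint disp_height,
    guint * out_width, guint * out_height)
{
  *out_width = seq_width;
  *out_height = seq_height;

  if (have_display_ext && disp_width < seq_width && disp_height < seq_height) {
    *out_width = disp_width;
    *out_height = disp_height;
  }
}
```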
https://bugzilla.gnome.org/show_bug.cgi?id=704009
Restore the original h264parser behaviour to report cropped dimensions
in size caps.
https://bugzilla.gnome.org/show_bug.cgi?id=694068
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Migrate the code to use the new parser API based on GstMpegVideoPacket.
Also try to optimize gst_mpegv_parse_process_config() by using more of
GstMpegVideoPacket and determining the extension_start_code_identifier
prior to calling the parser function for that extension packet.
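A hedged sketch of the dispatch (simplified; the real
gst_mpegv_parse_process_config() handles more extension types):

```c
#include <gst/codecparsers/gstmpegvideoparser.h>

/* Read extension_start_code_identifier (the upper four bits of the first
 * payload byte) from the packet and only then call the matching
 * packet-based parse function. */
static void
process_extension_packet (const GstMpegVideoPacket * packet,
    GstMpegVideoSequenceExt * seqext,
    GstMpegVideoSequenceDisplayExt * seqdispext)
{
  guint8 ext_code;

  if (packet->size < 1)
    return;

  ext_code = packet->data[packet->offset] >> 4;

  switch (ext_code) {
    case GST_MPEG_VIDEO_PACKET_EXT_SEQUENCE:
      gst_mpeg_video_packet_parse_sequence_extension (packet, seqext);
      break;
    case GST_MPEG_VIDEO_PACKET_EXT_SEQUENCE_DISPLAY:
      gst_mpeg_video_packet_parse_sequence_display_extension (packet,
          seqdispext);
      break;
    default:
      break;
  }
}
```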
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Don't send any source caps yet if we're still in
drop-buffers-until-we-get-a-sequence-header mode.
Fixes transmuxing of many MPEG-TS/PS streams into
formats which require things like width, height or
codec_data on the input caps.
Also fixes issues when using playbin with decoder
sinks that want width/height etc.
https://bugzilla.gnome.org/show_bug.cgi?id=695879
The API is now in baseparse in GStreamer core.
Timestamps in MPEG-TS streams are based on the last timestamp
before the start code of the picture. GstBaseParse sets the
timestamp based on the beginning of the sequence header, if
one exists before the picture. This fixes the case where the
timestamp occurs in the MPEG-TS stream between the seq header
and picture start code.
Otherwise we will intersect with the srcpad template caps and add all the caps fields
that the parser will ever set, no matter whether downstream restricts this field or not.
This requires upstream to set this field on the caps to negotiate successfully.
https://bugzilla.gnome.org/show_bug.cgi?id=690184
This allows filtering out videos for hardware decoders that do not
support GMC at all or only support a limited number of sprite warping
points (usually 1).
Right now decodebin will consider the pad template caps as fixed, and if a decoder
has restrictions on, for example, height/width, it won't be autoplugged because
gst_caps_is_subset() fails as those fields are missing from the pad template caps.
We fix the issue here by making sure that the pad caps are fixed using data from
the stream.