Don't use the APIs in codec-utils to extract the profile, tier and level
syntax elements, since the result is wrong if emulation prevention
bytes are present in the byte-stream data.
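A minimal sketch of the idea (illustrative, not the codec-utils code): the
emulation prevention byte 0x03 that follows two zero bytes has to be stripped
before reading the profile/tier/level bits from the RBSP.

    #include <glib.h>

    /* Strip H.264/H.265 emulation prevention bytes (00 00 03 -> 00 00)
     * so the profile/tier/level syntax elements can be read directly. */
    static gsize
    strip_emulation_prevention_bytes (const guint8 * in, gsize size, guint8 * out)
    {
      gsize i, j = 0;
      guint zeros = 0;

      for (i = 0; i < size; i++) {
        if (zeros >= 2 && in[i] == 0x03) {
          zeros = 0;              /* skip the emulation prevention byte */
          continue;
        }
        zeros = (in[i] == 0x00) ? zeros + 1 : 0;
        out[j++] = in[i];
      }
      return j;
    }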
https://bugzilla.gnome.org/show_bug.cgi?id=747613
The detection for missing format/alignment is done way before this
codepath is reached (at which point we have already decided on a
format and alignment).
CID #1232800
This prevents it from going into passthrough after receiving two
different byte-stream caps, as it would keep have_pps and have_sps
set to TRUE and would just go into passthrough without updating its
caps.
This patch makes it reset its stream information so it restarts properly
when new caps are received.
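A minimal sketch of that reset, using an illustrative state struct (the real
element keeps these fields in its instance struct):

    #include <glib.h>

    typedef struct {
      gboolean have_sps;
      gboolean have_pps;
    } StreamInfo;

    /* On new sink caps, forget the previously seen parameter sets so the
     * next SPS/PPS triggers a real caps update instead of passthrough. */
    static void
    reset_stream_info (StreamInfo * info)
    {
      info->have_sps = FALSE;
      info->have_pps = FALSE;
      /* cached SPS/PPS buffers and the negotiated format would be cleared too */
    }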
https://bugzilla.gnome.org/show_bug.cgi?id=745409
This patch calls gst_h264_parser_parse_subset_sps() when an
SPS subset NAL type is found.
All the bits required for parsing subset SPS NALs were already
there; we just need to call them when this NAL type is found.
With this parsing, the number of views (minus 1) attribute is
filled, which was a requirement for negotiating the stereo-high
profile.
https://bugzilla.gnome.org/show_bug.cgi?id=743174
Initial support for MVC NAL units. It is only needed to propagate the
complete set of NAL units downstream at this time.
https://bugzilla.gnome.org/show_bug.cgi?id=696135
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Some video bitstreams report a too restrictive set of profiles. If a video
decoder were to strictly follow the indicated profile, it wouldn't support that
stream, whereas it could in theory and in practice. So we should relax the
profile restriction to allow the decoder to be connected with the parser.
https://bugzilla.gnome.org/show_bug.cgi?id=739992
Every time a buffer is provided by baseparse, we parse all the data from the beginning.
But since some of the data was already parsed in previous iterations,
it doesn't make much sense to keep parsing the same data every time.
Hence, skip the data already scanned in previous iterations to improve parsing performance.
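A sketch of the idea with illustrative names (not the actual parser code):
remember how far the previous iteration scanned and resume from there.

    #include <glib.h>

    typedef struct {
      gsize last_offset;          /* how far the previous scan got */
    } ScanState;

    /* Scan for the next 00 00 01 start code, resuming a few bytes before
     * the position reached last time instead of starting from offset 0. */
    static gssize
    find_next_start_code (ScanState * st, const guint8 * data, gsize size)
    {
      gsize i = st->last_offset > 3 ? st->last_offset - 3 : 0;

      for (; i + 3 <= size; i++) {
        if (data[i] == 0x00 && data[i + 1] == 0x00 && data[i + 2] == 0x01)
          return (gssize) i;
      }
      st->last_offset = size;     /* nothing yet: resume here next time */
      return -1;
    }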
https://bugzilla.gnome.org/show_bug.cgi?id=740058
Read PNG data chunk in one go by letting the parser
base class know the size we need, so that it doesn't
drip-feed us small chunks of data (causing a lot of
reallocs and memcpy in the process) until we have
everything.
Improves parsing performance of very large PNG files
(65MB) from ~13 seconds to a couple of milliseconds.
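A minimal sketch, not the actual pngparse code: once the 4-byte length of the
next PNG chunk is known, ask GstBaseParse for the whole chunk at once.

    #include <gst/base/gstbaseparse.h>

    static void
    request_full_chunk (GstBaseParse * parse, const guint8 * chunk_start)
    {
      guint32 data_len = GST_READ_UINT32_BE (chunk_start);  /* chunk data size */

      /* length (4) + type (4) + data + CRC (4) */
      gst_base_parse_set_min_frame_size (parse, 4 + 4 + data_len + 4);
    }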
https://bugzilla.gnome.org/show_bug.cgi?id=736176
This commit adds a helper to convert a frame to frame-layer format and
uses it to implement these two stream-format conversions:
- asf --> sequence-layer-frame-layer
- asf --> frame-layer
In simple/main profile, we basically have a raw frame, so building a
frame layer isn't too complicated. But in advanced profile, the first
frame-layer should contain the sequence-header, entrypoint and frame,
and each keyframe should contain an entrypoint, so we have to handle
these carefully.
https://bugzilla.gnome.org/show_bug.cgi?id=738526
Add a helper to check that the output stream-format is coherent with
the profile and header-format. It also checks whether we know how to do
the conversion if the input stream-format differs from the selected
output format.
So, in case the output stream-format is not allowed, it will now fail at
negotiation rather than in pre_push_frame.
https://bugzilla.gnome.org/show_bug.cgi?id=738526
This commit introduces a helper to convert an ASF frame to BDU format
with start codes and uses it to implement the following stream-format
conversions:
- asf --> bdu
- asf --> sequence-layer-bdu
- asf --> sequence-layer-raw-frame
https://bugzilla.gnome.org/show_bug.cgi?id=738526
It adds support for the following stream-format conversions:
- bdu --> sequence-layer-bdu
- bdu-frame --> sequence-layer-bdu-frame
- frame-layer --> sequence-layer-frame-layer
For these conversions, the only requirement is to push a sequence-layer
buffer prior to the data.
https://bugzilla.gnome.org/show_bug.cgi?id=738526
It prepares the template for stream-format conversion and implements
the following conversions:
- sequence-layer-bdu --> bdu
- sequence-layer-bdu-frame --> bdu-frame
- sequence-layer-frame-layer --> frame-layer
Work is done in the pre_push_frame() method.
https://bugzilla.gnome.org/show_bug.cgi?id=738526
This parses the frame_packing_arrangement() payload in the SEI message.
This information can be used by decoders to appropriately rearrange the
samples which belong to the Stereoscopic and Multiview High profiles.
https://bugzilla.gnome.org/show_bug.cgi?id=685215
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Some VC1 decoders can have different caps according to the WMV format,
i.e. WMV3 or WVC1.
So instead of keeping the first available caps, we intersect with the
current WMV format.
https://bugzilla.gnome.org/show_bug.cgi?id=738532
When stream-format is ASF or sequence-layer-raw-frame, we basically have
a raw frame, so we can parse it to extract some information such as the
keyframe flag. The only requirement is to have a valid sequence-header.
This commit parses the frame header and sets the DELTA_UNIT buffer flag
when the frame is not a keyframe.
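A minimal sketch of the flagging step (not the exact vc1parse code):

    #include <gst/gst.h>

    /* Mark non-keyframes as delta units so downstream can seek/drop correctly. */
    static void
    mark_frame (GstBuffer * buf, gboolean is_keyframe)
    {
      if (is_keyframe)
        GST_BUFFER_FLAG_UNSET (buf, GST_BUFFER_FLAG_DELTA_UNIT);
      else
        GST_BUFFER_FLAG_SET (buf, GST_BUFFER_FLAG_DELTA_UNIT);
    }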
https://bugzilla.gnome.org/show_bug.cgi?id=738519
The frame-layer header is represented as a sequence of 32-bit unsigned
integers serialized in little-endian byte order, so the framesize is in
the first 3 bytes.
See SMPTE 421M Annex L.
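A sketch of reading that word (helper name illustrative):

    #include <gst/gst.h>

    /* The frame-layer header is little-endian 32-bit words, so the 24-bit
     * framesize occupies the first three bytes on the wire. */
    static guint32
    read_framesize (const guint8 * frame_layer_header)
    {
      guint32 word = GST_READ_UINT32_LE (frame_layer_header);

      return word & 0x00ffffff;   /* low 24 bits = framesize */
    }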
https://bugzilla.gnome.org/show_bug.cgi?id=738243
The GST_BASE_PARSE_FRAME_FLAG_PARSING value is wrong, and the flag is
not being used at present. Hence change the value and comment it out.
This needs to be included in baseparse.h later on.
https://bugzilla.gnome.org/show_bug.cgi?id=737411
Or not-negotiated in the case we are actually not negotiated.
Currently we are getting assertions from
gst_pb_utils_add_codec_description_to_tag_list() because of NULL
caps.
https://bugzilla.gnome.org/show_bug.cgi?id=737186
If we don't have a seq_layer_buffer, we also don't have a valid
seq_layer, because they are set together in
gst_vc1_parse_handle_seq_layer().
So when the output header format is sequence-layer and we don't have a
seq_layer_buffer, we forge one from seq_hdr.
https://bugzilla.gnome.org/show_bug.cgi?id=736781
Sequence-layer and frame-layer are serialized in little-endian byte
order, except for the STRUCT_C and framedata fields, as described in
SMPTE 421M Annex L.
https://bugzilla.gnome.org/show_bug.cgi?id=736750
This commit fixes several issues with sequence-layer header forging in
update_caps():
- The 0x00000004 unsigned integer comes before STRUCT_C.
- Set the reserved bits of STRUCT_C to their values for simple/main
profiles in both sequence-layer header format and ASF header format.
- The sequence layer shall be represented as a sequence of 32-bit
unsigned integers serialized in little-endian byte order, except
for STRUCT_C which shall be serialized in big-endian byte order.
See SMPTE 421M Annex L for more details about sequence layer format.
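A sketch of forging the first words, assuming a 12-byte output buffer
(layout and constants per Annex L, helper name illustrative):

    #include <gst/gst.h>

    static void
    write_seq_layer_prefix (guint8 * out, guint32 num_frames, guint32 struct_c)
    {
      /* NUMFRAMES in the low 24 bits, constant 0xC5 in the top byte */
      GST_WRITE_UINT32_LE (out + 0, (num_frames & 0x00ffffff) | (0xC5u << 24));
      GST_WRITE_UINT32_LE (out + 4, 0x00000004);  /* word before STRUCT_C */
      GST_WRITE_UINT32_BE (out + 8, struct_c);    /* STRUCT_C, big-endian */
    }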
https://bugzilla.gnome.org/show_bug.cgi?id=736474
Do more elaborate validation of the input caps: what fields
are required and/or not allowed. Don't assume AVC3 format
input without codec_data field is byte-stream format. Fix
up some now-unreachable code (CID 1232800).
It should try to use bytestream in the cases where the format
is set to _FORMAT_NONE, as that seems to be what the 'else' clause
for bytestream can handle (by defaulting to _FORMAT_BYTESTREAM).
Always use a GstAdapter when collecting access units (alignment="au")
in either byte-stream or avcC format. This is required to properly
preserve config headers like SPS and PPS when invalid or broken NAL
units are subsequently parsed.
More precisely, this fixes scenarios like:
<SPS> <PPS> <invalid-NAL> <slice>
where we used to reset the output frame buffer when an invalid or
broken NAL was parsed, i.e. the SPS and PPS NAL units were lost, thus
preventing the next slice unit from being decoded, should it also
represent any valid data.
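A sketch of the adapter-based collection (illustrative, not the actual
h264parse code):

    #include <gst/base/gstadapter.h>

    /* Keep every parsed NAL of the current access unit in a GstAdapter so
     * config NALs like SPS/PPS survive a later broken NAL in the same AU. */
    static GstBuffer *
    collect_nal (GstAdapter * au_adapter, GstBuffer * nal, gboolean au_complete)
    {
      gst_adapter_push (au_adapter, nal);       /* adapter takes ownership */

      if (!au_complete)
        return NULL;

      /* hand out the whole access unit, SPS/PPS included */
      return gst_adapter_take_buffer (au_adapter,
          gst_adapter_available (au_adapter));
    }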
https://bugzilla.gnome.org/show_bug.cgi?id=732203
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Carefully track cases when skipping broken or invalid NAL units is
necessary. In particular, always allow NAL units to be processed
and let the gst_h264_parse_process_nal() function decide whether
the current NAL needs to be dropped or not.
This fixes parsing of streams with an SEI NAL buffering_period() message
inserted between SPS and PPS, or an SPS-Ext NAL following a traditional
SPS NAL unit, among other cases.
Practical examples from the H.264 AVC conformance suite include
alphaconformanceG, CVSE2_Sony_B, CVSE3_Sony_H, CVSEFDFT3_Sony_E
when parsing in stream-format=byte-stream,alignment=au mode.
https://bugzilla.gnome.org/show_bug.cgi?id=732203
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Improve parser state tracking by introducing new flags reflecting
it: "got-sps", "got-pps" and "got-slice". This is an addition for
robustness purposes.
The older have_sps and have_pps variables are kept because they have
a different meaning, i.e. they are used to decide when to submit
updated caps, and rather mean "have new SPS/PPS to
be submitted?"
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Use gst_h264_parser_identify_nalu_unchecked() to identify the next
NAL unit. We don't want to parse the full NAL unit, but only the
header bytes and possibly the first RBSP byte for identifying the
first_mb_in_slice syntax element.
Also fix the check for failure when returning from that function. The
only success condition is GST_H264_PARSER_OK, so check for it.
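A simplified sketch of that pattern (helper name illustrative):

    #include <gst/codecparsers/gsth264parser.h>

    /* Identify the next NAL without parsing its whole payload; only
     * GST_H264_PARSER_OK means a usable NAL unit was found. */
    static gboolean
    peek_next_nal (GstH264NalParser * parser, const guint8 * data, gsize size,
        GstH264NalUnit * nalu)
    {
      GstH264ParserResult res =
          gst_h264_parser_identify_nalu_unchecked (parser, data, 0, size, nalu);

      return res == GST_H264_PARSER_OK;
    }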
https://bugzilla.gnome.org/show_bug.cgi?id=732154
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
The gst_h264_parse_pps() function dynamically allocates the slice
group ids map array, so it needs to be cleared before parsing a
new PPS NAL unit again, or when it is no longer needed.
Likewise, a proper copy to the internal NAL parser state needs to be
performed so as to avoid double-free corruption.
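A minimal sketch of the cleanup idea (not the exact parser code):

    #include <glib.h>

    /* Release the dynamically allocated slice group id map before the PPS
     * structure is reused; copies must be deep so the same pointer is
     * never freed twice. */
    static void
    clear_slice_group_map (guint8 ** slice_group_id)
    {
      g_free (*slice_group_id);
      *slice_group_id = NULL;
    }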
https://bugzilla.gnome.org/show_bug.cgi?id=707282
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
The recovery point SEI message helps a decoder in determining if the
decoding process would produce acceptable pictures for display after
the decoder initiates random access or after the encoder indicates
a broken link in the coded video sequence.
This is not used in the h264parse element, but it could help debugging.
https://bugzilla.gnome.org/show_bug.cgi?id=723380
In ISO/IEC 14496-15, the minimum size of a HEVCDecoderConfigurationRecord
(i.e., the contents of a hvcC box) is 23 bytes. However, the code in h265parse
checks that the size of this data is not less than 28 bytes, and it refuses to
accept caps if the check fails. The result is that standards-conformant streams
that don't carry any parameter sets in their hvcC boxes won't play.
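A minimal sketch of the corrected check (names illustrative):

    #include <glib.h>

    /* A HEVCDecoderConfigurationRecord is at least 23 bytes even with zero
     * parameter-set arrays, so that is the minimum the caps check enforces. */
    #define MIN_HVCC_SIZE 23

    static gboolean
    codec_data_is_valid (gconstpointer codec_data, gsize size)
    {
      return codec_data != NULL && size >= MIN_HVCC_SIZE;
    }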
https://bugzilla.gnome.org/show_bug.cgi?id=731783
The presence of a picture extension header identifies the stream as MPEG-2.
We are supposed to set the mpegversion to 2 if there is a picture extension,
instead of blindly setting the version to 1.
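A minimal sketch of the caps update (helper name illustrative, caps assumed
writable here):

    #include <gst/gst.h>

    static void
    update_mpeg_version (GstCaps * caps, gboolean saw_pic_extension)
    {
      /* Only the presence of a picture extension proves MPEG-2. */
      gst_caps_set_simple (caps, "mpegversion", G_TYPE_INT,
          saw_pic_extension ? 2 : 1, NULL);
    }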
https://bugzilla.gnome.org/show_bug.cgi?id=726028
It is not perfect but it allows us to be sure that the mandatory 'framerate'
field is present in the caps.
As soon as some information is found in the stream, it will be
updated.
https://bugzilla.gnome.org/show_bug.cgi?id=723243
An SEI RBSP can contain more than one SEI message, as specified in
7.4.2.3.1.
This commit changes the parser API: the gst_h264_parser_parse_sei()
function now creates and fills a GArray containing GstH264SEIMessage entries.
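A sketch of consuming the reworked API (error handling trimmed):

    #include <gst/codecparsers/gsth264parser.h>

    static void
    handle_sei (GstH264NalParser * parser, GstH264NalUnit * nalu)
    {
      GArray *messages = NULL;
      guint i;

      if (gst_h264_parser_parse_sei (parser, nalu, &messages) != GST_H264_PARSER_OK)
        return;

      for (i = 0; i < messages->len; i++) {
        GstH264SEIMessage *sei = &g_array_index (messages, GstH264SEIMessage, i);
        (void) sei;               /* inspect sei->payloadType, etc. */
      }
      g_array_free (messages, TRUE);
    }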
https://bugzilla.gnome.org/show_bug.cgi?id=721715
mpeg4videoparse might not push buffers while parsing. If those buffers
carry the DISCONT flag, it gets lost and downstream won't get any
buffer with the flag set.
Fix it by adding DISCONT to the next pushed buffer.
This makes backwards playback work.
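A sketch of carrying the flag over (state handling illustrative):

    #include <gst/gst.h>

    /* If an input buffer with DISCONT was consumed without producing output,
     * set the flag on the next buffer that actually gets pushed. */
    static GstFlowReturn
    push_with_pending_discont (GstPad * srcpad, GstBuffer * buf,
        gboolean * pending_discont)
    {
      if (*pending_discont) {
        buf = gst_buffer_make_writable (buf);
        GST_BUFFER_FLAG_SET (buf, GST_BUFFER_FLAG_DISCONT);
        *pending_discont = FALSE;
      }
      return gst_pad_push (srcpad, buf);
    }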
Conversion to byte-stream/nal crashes without that because the
baseparse frame covering all NALUs is finished for the first NALU and
then used again for parsing the second NALU, except that by then the
buffer of the frame is already gone. Instead we create temporary
frames for every NALU.
In case more data than a start code alone is needed to decide whether
it ends a frame, arrange for more input data and decide when available.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=711627
When the input buffer is empty and we need more data to determine
whether or not to terminate the previous frame, the last start code
location needs to be set to 4 bytes before the current position
(the start code is 32 bits in size).
https://bugzilla.gnome.org/show_bug.cgi?id=711627
The initial par_n = par_d = 0; was always overwritten since the
switch/case handles all values.
Also remove the 0 case (its handling is the same as default).
When outputting in AVC3 stream format, the codec_data should not
contain any SPS or PPS, because they are embedded inside the stream.
In the case of avc->bytestream h264parse will push the SPS and PPS from
codec_data downstream at the start of the stream, at intervals
controlled by "config-interval" and when there is a codec_data change.
In the case of avc3->bytestream h264parse detects that there is
already SPS/PPS in the stream and sets h264parse->push_codec to FALSE.
Therefore avc3->bytestream was already supported, except for the stream
type.
In the case of bytestream->avc h264parse will generate codec_data caps
from the parsed SPS/PPS in the stream. However it does not remove these
SPS/PPS from the stream. bytestream->avc3 is the same as bytestream->avc
except that the codec_data must not have any SPS/PPS in it.
|--------------+-------------+-------------------|
|stream-format | SPS in-band | SPS in codec_data |
|--------------+-------------+-------------------|
| avc | maybe | always |
|--------------+-------------+-------------------|
| avc3 | always | never |
|--------------+-------------+-------------------|
Amendment 2 of ISO/IEC 14496-15 (AVC file format) defines a new
structure for fragmented MP4 called "avc3". The principal difference
between AVC1 and AVC3 is the location of the codec initialisation
data (e.g. SPS, PPS). In AVC1 this data is placed in the initial MOOV box
(moov.trak.mdia.minf.stbl.stsd.avc1) but in AVC3 this data goes in the
first sample of every fragment.
https://bugzilla.gnome.org/show_bug.cgi?id=702004
The Sequence Header Data Structure STRUCT_C for Advanced Profile
has only one valid field, which is the profile indicator. Don't
use the reserved fields for the fps update as is done for Simple/Main profile.
https://bugzilla.gnome.org/show_bug.cgi?id=705667
The Sequence Header Data Structure STRUCT_A for Advanced Profile
may be eight consecutive zero bytes. Don't try to override the
width and height values in this case.
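A minimal sketch of the guard (helper name illustrative):

    #include <string.h>
    #include <glib.h>

    /* Advanced Profile STRUCT_A may be eight zero bytes; in that case keep
     * the width/height we already have. */
    static gboolean
    struct_a_is_empty (const guint8 * struct_a /* 8 bytes */)
    {
      static const guint8 zeros[8] = { 0, };

      return memcmp (struct_a, zeros, 8) == 0;
    }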
https://bugzilla.gnome.org/show_bug.cgi?id=705667
Updating caps results in downstream elements potentially reconfiguring
themselves (such as decoders). If we do this in between keyframes, those
elements would be reconfigured and would handle garbage until the next
keyframe.
Instead, only send (potentially) new codec_data when we have *both* SPS
and PPS.
https://bugzilla.gnome.org/show_bug.cgi?id=705333
The caps should always represent what the user is supposed to see.
So if there is a sequence_display_extension associated with the
stream, then use display_horizontal_size/display_vertical_size
to update the src caps (if they are less than the values provided
by the sequence header).
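A minimal sketch of picking the size to advertise (helper name illustrative):

    #include <glib.h>

    /* Prefer the display size from the sequence display extension, but never
     * exceed what the sequence header already declares. */
    static void
    pick_display_size (guint seqhdr_w, guint seqhdr_h,
        guint disp_w, guint disp_h, guint * out_w, guint * out_h)
    {
      *out_w = (disp_w > 0 && disp_w < seqhdr_w) ? disp_w : seqhdr_w;
      *out_h = (disp_h > 0 && disp_h < seqhdr_h) ? disp_h : seqhdr_h;
    }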
https://bugzilla.gnome.org/show_bug.cgi?id=704009
Restore the original h264parser behaviour to report cropped dimensions
in size caps.
https://bugzilla.gnome.org/show_bug.cgi?id=694068
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Migrate the code to use the new parser API based on GstMpegVideoPacket.
Also try to optimize gst_mpegv_parse_process_config() by using more of
GstMpegVideoPacket and determining the extension_start_code_identifier
prior to calling the parser function for that extension packet.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Don't send any source caps yet if we're still in
drop-buffers-until-we-get-a-sequence-header mode.
Fixes transmuxing of many MPEG-TS/PS streams into
formats which require things like width, height or
codec_data on the input caps.
Also fixes issues when using playbin with decoder
sinks that want width/height etc.
https://bugzilla.gnome.org/show_bug.cgi?id=695879
API is now in baseparse in gstreamer.
Timestamps in MPEG-TS streams are based on the last timestamp
before the start code of the picture. GstBaseParse sets the
timestamp based on the beginning of the sequence header, if
one exists before the picture. This fixes the case where the
timestamp occurs in the MPEG-TS stream between the seq header
and picture start code.
Otherwise we will intersect with the srcpad template caps and add all the caps fields
that the parser will ever set, no matter whether downstream restricts these fields or not.
This requires upstream to set this field on the caps to negotiate successfully.
https://bugzilla.gnome.org/show_bug.cgi?id=690184
This allows filtering out videos for hardware decoders that do not
support GMC at all or only support a limited number of sprite warping
points (usually 1).
Right now decodebin will consider the pad template caps as fixed, and if a decoder
has restrictions on, for example, height/width it won't be autoplugged because
gst_caps_is_subset() fails as those fields are missing from the pad template caps.
We fix the issue here by making sure that the pad caps are fixed using data from
the stream.