Expose SEI data in the H.264 bitstream parser API and
extract closed captions and other things that are not
specified in the H.264 spec itself in the videoparser.
Based on patch by: Mathieu Duponchelle <mathieu@centricular.com>
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/940
when computing timecode metas. Depending on the value of that flag,
n_frames is to be interpreted as a number of fields or a number of
frames. As GstVideoTimeCodeMeta always deals with frames, we want
to scale that number when needed.
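As a rough illustration of that scaling (a minimal sketch, not the actual
h264parse code; the counts_fields argument is a hypothetical stand-in for
the flag mentioned above):

    #include <glib.h>

    /* GstVideoTimeCodeMeta always counts frames, so when the SEI counts
     * fields we halve the value (two fields make up one frame). */
    guint
    timecode_n_frames_for_meta (guint n_frames, gboolean counts_fields)
    {
      return counts_fields ? n_frames / 2 : n_frames;
    }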
This debug code will help determine why certain instances of closed
captions that are present in the Picture User Data are not actually
processed by the pipeline
vps/sps/pps in codec_data shouldn't be considered as inband data.
Otherwise, h26{4,5}parse never inserts them into the NAL stream in the
transform (packetized to byte-stream) use case.
Similar change as the one I did in h264parse. We want to process SEI
recovery point as keyframe so muxers will mark them as seek points and
decoders will be able to start decoding from them rather than waiting
for an IDR.
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/790
The spec states that "recovery point SEI message assists a decoder in
determining when the decoding process will produce acceptable
pictures for display after the decoder initiates random access or after the
encoder indicates a broken link in the coded video sequence."
Mark those as keyframes so muxers will mark them as seek points and
decoders will be able to start decoding from them rather than waiting
for an IDR.
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/790
Directly apply commit 7bb6443. This also fixes unexpected
NAL dropping when a nonzero "config-interval" is set.
(e.g., gst-launch-1.0 videotestsrc ! x265enc key-int-max=30 !
h265parse config-interval=30 ! avdec_h265 ! videoconvert ! autovideosink)
Similar to h264parse, the have_{vps,sps,pps} variables will be used
to decide whether to submit updated caps or not, and rather mean
"have new SPS/PPS to be submitted?"
See also https://bugzilla.gnome.org/show_bug.cgi?id=732203
and https://bugzilla.gnome.org/show_bug.cgi?id=754124
We used to have the same enum to represent H265 profiles and idc values.
Those are no longer the same with extension profiles defined from
version 2 of the spec.
Split those enums so the semantics of each are clearer and we'll be able
to add extension profiles to GstH265Profile.
Also add gst_h265_profile_tier_level_get_profile() to retrieve the
GstH265Profile from the GstH265ProfileTierLevel. It will be used to
implement the detection of extension profiles.
https://bugzilla.gnome.org/show_bug.cgi?id=793876
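A rough usage sketch of the new getter, assuming it takes the parsed
profile_tier_level carried by the codecparsers SPS struct (error handling
omitted):

    #include <gst/codecparsers/gsth265parser.h>

    /* Map the profile_tier_level parsed from an SPS to the higher-level
     * GstH265Profile enum; GST_H265_PROFILE_INVALID may be returned. */
    GstH265Profile
    profile_from_sps (const GstH265SPS * sps)
    {
      return gst_h265_profile_tier_level_get_profile (&sps->profile_tier_level);
    }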
This information could be used for example to pick a decoder supporting
a specific chroma and/or bit depth, like 4:2:2 10 bits.
It can also be used to inform the decoder earlier about the format it is
about to decode.
https://bugzilla.gnome.org/show_bug.cgi?id=792039
When input is not in byte-stream format there is no need to wait for the first
buffer before setting src caps. We already have all the information from the
input codec_data.
This allows us to configure downstream elements right away, letting them,
for example, already allocate their internal buffers as they know
the format of the input they are about to receive.
Same change as the one I just did in h264parse.
https://bugzilla.gnome.org/show_bug.cgi?id=790709
When input is in AVC format there is no need to wait for the first buffer
before setting src caps. We already have all the information from the
input codec_data.
This allows us to configure downstream elements right away, letting them,
for example, already allocate their internal buffers as they know
the format of the input they are about to receive.
https://bugzilla.gnome.org/show_bug.cgi?id=790709
Try prioritizing downstream's caps over upstream's so the parser can be
configured in "passthrough" mode when possible and be saved from
doing useless conversions.
Exact same change as the one I just did in h264parse.
https://bugzilla.gnome.org/show_bug.cgi?id=790628
Try prioritizing downstream's caps over upstream's so the parser can be
configured in "passthrough" mode when possible and be saved from
doing useless conversions.
https://bugzilla.gnome.org/show_bug.cgi?id=790628
In this case, we assume that the format is jpc, and we infer the color
space from the number of components. This allows the parser to process a
jpc disk file coming from a filesrc element.
https://bugzilla.gnome.org/show_bug.cgi?id=783291
The RSIZ capabilities tag stores the JPEG 2000 profile. In the case of
broadcast profiles, it also stores the broadcast main level, which
specifies the bit rate.
https://bugzilla.gnome.org/show_bug.cgi?id=782337
Insert an AU delimiter by default if the AU delimiter is missing from
upstream. This should be done only in the case of byte-stream format.
Note that:
We have to compensate for the new bytes added for the AU, otherwise
insertion of PPS/SPS will use wrong offsets and overwrite wrong data.
Also mark the AU delimiter blob const, and use frame->out_buffer for
storing the output to keep baseparse assumptions valid.
Original-Patch-By: Michal Lazo <michal.lazo@mdragon.org>
Helped by Sebastian Dröge <sebastian@centricular.com>
https://bugzilla.gnome.org/show_bug.cgi?id=736213
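For reference, a sketch of the kind of blob that gets inserted for a
byte-stream H.264 AU delimiter (values derived from the spec, not copied
from the patch):

    #include <glib.h>

    static const guint8 au_delim[] = {
      0x00, 0x00, 0x00, 0x01,  /* start code */
      0x09,                    /* NAL header: nal_ref_idc=0, nal_unit_type=9 */
      0xf0                     /* primary_pic_type=7 ("any") + rbsp_trailing_bits */
    };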
Those are the rules:
In the SPS:
* if frame_mbs_only_flag=1 => all frame progressive
* if frame_mbs_only_flag=0 => field_pic_flag defines if each frame is
progressive or interlaced, thus the mode is 'mixed' in GStreamer
terms.
https://bugzilla.gnome.org/show_bug.cgi?id=779309
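A condensed sketch of that rule in terms of the codecparsers SPS struct
(not the actual h264parse code):

    #include <gst/video/video.h>
    #include <gst/codecparsers/gsth264parser.h>

    /* frame_mbs_only_flag=1: every coded picture is a progressive frame.
     * frame_mbs_only_flag=0: field_pic_flag in each slice decides, so the
     * stream as a whole is "mixed" in GStreamer terms. */
    GstVideoInterlaceMode
    interlace_mode_from_sps (const GstH264SPS * sps)
    {
      return sps->frame_mbs_only_flag
          ? GST_VIDEO_INTERLACE_MODE_PROGRESSIVE
          : GST_VIDEO_INTERLACE_MODE_MIXED;
    }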
... rather than when determining when to end the frame.
The opportunity to do so might not come when forced to drain,
and it seems nicer anyway to do so at parse wrapup time.
MSVC warns about this because it's a C++ compiler, and the warning
actually catches useful things such as an incorrect 'gboolean' return
value for functions that return GstFlowReturn, so let's do explicit
conversions to reduce the noise and increase its efficacy.
https://github.com/mesonbuild/meson
With contributions from:
Tim-Philipp Müller <tim@centricular.com>
Matej Knopp <matej.knopp@gmail.com>
Jussi Pakkanen <jpakkane@gmail.com> (original port)
Highlights of the features provided are:
* Faster builds on Linux (~40-50% faster)
* The ability to build with MSVC on Windows
* Generate Visual Studio project files
* Generate XCode project files
* Much faster builds on Windows (on-par with Linux)
* Seriously fast configure and building on embedded
... and many more. For more details see:
http://blog.nirbheek.in/2016/05/gstreamer-and-meson-new-hope.html
http://blog.nirbheek.in/2016/07/building-and-developing-gstreamer-using.html
Building with Meson should work on both Linux and Windows, but may
need a few more tweaks on other operating systems.
When skipping data, check if they are filler bytes. If so, drop the
data instead of skipping. We don't want to output filler bytes, but they
shouldn't cause a discontinuity.
https://bugzilla.gnome.org/show_bug.cgi?id=768125
If the input alignment claims AU alignment, each received
buffer should contain a complete video frame, so never hold over parts
of buffers for later processing. Also reduces latency, as packets
are parsed/converted and output immediately instead of 1 buffer
later.
Fixes a problem where an (arguably disallowed) padding byte on the
end of a buffer is detected as an extra byte in the following
start code, and messes up the timestamping that should apply to
that start code.
And always set the sampling field on the src caps, if necessary guessing a
correct value for it from the colorspace field.
Also, some cleanup: removed the redundant sampling enum.
https://bugzilla.gnome.org/show_bug.cgi?id=766236
We get into this code path if the profile is already constrained-baseline and
downstream does not support constrained-baseline. So we should try baseline
and the other compatible profiles.
https://bugzilla.gnome.org/show_bug.cgi?id=764448
The parser handles the downstream force-key-unit event incorrectly,
it tries to parse it as an upstream force-key-unit event, does not
check the return value, and then uses uninitialized memory in
"all_headers" boolean variable.
https://bugzilla.gnome.org/show_bug.cgi?id=763793
This is a regression from when mpegvideoparse was switched to
use the codecparsers library.
The problem is that the high bit of the profile_and_level is used
to specify non-hierarchical profiles and levels. Unfortunately we
were discarding that information.
Expose that escape bit, and use it in the element
https://bugzilla.gnome.org/show_bug.cgi?id=763220
Enabling passthrough mode is causing multiple issues:
For NAL-aligned multi-resolution streams, passthrough mode
makes h264parse unable to advertise the new resolutions.
It also causes issues while parsing MVC streams which have two
separate layers (base-view and non-base-view).
This fix is only a temporary workaround.
For MVC, proper fixes needed in many places:
(handle prefix nal unit, handle non-base-view slice nal extension,
fix the picture_start detection for multi-layer-mvc streams etc)
https://bugzilla.gnome.org/show_bug.cgi?id=758656
Since commit b77f8e172a the new value
assigned to mview_mode hasn't been used. That commit changed the following
"if" check to an "else if", which means the original value of mview_mode
is used.
When converting from avc to byte-stream, there will not be any codec_data
in the src caps. Remove it before the equality check to avoid sending caps
events downstream on every SPS/PPS change.
https://bugzilla.gnome.org/show_bug.cgi?id=761014
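A sketch of the comparison (hypothetical helper name, not the actual
h264parse code): strip codec_data from both sets of caps before checking
equality so a byte-stream output doesn't trigger a caps event per SPS/PPS.

    #include <gst/gst.h>

    gboolean
    caps_equal_ignoring_codec_data (const GstCaps * current, const GstCaps * updated)
    {
      GstCaps *a = gst_caps_copy (current);
      GstCaps *b = gst_caps_copy (updated);
      gboolean equal;

      /* gst_structure_remove_field() is a no-op if the field is absent. */
      gst_structure_remove_field (gst_caps_get_structure (a, 0), "codec_data");
      gst_structure_remove_field (gst_caps_get_structure (b, 0), "codec_data");

      equal = gst_caps_is_equal (a, b);
      gst_caps_unref (a);
      gst_caps_unref (b);
      return equal;
    }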
If we have a stream that contains an unchanging SPS/PPS for every video frame,
we don't need to constantly query downstream for its supported caps if the
current caps are compatible with the negotiated caps.
https://bugzilla.gnome.org/show_bug.cgi?id=761014
When sps data is NULL, the buffer allocated and mapped is not being freed.
In this scenario there is no need to allocate the buffer as we are supposed to return NULL.
https://bugzilla.gnome.org/show_bug.cgi?id=761070
rename gst-launch --> gst-launch-1.0
replace old elements with new ones (ffmpegcolorspace -> videoconvert, ffenc_** -> avenc_**)
fix caps in examples
https://bugzilla.gnome.org/show_bug.cgi?id=759432
This is to support byte-stream decoders that do not remember the
PPS/SPS after a flush. This is not needed by all decoders, but is
harmless for those that do remember.
https://bugzilla.gnome.org/show_bug.cgi?id=758405
As it's recursive, gst_pad_get_allowed_caps() may also return
EMPTY for anything incompatible downstream. EMPTY is not a valid caps
value for gst_caps_fixate(). This led to an assertion and then a crash.
Ideally, the negotiate function should be refactored to have a return
value, so we could make the negotiation fail earlier.
https://bugzilla.gnome.org/show_bug.cgi?id=754122
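A sketch of the defensive check (hypothetical helper, not the actual
h264parse code): never hand EMPTY caps to gst_caps_fixate().

    #include <gst/gst.h>

    GstCaps *
    get_fixated_allowed_caps (GstPad * srcpad)
    {
      GstCaps *allowed = gst_pad_get_allowed_caps (srcpad);

      if (allowed == NULL)
        return NULL;

      if (gst_caps_is_empty (allowed)) {
        /* Downstream is incompatible; fail negotiation cleanly instead of
         * tripping the assertion in gst_caps_fixate(). */
        gst_caps_unref (allowed);
        return NULL;
      }

      return gst_caps_fixate (allowed);
    }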
The SPS struct might be filled out by a call to
gst_h264_parser_parse_subset_sps, which fills out
dynamically allocated data and requires a call
to gst_h264_sps_clear() to free it. Also make sure
to clear out any allocated SPS data when returning
an error.
https://bugzilla.gnome.org/show_bug.cgi?id=753306
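A sketch of the ownership rule (hypothetical helper, not the actual
h264parse code): once a GstH264SPS may have been filled by the subset-SPS
parser, it must be released with gst_h264_sps_clear() on every path,
including error returns.

    #include <gst/codecparsers/gsth264parser.h>

    void
    use_and_release_sps (GstH264SPS * sps, gboolean parse_ok)
    {
      if (!parse_ok) {
        gst_h264_sps_clear (sps);   /* error path: free allocated data anyway */
        return;
      }

      /* ... consume the SPS fields here ... */

      gst_h264_sps_clear (sps);
    }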
Some video bitstreams report a too restrictive set of profiles. If a video
decoder were to strictly follow the indicated profile, it wouldn't support that
stream, whereas it could in theory and in practice. So we should relax the
profile restriction to allow the decoder to get connected with the parser.
https://bugzilla.gnome.org/show_bug.cgi?id=747613
A VPS is not mandatory, so there is no need to check for its presence before
setting the caps. Because of the check, sticky event mishandling happens
in streams which don't have a VPS.
https://bugzilla.gnome.org/show_bug.cgi?id=752807
Don't throw away AU delimiter(s) that precede the SPS/PPS. Should
fix MPEG-TS playback on iOS/Quicktime when muxing streams that
already have AU delimiters.
See https://bugzilla.gnome.org/show_bug.cgi?id=736213 for getting
h264parse to insert AU delimiters when they don't already
exist.
Move the pixel-aspect-ratio calculations higher up in caps
determination, so the results are available for a call to
gst_video_multiview_guess_half_aspect() when stereoscopic video
is detected.
Wait until at least one keyframe has been parsed before
deciding to switch to passthrough mode, in case the
stream contains SEI messages that supplement the output
caps - for example by providing stereoscopic information
We were off by one byte in the matching
It should be (using 24 bit matching):
* startcode : 0000 0000 0000 0000 1000 00xx
* mask (bin) : 1111 1111 1111 1111 1111 1100
* mask (hex) : f f f f f c
* match : 0 0 0 0 8 0
https://bugzilla.gnome.org/show_bug.cgi?id=750685
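A worked check of that 24-bit match (standalone C, values copied from
above): a candidate matches when (bits & 0xfffffc) == 0x000080.

    #include <glib.h>
    #include <stdio.h>

    int
    main (void)
    {
      const guint32 mask = 0xfffffc;
      const guint32 match = 0x000080;
      const guint32 candidates[] = { 0x000080, 0x000083, 0x000084, 0x000180 };

      for (gsize i = 0; i < G_N_ELEMENTS (candidates); i++)
        printf ("0x%06x -> %s\n", candidates[i],
            (candidates[i] & mask) == match ? "start code" : "no match");

      return 0;
    }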
Like SPS/PPS, they do contain information which will be needed to
decode the following data (as per the definition of the flag).
This also ensures that the series of SPS/PPS/SEI NALUs before a keyframe
can be considered as one contiguous header.
In the H263 spec, CPFMT is present only if the use of a custom
picture format is signalled in PLUSPTYPE and UFEP is "001",
so we need to check params->format and read the CPFMT only if the
value is 6 (custom source format); otherwise it's not present and
wrong data would be parsed.
When reading the CPFMT, the width and height were not
calculated correctly (wrong bitmask).
https://bugzilla.gnome.org/show_bug.cgi?id=749253
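A minimal sketch of that rule (simplified, not the actual gsth263parser
code; the constant name is made up):

    #include <glib.h>

    #define H263_SOURCE_FORMAT_CUSTOM 6   /* "custom source format" */

    /* CPFMT is only present when UFEP is "001" and the PLUSPTYPE source
     * format signals a custom picture format. */
    gboolean
    should_read_cpfmt (guint ufep, guint source_format)
    {
      return ufep == 1 && source_format == H263_SOURCE_FORMAT_CUSTOM;
    }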
Don't use the APIs in codec-utils to extract the profile and level
syntax elements, since they give wrong results if emulation prevention
bytes exist in the byte-stream data.
https://bugzilla.gnome.org/show_bug.cgi?id=747613
Don't use the APIs in codec-utils to extract the profile, tier and level
syntax elements, since they give wrong results if emulation prevention
bytes exist in the byte-stream data.
https://bugzilla.gnome.org/show_bug.cgi?id=747613
The detection for missing format/alignment is done way before this
codepath is reached (at which point we have already decided of a
format and alignment).
CID #1232800
This prevents it from going into passthrough after receiving 2
byte-stream caps (different ones) as it would keep the have_pps and
have_sps set to true and would just go into passthrough without
updating its caps.
This patch makes it reset its stream information to restart properly
when new caps are received.
https://bugzilla.gnome.org/show_bug.cgi?id=745409
This patch calls gst_h264_parser_parse_subset_sps() when an
SPS subset NAL type is found.
All the bits required for parsing the SPS subset NALs were
already there, we just need to call them when this NAL type
is found.
With this parsing, the number of views (minus 1) attribute is
filled, which was a requirement for negotiating the stereo-high
profile.
https://bugzilla.gnome.org/show_bug.cgi?id=743174
Initial support for MVC NAL units. It is only needed to propagate the
complete set of NAL units downstream at this time.
https://bugzilla.gnome.org/show_bug.cgi?id=696135
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Some video bitstreams report a too restrictive set of profiles. If a video
decoder were to strictly follow the indicated profile, it wouldn't support that
stream, whereas it could in theory and in practice. So we should relax the
profile restriction to allow the decoder to get connected with the parser.
https://bugzilla.gnome.org/show_bug.cgi?id=739992
Every time a buffer is provided by baseparse, we parse all the data from the beginning.
But since some of the data will already have been parsed in previous iterations,
it doesn't make much sense to keep parsing the same data every time.
Hence, skip the data already read in previous iterations to improve parsing performance.
https://bugzilla.gnome.org/show_bug.cgi?id=740058
Some video bitstreams report a too restrictive set of profiles. If a video
decoder were to strictly follow the indicated profile, it wouldn't support that
stream, whereas it could in theory and in practice. So we should relax the
profile restriction to allow the decoder to get connected with the parser.
https://bugzilla.gnome.org/show_bug.cgi?id=739992
Read PNG data chunk in one go by letting the parser
base class know the size we need, so that it doesn't
drip-feed us small chunks of data (causing a lot of
reallocs and memcpy in the process) until we have
everything.
Improves parsing performance of very large PNG files
(65MB) from ~13 seconds to a couple of millisecs.
https://bugzilla.gnome.org/show_bug.cgi?id=736176
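A sketch of the mechanism, assuming a GstBaseParse subclass like
pngparse: once the chunk length is known from the header, ask baseparse
for that many bytes so handle_frame() sees the whole chunk at once.

    #include <gst/base/gstbaseparse.h>

    static void
    request_whole_chunk (GstBaseParse * parse, guint chunk_size)
    {
      /* baseparse will now accumulate at least this much data before
       * calling our handle_frame() again, avoiding repeated reallocs. */
      gst_base_parse_set_min_frame_size (parse, chunk_size);
    }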
This commit adds a helper to convert a frame to frame-layer format and
uses it to implement these two stream-format conversions:
- asf --> sequence-layer-frame-layer
- asf --> frame-layer
In simple/main profile, we basically have a raw frame, so building a
frame layer isn't too complicated. But in advanced profile, the first
frame-layer should contain the sequence-header, entrypoint and frame, and
each keyframe should contain an entrypoint, so we have to handle these
carefully.
https://bugzilla.gnome.org/show_bug.cgi?id=738526
Add a helper to check that the output stream-format is coherent with the
profile and header-format. It also checks if we know how to do the
conversion if the input stream-format differs from the selected
output format.
So, in case output stream-format is not allowed, it will now fail at
negotiation rather than in pre_push_frame.
https://bugzilla.gnome.org/show_bug.cgi?id=738526
This commit introduces a helper to convert an ASF frame to the BDU format
with startcodes, and uses this helper to implement the following
stream-format conversions:
- asf --> bdu
- asf --> sequence-layer-bdu
- asf --> sequence-layer-raw-frame
https://bugzilla.gnome.org/show_bug.cgi?id=738526
It adds support for the following stream-format conversions:
- bdu --> sequence-layer-bdu
- bdu-frame --> sequence-layer-bdu-frame
- frame-layer --> sequence-layer-frame-layer
For these conversions, the only requirement is to push a sequence-layer
buffer prior to the data.
https://bugzilla.gnome.org/show_bug.cgi?id=738526
It prepares the template for stream-format conversion and implements
the following conversions:
- sequence-layer-bdu --> bdu
- sequence-layer-bdu-frame --> bdu-frame
- sequence-layer-frame-layer --> frame-layer
Work is done in the pre_push_frame() method.
https://bugzilla.gnome.org/show_bug.cgi?id=738526