The av1C box is optional, so dropping the parsing does not break anything
fundamental, and there seems to be no historical record of what version 0
even looks like, while the comments and the parsing disagreed with each
other.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4027>
When using qtdemux in a pipeline that should only work as a pure demuxer (not
for actual playback), qtdemux shouldn't emit new GstSegments that correct the
start time (jump to the future) to ensure that the user experiences no
playback delay. By doing so, it generates wrong segments when an append of
data from the past happens. When that happens, downstream elements such as
parsers (e.g. aacparse) may clip the buffers lying before the GstSegment and
create problems in the GStreamer client app (e.g. WebKit).
Getting buffers clipped out because of the wrong GstSegments started becoming
a problem when this commit was introduced:
ab6e49e9cc audioparsers: add back segment clipping to parsers that have lost it
This clipping makes the DASH shaka 35 test from the MVT test suite[1] fail in
WebKitGTK/WPE (at least) and can potentially cause a number of other problems
in the WebKit Media Source Extensions (MSE) code.
Note that this new behaviour of not emitting new GstSegments only makes sense
when qtdemux is being used as a pure demuxer and not as part of a regular
pipeline. This is why the variant field has been added. When equal to
VARIANT_MSE_BYTESTREAM, it will make qtdemux behave differently in push mode,
taking decisions that meet the expectations for an MSE-like processing mode.
This kind of tweak has been done in the past for MSS streams, for instance.
That code has been refactored to use VARIANT_MSS_FRAGMENTED now, instead of
its own dedicated boolean flag.
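The mode is selected purely through caps. A minimal sketch of what an MSE-style
client could set on an appsrc feeding qtdemux (illustrative, not part of the
patch):
```
/* Illustrative only: the element and caps here are an assumption about
 * how a client would opt into the MSE variant, not code from the patch. */
GstCaps *caps = gst_caps_new_simple ("video/quicktime",
    "variant", G_TYPE_STRING, "mse-bytestream", NULL);
g_object_set (appsrc, "caps", caps, NULL);
gst_caps_unref (caps);
```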
Co-authored-by: Alicia Boya García <ntrrgc@gmail.com>
...who suggested using "variant: mse-bytestream" in the caps to identify that
mode, as proposed in her unmerged patch:
https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/issues/467
[1] https://github.com/rdkcentral/mvt
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3990>
All the RTP src pads were sharing the same stream-id while each actually
carries a different stream.
This was causing problems, for example when funneling the streams together
and then trying to split them again using 'streamiddemux'.
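A minimal sketch of giving each src pad its own stream-id (illustrative,
assuming the SSRC is available as a distinguishing value; not the literal
patch):
```
/* Illustrative sketch: derive a distinct stream-id per RTP src pad,
 * e.g. from the SSRC, instead of reusing one id for all pads. */
gchar *stream_id =
    gst_pad_create_stream_id_printf (srcpad, GST_ELEMENT (self), "%08x", ssrc);
gst_pad_push_event (srcpad, gst_event_new_stream_start (stream_id));
g_free (stream_id);
```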
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3866>
Deserialize socket control messages as GstSocketTimestampMessage only
if (level, type) is (SOL_SOCKET, SCM_TIMESTAMPNS).
Without this patch, messages with types SCM_RIGHTS or SCM_CREDENTIALS
could be deserialized as GstSocketTimestampMessage instead of
GUnixFDMessage or GUnixCredentialsMessage from gio.
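A sketch of the guard this implies (illustrative, with an assumed helper
name):
```
#include <gio/gio.h>
#include <sys/socket.h>

/* Illustrative helper (assumed name): only claim the control message for
 * (SOL_SOCKET, SCM_TIMESTAMPNS); other (level, type) pairs are left to
 * GIO's own GUnixFDMessage / GUnixCredentialsMessage deserializers. */
static gboolean
is_scm_timestampns (gint level, gint type)
{
  return level == SOL_SOCKET && type == SCM_TIMESTAMPNS;
}
```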
Fixes #1736
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3777>
AVC-Intra is a range of H.264-compliant intra-only codecs from
Panasonic. The codes and descriptions have been taken from VLC.
The (encumbered) sample I have here produces byte-stream H.264,
including SPS and PPS and no `avcC` box.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3739>
When calculating the presentation offset for CMAF input in live
playback, subtract the stream_time of the fragment from the
calculated presentation offset, so that the first fragment
is played at running time zero.
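A minimal sketch of the adjustment, with assumed names:
```
#include <gst/gst.h>

/* Illustrative only: names and signature are assumptions, not the
 * actual code. */
static GstClockTime
cmaf_presentation_offset (GstClockTime calculated_offset,
    GstClockTime fragment_stream_time)
{
  /* subtracting the fragment's stream time makes the first fragment
   * start at running time 0 */
  return calculated_offset - fragment_stream_time;
}
```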
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3680>
Using the framerate numerator N as the timescale is recommended by various
specifications for such fractional framerates, while for integer framerates
we continue using centiframes to allow for some more accuracy.
Using N means that no rounding error accumulates that would eventually lead
to outputting a packet with a different duration.
Some tools such as MediaInfo determine that a stream is variable
framerate if any packet has a different duration than the others, and
there is no reason I can see for not using the full 4 bytes of
resolution that the mp4 timescale offers.
Example problematic pipeline:
```
videotestsrc num-buffers=5001 ! video/x-raw,framerate=60000/1001,width=320,height=240 ! \
videoconvert ! x264enc bitrate=80000 speed-preset=1 tune=zerolatency ! h264parse ! \
video/x-h264,profile=high-10 ! mp4mux ! filesink location="result2.mp4"
```
This results in a media file that MediaInfo detects as variable
framerate because the 5000th packet has duration 99 instead of 100.
With this patch, the timescale is 60000 and all packets have duration
1001.
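A worked example of the difference, assuming the previous code used a
centiframe timescale (round (100 * 60000 / 1001) = 5994) for this framerate:
```
#include <glib.h>

static void
timescale_example (void)
{
  /* old (assumed): ~99.9999 ticks per frame, which rounds to 100 but lets
   * the error accumulate until one frame ends up with duration 99 */
  gdouble old_ticks = 5994 * 1001.0 / 60000.0;

  /* new: with timescale 60000 every frame is exactly 1001 ticks */
  guint new_ticks = 60000 * 1001 / 60000;

  g_print ("old: %f ticks, new: %u ticks\n", old_ticks, new_ticks);
}
```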
Related issue for context: https://bugzilla.gnome.org/show_bug.cgi?id=769041
Co-authored-by: Sebastian Dröge <sebastian@centricular.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3049>
This reverts the decision from
https://bugzilla.gnome.org/show_bug.cgi?id=754230
where it was decided that we rather play safe and only use the `tfdt` if
it is "significantly different" to the sum of sample durations.
As the specification says
If the time expressed in the track fragment decode time (‘tfdt’) box
exceeds the sum of the durations of the samples in the preceding
movie and movie fragments, then the duration of the last sample
preceding this track fragment is extended such that the sum now
equals the time given in this box.
we have to use the `tfdt` in general to allow it to signal gaps in
the stream.
A muxer producing fragments might not yet know the full duration of the
last sample of a previous fragment if the next fragment starts with a
gap, and knowing the actual start of the next fragment would potentially
require violating latency requirements.
Additionally, the presence of the `tfdt` makes it possible to avoid
accumulating rounding errors from summing up the durations.
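The resulting behaviour, as a sketch with assumed names:
```
#include <glib.h>

/* Illustrative only (assumed names): prefer the tfdt's
 * baseMediaDecodeTime over the running sum of sample durations, so gaps
 * signalled via tfdt are honoured and no rounding errors accumulate. */
static guint64
fragment_decode_time (gboolean have_tfdt, guint64 tfdt_decode_time,
    guint64 sum_of_sample_durations)
{
  return have_tfdt ? tfdt_decode_time : sum_of_sample_durations;
}
```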
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3586>
The rtpjitterbuffer test drop_messages_interval uses a GstClockTime for
the message drop interval, but the property is defined as a guint. On
systems where guint is 32 bits, passing the 64-bit value through
g_object_set()'s varargs can cause it to fail to read the arguments
properly.
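The shape of the fix (the property name comes from the test; the rest is
illustrative):
```
#include <gst/gst.h>

static void
set_drop_interval (GstElement * jitterbuffer)
{
  guint interval = 10;  /* guint, matching the property's type */

  /* passing a 64-bit GstClockTime here instead makes g_object_set()
   * misread its varargs where guint is 32 bits */
  g_object_set (jitterbuffer, "drop-messages-interval", interval, NULL);
}
```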
Fixes: #1656
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3580>
If we keep the old events they can end up being passed to the app, which could
discard the protection information because it has been seen before.
Drive-by improvement: use g_queue_clear_full() instead of foreach+clear for the
protection events.
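The drive-by change boils down to (the queue field name is an assumption):
```
/* Illustrative: unref and remove all queued protection events in one
 * call instead of g_queue_foreach() + g_queue_clear(). */
g_queue_clear_full (&stream->protection_event_queue,
    (GDestroyNotify) gst_event_unref);
```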
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3547>
jpegdec is capable of parsing input frames, but if jpegparse is upstream
there's no need to reparse them. This patch configures jpegdec as
packetized, skipping parsing, if the negotiated sink caps have the boolean
field 'parsed' set to true.
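A sketch of the check (illustrative, not the literal patch):
```
#include <gst/video/gstvideodecoder.h>

/* Illustrative only: skip jpegdec's own parsing when upstream
 * (e.g. jpegparse) already marks the stream as parsed. */
static void
configure_packetized (GstVideoDecoder * decoder, GstCaps * sinkcaps)
{
  GstStructure *s = gst_caps_get_structure (sinkcaps, 0);
  gboolean parsed = FALSE;

  gst_structure_get_boolean (s, "parsed", &parsed);
  gst_video_decoder_set_packetized (decoder, parsed);
}
```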
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2464>