Because we treat raw audio chunks/samples as keyframes, they were interfering
with seek time adjustment.
This became apparent when the accompanying video stream was I-frame only,
for example ProRes.
Since raw audio streams can be seeked freely, it's fine to just ignore them here,
giving priority to the real keyframes in the video stream.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4946>
With a single regular image file path provided (without %05d),
the element was stuck in an infinite loop counting the frames in
gst_image_sequence_src_count_frames().
This change allows outputting a single image file from the element
for a given number of buffers.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5471>
We were already converting the pad last timestamp to running time but
not the segment position.
This segment position is used by gst_aggregator_simple_get_next_time()
to compute the waiting time when aggregating.
Those waiting times were wrong in my live pipeline using the system
clock, resulting in the aggregator never waiting at all.
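A minimal sketch of the missing conversion, with pad and field names
assumed rather than taken from the patch:

```c
#include <gst/gst.h>

/* sketch: convert the position to running time before storing it on the
 * source pad segment, mirroring what was already done for the pad's last
 * timestamp (the pad/field names here are assumptions) */
static void
update_srcpad_position (const GstSegment * sink_segment,
    GstSegment * src_segment)
{
  guint64 rt = gst_segment_to_running_time (sink_segment, GST_FORMAT_TIME,
      sink_segment->position);

  if (rt != GST_CLOCK_TIME_NONE)
    src_segment->position = rt;
}
```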
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5460>
scanlines->m1 = same line of the previous field
scanlines->t0 = line above of the current field
scanlines->b0 = line below of the current field
scanlines->mp = same line of the next field
Deinterlacing a field weaved frame:
When deinterlacing the top field, the next bottom field is available
(part of the same frame). But when deinterlacing the bottom field,
the next top field (part of the next frame) is not available and
scanlines->mp equals NULL.
In this case it's better to use the greedy algorithm with the previous
field (used twice) rather than linear interpolation of the current field.
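A simplified sketch of that choice; the greedy kernel below is a toy
stand-in for the plugin's real one:

```c
#include <glib.h>

/* toy greedy kernel: per pixel, pick the temporal neighbour (previous or
 * next field) that is closest to the spatial average of the current field */
static void
greedy_line (guint8 * out, const guint8 * m1, const guint8 * mp,
    const guint8 * t0, const guint8 * b0, gsize width)
{
  gsize x;

  for (x = 0; x < width; x++) {
    gint avg = (t0[x] + b0[x] + 1) / 2;

    out[x] = (ABS ((gint) m1[x] - avg) <= ABS ((gint) mp[x] - avg))
        ? m1[x] : mp[x];
  }
}

static void
interpolate_missing_line (guint8 * out, const guint8 * m1, const guint8 * mp,
    const guint8 * t0, const guint8 * b0, gsize width)
{
  if (mp == NULL) {
    /* bottom field of the last frame: no next field available, so run the
     * greedy method with the previous field used twice instead of falling
     * back to linear interpolation of t0/b0 */
    greedy_line (out, m1, m1, t0, b0, width);
  } else {
    greedy_line (out, m1, mp, t0, b0, width);
  }
}
```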
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5331>
If we end up with GST_CLOCK_TIME_NONE as the running time for an RTP packet
then it can't be used for bitrate estimation, nor for constructing
the next RTCP SR. Both would end up with completely wrong
values, and an RTCP SR with wrong values can easily break
synchronization in receivers.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5329>
The timestamp offset can be negative, and its magnitude can exceed the
latency introduced by the rtpjitterbuffer, so the overall timeout offset
can be negative.
Using the negative offset to calculate how many packets can still
arrive in time when encountering a lost packet in an equidistant stream
would then overflow: instead of considering fewer packets lost, a lot
more packets were considered lost.
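The shape of the fix, sketched with illustrative names rather than the
jitterbuffer's actual variables: clamp the signed budget before the
unsigned division.

```c
#include <gst/gst.h>

/* sketch: if latency + ts_offset is negative, treat the budget as zero so
 * the unsigned division below cannot wrap around and report that a huge
 * number of packets could still arrive in time */
static guint
packets_still_in_time (GstClockTime latency, GstClockTimeDiff ts_offset,
    GstClockTime packet_spacing)
{
  GstClockTimeDiff budget = (GstClockTimeDiff) latency + ts_offset;

  if (budget <= 0 || packet_spacing == 0)
    return 0;

  return (guint) (budget / (GstClockTimeDiff) packet_spacing);
}
```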
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5296>
This element passes RTP packets along unchanged while appearing as an RTP
payloader element.
This is useful, for example, when using the gst-rtsp-server library in the
case where you are receiving RTP packets from a different source and want
to serve them over RTSP: the gst-rtsp-server library expects the element
marked as payX to be an RTP payloader element and assumes certain
properties are available.
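A hypothetical usage sketch with gst-rtsp-server; the payloader element
name below is a placeholder (the commit message does not spell the new
element's name out) and the udpsrc caps are just an example H.264 setup.

```c
#include <gst/gst.h>
#include <gst/rtsp-server/rtsp-server.h>

/* sketch: serve RTP packets received from elsewhere over RTSP by marking
 * a pass-through payloader as pay0 (element name is a placeholder) */
static GstRTSPMediaFactory *
make_passthrough_factory (void)
{
  GstRTSPMediaFactory *factory = gst_rtsp_media_factory_new ();

  gst_rtsp_media_factory_set_launch (factory,
      "( udpsrc port=5004 "
      "caps=\"application/x-rtp,media=video,encoding-name=H264,"
      "clock-rate=90000,payload=96\" ! rtppassthroughpay name=pay0 )");

  return factory;
}
```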
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5204>
The hack enforcing strictly increasing timestamps was, according to the
code comments, needed because librtmp got confused by backwards timestamps.
rtmp2sink does not use librtmp as rtmpsink did, so this is no longer
required.
Also, changing the timestamps was causing audio glitches when streaming to
YouTube.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5212>
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/2771
This EOS branch exists so that if a seek with a stop is made, qtdemux
stops accepting bytes from the sink after the entire requested playback
range is demuxed, as otherwise we could keep downloading content that is
not being used.
This patch fixes two flaws that were present in that EOS check:
1) A comparison was made between track time and movie time without conversion.
This made the check trigger early in files with edit lists. This patch fixes
this by converting the track PTS to movie PTS (stream time) for the check.
2) To avoid sending an EOS prematurely when the segment stop is within a GOP and
B-frames are present, the check for EOS should only be done for keyframes. I
gather this was already the intention with the existing code, but because it
used `stream->on_keyframe` instead of the local variable `keyframe` the old
code was checking if the *previous* frame was a keyframe.
It's interesting to note that these two flaws in the old code mask each other
in most cases: the track PTS will have reached the movie end PTS, but EOS would
only be sent if the previous frame was a keyframe. A simple case where they
wouldn't mask each other, reproducing the bug, is a sequence of 3-frame GOPs
with structure I-B-P.
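A sketch of the corrected check; the variable names and the stop handling
are assumptions, not the literal qtdemux code:

```c
#include <gst/gst.h>

/* sketch: compare in movie/stream time, and only on actual keyframes of
 * the current sample rather than on the previous sample's flag */
static gboolean
past_playback_range (const GstSegment * segment, GstClockTime track_pts,
    gboolean keyframe)
{
  guint64 stream_pts;

  if (!keyframe || !GST_CLOCK_TIME_IS_VALID (segment->stop))
    return FALSE;

  stream_pts = gst_segment_to_stream_time (segment, GST_FORMAT_TIME,
      track_pts);
  if (stream_pts == GST_CLOCK_TIME_NONE)
    return FALSE;

  return stream_pts >= segment->stop;
}
```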
The following validateflow tests have been added to future-proof the
fix:
* validate.test.mp4.qtdemux_ibpibp_non_frag_pull.default
* validate.test.mp4.qtdemux_ibpibp_non_frag_push.default
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5021>
We were only checking whether the tag list is writable, but it may actually be
shared through the same event (a tee upstream or multiple consumers).
Fix a bug where multiple branches each have a videoflip element checking the
taglist: the first one was changing the orientation back to rotate-0,
which was resetting the other instances.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5097>
Fix this pipeline where the tag list is not writable:
gst-launch-1.0 videotestsrc ! taginject tags="image-orientation=rotate-90" ! videoflip video-direction=auto \
! autovideosink
GStreamer-CRITICAL **: 12:34:36.310: gst_tag_list_add: assertion 'gst_tag_list_is_writable (list)' failed
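A sketch of the safe pattern (event handling simplified): modify a private
copy of the tag list rather than the one shared through the event.

```c
#include <gst/gst.h>
#include <gst/tag/tag.h>

/* sketch: never gst_tag_list_add() on the list owned by a (possibly shared)
 * tag event; copy it, modify the copy, and build a new event from it */
static GstEvent *
rewrite_orientation (GstEvent * tag_event, const gchar * orientation)
{
  GstTagList *tags, *copy;

  gst_event_parse_tag (tag_event, &tags);
  copy = gst_tag_list_copy (tags);
  gst_tag_list_add (copy, GST_TAG_MERGE_REPLACE, GST_TAG_IMAGE_ORIENTATION,
      orientation, NULL);

  gst_event_unref (tag_event);
  return gst_event_new_tag (copy);
}
```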
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4987>
Refusing an incoming segment in states below GST_MATROSKA_READ_STATE_DATA
should only be done if the incoming segment is not in GST_FORMAT_TIME.
In GST_FORMAT_TIME, we are just storing the values and returning, so we can
invert the order of the checks.
This fixes segment propagation in matroska/webm DASH use cases.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3914>
... otherwise streams with constant-size samples, defined with a single
`sample_size` for all samples in the `stsz` box, fall into the
`chunks_are_samples` category in `qtdemux_stbl_init`, overriding the actual
sample count.
`FOURCC_soun` would set this automatically for `compression_id == 0xfffe`,
however `compression_id` is read from the Audio Sample Entry box at an offset
marked as "pre-defined" in some versions of the spec and set to 0 both by
GStreamer and FFmpeg for Opus streams.
Considering the stream `sampled` flag is set explicitly by other fourcc
variants, doing so for Opus seems consistent.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4903>
The "Encapsulation of Opus in ISO Base Media File Format" [1] specifications,
§ 4.3.2 Opus Specific Box, indicates that data must be stored as big-endian.
In `build_opus_extension`, `gst_byte_writer_put*_le ()` variants were used,
causing audio streams conversion to Opus in mp4 to offset samples due to the
PreSkip field incorrect value (29ms early in our test cases).
[1] https://opus-codec.org/docs/opus_in_isobmff.html#4.3.2
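A sketch of the corrected writes; only a subset of the OpusSpecificBox
fields is shown, with caller-provided values:

```c
#include <gst/base/gstbytewriter.h>

/* sketch: the dOps fields are big-endian, so the _be variants must be used */
static gboolean
write_dops_fields (GstByteWriter * bw, guint16 pre_skip,
    guint32 input_sample_rate, gint16 output_gain)
{
  return gst_byte_writer_put_uint16_be (bw, pre_skip)           /* PreSkip */
      && gst_byte_writer_put_uint32_be (bw, input_sample_rate)  /* InputSampleRate */
      && gst_byte_writer_put_int16_be (bw, output_gain);        /* OutputGain */
}
```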
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4875>
When doing a segment seek, the base offset in the new segment
would be increased by segment.position, which is basically the
timestamp of the last packet. This does not include the duration
of the last packet though, so it might be slightly shorter than the
actual duration of the clip or the requested segment.
Increase the base offset by the segment duration instead when
accumulating segments, which is more correct as it doesn't cut
off the last frame and makes the effective loop segment duration
consistent with the actual duration returned from a duration
query.
In case a segment stop was specified, it's also possible that some
data necessary for decoding was sent beyond the stop, so the base
offset increment should then be based on the stop and not on the
timestamp of the last buffer pushed out.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4604>
The muxer used a fixed value of 2 channels because the TR 102 366 spec
says those values are to be ignored. However, the demuxer still trusted
them, resulting in bad caps.
Make the muxer fill in the correct channel count anyway (FFmpeg already
does) and make the demuxer ignore the value.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4739>
Allow a project to use gstreamer-full as a static library
and link to create a binary without dependencies.
Introduce the option 'gst-full-target-type' to
select the build type, dynamic (default) or static.
In the gstreamer-full static build configuration, gstreamer (gst.c)
needs the symbol gst_init_static_plugins, which is defined
in gstreamer-full.
All the tests and examples link with gstreamer, but the
symbol gst_init_static_plugins is only defined in the gstreamer-full
library. gstreamer-full cannot be built first as it needs to know which
plugins will be built.
One option would be to build all the examples and tests after
gstreamer-full, as is done for the tools.
Also disable the tools build in subprojects, as they will be built at the
end of the build process.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4128>
Since c0bf793c05 ("flvmux: Set PTS based on
running time") the timestamp of the output buffer is already in running
time. So using that for 'srcpad->segment.position' does not work correctly
because gst_aggregator_simple_get_next_time() will convert it again with
gst_segment_to_running_time().
This means that the timestamp returned by
gst_aggregator_simple_get_next_time() may be incorrect. For example, if
flvmux is added to an already running pipeline then the timestamp is too
small and gst_aggregator_wait_and_check() returns immediately. As a result,
buffers may be muxed in the wrong order.
To fix this, use the PTS of the incoming buffer instead of the outgoing
buffer. Also add the duration as get_next_time() is supposed to return the
timestamp of the next buffer, not the current one.
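A sketch of the fixed bookkeeping with simplified names (the real code
lives in flvmux's aggregation path):

```c
#include <gst/gst.h>

/* sketch: store the incoming buffer's end timestamp (still in segment time)
 * so gst_aggregator_simple_get_next_time() converts to running time once */
static void
update_segment_position (GstSegment * src_segment, GstBuffer * inbuf)
{
  GstClockTime pts = GST_BUFFER_PTS (inbuf);
  GstClockTime dur = GST_BUFFER_DURATION (inbuf);

  if (!GST_CLOCK_TIME_IS_VALID (pts))
    return;

  src_segment->position = GST_CLOCK_TIME_IS_VALID (dur) ? pts + dur : pts;
}
```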
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4701>
The index is already incremented by 3 every iteration, so additionally
multiplying it by 3 on each array access applies the stride twice and
does not work.
This caused invalid files to be created if there is more than one CEA608
triplet in a buffer, as well as out-of-bounds memory reads.
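An isolated illustration of the indexing bug, with the buffer layout and
names simplified:

```c
#include <glib.h>

/* the loop index already advances in steps of 3; scaling it by 3 again on
 * each access skipped two thirds of the triplets and read past the array */
static void
walk_cea608_triplets (const guint8 * data, guint num_triplets)
{
  guint i;

  for (i = 0; i < num_triplets * 3; i += 3) {
    guint8 cc_valid_type = data[i];     /* buggy code read data[i * 3]     */
    guint8 cc_data1 = data[i + 1];      /* buggy code read data[i * 3 + 1] */
    guint8 cc_data2 = data[i + 2];      /* buggy code read data[i * 3 + 2] */

    g_print ("%02x %02x %02x\n", cc_valid_type, cc_data1, cc_data2);
  }
}
```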
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4634>
Make splitmuxsrc deal better with stream reordering by
making the largest observed PTS contiguous in the
next fragment. Previously, it selected DTS, but then
aligned that with the segment start of the next fragment,
which holds PTS values, leading to glitches in
streams that don't have PTS = DTS at the start.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4637>
It could indeed be used uninitialized, but only if one of the
g_return_val_if_fail() checks caused an early return.
../subprojects/gst-plugins-good/gst/rtpmanager/rtpjitterbuffer.c: In function ‘rtp_jitter_buffer_append_query’:
../subprojects/gst-plugins-good/gst/rtpmanager/rtpjitterbuffer.c:1234:10: warning: ‘head’ may be used uninitialized
[-Wmaybe-uninitialized]
1234 | return head;
| ^~~~
../subprojects/gst-plugins-good/gst/rtpmanager/rtpjitterbuffer.c:1232:12: note: ‘head’ was declared here
1232 | gboolean head;
| ^~~~
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4616>
This is a fix for a data race leading to:
> GLib-CRITICAL: g_hash_table_foreach:
> assertion 'version == hash_table->version' failed
Identified sequence:
* `rtp_session_on_timeout` acquires the lock on `session` and proceeds with its
processing.
* `rtp_session_process_rtcp` is called (debug log: received RTCP packet) and
attempts to acquire the lock on `session`, which is still held by
`rtp_session_on_timeout`.
* as part of a hash table iteration, `rtp_session_on_timeout` transitively
invokes `source_caps`, which releases the lock on `session` so as to call
`session->callbacks.caps`.
* Since `rtp_session_process_rtcp` was waiting for the lock to be released, it
succeeds in acquiring it and proceeds with `rtp_session_process_rr` which
transitively calls `g_hash_table_insert` via `add_source`.
* After `source_caps` re-acquires the lock and gives the control flow back to
`rtp_session_on_timeout`, the hash table iterator is changed, resulting in the
assertion failure.
This commit copies `sess->ssrcs[sess->mask_idx]` and iterates on the copy so
the iterator is not affected by a concurrent change caused by the lock being
released in the `source_caps` callback.
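Reduced to plain GLib calls, the fix has this shape (the real code also
needs to keep the sources alive while the lock may be dropped):

```c
#include <glib.h>

/* sketch: iterate over a snapshot of the table's values so a callback that
 * temporarily releases the session lock cannot invalidate the iterator */
static void
for_each_source_snapshot (GHashTable * ssrcs, GFunc func, gpointer user_data)
{
  GList *sources = g_hash_table_get_values (ssrcs);
  GList *l;

  for (l = sources; l != NULL; l = l->next)
    func (l->data, user_data);

  g_list_free (sources);
}
```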
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4555>
Since c2f890ab, element properties are gathered from the parse-launch
line and passed at object construction.
This caused the following issue to happen in videoflip:
* videoflip installed a CONSTRUCT property named method, now deprecated
* videoflip now also overrides that property with a video-direction
property
GObject construction causes method to be set first at construct time,
with the user-provided value, then video-direction with the default
value.
The user-provided value was thus overridden, causing a regression.
Fix by not installing the properties as CONSTRUCT, and explicitly
implementing constructed() instead in order to ensure that we do still
call gst_video_flip_set_method() at least once during construction.
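A sketch of the constructed() approach; this sits in the element's source
file, the parent-class name follows the usual G_DEFINE_TYPE convention and
the internal setter's signature is an assumption:

```c
/* sketch: with the properties no longer marked CONSTRUCT, apply whichever
 * value won during construction exactly once, after all properties are set */
static void
gst_video_flip_constructed (GObject * object)
{
  GstVideoFlip *self = GST_VIDEO_FLIP (object);

  G_OBJECT_CLASS (gst_video_flip_parent_class)->constructed (object);

  /* the arguments of the internal setter are assumed here */
  gst_video_flip_set_method (self, self->method, FALSE);
}
```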
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/2529
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4536>
Atomically set and get the picture_id. This changeset only atomically reads
the picture-id when the property is queried on the element; every other
place where it is accessed internally reads it directly.
This is because there is no MT scenario where we would be modifying this value
and reading it internally in parallel.
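A sketch of the property-getter side, with illustrative names:

```c
#include <glib.h>

/* sketch: the "picture-id" property read is the only cross-thread access,
 * so only that path goes through the atomic helper */
static guint
read_picture_id_property (gint * picture_id)
{
  return (guint) g_atomic_int_get (picture_id);
}
```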
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4530>
In recent versions of Chrome (M106) a change in their jitter buffer means that
they are very susceptible to PictureID discontinuities.
Therefore, avoid resetting the PictureID at all costs. Moreover, according to
the RFCs for the VP8 and VP9 payloads, the PictureID can start off at any
random value, so there is no logical problem with incrementing it here
rather than resetting it, as long as it is a different PictureID.
WebRTC's recent corruption issue:
https://bugs.chromium.org/p/webrtc/issues/detail?id=15101
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4530>
If we don't do that, clients can rely on this signal to see the final pad
topology, but it won't be the real one as some of the pads will disappear after
that signal is emitted. This can happen after injecting a different init segment.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4535>
While testing the [implementation for insertable streams] in `webrtcsink` &
`webrtcsrc`, I encountered critical warnings, which turned out to result from
two race conditions in `rtpsession`. Both race conditions produce:
> GLib-CRITICAL: g_hash_table_foreach:
> assertion 'version == hash_table->version' failed
This commit fixes one of the race conditions observed.
In its simplest form, the test consists of 2 pipelines and a signalling server:
* pipelines_sink: audiotestsrc ! webrtcsink
* pipelines_src: webrtcsrc ! appsrc
1. Set `pipelines_sink` to `Playing`.
2. The Signalling server delivers the `producer_id`.
3. Initialize `pipelines_src` to establish a session with `producer_id`.
4. Set `pipelines_src` to `Playing`.
5. Wait for a buffer to be received by the `appsrc`.
6. Set `pipelines_src` to `Null`.
7. Set `pipelines_sink` to `Null`.
The race condition happens in the following sequence:
* `webrtcsink` runs a task to periodically retrieve statistics from `webrtcbin`.
This transitively ends up executing `rtp_session_create_stats`.
* `pipelines_sink` is set to `Null`.
* During the `Paused` to `Ready` transition, `gst_rtp_session_change_state()`
calls `rtp_session_reset()`.
* The assertion failure occurs when `rtp_session_reset` is called while
`rtp_session_create_stats` is executing.
This is because `rtp_session_create_stats` acquires the lock on `session` prior
to calling `g_hash_table_foreach`, but `rtp_session_reset` doesn't acquire the
lock before calling `g_hash_table_remove_all`.
Acquiring the lock in `rtp_session_reset` fixes the issue.
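In plain GLib terms the fix looks like this (the real session lock is an
internal rtpsession macro):

```c
#include <glib.h>

/* sketch: take the same lock that rtp_session_create_stats holds around its
 * g_hash_table_foreach before clearing the sources table */
static void
reset_sources_locked (GMutex * session_lock, GHashTable * ssrcs)
{
  g_mutex_lock (session_lock);
  g_hash_table_remove_all (ssrcs);
  g_mutex_unlock (session_lock);
}
```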
[implementation for insertable streams]: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1176
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4528>
This reverts commit f29c19be58. If this is
called for the reference context then we would run into an infinite
loop, which is not really better than an assertion.
By fixing up DTS to never be ahead of the PTS in the previous commit
this situation should be impossible to hit now.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4498>
Fix the following use case:
- upstream sends a video with a rotation tag, say 90°
- upstream switches to another video without rotation
- the second video was still rotated by videoflip
Fix this by resetting the orientation when receiving STREAM_START.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4377>
The previous code would only check whether two packets in a row were
duplicates. If not (i.e. a packet is a duplicate of one received slightly
earlier), the code would generate completely bogus FCI because it assumed
there were no duplicates present in the array.
In order to be efficient, just store all received packets and remove the
duplicates right before the FCI is generated, once the array of observations
has been sorted by seqnum.
Fixes TWCC usage with moderate to high packet duplication.
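A sketch of the sort-then-dedup pass over simplified observation records
(the real structure carries more fields):

```c
#include <glib.h>

typedef struct
{
  guint16 seqnum;
  guint64 arrival_time;
} Observation;

/* assumes the array was already sorted by seqnum; keeps the first
 * observation of every seqnum and drops the duplicates in place */
static void
remove_duplicate_observations (GArray * obs)
{
  guint i, w;

  if (obs->len < 2)
    return;

  for (i = 1, w = 1; i < obs->len; i++) {
    Observation *prev = &g_array_index (obs, Observation, w - 1);
    Observation *cur = &g_array_index (obs, Observation, i);

    if (cur->seqnum != prev->seqnum)
      g_array_index (obs, Observation, w++) = *cur;
  }

  g_array_set_size (obs, w);
}
```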
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4328>
With GST_SEEK_FLAG_SNAP_AFTER present, the previous version would
adjust the seek time based on the keyframe farthest away from desired_time.
This was incorrect, because we always want the *earliest* suitable keyframe
to seek to, not the last one.
With this fix, in the SNAP_AFTER case we now look for the closest keyframe
that can be found after desired_time. Behaviour for SNAP_BEFORE remains
unchanged.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4183>
The flowcombiner and active_streams shouldn't be cleared in the
mse-bytestream variant, only in the mss-fragmented one. Otherwise the
soft reset leaves qtdemux in a state where it still believes that it has
streams, but they've been cleared. In that case, a null pointer
dereference happens and the app crashes.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4199>
The abort() method of SourceBuffer in Media Source Extensions is
expected to flush the demuxer and discard the current fragment,
if any. The configuration of tracks, if any, should be preserved.
qtdemux has different behavior for flush events depending on the
context.
This patch activates the intended behaviour only for streams of the
VARIANT_MSE_BYTESTREAM type, conformant to the ISO BMFF Bytestream
specification[1]. This flush behaviour is the same as the one
already in use for adaptivedemux sources.
[1] https://www.w3.org/TR/mse-byte-stream-format-isobmff/
https://bugzilla.gnome.org/show_bug.cgi?id=795424
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4101>
This patch prevents a possible race condition from taking place between the EOS event handling and the RTCP
send function/thread.
The condition starts by getting the GST_EVENT_EOS event on the send_rtp_sink pad, which causes two core things
to happen -- the event gets pushed down to the send_rtp_src pad and all sessions get marked "bye" prior to
completion of the event handler. In another thread the rtp_session_on_timeout function gets called after an
expiration of gst_clock_id_wait in the rtcp_thread function. This results in a call to
sess->callbacks.send_rtcp(), which is configured as a function pointer to gst_rtp_session_send_rtcp via the
RTPSessionCallbacks structure passed to rtp_session_set_callbacks in the gst_rtp_session_init function.
In the race condition, the call to gst_rtp_session_send_rtcp can have the all_sources_bye boolean set to true
while GST_PAD_IS_EOS(rtpsession->send_rtp_sink) evaluates to false. This is the result of gst_rtp_session_send_rtcp
running before the send_rtp_sink's GST_EVENT_EOS handler completes. The exact point at which this condition occurs
is if there's a context switch to the rtcp_thread right after the call to rtp_session_mark_all_bye in the
GST_EVENT_EOS handler, but before the handler returns.
Normally, this would not be an issue because the rtcp_thread continues to run and indirectly call
gst_rtp_session_send_rtcp. However, the call to rtp_source_reset sets the sent_bye boolean to false, which ends up
causing rtp_session_are_all_sources_bye to return false. This gets passed to gst_rtp_session_send_rtcp and the EOS
event never gets sent.
The race condition results in the EOS event never getting passed to the rtcp_src pad, which prevents the bin and
pipeline from ever completing with EOS.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3798>
A data offset smaller than the moof length is wrong
in smooth streaming streams.
The samples will not be located and eventually playback will
error out, so compensate by assuming the data is in the mdat following the moof.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3840>
The av1C box is optional, so dropping the parsing does not break anything
fundamentally, and there seems to be no historical record of what version 0
even looks like, while the comments and the parsing disagreed with each
other.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3882>
When using qtdemux in a pipeline that should only work as a pure demuxer (not
for actual playback), qtdemux shouldn't emit new GstSegments to correct
the start time (jump to the future) to ensure that the user experiences no
playback delay. By doing so, it generates wrong segments when an append
of data from the past happens. When that happens, downstream elements such as
parsers (e.g. aacparse) may clip those buffers lying before the GstSegment and
create problems in the GStreamer client app (e.g. WebKit).
Getting buffers clipped out because of the wrong GstSegments started becoming
a problem when this commit was introduced:
ab6e49e9cc audioparsers: add back segment clipping to parsers that have lost it
This clipping makes the "DASH shaka 35" test from the MVT test suite[1] fail in
WebKitGTK/WPE (at least) and can potentially cause a number of other problems
in the WebKit Media Source Extensions (MSE) code.
Note that this new behaviour of not emitting new GstSegments only makes sense
when qtdemux is being used as a pure demuxer and not as part of a regular
pipeline. This is why the variant field has been added. When equal to
VARIANT_MSE_BYTESTREAM, it will make qtdemux behave differently in push mode,
taking decisions that meet the expectations for an MSE-like processing mode.
This kind of tweak has been done in the past for MSS streams, for instance.
That code has now been refactored to use VARIANT_MSS_FRAGMENTED instead of
its own dedicated boolean flag.
Co-authored by: Alicia Boya García <ntrrgc@gmail.com>
...who suggested using "variant: mse-bytestream" in the caps to identify that
mode, as proposed in her unmerged patch:
https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/issues/467
[1] https://github.com/rdkcentral/mvt
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3867>
All the RTP src pads were sharing the same stream-id while each actually
carries a different stream.
This was causing problems, for example when funneling the streams together
and then trying to split them again using 'streamiddemux'.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3855>
In theory, `dispose()` functions should be idempotent and should be
prepared not to crash or cause a double-free if an unref done from
inside caused a recursive call to `dispose()` of the same object.
https://developer.gnome.org/gobject/stable/howto-gobject-destruction.html
This patch modifies the `dispose()` method to honor these constraints.
Since the double `dispose()` call won't actually occur in qtdemux (there
is no cycle detection mechanism that could invoke it to work that way),
this is more of a code cleanup than a user-facing problem fix.
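A generic sketch of the idempotent pattern, with hypothetical member names:
clear-and-NULL in dispose() so a repeated or recursive call finds nothing
left to free.

```c
/* inside the element's source file; the members shown are hypothetical */
static void
my_demux_dispose (GObject * object)
{
  MyDemux *self = MY_DEMUX (object);

  g_clear_object (&self->adapter);                        /* safe to run twice */
  g_clear_pointer (&self->tag_list, gst_tag_list_unref);  /* NULLs the field   */

  G_OBJECT_CLASS (my_demux_parent_class)->dispose (object);
}
```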
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3822>
Deserialize socket control messages as GstSocketTimestampMessage only
if (level, type) is (SOL_SOCKET, SCM_TIMESTAMPNS).
Without this patch, messages with types SCM_RIGHTS or SCM_CREDENTIALS
could be deserialized as GstSocketTimestampMessage instead of
GUnixFDMessage or GUnixCredentialsMessage from gio.
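A sketch of the guard in the deserialize class method; the GIO vfunc
signature is real, the body is condensed, and SCM_TIMESTAMPNS assumes
Linux headers:

```c
#include <gio/gio.h>
#include <sys/socket.h>

/* sketch: only claim (SOL_SOCKET, SCM_TIMESTAMPNS) messages; returning NULL
 * lets GUnixFDMessage / GUnixCredentialsMessage deserialize the others */
static GSocketControlMessage *
timestamp_message_deserialize (int level, int type, gsize size, gpointer data)
{
  if (level != SOL_SOCKET || type != SCM_TIMESTAMPNS)
    return NULL;

  /* ... construct and return the GstSocketTimestampMessage here ... */
  return NULL;
}
```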
Fixes #1736
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3777>
AVC-Intra is a range of H.264-compliant intra-only codecs from
Panasonic. The codes and descriptions have been taken from VLC.
The (encumbered) sample I have here produces byte-stream H.264,
including SPS and PPS and no `avcC` box.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3739>
This is recommended by various specifications for such framerates, while
for integer framerates we continue using centiframes to allow for some
more accuracy.
Using the framerate numerator N as the timescale means that no rounding
error accumulates that would eventually lead to outputting a packet with a
different duration.
Some tools such as MediaInfo determine that a stream is variable
framerate if any packet has a different duration than the others, and
there is no reason I can see for not using the full 4 bytes of
resolution that the mp4 timescale offers.
Example problematic pipeline:
```
videotestsrc num-buffers=5001 ! video/x-raw,framerate=60000/1001,width=320,height=240 ! \
videoconvert ! x264enc bitrate=80000 speed-preset=1 tune=zerolatency ! h264parse ! \
video/x-h264,profile=high-10 ! mp4mux ! filesink location="result2.mp4"
```
This results in a media file that MediaInfo detects as variable
framerate because the 5000th packet has duration 99 instead of 100.
With this patch, the timescale is 60000 and all packets have duration
1001.
Related issue for context: https://bugzilla.gnome.org/show_bug.cgi?id=769041
Co-authored-by: Sebastian Dröge <sebastian@centricular.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3049>
This reverts the decision from
https://bugzilla.gnome.org/show_bug.cgi?id=754230
where it was decided to rather play it safe and only use the `tfdt` if
it is "significantly different" from the sum of sample durations.
As the specification says:

> If the time expressed in the track fragment decode time (‘tfdt’) box
> exceeds the sum of the durations of the samples in the preceding
> movie and movie fragments, then the duration of the last sample
> preceding this track fragment is extended such that the sum now
> equals the time given in this box.

we have to use the `tfdt` in general to allow it to signal gaps in
the stream.
A muxer producing fragments might not yet know the full duration of the
last sample of a previous fragment if the next fragment starts with a
gap, and knowing the actual start of the next fragment would potentially
require violating latency requirements.
Additionally, the existence of `tfdt` makes it possible to avoid accumulating
rounding errors from summing up the durations.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3586>
If we keep the old events they can end up being passed to the app, which could
discard the protection information because it has been seen before.
Drive-by improvement: use g_queue_clear_full instead of foreach+clear for the
protection events.
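The drive-by change, sketched with an illustrative queue field:

```c
#include <gst/gst.h>

/* sketch: drop every queued protection event in one call instead of a
 * foreach + clear pair */
static void
clear_protection_events (GQueue * protection_events)
{
  g_queue_clear_full (protection_events, (GDestroyNotify) gst_event_unref);
}
```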
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3547>