When using qtdemux in a pipeline that should only work as a pure demuxer (not
for actual playback), qtdemux shouldn't emit new GstSegments that correct the
start time (jumping to the future so that the user experiences no playback
delay). Doing so generates wrong segments when an append of data from the past
happens. When that happens, downstream elements such as parsers (e.g. aacparse)
may clip the buffers lying before the GstSegment and create problems in the
GStreamer client app (e.g. WebKit).
Buffers getting clipped out because of these wrong GstSegments became a
problem when this commit was introduced:
ab6e49e9cc audioparsers: add back segment clipping to parsers that have lost it
This clipping makes the DASH shaka 35 test from the MVT tests[1] fail in
WebKitGTK/WPE (at least) and can potentially cause a number of other problems
in the WebKit Media Source Extensions (MSE) code.
Note that this new behaviour of not emitting new GstSegments only makes sense
when qtdemux is being used as a pure demuxer and not as part of a regular
pipeline. This is why the variant field has been added. When equal to
VARIANT_MSE_BYTESTREAM, it will make qtdemux behave differently in push mode,
taking decisions that meet the expectations for an MSE-like processing mode.
Tweaks of this kind have been made in the past for MSS streams, for instance.
That code has been refactored to use VARIANT_MSS_FRAGMENTED now, instead of
its own dedicated boolean flag.
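A minimal sketch of how an application could opt into this mode, assuming an
appsrc feeding qtdemux (the helper function name is made up for illustration):
```c
#include <gst/gst.h>

/* Advertise the MSE bytestream variant on the caps fed into qtdemux,
 * so that it keeps incoming GstSegments untouched. */
static void
setup_mse_bytestream_caps (GstElement *appsrc)
{
  GstCaps *caps = gst_caps_new_simple ("video/quicktime",
      "variant", G_TYPE_STRING, "mse-bytestream", NULL);

  g_object_set (appsrc, "caps", caps, NULL);
  gst_caps_unref (caps);
}
```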
Co-authored-by: Alicia Boya García <ntrrgc@gmail.com>
...who suggested using "variant: mse-bytestream" in the caps to identify that
mode, as proposed in her unmerged patch:
https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/issues/467
[1] https://github.com/rdkcentral/mvt
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3867>
All the RTP src pads were sharing the same stream-id, while each actually
carries a different stream.
This was causing problems, for example when funneling the streams together
and then trying to split them using 'streamiddemux'.
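For context, a hedged sketch of how each pad can get a distinct id, using the
real gst_pad_create_stream_id_printf() helper (the element/pad variables are
placeholders):
```c
#include <gst/gst.h>

/* streamiddemux routes each input to its own src pad keyed on the
 * stream-id from the stream-start event, so every RTP src pad needs
 * to publish a unique id, e.g. by appending a per-pad index. */
static void
push_unique_stream_start (GstElement *element, GstPad *srcpad, guint idx)
{
  gchar *stream_id =
      gst_pad_create_stream_id_printf (srcpad, element, "%03u", idx);

  gst_pad_push_event (srcpad, gst_event_new_stream_start (stream_id));
  g_free (stream_id);
}
```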
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3855>
In theory, `dispose()` functions should be idempotent and should be
prepared not to crash or cause a double-free if an unref done from
inside triggers a recursive call to `dispose()` on the same object.
https://developer.gnome.org/gobject/stable/howto-gobject-destruction.html
This patch modifies the `dispose()` method to honor these constraints.
Since a double `dispose()` call won't actually occur in qtdemux (there
is no cycle-detection mechanism that could trigger it), this is more of
a code cleanup than a user-facing bug fix.
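A minimal sketch of the idiom on a generic GObject, not the literal qtdemux
code:
```c
#include <glib-object.h>

/* Hypothetical object owning one reference and one allocated string. */
typedef struct {
  GObject parent;
  GObject *child;
  gchar *name;
} MyObject;

typedef struct { GObjectClass parent_class; } MyObjectClass;

G_DEFINE_TYPE (MyObject, my_object, G_TYPE_OBJECT);

static void
my_object_dispose (GObject *object)
{
  MyObject *self = (MyObject *) object;

  /* g_clear_object()/g_clear_pointer() release each field and NULL it,
   * so a re-entrant second dispose() call becomes a harmless no-op. */
  g_clear_object (&self->child);
  g_clear_pointer (&self->name, g_free);

  G_OBJECT_CLASS (my_object_parent_class)->dispose (object);
}

static void
my_object_class_init (MyObjectClass *klass)
{
  G_OBJECT_CLASS (klass)->dispose = my_object_dispose;
}

static void
my_object_init (MyObject *self)
{
}
```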
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3822>
Deserialize socket control messages as GstSocketTimestampMessage only
if (level, type) is (SOL_SOCKET, SCM_TIMESTAMPNS).
Without this patch, messages with types SCM_RIGHTS or SCM_CREDENTIALS
could be deserialized as GstSocketTimestampMessage instead of
GUnixFDMessage or GUnixCredentialsMessage from GIO.
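The guard presumably looks along these lines (a sketch of the deserialize
vfunc, not the verbatim patch; SCM_TIMESTAMPNS is Linux-specific):
```c
#include <gio/gio.h>
#include <sys/socket.h>

static GSocketControlMessage *
gst_socket_timestamp_message_deserialize (gint level, gint type,
    gsize size, gpointer data)
{
  /* Claim the message only for (SOL_SOCKET, SCM_TIMESTAMPNS); returning
   * NULL lets GIO try the other registered GSocketControlMessage
   * classes, e.g. GUnixFDMessage for SCM_RIGHTS. */
  if (level != SOL_SOCKET || type != SCM_TIMESTAMPNS)
    return NULL;

  /* ... otherwise parse @data and return a new
   * GstSocketTimestampMessage (elided here) ... */
  return NULL;
}
```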
Fixes #1736
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3777>
AVC-Intra is a range of H.264-compliant intra-only codecs from
Panasonic. The codes and descriptions have been taken from VLC.
The (encumbered) sample I have here produces byte-stream H.264,
including SPS and PPS, and no `avcC` box.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3739>
This is recommended by various specifications for fractional framerates such
as 60000/1001, while for integer framerates we continue using centiframes to
allow for some more accuracy.
Using the framerate numerator N as the timescale means that no rounding error
accumulates that could eventually lead to outputting a packet with a different
duration.
Some tools such as MediaInfo determine that a stream is variable
framerate if any packet has a different duration than the others, and
there is no reason I can see for not using the full 4 bytes of
resolution that the mp4 timescale offers.
Example problematic pipeline:
```
videotestsrc num-buffers=5001 ! video/x-raw,framerate=60000/1001,width=320,height=240 ! \
videoconvert ! x264enc bitrate=80000 speed-preset=1 tune=zerolatency ! h264parse ! \
video/x-h264,profile=high-10 ! mp4mux ! filesink location="result2.mp4"
```
This results in a media file that MediaInfo detects as variable
framerate because the 5000th packet has duration 99 instead of 100.
With this patch, the timescale is 60000 and all packets have duration
1001.
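To make the rounding concrete, a small standalone check (assuming the previous
centiframe timescale came out as 5994 ticks/s for 60000/1001 fps); one frame
near the end of the 5001-buffer run comes out one tick short:
```c
#include <gst/gst.h>
#include <stdio.h>

int
main (void)
{
  guint64 i, ts, prev = 0;

  for (i = 1; i <= 5001; i++) {
    /* Frame i's timestamp in 5994 ticks/s:
     * i * (1001/60000) s * 5994 ticks/s ~= i * 99.9999 ticks. */
    ts = gst_util_uint64_scale_round (i, 1001 * 5994, 60000);

    if (ts - prev != 100)
      printf ("frame %" G_GUINT64_FORMAT ": duration %" G_GUINT64_FORMAT
          " instead of 100\n", i, ts - prev);
    prev = ts;
  }
  return 0;
}
```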
Related issue for context: https://bugzilla.gnome.org/show_bug.cgi?id=769041
Co-authored-by: Sebastian Dröge <sebastian@centricular.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3049>
This reverts the decision from
https://bugzilla.gnome.org/show_bug.cgi?id=754230
where it was decided to rather play it safe and only use the `tfdt` if
it is "significantly different" from the sum of sample durations.
As the specification says
If the time expressed in the track fragment decode time (‘tfdt’) box
exceeds the sum of the durations of the samples in the preceding
movie and movie fragments, then the duration of the last sample
preceding this track fragment is extended such that the sum now
equals the time given in this box.
we have to use the `tfdt` in general to allow for it to signal gaps in
the stream.
A muxer producing fragments might not yet know the full duration of the
last sample of a previous fragment if the next fragment starts with a
gap, and knowing the actual start of the next fragment would potentially
require violating latency requirements.
Additionally, the existence of `tfdt` makes it possible to avoid
accumulating rounding errors from summing up the durations.
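In other words (a sketch with invented names, not the patch itself):
```c
#include <glib.h>

/* If the tfdt decode time runs ahead of the summed sample durations,
 * close the gap by stretching the last sample of the previous
 * fragment, per the spec quote above. */
static guint64
apply_tfdt (guint64 tfdt_decode_time, guint64 sum_of_durations,
    guint64 *last_sample_duration)
{
  if (tfdt_decode_time > sum_of_durations) {
    *last_sample_duration += tfdt_decode_time - sum_of_durations;
    sum_of_durations = tfdt_decode_time;
  }
  return sum_of_durations;
}
```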
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3586>
If we keep the old events, they can end up being passed to the app, which
could discard the protection information because it has been seen before.
Drive-by improvement: use g_queue_clear_full() instead of foreach+clear for
protection events.
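For reference, the pattern (the queue variable is illustrative):
```c
#include <gst/gst.h>

/* One call unrefs every queued GstEvent and resets the queue,
 * replacing the old g_queue_foreach() + g_queue_clear() pair. */
static void
clear_protection_events (GQueue *protection_event_queue)
{
  g_queue_clear_full (protection_event_queue,
      (GDestroyNotify) gst_event_unref);
}
```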
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3547>
Currently in rtp_session_send_rtp(), the existing ntp-64 RTP header
extension timestamp is updated with the actual NTP time before sending
the packet. However, there are some circumstances where we would like
to preserve the original timestamp obtained from reference timestamp
buffer metadata.
This commit provides the ability to configure whether or not to update
the ntp-64 header extension timestamp with the actual NTP time via the
update-ntp64-header-ext boolean property. The property is also exposed
via rtpbin. The default property value of TRUE preserves the existing
behavior (updating the ntp-64 header ext with the actual NTP time).
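Usage is the usual property set, e.g. (assuming an existing rtpbin instance):
```c
/* Keep the NTP time from the reference timestamp meta instead of
 * rewriting it at send time. */
g_object_set (rtpbin, "update-ntp64-header-ext", FALSE, NULL);
```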
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/1580
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3451>
Previously we tried to route an incoming RTCP FB FIR to the correct ssrc
using the "media source" component of the RTCP FB message. However,
according to RFC5104 (section 4.3.1.2) the "media source" SHALL be set
to 0. Instead the ssrc(s) in use are propagated via the FCI data. Now
a specific GstForceKeyUnit event is sent for every ssrc.
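A sketch of the FCI layout being walked (RFC 5104 section 4.3.1.1; the helper
name is invented):
```c
#include <gst/gst.h>

/* Each FIR FCI entry is 8 bytes: target SSRC (4 bytes), command
 * sequence number (1 byte) and 3 reserved bytes; one GstForceKeyUnit
 * event is sent per entry. */
static void
handle_fir_fci (const guint8 *fci, guint fci_length)
{
  guint i;

  for (i = 0; i + 8 <= fci_length; i += 8) {
    guint32 ssrc = GST_READ_UINT32_BE (fci + i);
    guint8 seq_nr = fci[i + 4];

    /* Look up the stream for @ssrc and push a GstForceKeyUnit event
     * (elided), using @seq_nr to deduplicate repeated requests. */
    (void) ssrc;
    (void) seq_nr;
  }
}
```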
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3292>
RTP source statistics are tracked for local senders by
treating them as receivers of their own outbound packets.
Accordingly, track the highest packet seqnum so that the
packets-lost calculation generates a sensible number instead
of always reporting -$number_of_packets_sent.
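For context, the RFC 3550 (Appendix A.3) arithmetic this feeds: without a
tracked highest seqnum, `expected` is effectively zero and the result
degenerates to -packets_received.
```c
#include <glib.h>

/* expected = extended highest seqnum - base seqnum + 1;
 * lost = expected - actually received (RFC 3550 A.3). */
static gint64
packets_lost (guint32 ext_highest_seq, guint32 base_seq,
    guint64 packets_received)
{
  guint64 expected = (guint64) ext_highest_seq - base_seq + 1;

  return (gint64) expected - (gint64) packets_received;
}
```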
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3454>
When getting a "404 Not Found" response from the DESCRIBE request, the
source produced a "No supported authentication protocol was found" error
instead of passing on the 404, which was confusing.
Only produce this error message when we're handling a response of "401
Unauthorized" without a compatible WWW-Authenticate header.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3414>
The only case where we definitely need to write a new trun is when the
data_offset value does not match the end of the list of entries.
Multiple trun atoms are only required when interleaving multiple
streams together.
All other cases can be covered by adding more entries to the existing
trun atom.
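Schematically (invented names, not the actual mp4mux code):
```c
#include <glib.h>

/* Decide whether the existing trun can absorb more entries: only a
 * mismatch between its data_offset plus the accumulated entry sizes
 * and the position of the new data forces a new trun. */
static gboolean
needs_new_trun (guint64 trun_data_offset, guint64 sum_of_entry_sizes,
    guint64 new_data_offset)
{
  return trun_data_offset + sum_of_entry_sizes != new_data_offset;
}
```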
Fixes playback of fragmented mp4 in ffplay and Chrome
when using e.g.:
```
mp4mux fragment-duration=1000 fragment-mode=dash-or-mss
mp4mux fragment-duration=1000 fragment-mode=first-moov-then-finalise
```
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3426>
In order to figure out if the "raw" audio contained within the wav
container is actually DTS, wavparse calls the typefinder helper,
except that this means it runs all typefinders.
Since it only cares about checking for DTS, we should only run the
audio/x-dts typefinder (if present). Commit 858e516 did not really
fix things.
Use the new type helper with the caps to fix this.
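Presumably along these lines (a sketch; the exact call site in wavparse may
differ): only typefinders whose advertised caps intersect the given caps run.
```c
#include <gst/base/gsttypefindhelper.h>

static GstCaps *
check_for_dts (GstElement *wavparse, const guint8 *data, gsize size)
{
  GstTypeFindProbability prob = GST_TYPE_FIND_NONE;
  GstCaps *dts_caps = gst_caps_new_empty_simple ("audio/x-dts");
  GstCaps *found;

  /* Runs only the audio/x-dts typefinder(s), if registered. */
  found = gst_type_find_helper_for_data_with_caps (GST_OBJECT (wavparse),
      data, size, dts_caps, &prob);

  gst_caps_unref (dts_caps);
  return found; /* non-NULL if a DTS typefinder matched */
}
```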
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3417>
Set udpsrc seqnums on all events sent to the udpsrcs, and before
forwarding events out of rtspsrc, set the latest seek seqnum on them, if
any.
Also produce a consistent seqnum in rtspsrc from the very beginning.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3409>