This patch prevents a possible race condition between the EOS event handling and the RTCP send
function/thread.
The condition starts by getting the GST_EVENT_EOS event on the send_rtp_sink pad, which causes two core things
to happen -- the event gets pushed down to the send_rtp_src pad and all sources get marked "bye" prior to
completion of the event handler. In another thread the rtp_session_on_timeout function gets called after an
expiration of gst_clock_id_wait in the rtcp_thread function. This results in a call to the
sess->callbacks.send_rtcp(), which is configured as a function pointer to gst_rtp_session_send_rtcp via the
RTPSessionCallbacks structure passed to rtp_session_set_callbacks in the gst_rtp_session_init function.
In the race condition, the call to gst_rtp_session_send_rtcp can have the all_sources_bye boolean set to true
while GST_PAD_IS_EOS(rtpsession->send_rtp_sink) evaluates to false. This is the result of gst_rtp_session_send_rtcp
running before the send_rtp_sink's GST_EVENT_EOS handler completes. The exact point at which this condition occurs
is if there's a context switch to the rtcp_thread right after the call to rtp_session_mark_all_bye in the
GST_EVENT_EOS handler, but before the handler returns.
Normally, this would not be an issue because the rtcp_thread continues to run and indirectly call
gst_rtp_session_send_rtcp. However, the call to rtp_source_reset sets the sent_bye boolean to false, which ends up
causing rtp_session_are_all_sources_bye to return false. This gets passed to gst_rtp_session_send_rtcp and the EOS
event never gets sent.
The race condition results in the EOS event never getting passed to the rtcp_src pad, which prevents the bin and
pipeline from ever completing with EOS.
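The problematic check can be sketched as follows (illustrative only, using the names from the
description above; this is not the actual gstrtpsession.c code):
```
#include <gst/gst.h>

/* Illustrative sketch of the racy condition described above. EOS is only
 * forwarded on the RTCP src pad when both conditions hold at once. */
static gboolean
should_forward_eos (GstPad * send_rtp_sink, gboolean all_sources_bye)
{
  /* If the rtcp_thread runs right after rtp_session_mark_all_bye() but
   * before the GST_EVENT_EOS handler on send_rtp_sink returns,
   * all_sources_bye is TRUE while GST_PAD_IS_EOS() is still FALSE.
   * Later calls never see both TRUE because rtp_source_reset() clears
   * sent_bye, so the EOS is never forwarded. */
  return all_sources_bye && GST_PAD_IS_EOS (send_rtp_sink);
}
```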
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3798>
In smooth streaming streams, a data offset smaller than the moof length is
wrong. The samples will not be located and playback will eventually
error out, so compensate by assuming the data is in the mdat following the moof.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3840>
The av1C box is optional, so dropping the parsing does not break anything
fundamentally, and there seems to be no historical record of what version 0
even looks like, while the comments and the parsing disagreed with each
other.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3882>
When using qtdemux in a pipeline that should only work as a pure demuxer (not
for actual playback), qtdemux shouldn't emit new GstSegments to correct
the start time (jump to the future) to ensure that the user experiences no
playback delay. By doing so, it's generating the wrong segments when an append
of data from the past happens. When that happens, downstream elements such as
parsers (eg: aacparse) may clip those buffers lying before the GstSegment and
create problems on the GStreamer client app (eg: WebKit).
Getting buffers clipped out because of the wrong GstSegments started becoming
a problem when this commit was introduced:
ab6e49e9cc audioparsers: add back segment clipping to parsers that have lost it
This clipping makes the "DASH shaka 35" test from the MVT tests[1] fail in
WebKitGTK/WPE (at least) and can potentially cause a number of other problems
in the WebKit Media Source Extensions (MSE) code.
Note that this new behaviour of not emitting new GstSegments only makes sense
when qtdemux is being used as a pure demuxer and not as part of a regular
pipeline. This is why the variant field has been added. When equal to
VARIANT_MSE_BYTESTREAM, it will make qtdemux behave differently in push mode,
taking decisions that meet the expectations for an MSE-like processing mode.
This kind of tweak has been done in the past for MSS streams, for instance.
That code has been refactored to use VARIANT_MSS_FRAGMENTED now, instead of
its own dedicated boolean flag.
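As a usage sketch (not part of the patch itself, and assuming the variant is set on qtdemux's
video/quicktime sink caps), an application feeding an MSE-style byte stream would request the
new behaviour like this:
```
#include <gst/gst.h>

/* Hedged sketch: caps that select the MSE-like processing mode described
 * above, e.g. on an appsrc feeding qtdemux. */
static GstCaps *
make_mse_bytestream_caps (void)
{
  return gst_caps_new_simple ("video/quicktime",
      "variant", G_TYPE_STRING, "mse-bytestream", NULL);
}
```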
Co-authored-by: Alicia Boya García <ntrrgc@gmail.com>
...who suggested using "variant: mse-bytestream" in the caps to identify that
mode, as proposed in her unmerged patch:
https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/issues/467
[1] https://github.com/rdkcentral/mvt
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3867>
All the RTP src pads were sharing the same stream-id while each actually
carries a different stream.
This was causing problems, for example when funneling the streams together
and then trying to split them using 'streamiddemux'.
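A minimal sketch of the idea (hypothetical helper, not the actual patch): derive each pad's
stream-id from a per-stream identifier instead of reusing one shared string.
```
#include <gst/gst.h>

/* Hypothetical helper: build a unique stream-id per RTP src pad from a
 * per-stream identifier (here a session/SSRC pair). */
static gchar *
make_unique_stream_id (GstElement * element, GstPad * srcpad,
    guint session, guint ssrc)
{
  gchar *upstream_part = g_strdup_printf ("%u/%u", session, ssrc);
  gchar *stream_id = gst_pad_create_stream_id (srcpad, element, upstream_part);

  g_free (upstream_part);
  return stream_id;
}
```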
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3855>
In theory, `dispose()` functions should be idempotent and should be
prepared not to crash or cause a double-free if an unref done from
inside causes a recursive call to `dispose()` on the same object.
https://developer.gnome.org/gobject/stable/howto-gobject-destruction.html
This patch modifies the `dispose()` method to honor these constraints.
Since the double `dispose()` call won't actually occur in qtdemux (there
is no cycle detection mechanism that could trigger it that way),
this is more of a code cleanup than a user-facing problem fix.
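For reference, the usual GObject pattern looks roughly like this (a generic sketch assuming the
usual G_DEFINE_TYPE boilerplate; MyDemux and its field are hypothetical, this is not the qtdemux
code):
```
static void
my_demux_dispose (GObject * object)
{
  MyDemux *self = MY_DEMUX (object);

  /* g_clear_object() NULLs the pointer before unreffing, so a recursive
   * dispose() triggered by that unref finds NULL and does nothing,
   * avoiding a crash or double-free. */
  g_clear_object (&self->some_child);

  G_OBJECT_CLASS (my_demux_parent_class)->dispose (object);
}
```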
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3822>
Deserialize socket control messages as GstSocketTimestampMessage only
if (level, type) is (SOL_SOCKET, SCM_TIMESTAMPNS).
Without this patch, messages with types SCM_RIGHTS or SCM_CREDENTIALS
could be deserialized as GstSocketTimestampMessage instead of
GUnixFDMessage or GUnixCredentialsMessage from gio.
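A sketch of the guard (hypothetical function name; the real check lives in the
GstSocketTimestampMessage class):
```
#include <gio/gio.h>
#include <sys/socket.h>

/* Hypothetical sketch of the deserialize check described above: only claim
 * ancillary data that really is (SOL_SOCKET, SCM_TIMESTAMPNS), otherwise
 * return NULL so gio can match GUnixFDMessage, GUnixCredentialsMessage, etc.
 * SCM_TIMESTAMPNS is Linux-specific. */
static GSocketControlMessage *
timestamp_message_deserialize (gint level, gint type, gsize size, gpointer data)
{
  (void) size;
  (void) data;

  if (level != SOL_SOCKET || type != SCM_TIMESTAMPNS)
    return NULL;

  /* ... construct and return the GstSocketTimestampMessage here ... */
  return NULL;
}
```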
Fixes #1736
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3777>
AVC-Intra is a range of H.264-compliant intra-only codecs from
Panasonic. The codes and descriptions have been taken from VLC.
The (encumbered) sample I have here produces byte-stream H.264,
including SPS and PPS and no `avcC` box.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3739>
Using the framerate numerator N as the timescale is recommended by various
specifications for such fractional framerates, while for integer framerates we
continue using centiframes to allow for some more accuracy.
Using N means that no rounding error accumulates that would eventually lead to
outputting a packet with a different duration.
Some tools such as MediaInfo determine that a stream is variable
framerate if any packet has a different duration than the others, and
there is no reason I can see for not using the full 4 bytes of
resolution that the mp4 timescale offers.
Example problematic pipeline:
```
videotestsrc num-buffers=5001 ! video/x-raw,framerate=60000/1001,width=320,height=240 ! \
videoconvert ! x264enc bitrate=80000 speed-preset=1 tune=zerolatency ! h264parse ! \
video/x-h264,profile=high-10 ! mp4mux ! filesink location="result2.mp4"
```
This results in a media file that MediaInfo detects as variable
framerate because the 5000th packet has duration 99 instead of 100.
With this patch, the timescale is 60000 and all packets have duration
1001.
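The arithmetic behind this, assuming a centiframe-style legacy timescale of 5994 for 59.94 fps
(the exact legacy value may differ):
```
#include <stdio.h>

/* A 1001/60000 s frame is exactly 1001 ticks at timescale 60000, but a
 * non-integer ~99.9999 ticks at an assumed centiframe-style timescale of
 * 5994, so integer durations have to drift between 99 and 100 over time. */
int
main (void)
{
  printf ("timescale 60000: %.4f ticks per frame\n", 60000.0 * 1001 / 60000);
  printf ("timescale  5994: %.4f ticks per frame\n", 5994.0 * 1001 / 60000);
  return 0;
}
```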
Related issue for context: https://bugzilla.gnome.org/show_bug.cgi?id=769041
Co-authored-by: Sebastian Dröge <sebastian@centricular.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3049>
This reverts the decision from
https://bugzilla.gnome.org/show_bug.cgi?id=754230
where it was decided to rather play it safe and only use the `tfdt` if
it is "significantly different" from the sum of sample durations.
As the specification says
If the time expressed in the track fragment decode time (‘tfdt’) box
exceeds the sum of the durations of the samples in the preceding
movie and movie fragments, then the duration of the last sample
preceding this track fragment is extended such that the sum now
equals the time given in this box.
we have to use the `tfdt` in general to allow for it to signal gaps in
the stream.
A muxer producing fragments might not yet know the full duration of the
last sample of a previous fragment if the next fragment starts with a
gap, and knowing the actual start of the next fragment would potentially
require violating latency requirements.
Additionally, the existence of `tfdt` makes it possible to avoid accumulating
rounding errors from summing up the durations.
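A sketch of the resulting behaviour (hypothetical names, not the actual qtdemux code):
```
#include <glib.h>

/* Hedged sketch: trust the tfdt whenever it is present instead of only when
 * it is "significantly different" from the summed sample durations, so that
 * gaps signalled by the muxer are honoured. */
static guint64
fragment_decode_time (gboolean have_tfdt, guint64 tfdt_time,
    guint64 summed_sample_durations)
{
  if (have_tfdt)
    return tfdt_time;           /* may exceed the sum: that is a gap */

  return summed_sample_durations;
}
```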
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3586>
If we keep the old events they can end up being passed to the app, which could
discard the protection information because it has been seen before.
Drive-by improvement: use g_queue_clear_full instead of foreach+clear for
protection events.
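The drive-by change boils down to something like this (the helper and queue name are
illustrative):
```
#include <gst/gst.h>

/* Hedged sketch: drop all pending protection events in one call instead of
 * iterating with foreach and then clearing the queue. */
static void
clear_pending_protection_events (GQueue * protection_event_queue)
{
  g_queue_clear_full (protection_event_queue,
      (GDestroyNotify) gst_event_unref);
}
```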
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3547>
Currently in rtp_session_send_rtp(), the existing ntp-64 RTP header
extension timestamp is updated with the actual NTP time before sending
the packet. However, there are some circumstances where we would like
to preserve the original timestamp obtained from reference timestamp
buffer metadata.
This commit provides the ability to configure whether or not to update
the ntp-64 header extension timestamp with the actual NTP time via the
update-ntp64-header-ext boolean property. The property is also exposed
via rtpbin. The default property value of TRUE preserves the existing
behavior (updating the ntp-64 header extension with the actual NTP time).
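Usage sketch: to keep the timestamp coming from the reference timestamp meta, turn the property
off on rtpbin (or on rtpsession directly):
```
#include <gst/gst.h>

/* Default is TRUE (rewrite the ntp-64 extension with the actual NTP time);
 * FALSE preserves the timestamp obtained from the reference timestamp
 * buffer meta. */
static void
preserve_reference_ntp_time (GstElement * rtpbin)
{
  g_object_set (rtpbin, "update-ntp64-header-ext", FALSE, NULL);
}
```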
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/1580
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3451>
Previously we tried to route an incoming RTCP FB FIR to the correct ssrc
using the "media source" component of the RTCP FB message. However,
according to RFC5104 (section 4.3.1.2) the "media source" SHALL be set
to 0. Instead the ssrc(s) in use are propagated via the FCI data. Now
a specific GstForceKeyUnit event is sent for every ssrc.
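Roughly (hypothetical helper; the element constructs the event itself):
```
#include <gst/gst.h>

/* Hypothetical sketch: send one upstream "GstForceKeyUnit" event per SSRC
 * found in the FIR's FCI entries. */
static void
request_key_unit_for_ssrc (GstPad * rtp_sink_pad, guint32 ssrc)
{
  GstStructure *s = gst_structure_new ("GstForceKeyUnit",
      "ssrc", G_TYPE_UINT, ssrc,
      "all-headers", G_TYPE_BOOLEAN, TRUE, NULL);

  gst_pad_push_event (rtp_sink_pad,
      gst_event_new_custom (GST_EVENT_CUSTOM_UPSTREAM, s));
}
```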
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3292>
RTP source statistics are tracked for local senders by
treating them as a receiver of their own outbound packets.
Accordingly, track the highest packet seqnum so that the
packets-lost calculation generates a sensible number instead
of always reporting -$number_of_packets_sent
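For context, a simplified version of the RFC 3550 style calculation that depends on this
(ignoring seqnum wrap handling):
```
#include <glib.h>

/* Simplified sketch of the packets-lost computation: without a tracked
 * highest seqnum, "expected" is effectively 0 and the result degrades to
 * -received, i.e. minus the number of packets the local sender sent. */
static gint32
packets_lost (guint32 highest_seqnum, guint32 base_seqnum, guint32 received)
{
  guint32 expected = highest_seqnum - base_seqnum + 1;

  return (gint32) expected - (gint32) received;
}
```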
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3454>
When getting a "404 Not Found" response from the DESCRIBE request, the
source produced a "No supported authentication protocol was found" error
instead of passing on the 404, which was confusing.
Only produce this error message when we're handling a response of "401
Unauthorized" without a compatible WWW-Authenticate header.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3414>
The only case where we definitely need to write a new trun is when the
data_offset value does not match the end of the list of entries.
Multiple trun atoms are only needed when interleaving multiple
streams together.
All other cases can be covered by adding more entries to the existing
trun atom.
Fixes playback of fragmented mp4 in ffplay and Chrome.
Using e.g. mp4mux fragment-duration=1000 fragment-mode=dash-or-mss
and
mp4mux fragment-duration=1000 fragment-mode=first-moov-then-finalise
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3426>
In order to figure out if the "raw" audio contained within the wav
container is actually DTS, wavparse calls the typefind helper,
but that means it runs all typefinders.
Since it only cares about checking for DTS, we should only run the
audio/x-dts typefinder (if present). Commit 858e516 did not really
fix things.
Use the new type helper with the caps to fix this.
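Sketch of the intent (the filter caps below are what the helper is assumed to receive; see
gsttypefindhelper.h for the actual helper API):
```
#include <gst/gst.h>

/* Hedged sketch: build the caps filter so that only typefinders able to
 * produce audio/x-dts are run (assumed to be passed to the new "with caps"
 * typefind helper mentioned above). */
static GstCaps *
dts_typefind_filter_caps (void)
{
  return gst_caps_new_empty_simple ("audio/x-dts");
}
```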
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3417>
Set seqnums on all events sent to the udpsrcs, and before
forwarding events out of rtspsrc set the latest seek seqnum on them if
any.
Also produce a consistent seqnum in rtspsrc from the very beginning.
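Roughly, the forwarding side does something like this (hypothetical helper; GST_SEQNUM_INVALID
stands for "no seek seen yet"):
```
#include <gst/gst.h>

/* Hedged sketch: stamp the latest seek seqnum (if any) onto an event before
 * it leaves rtspsrc, so downstream can correlate it with the seek that
 * caused it. */
static GstEvent *
stamp_seek_seqnum (GstEvent * event, guint32 seek_seqnum)
{
  if (seek_seqnum != GST_SEQNUM_INVALID) {
    event = gst_event_make_writable (event);
    gst_event_set_seqnum (event, seek_seqnum);
  }

  return event;
}
```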
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3409>
This is a small regression from commit f7abd81a.
When calling `gst_element_query()` no pad is associated with that query, but the
current code always forwards the query to the associated pad, which is NULL in
that case. This patch checks that a pad exists before forwarding the query.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3404>
In a few cases throughout qtdemux, the results of QT_UINT32 were being
stored in a signed integer, which could cause subtle bugs in the case of
an integer overflow, even allowing the result to equal a negative
number!
This patch prevents this by simply storing the results of this function
call properly in an unsigned integer type. Additionally, we fix up the
length checking in stsd parsing to prevent cases of child atoms
exceeding their parent atom sizes.
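For illustration (QT_UINT32 in qtdemux is essentially a big-endian 32-bit read;
GST_READ_UINT32_BE stands in for it here):
```
#include <gst/gst.h>

/* An atom size such as 0x90000000 stays positive in a guint32 but turns
 * negative when stored in a signed int, which can defeat later
 * "size <= parent_size" style checks. */
static void
signedness_example (const guint8 * data)
{
  guint32 size_ok = GST_READ_UINT32_BE (data);          /* correct */
  gint size_broken = (gint) GST_READ_UINT32_BE (data);  /* may be < 0 */

  (void) size_ok;
  (void) size_broken;
}
```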
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3344>
In order to figure out if the "raw" audio contained within the wav
container is actually DTS, right now we call the typefinder helper
which runs all typefinders.
Speed up this type finding process by specifying the extension.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3294>
We've seen occasional crashes in the `wavparse` module associated with
referencing a buffer in `gst_wavparse_chain` that's already been freed. The
reference is stolen when the buffer is transferred to the adapter with
`gst_adapter_push` and, IIUC, assuming the source doesn't hold a reference to
the buffer, the buffer could be freed during interaction with the adapter in
`gst_wavparse_stream_headers`.
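The general hazard looks like this (a generic sketch, not the actual wavparse fix):
```
#include <gst/gst.h>
#include <gst/base/gstadapter.h>

/* gst_adapter_push() takes ownership of the buffer, so any code that still
 * needs the buffer afterwards must hold its own reference taken before the
 * push. */
static void
push_and_keep_using (GstAdapter * adapter, GstBuffer * buf)
{
  gst_buffer_ref (buf);
  gst_adapter_push (adapter, buf);      /* adapter now owns one ref */

  /* ... buf is still safe to inspect here ... */

  gst_buffer_unref (buf);
}
```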
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3179>
Fix a crash in wavparse's push mode when the header of the .wav file is corrupted
in certain ways.
In the case that the provided test covers, the size of the `fmt ` chunk has
been changed from 16 bytes to 18 bytes (bytes 17-20 below):
```
$ hexdump -C corruptheadertestsrc.wav
00000000 52 49 46 46 e4 fd 00 00 57 41 56 45 66 6d 74 20 |RIFF....WAVEfmt |
00000010 12 00 00 00 01 00 01 00 80 3e 00 00 00 7d 00 00 |.........>...}..|
00000020 02 00 10 00 64 61 74 61 |....data|
00000028
```
(Note that the original file is much larger. This was the smallest sub-file
I could find that would generate the crash.)
Note that, while the same issue doesn't cause a crash in pull mode, there's a
different issue in that the file is processed successfully as if it was a .wav
file with zero samples.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3173>
This is a regression that was introduced in
cca2f555d1 (yes, 9 years ago).
The only place where a demuxer streaming thread should be stopped is when the
sinkpad is deactivated from pull mode (i.e. PAUSED->READY).
Attempting to stop the task in this function would cause this to happen when a
FLUSH_STOP or STREAM_START event is received... which can cause deadlocks.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3109>
If the SETUP request returns an IPv6 server address in the Transport
field, we would generate an incorrect URI, and multiudpsink would fail
to initialize:
```
rtspsrc gstrtspsrc.c:9780:dump_key_value:<source> key: 'Transport', value: 'RTP/AVP;unicast;source=fe80::dc27:25ff:fe5e:bd13:8080;client_port=62696-62697;server_port=4000-4001'
...
rtspsrc gstrtspsrc.c:4595:gst_rtspsrc_stream_configure_udp_sinks:<source> configure RTP UDP sink for fe80::dc27:25ff:fe5e:bd13:8080:4000
...
multiudpsink gstmultiudpsink.c:1229:gst_multiudpsink_configure_client:<udpsink0> error: Invalid address family (got 23)
```
We can't look at stream->is_ipv6 because we can't rely on the server
returning the right value there. In the issue reported about this,
the server reported itself as `KuP RTSP Server/0.1`, and the SDP was:
```
c=IN IP4
m=video 54608 RTP/AVP 96
a=rtpmap:96 H264/90000
```
So we need to parse the string value and figure out the family
ourselves.
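A sketch of that parsing (hypothetical helper, using GIO to classify the address):
```
#include <gio/gio.h>

/* Hedged sketch: derive the address family from the Transport header's
 * source address string instead of trusting stream->is_ipv6. */
static GSocketFamily
family_for_address (const gchar * addr_str)
{
  GInetAddress *addr = g_inet_address_new_from_string (addr_str);
  GSocketFamily family = G_SOCKET_FAMILY_IPV4;

  if (addr != NULL) {
    family = g_inet_address_get_family (addr);
    g_object_unref (addr);
  }

  return family;
}
```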
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/1058
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1819>
Timers for RTX packets are dealt with later in update_rtx_timers(), and
timers for non-RTX packets would potentially also be unscheduled a
second time from there, so avoid that.
Also don't shadow the timer variable from the outer scope but instead
make use of it directly.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2973>