Two RTP header extensions are relevant for rtprtxsend/receive:
1. "urn:ietf:params:rtp-hdrext:sdes:rtp-stream-id": will always be removed
2. "urn:ietf:params:rtp-hdrext:sdes:repaired-rtp-stream-id": will be written
instead of the "rtp-stream-id" header extension.
Currently this is a simple one-for-one replacement of the header extension;
a future change could add the relevant extension only based on some
heuristics (e.g. for video, only on one of the RTP buffers of a key
frame, or only until the rtx ssrc has been validated by the peer)
in order to reduce the required bandwidth.
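For illustration, this is roughly how the two extensions could show up in negotiated application/x-rtp caps; the extension id 10 and payload types are arbitrary examples, not values taken from this change:
    /* Caps on the rtprtxsend sink side carrying the rtp-stream-id extension
     * that will be stripped: */
    GstCaps *in_caps = gst_caps_from_string ("application/x-rtp, payload=(int)96, "
        "extmap-10=(string)urn:ietf:params:rtp-hdrext:sdes:rtp-stream-id");
    /* On retransmitted packets the same id then carries repaired-rtp-stream-id: */
    GstCaps *rtx_caps = gst_caps_from_string ("application/x-rtp, payload=(int)97, "
        "extmap-10=(string)urn:ietf:params:rtp-hdrext:sdes:repaired-rtp-stream-id");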
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1759>
When syncing to an RFC7273 clock, this will add the original
reconstructed reference clock timestamp to buffers in the form
of a GstReferenceTimestampMeta.
This is useful when we want to process or analyse data based
on the original timestamps untainted by any local adjustments,
for example reconstruct AES67 audio streams with sample accuracy.
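For example, a downstream sink or analysis element could read the meta back roughly like this; the buffer variable and the "timestamp/x-ntp" reference caps are assumptions, the actual reference caps depend on the clock in use:
    GstCaps *ref = gst_caps_from_string ("timestamp/x-ntp");
    GstReferenceTimestampMeta *meta =
        gst_buffer_get_reference_timestamp_meta (buffer, ref);
    if (meta != NULL)
      g_print ("original reference time: %" GST_TIME_FORMAT "\n",
          GST_TIME_ARGS (meta->timestamp));
    gst_caps_unref (ref);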
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1964>
Previously the result of the calculations included inaccuracies caused
by the NTP clock estimation, which caused the timestamps to jitter
+/- 1/clockrate.
By reorganizing the calculations it is possible to get rid of this
inaccuracy and calculate deterministic and exact packet timestamps based
on the actual NTP clock as long as the estimation is not off by more
than 2**31 clockrate units.
The only remaining inaccuracy that is introduced now is caused by the
conversion from the NTP clock to the pipeline clock.
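Conceptually the exact timestamp is obtained by mapping the sender's NTP time into clock-rate units and only using the estimation to resolve the 32-bit wraparound; a rough sketch of that idea follows (not the element's actual code, all names are illustrative):
    /* Sender time in RTP clock units, computed directly from the NTP time. */
    guint64 ntp_units = gst_util_uint64_scale (ntp_time, clock_rate, GST_SECOND);
    /* Use the (possibly slightly off) estimation only to pick the right
     * 2^32 period; exact as long as the error stays below 2^31 units. */
    guint64 exact_units = estimated_units +
        (gint64) (gint32) ((guint32) ntp_units - (guint32) estimated_units);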
Also split up debug output, demote many messages to the trace debug
level and output more intermediate results.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1955>
This is difficult to encounter on ordinary networks, but shows up
when using tc-netem to add random delays to packets, and
also when a UDP stream is bonded over multiple links with varying
characteristics.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1952>
Gapless playback is handled by adjusting buffer timestamps & durations
and by adding GstAudioClippingMeta.
Support for "Frankenstein" streams (= poorly stitched together streams)
is also added, so that gapless playback support doesn't prevent those
from being properly played.
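For reference, the clipping information is attached with the existing GstAudioClippingMeta API; a minimal sketch, where the buffer variable and the sample counts are made up:
    /* Tell downstream to clip 576 priming samples at the start and
     * 288 padding samples at the end of this buffer. */
    gst_buffer_add_audio_clipping_meta (buffer, GST_FORMAT_DEFAULT, 576, 288);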
Co-authored-by: Sebastian Dröge <sebastian@centricular.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1028>
This patch fixes a segfault in gst_structure_new() that comes with warnings like the ones below.
GLib-GObject-WARNING **:
../gobject/gtype.c:4330: type id '0' is invalid
GLib-GObject-WARNING **:
can't peek value table for type '<invalid>' which is not currently referenced
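A minimal sketch of the guard that avoids the crash; the lookup helper, field name and value are hypothetical, not from the actual fix:
    /* Handing a GType of 0 to gst_structure_new() produces exactly the
     * warnings above and can segfault, so validate the type first. */
    GType t = my_lookup_type (name);   /* hypothetical helper that can fail */
    if (t == G_TYPE_INVALID)
      return NULL;                     /* bail out instead of crashing */
    return gst_structure_new ("example", "field", t, my_value, NULL);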
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1918>
The point here is that rtpsession creates a new rtpsource whenever
the "rtx-ssrc" field is present. When not doing rtx, that means
a random ssrc creates a new rtpsource that then gets included in the RTCP
messages for the current session.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1882>
When doing only a single audio/video stream this hardly matters,
but when doing many at the same time, having to take
the GLib global type-system lock every time a buffer is processed
puts a limit on how many streams can be processed in
parallel.
Luckily the fix is very simple: do a plain cast rather than a full
type-check.
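As an illustration of the pattern (the exact macros touched by this change may differ), GStreamer offers both variants:
    /* Checked cast: goes through the GType machinery and can end up taking
     * the GLib type-system lock on every call. */
    GstElement *element = GST_ELEMENT (object);
    /* Plain cast: no check, no locking; fine on hot per-buffer paths where
     * the type is already known. */
    GstElement *element2 = GST_ELEMENT_CAST (object);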
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1873>
Not having this field is equivalent to it being 1/1, so treat
it like that. The generic caps functions are not aware of these
semantics and would consider the caps different, causing a negotiation
failure when caps change from caps with the field to caps without it, or
the other way around.
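A sketch of the equivalence handling (not the actual element code): default the field before comparing, so presence or absence alone doesn't make the caps differ. The field is not named in this message; pixel-aspect-ratio, which defaults to 1/1, is used here as a plausible example, and caps is assumed to be a writable copy:
    GstStructure *s = gst_caps_get_structure (caps, 0);
    if (!gst_structure_has_field (s, "pixel-aspect-ratio"))
      gst_structure_set (s, "pixel-aspect-ratio", GST_TYPE_FRACTION, 1, 1, NULL);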
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1826>
Not having these fields is equivalent to them being mono/0, so treat
them like that. The generic caps functions are not aware of these
semantics and would consider the caps different, causing a negotiation
failure when caps change from caps with the fields to caps without them, or
the other way around.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1826>
If the Tracks element was parsed from the SeekEntry, don't
parse it a second time and recreate the tracks, as this
loses any tags that were read using the seek table.
If a genuinely new Tracks element is found, do read it,
as it is needed for MSE support.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1798>
The LOAD macro relies on m7 being zero for interleaving purposes. Using LOAD
on the m7 register makes it interleave with its new content instead of
with 0.
The effect of this bug was bobbing on some static lines that appeared
over fast-moving content.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1816>
This allows downstream of a payloader to know the RTP header's marker
flag without first having to map the buffer and parse the RTP header.
Especially inside RTP header extension implementations this can be
useful to decide which packet corresponds to e.g. the last packet of a
video frame.
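Downstream code can then check the flag without mapping the packet; this assumes the marker is reflected via the core GST_BUFFER_FLAG_MARKER buffer flag:
    if (GST_BUFFER_FLAG_IS_SET (buf, GST_BUFFER_FLAG_MARKER)) {
      /* e.g. treat this as the last packet of the current video frame */
    }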
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1776>
The RTP payload seems to be required as it carries the frame count
information. Also, gst_rtp_base_payload_allocate_output_buffer was
being called with an incorrect second argument.
Strangely, some devices like the Shanling MP4 and Sony XM3 would still
work without this, while some like the Sony XM4 do not.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1797>
Each stream may have its own segment timeline
(e.g., a different segment.start or segment.base)
depending on the edit-list and composition-to-decode atoms.
Make sure to check whether the time position of a stream is actually
far behind that of the current target stream.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1352>
Constantly updating the ts_offset results in audible glitches
when streaming audio with ntp-sync=true. Requiring a minimum
offset before updating ts_offset mitigates this. Add a
parameter which can be used to set min_ts_offset in ntp-sync mode.
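A sketch of how an application might use it; the "min-ts-offset" property name on rtpbin and the threshold value are assumptions:
    /* Only adjust ts-offset once the required change exceeds the threshold,
     * avoiding constant small corrections that cause audible glitches. */
    g_object_set (rtpbin, "min-ts-offset", 4 * GST_MSECOND, NULL);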
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1409>
When multiple streams are bundled together, there may be more
than one red payload type to handle.
In addition, as the red decoder works by filling in gaps in
the seqnums, there needs to be one rtp_history queue per sequence
domain.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1429>
In redenc, when input buffers have a header for the TWCC extension,
we now add one to our wrapper buffers.
In ulpfecenc we add one in that case to our protection buffers.
This makes TWCC functional when UlpRed is used in webrtcbin.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1414>
In order to allow "level-asymmetry-allowed" we now handle a new
"profile" field, which has the same semantics as the "profile" field for
H.264 streams, so that we can force the payloaded stream to have the right
format when using `gst_sdp_media_get_caps_from_media` to set a caps
filter after the payloader. This allows simple negotiation in standard
SDP-based RTP negotiation (like WebRTC) for that particular case,
closely respecting the specs.
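A minimal sketch of that usage; the media, pt and capsfilter variables are assumed to come from the application's SDP handling:
    /* Derive a caps filter from the SDP so the payloader is forced to the
     * profile the peer expects. */
    GstCaps *caps = gst_sdp_media_get_caps_from_media (media, pt);
    g_object_set (capsfilter, "caps", caps, NULL);
    gst_caps_unref (caps);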
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1410>
Both `gst_caps_replace()` and `gst_pad_set_caps()` take a reference to the caps and neither of them takes ownership, so the caps must be unreffed in `gst_multi_file_src_set_property()`.
To test the leak (on Unix): `echo coucou > /tmp/file.txt && GST_TRACERS=leaks GST_DEBUG="GST_TRACER:7" gst-launch-1.0 multifilesrc location=/tmp/file.txt caps='txt' ! fakesink`
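A sketch of the corrected ownership handling in the property setter; this is illustrative, not the exact code, and the caps member name is assumed:
    GstCaps *new_caps = gst_caps_copy (gst_value_get_caps (value));
    gst_caps_replace (&multifilesrc->caps, new_caps);               /* takes its own ref */
    gst_pad_set_caps (GST_BASE_SRC_PAD (multifilesrc), new_caps);   /* refs the caps too */
    gst_caps_unref (new_caps);                                      /* drop our reference */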
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1436>
Follow-up on 97d83056b3: only check
for intersection with the current srccaps when checking if a sinkpad
can accept caps.
I must have been lucky in my Firefox testing, then, and always entered
the code path with audio getting negotiated first, thus not failing
the is_subset check when the srccaps had been negotiated as
application/x-rtp and an accept-caps query was made for the video
caps with a defined extmap.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1384>
A previous patch caused rtpfunnel to output twcc-related
information downstream; however, this leaked into upstream
negotiation (through funnel->srccaps), causing payloaders to
negotiate twcc caps even when not prompted to do so by the user.
Fix this by only enforcing that upstream sends us application/x-rtp
caps, as was the case originally.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1278>
Negative composition time offsets are only allowed with version 1 of the
box; however, we also parse the value as signed for version 0 boxes, as
unfortunately such files exist in the wild and it's unlikely to have
(valid) huge composition offsets.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1294>
g_sequence_remove_range's end iter is exclusive, so if one
wants to remove that item as well, it should be called with
the next iter.
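Illustration of the off-by-one, with illustrative iterator names:
    /* Removes [begin, iter) only: */
    g_sequence_remove_range (begin, iter);
    /* To also remove the item at 'iter', pass the next iterator: */
    g_sequence_remove_range (begin, g_sequence_iter_next (iter));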
This could in theory fix an issue where:
* The sequence isn't entirely trimmed, with an old item lingering
* Following FEC packets are immediately discarded because they
arrived later than corresponding media packets, long enough for
seqnums to wrap around
* We now try to reconstruct a media packet with a completely obsolete
FEC packet, chaos ensues.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1341>
The pipeline flow for receiving looks like this:
rtpsession ! rtpssrcdemux ! session_fec_decoder ! rtpjitterbuffer ! \
rtpptdemux ! stream_fec_decoder ! ...
There are two places where a FEC decoder can be placed:
1. As requested from the 'request-fec-decoder' signal: after rtpptdemux,
for each ssrc/pt produced.
2. After rtpssrcdemux but before rtpjitterbuffer: added for the
rtpst2022-1-fecenc/dec elements.
However, there was some cross-contamination of the elements involved: the
request-fec-decoder signal was also being used to request the
session_fec_decoder, which would then be cached and
re-used for subsequent FEC decoder requests. As a result, the same
element would be linked to multiple elements in different
places in the pipeline. This would fail and cause all kinds of havoc,
usually resulting in a not-linked error being returned upstream and an
error message being posted by the source.
Fix this by not using the request-fec-decoder signal to request the
session_fec_decoder and instead relying solely on the added properties for
that case.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1300>
WebVTT in ISO MP4 is specified in ISO 14496-30
and is needed for DASH support. It's stored in an
MP4-specific format. To handle it compatibly,
the wvtt boxes are converted back into WebVTT text
and pushed as application/x-subtitle-vtt.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1182>
Swap the `need_process` boolean check on qtdemux streams
for a direct function pointer to the splitting function,
so we can stop adding extra cases to the single growing
`gst_qtdemux_process_buffer()` function.
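A generic illustration of the pattern (not qtdemux's actual types or signature): instead of a boolean plus a central switch, the per-stream struct carries the processing function directly.
    typedef struct _Stream Stream;
    typedef int (*ProcessFunc) (Stream * stream, void * buf);
    struct _Stream {
      ProcessFunc process_func;   /* NULL if no post-processing is needed */
    };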
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1182>
Instead of assuming that the PTS of a keyframe is the lowest PTS of a
GOP, wait until the DTS has passed this PTS and take the minimum PTS up
to that point. That way the minimum PTS of a GOP can be determined, at
least for closed GOP streams. Open GOP streams still can't be handled
properly.
By knowing the minimum PTS of each GOP, keyframes can be requested at
the correct time relative to the GOP (and thus fragment) start and
fragment overflow calculations can calculate the correct durations of
the GOPs.
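Conceptually the tracking amounts to something like this; the names are illustrative, not the element's actual code:
    /* Keep refining the GOP's minimum PTS until the DTS has passed the
     * keyframe's PTS; for closed GOPs the minimum is final at that point. */
    if (!gop->min_pts_final) {
      gop->min_pts = MIN (gop->min_pts, GST_BUFFER_PTS (buf));
      if (GST_BUFFER_DTS (buf) >= gop->keyframe_pts)
        gop->min_pts_final = TRUE;
    }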
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1005>
Since the base class now does the parsing, there is no need
to reproduce that code in all the subclasses; just pass the attributes,
which are the only relevant bit anyway.
Also, only store the direction if the subclass accepted the caps.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/906>