This is needed for the case where you don't know in advance all the sessions
you will be using, but would like to place all the related AUX elements
in the same GstBin. In the current implementation, each time a sender
AUX bin is requested and returned, RTPBin will walk the src pads and
create sessions for these pads.
Previously, if a src pad already had a session, RTPBin returned an error
and stopped. As a side effect, if an AUX bin was reused in a following
AUX bin request, it could only work if the pads were created on the last
request.
This change simply relaxes the restriction in order to keep walking, and
just ensures that all newly created pads have a session.
In commit 28e5f9098 (rtpbin: use PacketInfo for the sender, 2013-09-13)
the rtp_source_send_rtp signature changed but the documentation was not
adjusted to match the new one.
Update the documentation to match the function signature.
Some functions now accept a generic 'gpointer data' parameter because
they can work either on a single buffer or a buffer list.
However the comments were still referring to the old 'GstBuffer *buffer'
parameter, so update the comments to match the actual function
signatures.
So far we assumed that if all sources have sent BYE, we needed to
send an EOS on the RTCP sink. The problem is that this case may happen
if we only had one internal source and it detected a collision.
So now we limit the EOS forwarding to when there is a send_rtp_sink pad
and that pad has received EOS. We don't check the recv_rtp_sink
since the code does not wait for the BYE to be sent before sending EOS
to the RTCP src pad.
This is useful when implementing a custom retransmission mechanism like
RIST, to prevent RTCP from being produced for the retransmitted SSRC.
It can also be used in general for various purposes when customizing
an RTP base pipeline.
If it goes over 2^15 packets, it will think it has rolled over
and start dropping all packets, so make sure the seqnum distance is not
too big. But let's not limit it to a number that is too small, to avoid
emptying the buffer needlessly if there is a spurious huge sequence
number; allow at least 10k packets in any case.
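A minimal sketch of the capping described above; the helper and its names
are illustrative, not the actual patch:

    #include <glib.h>

    /* 16-bit seqnum distance with natural wraparound; distances of
     * 2^15 or more would be mistaken for a rollover */
    static gboolean
    seqnum_gap_acceptable (guint16 head_seqnum, guint16 new_seqnum,
        guint configured_limit)
    {
      guint16 gap = new_seqnum - head_seqnum;

      /* allow at least 10k packets, but stay below 2^15 */
      guint max_gap = MIN (MAX (configured_limit, 10000), 0x7fff);

      return gap <= max_gap;
    }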
This reverts commit dcd3ce9751.
This functionality was implemented for gstopenwebrtc, but it
turned out this was not actually needed for webrtc bundling
support, as shown in webrtcbin. It also doesn't correspond
to any standards.
This is an API break, but nothing should actually depend on
this, at least not for its initial purpose.
Changes in rtpbin.c were reverted manually, to preserve some
refactoring that had occurred in the original commit.
Fixes #537
When the EOS event is received, run all timers immediately and avoid
pushing the EOS downstream before this has been run. This ensures that
the lost packet statistics are accurate.
After EOS is received, it is pointless to wait for further events,
especially waiting on timers. This patch fixes two cases where we could
wait instead of returning GST_FLOW_EOS, and triggers a spin of the loop
function when EOS is queued, regardless of whether this EOS is the queue
head or not.
This is an extra internal recursive lock used to avoid having to take
both sink pads' stream locks all the time. This patch renames it
INTERNAL_STREAM_LOCK/UNLOCK() to avoid confusion with possible upstream
GST_PAD API.
This reverts "6f3734c305 rtpssrcdemux: Only forward stick events while
holding the sinkpad stream lock" and actually hold on the internal
stream lock. This prevents in some needed case having a second
streaming thread poping in and messing up event ordering.
While forwarding serialized events, we use the gst_pad_forward()
function. In the forward callback (GstPadForwardFunction) we always
return TRUE. Returning TRUE there stops the dispatching procedure. As a
side effect, only one pad receives the events. This breaks
sending EOS from the application, and it also breaks the latency
tracer.
Reset RTPSession when rtpsession changes state from PAUSED to READY.
Without this change, a stored last_rtptime in RTPSource could interfere
with RTP timestamp generation in RTCP Sender Report.
Fixes #510
Always delay starting the RTCP thread until either an RTP or RTCP
packet is sent or received. Special handling is needed to make sure the
RTCP thread is started when requesting an early RTCP packet.
We want to delay starting the RTCP thread until it's needed, in order
not to send RTCP packets for an inactive source.
https://bugzilla.gnome.org/show_bug.cgi?id=795139
The previous code copied the GstStructure twice: the first time inside
gst_value_set_structure and the second time in g_value_array_append.
The optimized version makes no copies and just transfers ownership to
the GValueArray. It takes advantage of the fact that the array already
has enough elements preallocated and the memory is zero-initialized.
https://bugzilla.gnome.org/show_bug.cgi?id=795139
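A minimal sketch of the zero-copy append described above (variable names
are illustrative):

    #include <gst/gst.h>

    /* append 'stats' to 'arr' without copying it; the array already
     * has zero-initialized elements preallocated */
    static void
    append_stats (GValueArray * arr, GstStructure * stats)
    {
      GValue *val;

      g_value_array_append (arr, NULL);   /* appends a zeroed GValue */
      val = g_value_array_get_nth (arr, arr->n_values - 1);
      g_value_init (val, GST_TYPE_STRUCTURE);
      g_value_take_boxed (val, stats);    /* takes ownership, no copy */
    }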
If obtain_internal_source() returns a source that is not internal, it
means there exists a non-internal source with the same ssrc. Such an
ssrc collision should be handled by sending a GstRTPCollision event
upstream and choosing a new ssrc, but for now we simply drop the packet.
Trying to process the packet further would cause it to be pushed
upstream (!) since the source is not internal (see source_push_rtp()).
https://bugzilla.gnome.org/show_bug.cgi?id=795139
If there is an external source which is about to time out and be removed
from the source hashtable, and we receive a feedback RTCP packet with
the media ssrc of that source, we unlock the session in
rtp_session_process_feedback before emitting the 'on-feedback-rtcp'
signal, allowing the RTCP timer to kick in and grab the lock. It will
get rid of the source, and rtp_session_process_feedback will be left
with an RTPSource whose ref count is 0.
The fix is to grab a ref to the RTPSource object in
rtp_session_process_feedback.
https://bugzilla.gnome.org/show_bug.cgi?id=795139
These are the sources we send from, so there is no reason to
report receive statistics for them (as we do not receive on them,
and the remote side has no knowledge of them).
https://bugzilla.gnome.org/show_bug.cgi?id=795139
The code responsible for creating retransmitted buffers
assumed the stored buffer had been created with
rtp_buffer_new_allocate when copying the extension data,
which isn't necessarily the case, for example when
the RTP buffers come from a udpsrc.
https://bugzilla.gnome.org/show_bug.cgi?id=794958
Similar to the get-session and get-internal-session signals,
we expose a get-storage signal in addition to the
get-internal-storage signal to give access to the actual
element for applications that need to set properties on the
element, in particular "size-time".
https://bugzilla.gnome.org/show_bug.cgi?id=794910
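A minimal sketch of using the new signal from an application; the
session id and the size value are illustrative, and rtpbin is assumed to
be the GstRtpBin instance:

    GstElement *storage = NULL;

    g_signal_emit_by_name (rtpbin, "get-storage", 0, &storage);
    if (storage != NULL) {
      /* e.g. keep 250ms worth of packets around for FEC recovery */
      g_object_set (storage, "size-time",
          (guint64) (250 * GST_MSECOND), NULL);
      gst_object_unref (storage); /* the signal returns a full reference */
    }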
Packets with these payload types will be dropped. A use case
for this is FEC, where we want FEC packets to go through the
jitterbuffer, but not be output by rtpbin.
https://bugzilla.gnome.org/show_bug.cgi?id=792696
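A sketch of the intended usage; the element and the property name here
are assumptions, not confirmed by the commit text itself:

    /* keep FEC payload type 96 from being output (assumed property
     * name "ignored-payload-types" on the payload-type demuxer) */
    gst_util_set_object_arg (G_OBJECT (rtpptdemux),
        "ignored-payload-types", "<96>");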
When the signal returns a floating reference, as its return type
is transfer full, we need to sink it ourselves before passing
it to gst_bin_add (which is transfer floating).
This allows us to unref it in bin_remove_element later on, and
thus to also release the reference we now own if the signal
returns a non-floating reference as well.
As we now still hold a reference to the element when removing it,
we also need to lock its state and set it to NULL before
unreffing it.
Also update the request_aux_sender test.
https://bugzilla.gnome.org/show_bug.cgi?id=792543
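A minimal sketch of the reference handling described above, outside of
rtpbin's actual code; get_aux_sender_somehow() and the bin variable are
hypothetical:

    GstElement *aux = get_aux_sender_somehow (); /* (transfer full),
                                                  * may be floating */

    gst_object_ref_sink (aux);    /* now we own exactly one real ref */
    gst_bin_add (bin, aux);       /* transfer-floating: bin takes its
                                   * own ref */

    /* ... later, on removal, we still hold our own ref ... */
    gst_element_set_locked_state (aux, TRUE);
    gst_element_set_state (aux, GST_STATE_NULL);
    gst_bin_remove (bin, aux);
    gst_object_unref (aux);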
When an XR packet is detected, the warning message leads to
misunderstandings. Until RFC3611 is implemented in gst-plugins-base,
the level needs to be downgraded to avoid confusion.
https://bugzilla.gnome.org/show_bug.cgi?id=789746
Doesn't do anything fancy yet, but still avoids lots of
unnecessary locking/unlocking that would happen if the
default chain_list fallback function in GstPad got invoked.
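A minimal sketch of such a chain_list implementation; the function names
are illustrative, not the actual patch:

    static GstFlowReturn my_chain (GstPad * pad, GstObject * parent,
        GstBuffer * buf);

    /* handle the whole list in one call instead of letting GstPad
     * fall back to locking/unlocking around every single buffer */
    static GstFlowReturn
    my_chain_list (GstPad * pad, GstObject * parent, GstBufferList * list)
    {
      GstFlowReturn ret = GST_FLOW_OK;
      guint i, len = gst_buffer_list_length (list);

      for (i = 0; i < len && ret == GST_FLOW_OK; i++) {
        /* chain functions take ownership, so take an extra ref */
        ret = my_chain (pad, parent,
            gst_buffer_ref (gst_buffer_list_get (list, i)));
      }
      gst_buffer_list_unref (list);
      return ret;
    }

    /* registered with:
     *   gst_pad_set_chain_list_function (sinkpad, my_chain_list); */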
Timestamp offsets need to be checked to detect unrealistic values
caused for example by NTP clocks not in sync. The new parameter
max-ts-offset lets the user decide an upper offset limit. There
are two different cases for checking the offset, based on whether
ntp-sync is used or not:
1) ntp-sync enabled
Only negative offsets are allowed, since a positive offset would
mean that the sender and receiver clocks are not in sync.
Default value of max-ts-offset = 0 (disabled)
2) ntp-sync disabled
Both positive and negative offsets are allowed.
Default value of max-ts-offset = 3000000000
The reason for the different default values is to be backwards
compatible.
https://bugzilla.gnome.org/show_bug.cgi?id=785733
Instant large changes to ts_offset may cause timestamps to move
backwards and also cause visible effects in media playback. The new
option max-ts-offset-adjustment lets the application control the rate at
which changes are applied to ts_offset.
https://bugzilla.gnome.org/show_bug.cgi?id=784002
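A minimal sketch of configuring the two properties described above
together; the values are illustrative:

    GstElement *rtpbin = gst_element_factory_make ("rtpbin", NULL);

    /* without ntp-sync: tolerate at most 3s of inter-stream offset
     * and slew ts_offset by at most 10ms per update instead of
     * jumping */
    g_object_set (rtpbin,
        "max-ts-offset", (gint64) (3 * GST_SECOND),
        "max-ts-offset-adjustment", (guint64) (10 * GST_MSECOND),
        NULL);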
* use INFO/DEBUG/LOG/TRACE equally and meaningfully;
previously rtprtxsend:LOG and rtprtxreceive:LOG would generate
a totally different amount of log traffic and sometimes it was
impossible to see the information you wanted without useless
spam being printed around
* improve the wording, give a reasonable and self-explanatory
amount of information
* print SSRCs in hex
* avoid G_FOO_FORMAT for readability (we are just printing integers)
If one requests the send_rtcp_src_%u pad before a recv_rtcp_sink_%u pad,
the session/pad would never be created and NULL was returned.
Switching the request order would work.
https://bugzilla.gnome.org/show_bug.cgi?id=786718
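A minimal sketch of the request order that used to fail (pad template
name as in rtpbin):

    /* previously this returned NULL unless recv_rtcp_sink_0 had been
     * requested first */
    GstPad *rtcp_src =
        gst_element_get_request_pad (rtpbin, "send_rtcp_src_0");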
Callers of the API (rtpsource, rtpjitterbuffer) pass clock_rate
as a signed integer, and the comparison "<= 0" is used against
it, leading me to think the intention was to have the field
be typed as gint32, not guint32.
This led to situations where we could call scale_int with
a MAX_UINT32 (-1) guint32 as the denom, thus raising an
assertion.
https://bugzilla.gnome.org/show_bug.cgi?id=785991
gst_util_uint64_scale_int takes a gint as its denom parameter
whereas ctx->clock_rate is a guint32.
The problem shows up when gst_rtp_packet_rate_ctx_reset sets clock_rate
to -1.
So just define clock_rate as gint, as is done in rtpsource.h.
When set, this property allows the jitterbuffer to start delivering
packets as soon as the N most recent packets have consecutive seqnums. A
faststart-min-packets of zero disables this feature. This heuristic is
also used in rtpsource, which implements the probation mechanism, and a
similar heuristic is used to handle long gaps.
https://bugzilla.gnome.org/show_bug.cgi?id=769536
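A minimal usage sketch (the value is illustrative):

    /* start delivering as soon as the 4 most recent packets have
     * consecutive seqnums */
    g_object_set (jitterbuffer, "faststart-min-packets", 4, NULL);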
This debug statement is meant to print the time since the last (early)
RTCP transmission, not the last regular RTCP transmission (which also
happens to be set a few lines above to current_time, so the debug output
is just confusing).
In the function rtp_jitter_buffer_calculate_pts: if the gap in incoming
RTP timestamps is more than (3 * jbuf->clock_rate), we call
rtp_jitter_buffer_reset_skew, which resets pts to 0. So components down
the pipeline (players, mixers) just skip frames/samples until pts
becomes equal to the pts before the gap.
In version 1.10.2 and before, this check was bypassed for packets with
"estimated dts", and gaps were handled correctly.
https://bugzilla.gnome.org/show_bug.cgi?id=778341
When providing items with a seqnum, there is a (very small) probability
that an element with the same seqnum already exists. Don't forget
to free that item if it wasn't inserted.
Also avoid returning undefined values when dealing with duplicate items.
If an element queries the number of retransmission buffers pushed
*while* the push is still taking place (and before the object lock
is taken just after), it would end up with the wrong statistic
being reported.
Increment it just before the push instead; this avoids races when
getting statistics.
https://bugzilla.gnome.org/show_bug.cgi?id=768723
A new signal named on-bundled-ssrc is provided and can be
used by the application to redirect a stream to a different
GstRtpSession or to keep the RTX stream grouped within the
GstRtpSession of the same media type.
https://bugzilla.gnome.org/show_bug.cgi?id=772740
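A minimal sketch of how an application was meant to use this signal,
assuming the handler returns the target session id (note that, as
recorded earlier in this log, the API was later reverted):

    /* route every bundled SSRC to session 0; a real handler could
     * inspect the ssrc and pick different sessions */
    static guint
    on_bundled_ssrc_cb (GstElement * rtpbin, guint ssrc, gpointer data)
    {
      return 0;
    }

    g_signal_connect (rtpbin, "on-bundled-ssrc",
        G_CALLBACK (on_bundled_ssrc_cb), NULL);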
When doing rtx, the jitterbuffer will always add an rtx-timer for the next
sequence number.
In the case of the packet corresponding to that sequence number arriving,
that same timer will be reused, and simply moved on to wait for the
following sequence number etc.
Once an rtx-timer expires (after all retries), it will be rescheduled as
a lost-timer instead for the same sequence number.
Now, if this particular sequence number arrives (after the timer has
become a lost-timer), the reuse mechanism *should* now set a new
rtx-timer for the next sequence number, but the bug is that it does
not change the timer-type, and hence schedules a lost-timer for that
following sequence number, with the result that you will have a very
early lost-event for a packet that might still arrive, and you will
never be able to send any rtx for this packet.
Found by Erlend Graff - erlend@pexip.com
https://bugzilla.gnome.org/show_bug.cgi?id=773891
The lost-event was using a different time-domain (dts) than the outgoing
buffers (pts). Given certain network-conditions these two would become
sufficiently different and the lost-event contained timestamp/duration
that was really wrong. As an example GstAudioDecoder could produce
a stream that jumps back and forth in time after receiving a lost-event.
The previous behavior calculated the pts (based on the rtptime) inside the
rtp_jitter_buffer_insert function, but now this functionality has been
refactored into a new function rtp_jitter_buffer_calculate_pts that is
called much earlier in the _chain function to make pts available to
various calculations that wrongly used dts previously
(like the lost-event).
There are however two calculations where using dts is the right thing to
do: calculating the receive-jitter and the rtx-round-trip-time, where the
arrival time of the buffer from the network is the right metric
(and is what dts in fact is today).
The patch also adds two tests regarding B-frames or the
“rtptime-going-backwards”-scenario, as there were some concerns that this
patch might break this behavior (which the tests show it does not).
The new timeout is always going to be (timeout + delay), however, the
old behavior compared the current timeout to just (timeout), basically
being (delay) off.
This would happen if rtx-delay == rtx-retry-timeout, with the result that
a second rtx attempt for any buffers would be scheduled immediately instead
of after rtx-delay ms.
Simply calculate (new_timeout = timeout + delay) and then use that instead.
https://bugzilla.gnome.org/show_bug.cgi?id=773905
Instead of sending EOS when a source byes we have to wait for
all the sources to be gone, which means they already sent BYE and
were removed from the session. We now handle the EOS in the rtcp
loop checking the amount of sources in the session.
https://bugzilla.gnome.org/show_bug.cgi?id=773218
The basic idea is this:
1. For *larger* rtx-rtt, weigh a new measurement as before
2. For *smaller* rtx-rtt, be a bit more conservative and weigh a bit less
3. For very large measurements, consider them "outliers"
and count them a lot less
The idea being that reducing the rtx-rtt is much more harmful than
increasing it, since we don't want to underestimate the rtt of the
network, and when using this number to estimate the latency you need for
your jitterbuffer, you would rather want it to be a bit larger than a
bit smaller and potentially lose rtx-packets. The "outlier-detector" is
there to prevent a single skewed measurement from affecting the outcome
too much. On wireless networks, these are surprisingly common.
https://bugzilla.gnome.org/show_bug.cgi?id=769768
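A sketch of the kind of asymmetric weighting described above; the
weights and the outlier threshold are illustrative, not the values from
the patch:

    #include <glib.h>

    static guint64
    update_rtx_rtt (guint64 avg, guint64 measurement)
    {
      if (avg == 0)
        return measurement;
      if (measurement > 4 * avg)              /* outlier: count a lot less */
        return (31 * avg + measurement) / 32;
      if (measurement > avg)                  /* larger: weigh as before */
        return (7 * avg + measurement) / 8;
      return (15 * avg + measurement) / 16;   /* smaller: be conservative */
    }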
Assuming equidistant packet spacing when that's not true leads to more
loss than necessary in the case of reordering and jitter. Typically this
is true for video where one frame often consists of multiple packets
with the same rtp timestamp. In this case it's better to assume that the
missing packets have the same timestamp as the last received packet, so
that the scheduled lost timer does not time out too early causing the
packets to be considered lost even though they may arrive in time.
https://bugzilla.gnome.org/show_bug.cgi?id=769768
There is no need to schedule another EXPECTED timer if we're already
past the retry period. Under normal operation this won't happen, but if
there are more timers than the jitterbuffer is able to process in
real-time, scheduling more timers will just make the situation worse.
Instead, consider this packet as lost and move on. This scenario can
occur with high loss rate, low rtt and high configured latency.
https://bugzilla.gnome.org/show_bug.cgi?id=769768
This patch fixes an issue with the estimated gap duration when there is
a gap immediately after a lost timer has been processed. Previously
there was a discrepancy between the gap in seqnum and the gap in dts
which would cause a wrongly calculated duration. The issue would only be
seen with retransmission enabled, since when it's disabled lost timers
are only created when a packet is received and the actual gap length and
last dts are known.
https://bugzilla.gnome.org/show_bug.cgi?id=769768
Stats should also be collected for unsuccessful packets.
rtx-rtt is very important for determining the necessary configured
latency on the jitterbuffer. It's especially important to be able to
increase the latency when retransmitted packets arrive too late and are
considered lost. This patch includes these late packets in the
calculation of the various rtx stats, making them more correct and
useful.
Also in the case where the original packet arrives after a NACK is sent,
the received RTX packet should update the stats since it provides useful
information about RTT.
The RTT is updated if and only if all requested retransmissions are
received. That way the RTT is guaranteed to make sense. If not, we don't
know which request the packet is a response to and the RTT may be bogus.
A consequence of this patch is that the RTT is not updated for a request
when one of the RTX packets for that seqnum is lost, but in return the
measured RTT will be more accurate.
The implementation stores the RTX information from the timed-out timers
and uses this when the retransmitted packet arrives. These timers are
stored separately from the "normal" timers in order to not impact
performance (see attached performance test).
https://bugzilla.gnome.org/show_bug.cgi?id=769768
When disabled we can save some iterations over timers.
There is probably an argument for rtx-delay-reorder to exist, but
for normal operations, handling jitter (reordering) is something a
jitterbuffer should do, and this variable feels like functionality that
is not "in-sync" with what the jitterbuffer is trying to achieve.
Example: You have 50ms jitter on your network, and are receiving
audio packets with 10ms durations. An audio packet should not be
considered late until its rtx-timeout has expired (and hence an rtx-event
is sent), but with rtx-delay-reorder, events will be sent pretty much
all the time due to the jitter on the network.
Point being: The jitterbuffer should adapt its size to the measured network
jitter, and then rtx-delay-reorder needs to adapt as well, or simply
get out of the way and let the other (better) rtx-mechanisms do their job.
Also change find_timer to only use seqnum as an argument, since there
will only ever be one timer per seqnum at any given time. In the
one case where the type matters, the caller simply checks the type.
https://bugzilla.gnome.org/show_bug.cgi?id=769768
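A minimal sketch of disabling it on rtpjitterbuffer:

    /* rely on the rtx-retry-timeout machinery only; 0 disables
     * reorder-triggered rtx events */
    g_object_set (jitterbuffer, "rtx-delay-reorder", 0, NULL);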
To be able to cap the number of allowed streams for one session.
This is useful for preventing DoS attacks, where a sender can change
SSRC for every buffer, effectively bringing rtpbin to a halt.
https://bugzilla.gnome.org/show_bug.cgi?id=770292
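A minimal sketch, assuming the cap is exposed as a max-streams property
on the ssrc demuxer:

    /* never create more than 64 per-SSRC streams in this session */
    g_object_set (ssrcdemux, "max-streams", 64, NULL);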
https://github.com/mesonbuild/meson
With contributions from:
Tim-Philipp Müller <tim@centricular.com>
Jussi Pakkanen <jpakkane@gmail.com> (original port)
Highlights of the features provided are:
* Faster builds on Linux (~40-50% faster)
* The ability to build with MSVC on Windows
* Generate Visual Studio project files
* Generate XCode project files
* Much faster builds on Windows (on-par with Linux)
* Seriously fast configure and building on embedded
... and many more. For more details see:
http://blog.nirbheek.in/2016/05/gstreamer-and-meson-new-hope.html
http://blog.nirbheek.in/2016/07/building-and-developing-gstreamer-using.html
Building with Meson should work on both Linux and Windows, but may
need a few more tweaks on other operating systems.
The current 'l' pointer will be NULL when the loop
is interrupted with a 'break' statement. We need to
advance it to the next list item before interrupting.
With non-time segments, it now assumes that the arrival time of packets
is not relevant and that only the RTP timestamp matter and it produces
an output segment start at running time 0.
https://bugzilla.gnome.org/show_bug.cgi?id=766438
Some endpoints (like the Tandberg E20) can send a BYE packet containing
our internal SSRC. In this case we would detect an SSRC collision and
get rid of the source at some point. But because we are still sending
packets with that SSRC, the source will be recreated immediately.
This brand new internal source will have some variables in its state
incorrectly set. For example, the `seqnum-base` and `clock-rate` values
will be -1.
The fix is to not act on BYE RTCP if it contains an internal or unknown
SSRC.
https://bugzilla.gnome.org/show_bug.cgi?id=762219
When a packet arrives that has already been considered lost as part of a
large gap the "lost timer" for this will be cancelled. If the remaining
packets of this large gap never arrives, there will be missing entries
in the queue and the loop function will keep waiting for these packets
to arrive and never push another packet, effectively stalling the
pipeline.
The proposed fix considers parts of a large gap definitely lost (since
they are calculated from latency) and ignores the late arrivals.
In practice the issue is rare since large gaps are scheduled immediately,
and for the stall to happen the late arrival needs to be processed
before this times out.
https://bugzilla.gnome.org/show_bug.cgi?id=765933
The access to the session hash table must happen while the session lock is
taken, otherwise another thread might modify the hash table while we're
creating the stats.
https://bugzilla.gnome.org/show_bug.cgi?id=766025
The head of the queue is the oldest packet (as in lowest seqnum), the tail is
the newest packet. To calculate the fill level, we should calculate tail-head
while considering wraparounds. Not the other way around.
Other code is already doing this in the correct order.
https://bugzilla.gnome.org/show_bug.cgi?id=764889
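A minimal sketch of the corrected calculation; 16-bit seqnum arithmetic
takes care of the wraparound:

    /* head = oldest (lowest) seqnum, tail = newest; the fill level
     * is tail - head modulo 2^16, not head - tail */
    guint16 fill_level = (guint16) (tail_seqnum - head_seqnum);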
When downstream blocks, "lost" timers are created to notify the
outgoing thread that packets are lost.
The problem is that for high packet-rate streams, we might end up with
a big list of lost timeouts (had a use-case with ~1000...).
The problem isn't so much the amount of lost timeouts to handle, but
rather the way they were handled. All timers would first be iterated,
then the selected one would be handled ... only to re-iterate the list
again.
All of this was being done while the jbuf lock was taken, which in some
use-cases would result in holding that lock for 10s... blocking any
buffers from being accepted on input... which would then arrive late ...
which would create plenty of lost timers ... which would cause the same
issue.
In order to avoid that situation, handle the lost timers immediately
when iterating the list of pending timers. This reduces the complexity
from quadratic to linear.
https://bugzilla.gnome.org/show_bug.cgi?id=762988
We would queue 5 consecutive packets before considering a reset and a
proper discont here. Instead of expecting the next output packet to have
the current seqnum (i.e. the fifth), expect it to have the first seqnum.
Otherwise we're going to drop all queued-up packets.
generate_rtcp can produce empty packets when reduced size RTCP is turned on.
Skip them since it doesn't make sense to push them and they cause errors with
elements that expect RTCP packets to contain data (like srtpenc).
No need to use G_GINT64_FORMAT for potentially negative values of
GstClockTimeDiff. Since 1.6 these can be handled with GST_STIME_ARGS.
Plus it creates more readable values in the logs.
https://bugzilla.gnome.org/show_bug.cgi?id=757480
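A minimal sketch of the logging idiom in question:

    GstClockTime expected = 20 * GST_MSECOND, actual = 15 * GST_MSECOND;
    GstClockTimeDiff diff = GST_CLOCK_DIFF (expected, actual);

    /* GST_STIME_FORMAT/GST_STIME_ARGS (since 1.6) handle negative
     * values and print e.g. "-0:00:00.005000000" */
    GST_DEBUG ("timeout delta %" GST_STIME_FORMAT, GST_STIME_ARGS (diff));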
Add statistics from each rtp source to the rtp session property.
'source-stats' is a GValueArray where each element is a GstStructure of
stats for one rtp source.
The availability of new stats is signaled via g_object_notify.
https://bugzilla.gnome.org/show_bug.cgi?id=752669
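A minimal sketch of reading them, assuming the array is exposed inside
the session's "stats" structure:

    GstStructure *stats = NULL;
    const GValue *sources;

    g_object_get (rtpsession, "stats", &stats, NULL);
    sources = gst_structure_get_value (stats, "source-stats");
    /* 'sources' holds a GValueArray with one GstStructure per source */
    gst_structure_free (stats);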
By not doing this, the muxer is effectively not an rtpmuxer, but rather
a funnel, since it should be a single stream that exits the muxer.
If not specified, take the first ssrc seen on a sinkpad, allowing
upstream to decide the ssrc in "passthrough" with only one sinkpad.
Also, let the downstream ssrc overrule the internally configured one.
We hence have the following order for determining the ssrc used by
rtpmux:
0. Suggestion from a GstRTPCollision event
1. Downstream caps
2. ssrc property
3. (First) upstream caps containing an ssrc
4. Randomly generated
https://bugzilla.gnome.org/show_bug.cgi?id=752694
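A minimal sketch of pinning the ssrc via the property (step 2 in the
list above; the value is illustrative):

    /* force the output ssrc unless caps or a collision overrule it */
    g_object_set (rtpmux, "ssrc", 0xDEADBEEF, NULL);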
Estimating it from the RTP time will give us the PTS, so in cases where
PTS != DTS we would produce a wrong DTS. As the estimated DTS is now
based on the clock,
don't store it in the jitterbuffer items as it would otherwise be used in the
skew calculations and would influence the results. We only really need the DTS
for timer calculations.
https://bugzilla.gnome.org/show_bug.cgi?id=749536
The amount of time that is completely expired and not worth waiting for
is the duration of the packets in the gap (gap * duration) minus the
latency (size) of the jitterbuffer (priv->latency_ns). This is the
duration that we make a "multi-lost" packet for.
The "late" concept made some sense in 0.10 as it reflected that a buffer
coming in had not been waited for at all, but had a timestamp that was
outside the jitterbuffer to wait for. With the rewrite of the waiting
(timeout) mechanism in 1.0, this no longer makes any sense, and the
variable no longer reflects anything meaningful (num > 0 is useless,
the duration is what matters).
Fixed up the tests that had been slightly modified in 1.0 to allow faulty
behavior to sneak in, and ported some of them to use GstHarness.
https://bugzilla.gnome.org/show_bug.cgi?id=738363
This reverts commit 05bd708fc5.
The reverted patch is wrong and introduces a regression because there
may still be time to receive some of the packets included in the gap
if they are reordered.
If we have a clock, update "now" with the very latest running time we
have. If timers are unscheduled below, we otherwise wouldn't update it
(it's only updated when timers expire), and also for the very first loop
iteration "now" would otherwise always be 0.
Also the time is used for the timeout functions, e.g. to calculate any times
for the next timeouts and we would otherwise pass too old times there.
https://bugzilla.gnome.org/show_bug.cgi?id=751636