This debug statement is meant to print the time since the last (early)
RTCP transmission, not the last regular RTCP transmission (which also
happens to be set a few lines above to current_time, so the debug output
is just confusing)
In the function rtp_jitter_buffer_calculate_pts: if the gap in incoming RTP
timestamps is more than (3 * jbuf->clock_rate) we call
rtp_jitter_buffer_reset_skew, which resets pts to 0. So components down
the pipeline (players, mixers) just skip frames/samples until the pts becomes
equal to the pts before the gap.
In version 1.10.2 and before this check was bypassed for packets with
"estimated dts", and gaps were handled correctly.
https://bugzilla.gnome.org/show_bug.cgi?id=778341
When providing items with a seqnum, there is a (very small) probability
that an element with the same seqnum already exists. Don't forget
to free that item if it wasn't inserted.
And avoid returning undefined values when dealing with duplicate items
If an element queries the number of retransmission buffers pushed
*while* the push is still taking place (and before the object lock
is taken just after) it would end up with the wrong statistic
being reported.
Incrementing it just before the push avoids races when getting statistics
https://bugzilla.gnome.org/show_bug.cgi?id=768723
A new signal named on-bundled-ssrc is provided and can be
used by the application to redirect a stream to a different
GstRtpSession or to keep the RTX stream grouped within the
GstRtpSession of the same media type.
https://bugzilla.gnome.org/show_bug.cgi?id=772740
When doing rtx, the jitterbuffer will always add an rtx-timer for the next
sequence number.
In the case of the packet corresponding to that sequence number arriving,
that same timer will be reused, and simply moved on to wait for the
following sequence number etc.
Once an rtx-timer expires (after all retries), it will be rescheduled as
a lost-timer instead for the same sequence number.
Now, if this particular sequence number arrives (after the timer has
become a lost-timer), the reuse mechanism *should* set a new
rtx-timer for the next sequence number, but the bug is that it does
not change the timer-type, and hence schedules a lost-timer for that
following sequence number, with the result that you get a very
early lost-event for a packet that might still arrive, and you will
never be able to send any rtx for this packet.
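Roughly, as a sketch of the fix (field and constant names such as
timer->type, TIMER_TYPE_EXPECTED and rtx_delay are illustrative, not the
actual identifiers): when a timer is reused for the following sequence
number, its type has to be turned back into an rtx (EXPECTED) timer:

  /* reuse the timer for the next expected seqnum; names are illustrative */
  timer->seqnum = next_seqnum;
  timer->type = TIMER_TYPE_EXPECTED;   /* previously left as a lost-timer */
  timer->timeout = expected_pts + rtx_delay;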
Found by Erlend Graff - erlend@pexip.com
https://bugzilla.gnome.org/show_bug.cgi?id=773891
The lost-event was using a different time-domain (dts) than the outgoing
buffers (pts). Given certain network-conditions these two would become
sufficiently different and the lost-event contained timestamp/duration
that was really wrong. As an example GstAudioDecoder could produce
a stream that jumps back and forth in time after receiving a lost-event.
The previous behavior calculated the pts (based on the rtptime) inside the
rtp_jitter_buffer_insert function, but now this functionality has been
refactored into a new function rtp_jitter_buffer_calculate_pts that is
called much earlier in the _chain function to make pts available to
various calculations that wrongly used dts previously
(like the lost-event).
There are however two calculations where using dts is the right thing to
do: calculating the receive-jitter and the rtx-round-trip-time, where the
arrival time of the buffer from the network is the right metric
(and is what dts in fact is today).
The patch also adds two tests regarding B-frames or the
“rtptime-going-backwards” scenario, as there were some concerns that this
patch might break that behavior (which the tests show it does not).
The new timeout is always going to be (timeout + delay), however, the
old behavior compared the current timeout to just (timeout), basically
being (delay) off.
This would happen if rtx-delay == rtx-retry-timeout, with the result that
a second rtx attempt for any buffers would be scheduled immediately instead
of after rtx-delay ms.
Simply calculate (new_timeout = timeout + delay) and then use that instead.
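As a rough sketch (identifiers are illustrative, not the actual code):

  GstClockTime new_timeout = timeout + delay;

  /* the old code compared against just (timeout), i.e. it was (delay) off */
  if (timer->timeout != new_timeout)
    reschedule_timer (jitterbuffer, timer, new_timeout);  /* hypothetical helper */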
https://bugzilla.gnome.org/show_bug.cgi?id=773905
Instead of sending EOS when a source byes we have to wait for
all the sources to be gone, which means they already sent BYE and
were removed from the session. We now handle the EOS in the rtcp
loop, checking the number of sources in the session.
https://bugzilla.gnome.org/show_bug.cgi?id=773218
The basic idea is this:
1. For *larger* rtx-rtt, weigh a new measurement as before
2. For *smaller* rtx-rtt, be a bit more conservative and weigh a bit less
3. For very large measurements, consider them "outliers"
and count them a lot less
The idea being that reducing the rtx-rtt is much more harmful than
increasing it, since we don't want to be underestimating the rtt of the
network, and when using this number to estimate the latency you need for
your jitterbuffer, you would rather want it to be a bit larger than a bit
smaller, potentially losing rtx-packets. The "outlier-detector" is there
to prevent a single skewed measurement from affecting the outcome too much.
On wireless networks, these are surprisingly common.
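A rough sketch of such an asymmetric running average (the actual weights
and thresholds in the patch may differ; this only illustrates the idea):

  static guint64
  update_rtx_rtt (guint64 avg_rtt, guint64 measurement)
  {
    guint64 weight;

    if (avg_rtt == 0)
      return measurement;           /* first measurement */

    if (measurement > 2 * avg_rtt)
      weight = 48;                  /* outlier: count it a lot less */
    else if (measurement > avg_rtt)
      weight = 8;                   /* larger rtx-rtt: weigh as before */
    else
      weight = 16;                  /* smaller rtx-rtt: be more conservative */

    return ((weight - 1) * avg_rtt + measurement) / weight;
  }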
https://bugzilla.gnome.org/show_bug.cgi?id=769768
Assuming equidistant packet spacing when that's not true leads to more
loss than necessary in the case of reordering and jitter. Typically this
is true for video where one frame often consists of multiple packets
with the same rtp timestamp. In this case it's better to assume that the
missing packets have the same timestamp as the last received packet, so
that the scheduled lost timer does not time out too early causing the
packets to be considered lost even though they may arrive in time.
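For illustration only (schedule_lost_timer and the field names are
hypothetical), the timestamp estimate for the missing packets becomes:

  /* if the stream is not equidistant (e.g. several packets of one video
   * frame share an rtp timestamp), assume the missing packets have the
   * same timestamp as the last received packet instead of extrapolating */
  guint i;
  for (i = 0; i < gap; i++) {
    GstClockTime est_pts =
        equidistant ? last_pts + (i + 1) * packet_spacing : last_pts;
    schedule_lost_timer (jitterbuffer, next_seqnum + i, est_pts);
  }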
https://bugzilla.gnome.org/show_bug.cgi?id=769768
There is no need to schedule another EXPECTED timer if we're already
past the retry period. Under normal operation this won't happen, but if
there are more timers than the jitterbuffer is able to process in
real-time, scheduling more timers will just make the situation worse.
Instead, consider this packet as lost and move on. This scenario can
occur with high loss rate, low rtt and high configured latency.
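A sketch of the check (rtx_retry_period and the helper names are
placeholders, not the real code):

  if (current_time > expected_pts + priv->rtx_retry_period) {
    /* already past the retry period: requesting rtx is pointless,
     * consider the packet lost right away */
    schedule_lost_timer (jitterbuffer, seqnum);
  } else {
    schedule_expected_timer (jitterbuffer, seqnum);
  }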
https://bugzilla.gnome.org/show_bug.cgi?id=769768
This patch fixes an issue with the estimated gap duration when there is
a gap immediately after a lost timer has been processed. Previously
there was a discrepancy between the gap in seqnum and the gap in dts,
which would result in a wrongly calculated duration. The issue would only
be seen with retransmission enabled, since when it's disabled lost timers
are only created when a packet is received and the actual gap length and
last dts are known.
https://bugzilla.gnome.org/show_bug.cgi?id=769768
Stats should also be collected for unsuccessful packets.
rtx-rtt is very important for determining the necessary configured
latency on the jitterbuffer. It's especially important to be able to
increase the latency when retransmitted packets arrive too late and are
considered lost. This patch includes these late packets in the
calculation of the various rtx stats, making them more correct and
useful.
Also in the case where the original packet arrives after a NACK is sent,
the received RTX packet should update the stats since it provides useful
information about RTT.
The RTT is updated if and only if all requested retransmissions are
received. That way the RTT is guaranteed to make sense. If not, we don't
know which request the packet is a response to and the RTT may be bogus.
A consequence of this patch is that RTT is not updated for a request
when one of the RTX packets for that seqnum is lost, but in that case the
measured RTT will be more accurate.
The implementation stores the RTX information from the timed out timers
and uses this when the retransmitted packet arrives. These timers are
stored separately from the "normal" timers in order to not impact
performance (see attached performance test).
https://bugzilla.gnome.org/show_bug.cgi?id=769768
When disabled we can save some iterations over timers.
There is probably an argument for rtx-delay-reorder to exist, but
for normal operations, handling jitter (reordering) is something a
jitterbuffer should do, and this variable feels like functionality that
is not "in-sync" with what the jitterbuffer is trying to achieve.
Example: You have 50ms jitter on your network, and are receiving
audio packets with 10ms durations. An audio packet should not be
considered late until its rtx-timeout has expired (and hence a rtx-event
is sent), but with rtx-delay-reorder, events will be sent pretty much
all the time due to the jitter on the network.
Point being: The jitterbuffer should adapt its size to the measured network
jitter, and then rtx-delay-reorder needs to adapt as well, or simply
get out of the way and let the other (better) rtx-mechanisms do their job.
Also change find_timer to only use seqnum as an argument, since there
will only ever be one timer per seqnum at any given time. In the
one case where the type matters, the caller simply checks the type.
https://bugzilla.gnome.org/show_bug.cgi?id=769768
To be able to cap the number of allowed streams for one session.
This is useful for preventing DoS attacks, where a sender can change
SSRC for every buffer, effectively bringing rtpbin to a halt.
https://bugzilla.gnome.org/show_bug.cgi?id=770292
https://github.com/mesonbuild/meson
With contributions from:
Tim-Philipp Müller <tim@centricular.com>
Jussi Pakkanen <jpakkane@gmail.com> (original port)
Highlights of the features provided are:
* Faster builds on Linux (~40-50% faster)
* The ability to build with MSVC on Windows
* Generate Visual Studio project files
* Generate XCode project files
* Much faster builds on Windows (on-par with Linux)
* Seriously fast configure and building on embedded
... and many more. For more details see:
http://blog.nirbheek.in/2016/05/gstreamer-and-meson-new-hope.html
http://blog.nirbheek.in/2016/07/building-and-developing-gstreamer-using.html
Building with Meson should work on both Linux and Windows, but may
need a few more tweaks on other operating systems.
The current 'l' pointer will be NULL when the loop
is interrupted with a 'break' statement. Need to have
it advance to the next list item before interrupting.
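A minimal sketch of the pattern being fixed (the list contents and the
should_stop predicate are illustrative):

  GList *l = timers;

  while (l != NULL) {
    if (should_stop (l->data)) {
      /* advance to the next list item before interrupting, so code after
       * the loop does not end up looking at the wrong entry */
      l = g_list_next (l);
      break;
    }
    l = g_list_next (l);
  }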
With non-time segments, it now assumes that the arrival time of packets
is not relevant and that only the RTP timestamp matter and it produces
an output segment start at running time 0.
https://bugzilla.gnome.org/show_bug.cgi?id=766438
Some endpoints (like Tandberg E20) can send a BYE packet containing our
internal SSRC. In this case we would detect an SSRC collision and get rid
of the source at some point. But because we are still sending packets
with that SSRC the source will be recreated immediately.
This brand new internal source will have some variables incorrectly set
in its state. For example the 'seqnum-base' and 'clock-rate' values will be
-1.
The fix is to not act on BYE RTCP if it contains an internal or unknown
SSRC.
https://bugzilla.gnome.org/show_bug.cgi?id=762219
When a packet arrives that has already been considered lost as part of a
large gap, the "lost timer" for this will be cancelled. If the remaining
packets of this large gap never arrive, there will be missing entries
in the queue and the loop function will keep waiting for these packets
to arrive and never push another packet, effectively stalling the
pipeline.
The proposed fix considers parts of a large gap definitely lost (since
they are calculated from latency) and ignores the late arrivals.
In practice the issue is rare since large gaps are scheduled immediately,
and for the stall to happen the late arrival needs to be processed
before this times out.
https://bugzilla.gnome.org/show_bug.cgi?id=765933
The access to the session hash table must happen while the session lock is
taken, otherwise another thread might modify the hash table while we're
creating the stats.
https://bugzilla.gnome.org/show_bug.cgi?id=766025
The head of the queue is the oldest packet (as in lowest seqnum), the tail is
the newest packet. To calculate the fill level, we should calculate tail-head
while considering wraparounds. Not the other way around.
Other code is already doing this in the correct order.
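As an illustrative helper using 16-bit RTP sequence numbers, where the
unsigned arithmetic takes care of the wraparound:

  static guint16
  fill_level (guint16 head_seqnum, guint16 tail_seqnum)
  {
    /* tail (newest) minus head (oldest); e.g. head=65534, tail=2 -> 4 */
    return (guint16) (tail_seqnum - head_seqnum);
  }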
https://bugzilla.gnome.org/show_bug.cgi?id=764889
When downstream blocks, "lost" timers are created to notify the
outgoing thread that packets are lost.
The problem is that for high packet-rate streams, we might end up with
a big list of lost timeouts (had a use-case with ~1000...).
The problem isn't so much the amount of lost timeouts to handle, but
rather the way they were handled. All timers would first be iterated,
then the one selected would be handled ... to re-iterate the list again.
All of this is being done while the jbuf lock is taken, which in some
use-cases would result in holding that lock for 10s... blocking any buffers
from being accepted in input... which would then arrive late ... which would
create plenty of lost timers ... which would cause the same issue.
In order to avoid that situation, handle the lost timers immediately when
iterating the list of pending timers. This reduces the complexity from
quadratic to linear.
https://bugzilla.gnome.org/show_bug.cgi?id=762988
We would queue 5 consecutive packets before considering a reset and a proper
discont here. Instead of expecting the next output packet to have the current
seqnum (i.e. the fifth), expect it to have the first seqnum. Otherwise we're
going to drop all queued up packets.
generate_rtcp can produce empty packets when reduced size RTCP is turned on.
Skip them since it doesn't make sense to push them and they cause errors with
elements that expect RTCP packets to contain data (like srtpenc).
No need to use G_GINT64_FORMAT for potentially negative values of
GstClockTimeDiff. Since 1.6 these can be handled with GST_STIME_ARGS.
Plus it creates more readable values in the logs.
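For example (the variable names are made up for illustration):

  GstClockTimeDiff diff = GST_CLOCK_DIFF (expected, now);

  /* before: GST_DEBUG ("offset %" G_GINT64_FORMAT, diff); */
  GST_DEBUG ("offset %" GST_STIME_FORMAT, GST_STIME_ARGS (diff));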
https://bugzilla.gnome.org/show_bug.cgi?id=757480
Add statistics from each rtp source to the rtp session property.
'source-stats' is a GValueArray where each element is a GstStructure of
stats for one rtp source.
The availability of new stats is signaled via g_object_notify.
https://bugzilla.gnome.org/show_bug.cgi?id=752669
By not doing this, the muxer is effectively not an rtpmuxer but rather a
funnel, since it should be a single stream that exits the muxer.
If not specified, take the first ssrc seen on a sinkpad, allowing upstream
to decide ssrc in "passthrough" with only one sinkpad.
Also, let the downstream ssrc overrule the internally configured one.
We hence have the following order for determining the ssrc used by
rtpmux:
0. Suggestion from GstRTPCollision event
1. Downstream caps
2. ssrc property
3. (First) upstream caps containing ssrc
4. Randomly generated
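A hedged sketch of that priority order (the struct and its fields are made
up for illustration; only g_random_int() is real GLib API):

  typedef struct {
    gboolean have_collision_ssrc, have_downstream_ssrc;
    gboolean have_property_ssrc, have_upstream_ssrc;
    guint32 collision_ssrc, downstream_ssrc, property_ssrc, upstream_ssrc;
  } SsrcCandidates;

  static guint32
  choose_ssrc (const SsrcCandidates * c)
  {
    if (c->have_collision_ssrc)
      return c->collision_ssrc;     /* 0. GstRTPCollision event */
    if (c->have_downstream_ssrc)
      return c->downstream_ssrc;    /* 1. downstream caps */
    if (c->have_property_ssrc)
      return c->property_ssrc;      /* 2. ssrc property */
    if (c->have_upstream_ssrc)
      return c->upstream_ssrc;      /* 3. first upstream caps with ssrc */
    return g_random_int ();         /* 4. randomly generated */
  }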
https://bugzilla.gnome.org/show_bug.cgi?id=752694
Estimating it from the RTP time will give us the PTS, so in cases of PTS!=DTS
we would produce wrong DTS. As now the estimated DTS is based on the clock,
don't store it in the jitterbuffer items as it would otherwise be used in the
skew calculations and would influence the results. We only really need the DTS
for timer calculations.
https://bugzilla.gnome.org/show_bug.cgi?id=749536
The amount of time that is completely expired and not worth waiting for
is the duration of the packets in the gap (gap * duration) minus the
latency (size) of the jitterbuffer (priv->latency_ns). This is the duration
that we make a "multi-lost" packet for.
The "late" concept made some sense in 0.10 as it reflected that a buffer
coming in had not been waited for at all, but had a timestamp that was
outside the jitterbuffer to wait for. With the rewrite of the waiting
(timeout) mechanism in 1.0, this no longer makes any sense, and the
variable no longer reflects anything meaningful (num > 0 is useless,
the duration is what matters)
Fixed up the tests that had been slightly modified in 1.0 to allow faulty
behavior to sneak in, and ported some of them to use GstHarness.
https://bugzilla.gnome.org/show_bug.cgi?id=738363
This reverts commit 05bd708fc5.
The reverted patch is wrong and introduces a regression because there
may still be time to receive some of the packets included in the gap
if they are reordered.
If we have a clock, update "now" now with the very latest running time we have.
If timers are unscheduled below we otherwise wouldn't update now (it's only updated
when timers expire), and also for the very first loop iteration now would otherwise
always be 0.
Also the time is used for the timeout functions, e.g. to calculate any times
for the next timeouts and we would otherwise pass too old times there.
https://bugzilla.gnome.org/show_bug.cgi?id=751636
This reverts commit 0c21cd7177.
If we have multiple immediate timers, we want to first handle the one with the
lowest sequence number... which would be broken now.
Instead of this we should just use a GSequence for the timers, and have them
sorted first by timestamp, and for equal timestamps by sequence number. Then
we would always only have to take the very first timer from the list and never
have to look at any others.
If update_receiver_stats() fails, we can't really do anything with this buffer
anymore and have to drop it. This happens if there's a big seqnum
discontinuity for example.
https://bugzilla.gnome.org/show_bug.cgi?id=751311
The new property allows to select the time source that should be used for the
NTP time in RTCP packets. By default it will continue to calculate the NTP
timestamp (1900 epoch) based on the realtime clock. Alternatively it can use
the UNIX timestamp (1970 epoch), the pipeline's running time or the pipeline's
clock time. The latter is especially useful for synchronizing multiple
receivers if all of them share the same clock.
If use-pipeline-clock is set to TRUE, it will override the ntp-time-source
setting and continue to use the running time plus 70 years. This is only kept
for backwards compatibility.
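For example, to let multiple receivers share the pipeline clock as the NTP
time source (the enum nick "clock-time" is an assumption based on the
description above):

  GstElement *rtpbin = gst_element_factory_make ("rtpbin", NULL);

  /* "clock-time" is assumed to be the nick for the pipeline clock option */
  gst_util_set_object_arg (G_OBJECT (rtpbin), "ntp-time-source", "clock-time");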
According to RFC 5506, reduced-size packets can be sent; these
packets may not be compound, so we need to add support for
getting the ssrc from other types of packets.
https://bugzilla.gnome.org/show_bug.cgi?id=750327
Otherwise we will have 10s-100s of thread wakeups in feedback profiles, create
RTCP packets, etc. just to suppress them in 99% of the cases (i.e. if no
feedback is actually pending and no regular RTCP has to be sent).
This improves CPU usage and battery life quite a lot.
https://bugzilla.gnome.org/show_bug.cgi?id=746543
If we may suppress the packet due to the rules of RFC4585 (i.e. when
below the t-rr-int), we can send a smaller RTCP packet without RRs
and full SDES. In theory we could even send a minimal RTCP packet
according to RFC5506, but we don't support that yet.
https://bugzilla.gnome.org/show_bug.cgi?id=746543
Otherwise we can't properly schedule RTCP in feedback profiles as we need to
distinguish the time when we last checked for sending RTCP (tp) but might have
suppressed it, and the time when we last actually sent a non-early RTCP
packet.
This together with the other changes should now properly implement RTCP
scheduling according to RFC4585, and especially allow us to send feedback
packets a lot if needed but only send regular RTCP packets every once in a
while.
https://bugzilla.gnome.org/show_bug.cgi?id=746543
And modify our RTCP scheduling algorithm accordingly. We now can send more
RTCP packets if needed for feedback, but will throttle full RTCP packets by
rtcp-min-interval (t-rr-int from RFC4585).
In non-feedback mode, rtcp-min-interval is Tmin from RFC3550, which is
statically set to 1s or 0s by RFC4585. Tmin defines how often we should
send RTCP packets at most.
https://bugzilla.gnome.org/show_bug.cgi?id=746543
It might just be a late retransmission or spurious packet from elsewhere, but
resetting everything would mean that we will cause a noticeable hiccup. Let's
get some confidence first that the sequence numbers changed for whatever
reason.
https://bugzilla.gnome.org/show_bug.cgi?id=747922
This reverts commit d22ec49632.
Application code might expect that it only gets external sources on those
signals, and get confused by this. If anything we would need to add new
signals.
Without this it seems impossible for an application to easily get notified
about the internal ssrcs that are created, e.g. sender sources, and also
to know when they are active and produce RTCP packets.
https://bugzilla.gnome.org/show_bug.cgi?id=746747
We now take the maximum of 2*jitter and 0.5*packet_spacing for the extra
delay. If jitter is very low, this should prevent unnecessary retransmission
requests to some degree.
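Expressed as a small sketch (the field names are illustrative):

  /* extra delay before requesting a retransmission */
  GstClockTime extra_delay =
      MAX (2 * priv->avg_jitter, priv->packet_spacing / 2);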
https://bugzilla.gnome.org/show_bug.cgi?id=748041
This also happens in the very beginning when we receive the first packet, a
warning would be very confusing here. In all places where we should warn about
this, we would've printed a warning already before.
Right above we consider lost_packets packets, each of them having duration,
as lost and trigger their timers immediately. Below we use expected_dts
to schedule retransmission or schedule lost timers for the packets that
come after expected_dts.
As we just triggered lost_packets packets as lost, there's no point in
scheduling new timers for them and we can just skip over all lost packets.
https://bugzilla.gnome.org/show_bug.cgi?id=739868
Resetting the jitterbuffer drops all packets and other things, and will cause
a discontinuity in the packets received by the depayloaders. They should now
also flush anything they had pending as the new data will start at a different
position.
https://bugzilla.gnome.org/show_bug.cgi?id=739868
Make sure that the sync_src pad has caps before the segment event.
Otherwise we might get a segment event before caps from the receive
RTCP pad, and then later the caps would only be set when RTCP packets are
received. This will result in a sticky event misordering warning.
This fixes warnings in the rtpaux unit test but also in the
rtpaux and rtx examples in tests/examples/rtp
https://bugzilla.gnome.org/show_bug.cgi?id=746445
Before we only started it when either:
- there is no send RTP stream
or
- we received an RTP packet for sending
This could mean that if the send RTP pads are connected but never receive any
RTP data, and the same session is also used for receiving RTP/RTCP, we would
never start the RTCP thread and would never send RTCP for the receiving part
of the session.
This can be reproduced with a pipeline like:
gst-launch-1.0 rtpbin name=rtpbin \
udpsrc port=5000 ! "application/x-rtp, media=video, clock-rate=90000, encoding-name=H264" ! rtpbin.recv_rtp_sink_0 \
udpsrc port=5001 ! rtpbin.recv_rtcp_sink_0 \
rtpbin.send_rtcp_src_0 ! fakesink name=rtcp_fakesink silent=false async=false sync=false \
rtpbin.recv_rtp_src_0_2553225531_96 ! decodebin ! xvimagesink \
fakesrc ! valve drop=true ! rtpbin.send_rtp_sink_0 \
rtpbin.send_rtp_src_0 ! fakesink name=rtp_fakesink silent=false async=false sync=false -v
Before this change the rtcp_fakesink would never send RTCP for the receiving
part of the session (i.e. no receiver reports!), after the change it does.
And before and after this change it would send RTCP for the receiving part of
the session if the sender part was omitted (the last two lines).
These are outside the expected range of sequence numbers and should be
clipped, especially for RTSP they might belong to packets from before a seek
or a previous stream in general.
This reverts commit 1591adf4cd.
https://bugzilla.gnome.org/show_bug.cgi?id=745586#c1:
It's the beginning of an implementation of RFC 2762, which is needed for
large multicast groups. The implementation is not yet complete but why
not leave what is there and implement RFC 2762 instead?
rtpsession declares an array of maps to store ssrcs but only
key 0 is being used. This patch replaces the array of maps
with just one map and removes useless parameters in rtpsession.
https://bugzilla.gnome.org/show_bug.cgi?id=745586
According to RFC 4585 section 3.5.3 step 1 we are not allowed to send
an early RTCP packet for the very first one. It must be a regular one.
Also make sure to not use last_rtcp_send_time in any calculations until
we actually sent an RTCP packet already. Specifically, this means that we
must not use it for forward reconsideration of the current RTCP send time.
Instead we don't do any forward reconsideration for the first RTCP packet.
This will be emitted whenever an RTCP packet is received. Different to
on-feedback-rtcp, this signal gets every complete RTCP packet and not
just the individual feedback packets.
It has had no effect for quite some time and also is not needed in general,
especially not to switch between immediate feedback mode and early feedback
mode. The latest understanding of the RFC is that from the endpoint point of
view, both modes are exactly the same. RTCP is only allowed to use the
bandwidth as given by the RFC constraints, as such it is only ever possible
to schedule a RTCP packet early but it's against the RFC to schedule more RTCP
packets.
The difference between immediate feedback mode and early feedback mode is that
the former guarantees that an RTCP packet can be sent for every event
"immediately", which means that the bandwidth calculations from the RFC have
resulted in an RTCP scheduling interval that is small enough. Early feedback
mode on the other hand means that we can schedule some packets early to make
that happen, but it's not guaranteed at all that it's possible to schedule
an RTCP packet per event (i.e. they need to be accumulated or dropped).
This indicates with a boolean return value if scheduling a new RTCP packet
within the requested delay was possible. Otherwise it behaves exactly like
send-rtcp. The only reason for adding a new signal is ABI compatibility.
We (currently?) can't really handle gaps between RTP packets if they're not
properly timestamped. The current code would go into calculations with
GST_CLOCK_TIME_NONE and then cause assertions everywhere. It's probably
better to error out cleanly instead.
Apparently linphone sends an invalid RTP packet as the very
first packet. We want to ignore that instead of erroring
out (same for any other invalid packets really).
https://bugzilla.gnome.org/show_bug.cgi?id=741398
We used to set up an iterator with 1 GValue set with a NULL object
pointer which is not the normal way to do that. Instead we should make
sure that the first call to gst_iterator_next returns GST_ITERATOR_DONE.
When the RTT and jitter are very low (such as on a local network), the
calculated retransmission timeout is very small. Set some sensible lower
boundary to the timeout by adding a new property. We use the packet
spacing as a lower boundary by default.
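A sketch of the lower bound being applied (the field names are
illustrative; only the idea of defaulting to the packet spacing is from the
change itself):

  /* clamp the calculated rtx timeout to a sensible minimum */
  GstClockTime min_timeout = priv->packet_spacing;   /* default lower bound */

  if (priv->rtx_min_retry_timeout != -1)
    min_timeout = priv->rtx_min_retry_timeout * GST_MSECOND;

  rtx_timeout = MAX (rtx_timeout, min_timeout);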
In early retransmission we are allowed to schedule 1 regular RTCP packet
at an earlier time. When we do that, we need to set allow_early to FALSE
and ignore/drop (or merge) all future requests for early transmission.
We now first check if we can schedule an early RTCP and if we can,
actually prepare the data for the next RTCP interval.
After we send the next regular RTCP after the early RTCP, we set
allow_early to TRUE again to allow more early requests.
Remove the condition for the immediate feedback for now.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=738319
Add a need-resync state, this is when we need to try to lock on to a
time/RTPtime pair.
Always check the RTP timestamps and if they go backwards, mark ourselves
as need-resync.
Only resync when need-resync is TRUE and we have a valid time. Otherwise
we keep the old values. This avoids locking on to an invalid time and
causing us to timestamp everything with -1.
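Roughly, as a sketch (the field names are illustrative):

  /* mark ourselves as needing a resync whenever the RTP time goes backwards */
  if (ext_rtptime < priv->last_rtptime)
    priv->need_resync = TRUE;

  /* only lock on to a new time/RTPtime pair when we have a valid time */
  if (priv->need_resync && GST_CLOCK_TIME_IS_VALID (time)) {
    priv->base_time = time;
    priv->base_rtptime = ext_rtptime;
    priv->need_resync = FALSE;
  }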
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=730417
rtpmux behaves like a funnel in that it forwards buffers from whichever
upstream pad is sending them. So setting proxy caps doesn't make sense, as
the upstreams don't have to have compatible caps; a caps query would thus
result in an empty caps set. Instead set fixed caps just
as funnel does.
https://bugzilla.gnome.org/show_bug.cgi?id=738722
We never initialize clock_rate explicitly, therefore it is 0 by default. The
parameter is a uint32 and the only caller ensures that it is >0, therefore it
won't become -1 ever.
The jitterbuffer shouldn't force clock-rate on its sink pad. This will cause
a negotiation issue, since rtpssrcdemux doesn't have the clock-rate and
doesn't add it to the caps. The documentation states that the clock-rate can
either be specified through the caps or through the request-pt-map signal,
so we must remove clock-rate from the pad templates and we must accept the
GST_EVENT_CAPS if the caps don't have the clock-rate.
https://bugzilla.gnome.org/show_bug.cgi?id=734322
Implement 3 different cases for handling the SR:
1) we don't have enough timing information to handle the SR packet and
we need to wait a little for more RTP packets. In that case we keep
the SR packet around and retry when we get an RTP packet in the
chain function.
2) the SR packet has a too old timestamp and should be discarded. It is
labeled invalid and the last_sr is cleared.
3) the SR packet is ok and there is enough timing information, proceed
with processing the SR packet.
Before this patch, cases 1) and 2) were handled in the same way, with the
result that SR packets with too old timestamps were checked over and over
again for each RTP packet.
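A sketch of the three-way handling (the enum values, helpers and fields are
placeholders, not the real API):

  switch (classify_sr (src, sr_packet)) {
    case SR_NEED_MORE_RTP:
      /* 1) not enough timing info yet: keep the SR around and retry from
       *    the chain function when the next RTP packet arrives */
      src->pending_sr = sr_packet;
      break;
    case SR_TOO_OLD:
      /* 2) timestamp too old: label it invalid and clear last_sr */
      clear_last_sr (src);
      break;
    case SR_OK:
      /* 3) enough timing information: process the SR packet */
      process_sr (src, sr_packet);
      break;
  }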
1) sources that have sent BYE in the past cannot be senders, since
they would have timed out to being receivers in the meantime...
2) sources that have sent BYE are now being removed earlier inside
this function
If we are inserting a packet into the jitter queue we need to keep
looping through the items until the right position is found. Currently,
the code stops as soon as an event is found in the queue.
Regarding events, we should only move packets before an event if there
is another packet before the event that has a larger seqnum.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=730078
If two streams request a retransmission for the same SSRC, ignore the second
one if the first one is less than one second old, otherwise time out the first
one and ignore the second.
As we now replace the local RTPSource on a conflict, it's no longer possible
to keep local conflicts in the RTPSource, so they instead need to be kept
in the RTPSession.
Also fix the rtpcollision test to generate multiple collisions instead of
one by changing the address, as otherwise it was detected as a single one.