The head of the queue is the oldest packet (as in lowest seqnum), the tail is
the newest packet. To calculate the fill level, we should calculate tail - head
while considering wraparound, not the other way around.
Other code is already doing this in the correct order.
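A minimal sketch of the intended arithmetic (a hypothetical helper, not the
actual rtpjitterbuffer code):

  /* tail - head, done modulo 2^16 so a seqnum wraparound between head and
   * tail still yields the correct number of queued packets */
  static guint
  fill_level (guint16 head_seqnum, guint16 tail_seqnum)
  {
    return (guint16) (tail_seqnum - head_seqnum);
  }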
https://bugzilla.gnome.org/show_bug.cgi?id=764889
When downstream blocks, "lost" timers are created to notify the
outgoing thread that packets are lost.
The problem is that for high packet-rate streams, we might end up with
a big list of lost timeouts (had a use-case with ~1000...).
The problem isn't so much the number of lost timeouts to handle, but
rather the way they were handled. All timers would first be iterated,
then the selected one would be handled ... only to re-iterate the whole
list again.
All of this was done while holding the jbuf lock, which in some use-cases
would result in holding that lock for 10s ... blocking any buffers from
being accepted on input ... which would then arrive late ... which would
create plenty of lost timers ... which would cause the same issue again.
In order to avoid that situation, handle the lost timers immediately when
iterating the list of pending timers. This reduces the complexity from
quadratic to linear.
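Roughly, the loop now looks like this (an illustrative sketch; TimerData,
priv->timers and do_lost_timeout are assumed jitterbuffer internals, and
do_lost_timeout is assumed to remove the handled timer from the array):

  /* walk the pending timers once; expired "lost" timers are handled right
   * away instead of being re-scanned on a later pass */
  for (i = 0; i < priv->timers->len; i++) {
    TimerData *timer = &g_array_index (priv->timers, TimerData, i);

    if (timer->type == TIMER_TYPE_LOST && timer->timeout <= now) {
      do_lost_timeout (jitterbuffer, timer, now);
      i--;                    /* the timer was removed, stay at this index */
      continue;
    }
    /* other timer types keep the previous "pick the earliest" handling */
  }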
https://bugzilla.gnome.org/show_bug.cgi?id=762988
We would queue 5 consecutive packets before considering a reset and a proper
discont here. Instead of expecting the next output packet to have the current
seqnum (i.e. the fifth), expect it to have the first seqnum. Otherwise we're
going to drop all queued up packets.
generate_rtcp can produce empty packets when reduced-size RTCP is turned on.
Skip them since it doesn't make sense to push them and they cause errors with
elements that expect RTCP packets to contain data (like srtpenc).
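A hedged sketch of such a check before pushing (variable names are
placeholders):

  /* reduced-size RTCP can legitimately yield an empty buffer; don't push it */
  if (gst_buffer_get_size (buffer) == 0) {
    GST_DEBUG ("generated RTCP packet is empty, not pushing");
    gst_buffer_unref (buffer);
    return GST_FLOW_OK;
  }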
No need to use G_GINT64_FORMAT for potentially negative values of
GstClockTimeDiff. Since 1.6 these can be handled with GST_STIME_ARGS.
Plus it creates more readable values in the logs.
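For example (expected/now are placeholder variables):

  GstClockTimeDiff diff = GST_CLOCK_DIFF (expected, now);

  /* before: GST_DEBUG ("diff %" G_GINT64_FORMAT, diff); */
  GST_DEBUG ("diff %" GST_STIME_FORMAT, GST_STIME_ARGS (diff));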
https://bugzilla.gnome.org/show_bug.cgi?id=757480
Add statistics from each rtp source to the rtp session property.
'source-stats' is a GValueArray where each element is a GstStructure of
stats for one rtp source.
The availability of new stats is signaled via g_object_notify.
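A hypothetical application-side sketch for consuming these stats, assuming
they are delivered inside the session's 'stats' GstStructure:

  GstStructure *stats;
  GValueArray *source_stats;
  guint i;

  g_object_get (session, "stats", &stats, NULL);
  gst_structure_get (stats, "source-stats", G_TYPE_VALUE_ARRAY,
      &source_stats, NULL);

  for (i = 0; i < source_stats->n_values; i++) {
    const GstStructure *s =
        gst_value_get_structure (g_value_array_get_nth (source_stats, i));
    GST_INFO ("source stats: %" GST_PTR_FORMAT, s);
  }

  g_value_array_free (source_stats);
  gst_structure_free (stats);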
https://bugzilla.gnome.org/show_bug.cgi?id=752669
By not doing this, the muxer is effectively not an rtpmuxer but rather a
funnel, since a single stream should exit the muxer.
If not specified, take the first ssrc seen on a sinkpad, allowing upstream
to decide ssrc in "passthrough" with only one sinkpad.
Also, let the downstream ssrc overrule the internally configured one.
We hence have the following order for determining the ssrc used by
rtpmux (sketched below):
0. Suggestion from GstRTPCollision event
1. Downstream caps
2. ssrc property
3. (First) upstream caps containing ssrc
4. Randomly generated
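A hypothetical sketch of that priority order (all field names invented for
illustration):

  static guint32
  choose_ssrc (GstRTPMux *mux)
  {
    if (mux->have_collision_ssrc)     /* 0. GstRTPCollision event */
      return mux->collision_ssrc;
    if (mux->have_downstream_ssrc)    /* 1. downstream caps */
      return mux->downstream_ssrc;
    if (mux->ssrc_property_set)       /* 2. ssrc property */
      return mux->ssrc_property;
    if (mux->have_upstream_ssrc)      /* 3. first upstream caps with ssrc */
      return mux->upstream_ssrc;
    return g_random_int ();           /* 4. randomly generated */
  }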
https://bugzilla.gnome.org/show_bug.cgi?id=752694
Estimating it from the RTP time will give us the PTS, so in cases of PTS!=DTS
we would produce a wrong DTS. As the estimated DTS is now based on the clock,
don't store it in the jitterbuffer items as it would otherwise be used in the
skew calculations and would influence the results. We only really need the DTS
for timer calculations.
https://bugzilla.gnome.org/show_bug.cgi?id=749536
The amount of time that is completely expired and not worth waiting for is
the duration of the packets in the gap (gap * duration) minus the
latency (size) of the jitterbuffer (priv->latency_ns). This is the duration
that we make a "multi-lost" packet for.
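In code terms, roughly (a hedged restatement using the names from this
message):

  GstClockTime expired = gap * duration;

  if (expired > priv->latency_ns)
    expired -= priv->latency_ns;  /* the part we can no longer wait for */
  else
    expired = 0;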
The "late" concept made some sense in 0.10 as it reflected that a buffer
coming in had not been waited for at all, but had a timestamp outside of
what the jitterbuffer would wait for. With the rewrite of the waiting
(timeout) mechanism in 1.0, this no longer makes any sense, and the
variable no longer reflects anything meaningful (num > 0 is useless,
the duration is what matters)
Fixed up the tests that had been slightly modified in 1.0 to allow faulty
behavior to sneak in, and ported some of them to use GstHarness.
https://bugzilla.gnome.org/show_bug.cgi?id=738363
This reverts commit 05bd708fc5.
The reverted patch is wrong and introduces a regression because there
may still be time to receive some of the packets included in the gap
if they are reordered.
If we have a clock, update "now" now with the very latest running time we have.
If timers are unscheduled below we otherwise wouldn't update now (it's only updated
when timers expire), and also for the very first loop iteration now would otherwise
always be 0.
Also the time is used for the timeout functions, e.g. to calculate any times
for the next timeouts and we would otherwise pass too old times there.
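Sketched, assuming the jitterbuffer's clock and base time are at hand:

  if (clock)
    now = gst_clock_get_time (clock) -
        gst_element_get_base_time (GST_ELEMENT (jitterbuffer));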
https://bugzilla.gnome.org/show_bug.cgi?id=751636
This reverts commit 0c21cd7177.
If we have multiple immediate timers, we want to first handle the one with the
lowest sequence number... which would be broken now.
Instead of this we should just use a GSequence for the timers, and have them
sorted first by timestamp, and for equal timestamps by sequence number. Then
we would always only have to take the very first timer from the list and never
have to look at any others.
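A possible compare function for such a GSequence (the TimerData fields are
assumptions for illustration):

  #include <gst/rtp/gstrtpbuffer.h>

  typedef struct {
    GstClockTime timeout;
    guint16 seqnum;
  } TimerData;

  static gint
  compare_timers (gconstpointer a, gconstpointer b, gpointer user_data)
  {
    const TimerData *t1 = a, *t2 = b;

    if (t1->timeout != t2->timeout)
      return (t1->timeout < t2->timeout) ? -1 : 1;

    /* equal timeouts: the earlier (wraparound-aware) seqnum sorts first */
    return gst_rtp_buffer_compare_seqnum (t2->seqnum, t1->seqnum);
  }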
If update_receiver_stats() fails, we can't really do anything with this buffer
anymore and have to drop it. This happens if there's a big seqnum
discontinuity for example.
https://bugzilla.gnome.org/show_bug.cgi?id=751311
The new property allows selecting the time source that should be used for the
NTP time in RTCP packets. By default it will continue to calculate the NTP
timestamp (1900 epoch) based on the realtime clock. Alternatively it can use
the UNIX timestamp (1970 epoch), the pipeline's running time or the pipeline's
clock time. The latter is especially useful for synchronizing multiple
receivers if all of them share the same clock.
If use-pipeline-clock is set to TRUE, it will override the ntp-time-source
setting and continue to use the running time plus 70 years. This is only kept
for backwards compatibility.
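For instance (the "clock-time" nick is an assumption based on the description
above):

  /* use the pipeline clock as NTP time source, e.g. for several receivers
   * sharing the same clock */
  gst_util_set_object_arg (G_OBJECT (rtpbin), "ntp-time-source", "clock-time");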
According to RFC 5506, reduced-size RTCP packets can be sent. These
packets may not be compound, so we need to add support for
getting the ssrc from other types of packets.
https://bugzilla.gnome.org/show_bug.cgi?id=750327
Otherwise we will have tens to hundreds of thread wakeups in feedback
profiles, creating RTCP packets, etc., just to suppress them in 99% of the
cases (i.e. if no
feedback is actually pending and no regular RTCP has to be sent).
This improves CPU usage and battery life quite a lot.
https://bugzilla.gnome.org/show_bug.cgi?id=746543
If we may suppress the packet due to the rules of RFC4585 (i.e. when
below the t-rr-int), we can send a smaller RTCP packet without RRs
and full SDES. In theory we could even send a minimal RTCP packet
according to RFC5506, but we don't support that yet.
https://bugzilla.gnome.org/show_bug.cgi?id=746543
Otherwise we can't properly schedule RTCP in feedback profiles as we need to
distinguish the time when we last checked for sending RTCP (tp) but might have
suppressed it, and the time when we last actually sent a non-early RTCP
packet.
This together with the other changes should now properly implement RTCP
scheduling according to RFC4585, and especially allow us to send feedback
packets a lot if needed but only send regular RTCP packets every once in a
while.
https://bugzilla.gnome.org/show_bug.cgi?id=746543
And modify our RTCP scheduling algorithm accordingly. We now can send more
RTCP packets if needed for feedback, but will throttle full RTCP packets by
rtcp-min-interval (t-rr-int from RFC4585).
In non-feedback mode, rtcp-min-interval is Tmin from RFC3550, which is
statically set to 1s or 0s by RFC4585. Tmin defines how often we should
send RTCP packets at most.
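A hedged usage sketch, assuming the property is set on the internal session
object obtained from rtpbin:

  GObject *session;

  /* throttle full/regular RTCP to at most one packet every 5 seconds */
  g_signal_emit_by_name (rtpbin, "get-internal-session", 0, &session);
  g_object_set (session, "rtcp-min-interval", 5 * GST_SECOND, NULL);
  g_object_unref (session);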
https://bugzilla.gnome.org/show_bug.cgi?id=746543
It might just be a late retransmission or spurious packet from elsewhere, but
resetting everything would mean causing a noticeable hiccup. Let's
get some confidence first that the sequence numbers changed for whatever
reason.
https://bugzilla.gnome.org/show_bug.cgi?id=747922
This reverts commit d22ec49632.
Application code might expect that it only gets external sources on those
signals, and get confused by this. If anything we would need to add new
signals.
Without this it seems impossible for an application to easily get notified
about the internal ssrcs that are created, e.g. sender sources, and also
to know when they are active and produce RTCP packets.
https://bugzilla.gnome.org/show_bug.cgi?id=746747
We now take the maximum of 2*jitter and 0.5*packet_spacing for the extra
delay. If jitter is very low, this should prevent unnecessary retransmission
requests to some degree.
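Roughly, in code (names taken from the message):

  GstClockTime extra_delay = MAX (2 * jitter, packet_spacing / 2);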
https://bugzilla.gnome.org/show_bug.cgi?id=748041
This also happens in the very beginning when we receive the first packet, a
warning would be very confusing here. In all places where we should warn about
this, we would've printed a warning already before.
Right above we consider lost_packets packets, each of them having duration,
as lost and triggered their timers immediately. Below we use expected_dts
to schedule retransmission or schedule lost timers for the packets that
come after expected_dts.
As we just triggered lost_packets packets as lost, there's no point in
scheduling new timers for them and we can just skip over all lost packets.
https://bugzilla.gnome.org/show_bug.cgi?id=739868
Resetting the jitterbuffer drops all packets and other things, and will cause
a discontinuity in the packets received by the depayloaders. They should now
also flush anything they had pending as the new data will start at a different
position.
https://bugzilla.gnome.org/show_bug.cgi?id=739868
Make sure that the sync_src pad has caps before the segment event.
Otherwise we might get a segment event before caps from the receive
RTCP pad, and the caps would then only be set later when RTCP packets are
received. This results in a sticky event misordering warning.
This fixes warnings in the rtpaux unit test but also in the
rtpaux and rtx examples in tests/examples/rtp
https://bugzilla.gnome.org/show_bug.cgi?id=746445
Before we only started it when either:
- there was no send RTP stream, or
- we received an RTP packet for sending
This could mean that if the send RTP pads are connected but never receive any
RTP data, and the same session is also used for receiving RTP/RTCP, we would
never start the RTCP thread and would never send RTCP for the receiving part
of the session.
This can be reproduced with a pipeline like:
gst-launch-1.0 rtpbin name=rtpbin \
udpsrc port=5000 ! "application/x-rtp, media=video, clock-rate=90000, encoding-name=H264" ! rtpbin.recv_rtp_sink_0 \
udpsrc port=5001 ! rtpbin.recv_rtcp_sink_0 \
rtpbin.send_rtcp_src_0 ! fakesink name=rtcp_fakesink silent=false async=false sync=false \
rtpbin.recv_rtp_src_0_2553225531_96 ! decodebin ! xvimagesink \
fakesrc ! valve drop=true ! rtpbin.send_rtp_sink_0 \
rtpbin.send_rtp_src_0 ! fakesink name=rtp_fakesink silent=false async=false sync=false -v
Before this change the rtcp_fakesink would never send RTCP for the receiving
part of the session (i.e. no receiver reports!), after the change it does.
And before and after this change it would send RTCP for the receiving part of
the session if the sender part was omitted (the last two lines).
These are outside the expected range of sequence numbers and should be
clipped, especially for RTSP they might belong to packets from before a seek
or a previous stream in general.
This reverts commit 1591adf4cd.
https://bugzilla.gnome.org/show_bug.cgi?id=745586#c1:
It's the beginning of an implementation of RFC 2762, which is needed for
large multicast groups. The implementation is not yet complete but why
not leave what is there and implement RFC 2762 instead?
rtpsession declares an array of maps to store ssrcs but only
key 0 is being used. This patch replaces the array of maps
with just one map and removes useless parameters in rtpsession.
https://bugzilla.gnome.org/show_bug.cgi?id=745586
According to RFC 4585 section 3.5.3 step 1 we are not allowed to send
an early RTCP packet for the very first one. It must be a regular one.
Also make sure to not use last_rtcp_send_time in any calculations until
we have actually sent an RTCP packet already. Specifically, this means that we
must not use it for forward reconsideration of the current RTCP send time.
Instead we don't do any forward reconsideration for the first RTCP packet.
This will be emitted whenever an RTCP packet is received. Unlike
on-feedback-rtcp, this signal gets every complete RTCP packet and not
just the individual feedback packets.
It has had no effect for quite some time and is also not needed in general,
especially not to switch between immediate feedback mode and early feedback
mode. The latest understanding of the RFC is that from the endpoint point of
view, both modes are exactly the same. RTCP is only allowed to use the
bandwidth as given by the RFC constraints, as such it is only ever possible
to schedule an RTCP packet early, but it's against the RFC to schedule more RTCP
packets.
The difference between immediate feedback mode and early feedback mode is that
the former guarantees that an RTCP packet can be sent for every event
"immediately", which means that the bandwidth calculations from the RFC have
resulted in an RTCP scheduling interval that is small enough. Early feedback
mode on the other hand means that we can schedule some packets early to make
that happen, but it's not guaranteed at all that it's possible to schedule
an RTCP packet per event (i.e. they need to be accumulated or dropped).
This indicates with a boolean return value if scheduling a new RTCP packet
within the requested delay was possible. Otherwise it behaves exactly like
send-rtcp. The only reason for adding a new signal is ABI compatibility.
We (currently?) can't really handle gaps between RTP packets if they're not
properly timestamped. The current code would go into calculations with
GST_CLOCK_TIME_NONE and then cause assertions everywhere. It's probably
better to error out cleanly instead.
Apparently linphone sends an invalid RTP packet as very
first packet. We want to ignore that instead of erroring
out (same for any other invalid packets really).
https://bugzilla.gnome.org/show_bug.cgi?id=741398
We used to set up an iterator with 1 GValue set to a NULL object
pointer, which is not the normal way to do that. Instead we should make
sure that the first call to gst_iterator_next returns GST_ITERATOR_DONE.