The send path in rtpsession processes the buffer list along the way,
sharing info and stats between packets in the same list, because it
assumes that all packets in a buffer list are from the same frame.
However, in the receive path packets can arrive in all sorts of
arrangements:
- different sources,
- different frames (different timestamps),
- different types (multiplexed RTP and RTCP, invalid RTP packets),
so a more general approach is needed to correctly support buffer
lists in the receive path.
It turns out that it's simpler and more robust to process buffers
individually inside the rtpsession element even if they come in a buffer
list, and then reassemble a new buffer list when pushing the buffers
downstream.
This avoids complicating the existing code by making all functions
buffer-list-aware, with the risk of introducing regressions.
To support buffer lists in the receive path and reduce the "push
overhead" in the pipeline, a new private field named processed_list is
added to GstRtpSessionPrivate; it is set in the chain_list handler and
used in the process_rtp callback; this is to achieve the following:
- iterate over the incoming buffer list;
- process the packets one by one;
- add the valid ones to a new buffer list;
- push the new buffer list downstream.
The processed_list field is reset before pushing a buffer list, to be on
the safe side in case upstream pushes a single buffer at some later
point.
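A minimal sketch of the idea, using only the public GstBufferList API
(the per-packet hook and names below are placeholders, not the actual
rtpsession code):

    #include <gst/gst.h>

    /* Hypothetical per-packet hook: return a buffer to keep it
     * (ownership transferred), or NULL to drop an invalid packet. */
    typedef GstBuffer *(*ProcessPacketFunc) (GstBuffer * buf,
        gpointer user_data);

    static GstFlowReturn
    push_processed_list (GstPad * srcpad, GstBufferList * in,
        ProcessPacketFunc process, gpointer user_data)
    {
      GstBufferList *out = gst_buffer_list_new ();
      guint i, len = gst_buffer_list_length (in);

      for (i = 0; i < len; i++) {
        /* gst_buffer_list_get() returns a borrowed reference */
        GstBuffer *buf = gst_buffer_list_get (in, i);
        GstBuffer *valid = process (gst_buffer_ref (buf), user_data);

        if (valid != NULL)
          gst_buffer_list_add (out, valid);
      }

      gst_buffer_list_unref (in);

      /* reassemble a single list so downstream still gets one push */
      return gst_pad_push_list (srcpad, out);
    }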
NOTE:
The proposed modifications do not change the behavior of the send path.
The process_rtp callback is called in rtpsource.c by the push_rtp
callback (via source_push_rtp) only when the source is not internal.
So even though push_rtp is also called in the send path, it won't end up
using process_rtp in this case because the source would be internal in
the send path.
The reasoning above may suggest a future refactoring: push_rtp
might be split to better differentiate the send and receive paths.
Forwarding a single segment event from the pad that first gets
chained is incorrect: when that first event was sent by an element
such as x264enc, with its offset start, we end up pushing out-of-segment
buffers for the other pad(s).
Instead, every time the active pad changes, forward the appropriate
segment event.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/1028
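A sketch of the approach (not the element's actual code), using the
sticky segment event that GStreamer keeps on each sink pad:

    #include <gst/gst.h>

    /* When the active sink pad changes, forward that pad's own segment
     * instead of keeping the first segment that was ever chained. */
    static void
    forward_segment_of_active_pad (GstPad * active_sinkpad, GstPad * srcpad)
    {
      GstEvent *segment_event =
          gst_pad_get_sticky_event (active_sinkpad, GST_EVENT_SEGMENT, 0);

      if (segment_event != NULL)
        gst_pad_push_event (srcpad, segment_event);
    }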
When it is not yet clear whether a packet from a source should be
pushed, the packet is put into a queue; this happens in two cases:
- the source is still in probation;
- there is a large jump in seqnum, and it is not clear what
the cause is, so future packets will help make a guess.
In either case stats about received packets are not updated at all; and
even if they were, when init_seq() is called it resets all receiver
stats, effectively losing any stats about previously received
packets.
Fix this by taking the queued packets into account and updating the
stats when calling init_seq().
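A rough sketch of the idea; the stats structure and queue layout below
are simplified stand-ins for the RTPSource internals:

    #include <gst/gst.h>

    typedef struct
    {
      guint packets_received;
      guint64 bytes_received;
    } RecvStatsSketch;

    /* Fold the packets held in the probation/gap queue back into the
     * counters that init_seq() has just reset. */
    static void
    account_queued_packets (GQueue * queued, RecvStatsSketch * stats)
    {
      GList *l;

      for (l = queued->head; l != NULL; l = l->next) {
        GstBuffer *buf = l->data;

        stats->packets_received++;
        stats->bytes_received += gst_buffer_get_size (buf);
      }
    }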
Since commit c971d1a9a (rtpsource: refactor bitrate estimation,
2010-03-02) the bytes_received field in RTPSourceStats is set but then
never used again. Expose it so that user code can verify how many bytes
have been received.
According to RFC 3550, lower-layer headers should be considered for
bandwidth calculation.
See https://tools.ietf.org/html/rfc3550#section-6.2 paragraph 4:
Bandwidth calculations for control and data traffic include
lower-layer transport and network protocols (e.g., UDP and IP) since
that is what the resource reservation system would need to know.
Fix the source data to accommodate that.
Assume UDP over IPv4 for now; this is a simplification, but it's good
enough for the moment.
While at it define a constant and use that instead of a magic number.
NOTE: this change basically reverts the logic of commit 529f443a6
(rtpsource: use payload size to estimate bitrate, 2010-03-02)
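Illustrative only (the constant and function names are made up here):
with the UDP-over-IPv4 assumption, every packet is accounted for with 28
extra bytes of lower-layer headers:

    #include <glib.h>

    /* 20-byte IPv4 header + 8-byte UDP header, per RFC 3550 section 6.2 */
    #define UDP_IP_HEADER_OVERHEAD (20 + 8)

    static guint
    bandwidth_packet_size (guint rtp_packet_size)
    {
      return rtp_packet_size + UDP_IP_HEADER_OVERHEAD;
    }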
If a conflict is detected when sending a packet, then also send an
upstream event to tell the source to reconfigure itself.
Also ignore the collision if we see more than one collision from the same
remote source, to avoid problems with loops.
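A sketch of the upstream notification; the "GstRTPCollision" custom
event follows the convention used by the RTP elements, but the
surrounding code here is simplified:

    #include <gst/gst.h>

    /* Tell upstream (e.g. the payloader) that its SSRC collides so it
     * can reconfigure itself with a new one. */
    static void
    notify_ssrc_collision (GstPad * send_rtp_sinkpad, guint32 ssrc)
    {
      GstEvent *event = gst_event_new_custom (GST_EVENT_CUSTOM_UPSTREAM,
          gst_structure_new ("GstRTPCollision",
              "ssrc", G_TYPE_UINT, (guint) ssrc, NULL));

      gst_pad_push_event (send_rtp_sinkpad, event);
    }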
If an rtx packet arrives that hasn't been requested (it might
have been requested prior to a reset), ignore it so that
it doesn't inadvertently trigger a clock-skew adjustment.
In the case of reordered packets, calculating skew would cause
pts values to be off. Only calculate skew when packets come
in as expected. Also, late RTX packets should not trigger
clock skew adjustments.
Fixes #612
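A sketch of the guard with hypothetical names (the real jitterbuffer
code differs), just to show when the skew estimator is allowed to run:

    #include <gst/gst.h>

    static GstClockTime
    maybe_calculate_skew (gboolean is_rtx, guint16 seqnum,
        guint16 expected_seqnum, GstClockTime dts,
        GstClockTime (*calculate_skew) (GstClockTime dts))
    {
      /* only in-order, non-RTX packets may update the skew estimate */
      if (!is_rtx && seqnum == expected_seqnum)
        return calculate_skew (dts);

      /* reordered or late RTX packet: keep the current dts-to-pts mapping */
      return dts;
    }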
Make sure to clear any master clock on the media_clock
before unreffing it to release the timer callback that's
updating the clock and keeping it reffed.
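In terms of the public clock API, the teardown now looks roughly like
this:

    #include <gst/gst.h>

    static void
    release_media_clock (GstClock * media_clock)
    {
      /* detach from the master first: this releases the slaving timer
       * callback that keeps a reference on the clock... */
      gst_clock_set_master (media_clock, NULL);

      /* ...so that this unref can actually be the last one */
      gst_object_unref (media_clock);
    }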
If, say, an rtx-packet arrives really late, this can have a dramatic
effect on the jitterbuffer clock-skew logic, causing it to be reset
and to lose track of the current dts-to-pts calculations, directly
affecting the packets that arrive later.
This is demonstrated in the test, where an RTX packet is pushed in really
late; without this patch the last packet will have its PTS affected
by this, whereas a late RTX packet should be redundant information and
not affect anything.
This patch corrects the delay set on EXPECTED timers that are added when
processing gaps. Previously the delay could be too small so that
'timeout + delay' was much less than 'now', causing the following retries
to be scheduled too early. (They were sent earlier than
rtx-retry-timeout after the previous timeout.)
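One way to state the invariant the patch restores (all names below are
illustrative): a retry must never be scheduled before 'now':

    #include <gst/gst.h>

    static GstClockTime
    clamp_expected_timer_delay (GstClockTime timeout, GstClockTime delay,
        GstClockTime now)
    {
      /* If 'timeout + delay' already lies in the past, the retries that
       * follow would fire too early; bump the delay so the first retry
       * lands at 'now'. */
      if (timeout + delay < now)
        delay = now - timeout;

      return delay;
    }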
Turns out that the "big-gap" logic of the jitterbuffer has been horribly
broken.
For people using lost-events, an RTP stream with a gap in sequence
numbers would produce exactly that many lost-events immediately.
So if your sequence numbers jumped by 20000, you would get 20000
lost-events in your pipeline...
The test that covers this logic, "test_push_big_gap", basically
incremented the DTS of the buffer by an amount equal to the gap that was
introduced, so that in fact this was more of a "large pause" test than an
actual gap/discontinuity in the sequence numbers.
Once the test was modified to not increment the DTS (buffer arrival time)
with a similar gap, all sorts of craziness started happening, including
the addition of thousands of timers, and the logic that should have
kicked in, the "handle_big_gap_buffer" logic, was not called at all. Why?
Because the number max_dropout is calculated using the packet rate, and
the packet-rate logic would, in this particular test, report that
the new packet rate was over 400000 packets per second!
I believe the right fix is to not try to update the packet rate if
there is any jump in the sequence numbers, and only do these calculations
for nice, sequential streams.
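A sketch of the proposed guard; update_packet_rate() is a stand-in for
the real estimator:

    #include <gst/rtp/gstrtpbuffer.h>

    /* stand-in for the real packet-rate estimator */
    static void
    update_packet_rate (guint16 seqnum, guint32 rtptime)
    {
      (void) seqnum;
      (void) rtptime;
    }

    static void
    on_new_seqnum (guint16 last_seqnum, guint16 seqnum, guint32 rtptime)
    {
      /* Only feed the packet-rate estimator for nice, sequential
       * streams; any jump in the sequence numbers keeps the previous
       * estimate. */
      if (gst_rtp_buffer_compare_seqnum (last_seqnum, seqnum) == 1)
        update_packet_rate (seqnum, rtptime);
    }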
In this change we now protect the internal srcpads list using the
stream lock and limit usage of the internal stream lock to
preventing data flowing on the other src pad type while creating
and signalling the new pad.
This fixes a deadlock with the RTPBin shutdown lock. These two locks
would end up being taken in different orders, which caused a deadlock.
More generally, we should not rely on a stream lock when handling
out-of-band data, so as a side effect, we should not take a stream lock
when iterating internal links.
This is a tiny clarification, as the storage was loosely named "storage".
This change clarifies that the storage is specifically used for received
RTP packets, unlike the storage found in rtprtxsend, which stores a
backlog of sent RTP packets.
We recently added code to remove outdated NACKs to avoid using bandwidth
for packets that have no chance of arriving on time. Though, this had a
side effect, which is that it was possible to get an early RTCP packet
with no feedback in it. This was pretty useless, but also had another
side effect: the RTX RTT value would never be updated. So when we started
having late RTX requests due to a high RTT, we'd never manage to recover.
This fixes the regression by making sure we keep at least one NACK in
this situation. This is really light on the bandwidth and allows for
quick recovery after the RTT has spiked higher than the jitterbuffer
capacity.
Right now, we may call on-new-ssrc after we have processed the first
RTP packet. This prevents properly configuring the source, as some
properties, like "probation", are copied internally for use as a
decreasing counter. For this specific property, it prevents the
application from disabling probation on auxiliary sparse streams.
Probation is harmful on sparse streams since the probation algorithm
assumes frequent and contiguous RTP packets.
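For example, assuming the internal RTPSession object has been obtained
via rtpbin's "get-internal-session" action signal, an application can
now reliably do:

    #include <gst/gst.h>

    /* on-new-ssrc now hands over the new RTPSource before its first
     * packet is processed, so setting "probation" to 0 actually takes
     * effect. */
    static void
    on_new_ssrc_cb (GObject * session, GObject * source, gpointer user_data)
    {
      g_object_set (source, "probation", 0, NULL);
    }

    /* in the setup code:
     *   g_signal_connect (session, "on-new-ssrc",
     *       G_CALLBACK (on_new_ssrc_cb), NULL);
     */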
This introduces a new signal on RTPSession: on-sending-nacks is emitted
right before the list of seqnums to be nacked is processed and
transformed into FB NACKs. This allows implementing custom NACK
handling through another mechanism, such as APP feedback.
In order to do that, we now split the NACK registration from the actual
FB NACK packet construction. We then try to add as many FB NACKs as
possible into the active packets and leave the remaining seqnums in the
RTPSource. In order to avoid sending outdated NACKs later on, we save the
seqnums' calculated deadline and clean up the outdated seqnums before the
next RTCP send.
Fixes #583
Calling rtp_session_send_rtcp before marking the source as requiring a
pli/fir/nack meant the rtcp_thread could be scheduled and start running
before the source was updated. This meant the request would not be sent
early but instead was transmitted with the next regular RTCP packet.
Add test for nack generation.
If the current time is equal to the early RTCP time deadline, there is
no need to schedule a timer. This ensures that immediate feedback is
really immediate and simplifies implementing unit tests with the test
clock, which stops exactly on the timeout time.
This fix has been extracted from the Pexip feature patch called
"rtpsession: Allow instant transmission of RTCP packets".
One comment in gst_rtp_session_chain_send_rtp_common() refers to
groups in a buffer list; however, this concept of "group" comes from
GStreamer 0.10 and does not exist anymore in GStreamer 1.0, so update the
comment to refer to buffers instead.
The update_receiver_stats() function is also called when sending packets
in rtp_source_send_rtp(), and sending packets may happen using a buffer
list rather than individual buffers.
So update the stats using the actual number of packets sent.
NOTE: this is fine for the receive path too (rtp_process_send_rtp)
because the receive path does not support buffer lists and
pinfo->packets would always be equal to 1 in this case.
This is needed for the case where you don't know in advance all the
sessions you will be using, but would like to place all the related AUX
elements in the same GstBin. As per the current implementation, each time
a sender AUX bin is requested and returned, RTPBin will walk its src pads
and create sessions for these pads.
In the current implementation, if a src pad already has a session, it
returns an error and stops. As a side effect, if an AUX bin is reused in
a following AUX bin request, it can only work if the pads are created on
the last request.
This change simply relaxes the restriction in order to keep walking, and
just ensures that all newly created pads have a session.
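For illustration (ownership and ghost-pad plumbing omitted,
add_rtx_sender_for_session() is a hypothetical helper): the same GstBin
can now be returned for several sessions, as long as each request adds
the "src_<session>"/"sink_<session>" pads it needs:

    #include <gst/gst.h>

    /* Hypothetical helper: add an rtprtxsend (or similar) for this
     * session to the shared bin and ghost its pads as src_<session> /
     * sink_<session>.  Details omitted. */
    static void
    add_rtx_sender_for_session (GstElement * aux_bin, guint session)
    {
      (void) aux_bin;
      (void) session;
    }

    /* "request-aux-sender" handler: return the same shared bin every
     * time; with the relaxed check, src pads that already have a
     * session are simply skipped. */
    static GstElement *
    request_aux_sender_cb (GstElement * rtpbin, guint session,
        gpointer user_data)
    {
      GstElement *aux_bin = user_data;

      (void) rtpbin;
      add_rtx_sender_for_session (aux_bin, session);

      return aux_bin;
    }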
In commit 28e5f9098 (rtpbin: use PacketInfo for the sender, 2013-09-13)
the rtp_source_send_rtp signature changed but the documentation was not
adjusted to match the new one.
Update the documentation to match the function signature.
Some functions now accept a generic 'gpointer data' parameter because
they can work either on a single buffer or on a buffer list.
However, the comments were still referring to the old 'GstBuffer *buffer'
parameter, so update the comments to match the actual function
signatures.