When disabled, we can save some iterations over the timers.
There is probably an argument for rtx-delay-reorder to exist, but
for normal operations, handling jitter (reordering) is something a
jitterbuffer should do, and this property feels like functionality that
is not in sync with what the jitterbuffer is trying to achieve.
Example: You have 50ms jitter on your network, and are receiving
audio packets with 10ms durations. An audio packet should not be
considered late until its rtx-timeout has expired (and hence an rtx-event
is sent), but with rtx-delay-reorder, events will be sent pretty much
all the time due to the jitter on the network.
Point being: The jitterbuffer should adapt its size to the measured network
jitter, and then rtx-delay-reorder needs to adapt as well, or simply
get out of the way and let the other (better) rtx-mechanisms do their job.
Also change find_timer to take only the seqnum as an argument, since there
will only ever be one timer per seqnum at any given time. In the
one case where the type matters, the caller simply checks the type.
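
For illustration, a minimal sketch of the seqnum-only lookup, assuming the
timers are kept in a GArray of TimerData entries (the type and field names
below are illustrative, not necessarily the element's actual internals):

    #include <glib.h>

    typedef enum
    {
      TIMER_TYPE_EXPECTED,
      TIMER_TYPE_LOST,
      TIMER_TYPE_DEADLINE,
      TIMER_TYPE_EOS
    } TimerType;

    typedef struct
    {
      guint16 seqnum;
      TimerType type;
    } TimerData;

    /* With at most one timer per seqnum, the first match is the only
     * match and the type no longer needs to be compared here. */
    static TimerData *
    find_timer (GArray * timers, guint16 seqnum)
    {
      guint i;

      for (i = 0; i < timers->len; i++) {
        TimerData *test = &g_array_index (timers, TimerData, i);

        if (test->seqnum == seqnum)
          return test;
      }
      return NULL;
    }
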
https://bugzilla.gnome.org/show_bug.cgi?id=769768
If jackd changes the buffer size or sample rate, jackaudiosink hangs
and can't be stopped. This also happens if jack is configured as slave
and a GStreamer pipeline is started on the slave machine while the jack
master isn't running yet. If the jack master is then started, it changes
the buffer size / sample rate and jackaudiosink can't be stopped.
This fix calls jack_shutdown_cb when jack_sample_rate_cb or
jack_buffer_size_cb is called.
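
A rough sketch of the idea, using the standard JACK callback signatures
(the real callbacks carry the element state in the user-data pointer;
the bodies below are illustrative only):

    #include <jack/jack.h>

    /* Treat a sample-rate or buffer-size change like a server shutdown,
     * so the ringbuffer is signalled and the element can stop instead
     * of hanging. */
    static void
    jack_shutdown_cb (void *arg)
    {
      /* wake up / error out the audio ringbuffer here */
    }

    static int
    jack_sample_rate_cb (jack_nframes_t nframes, void *arg)
    {
      jack_shutdown_cb (arg);
      return 0;
    }

    static int
    jack_buffer_size_cb (jack_nframes_t nframes, void *arg)
    {
      jack_shutdown_cb (arg);
      return 0;
    }
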
https://bugzilla.gnome.org/show_bug.cgi?id=771272
And actually calculate the field duration instead of a frame duration so
that we can properly timestamp output frames in fields=all mode.
This is probably still broken for reverse playback in telecine mode.
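
As a hedged illustration of the timestamping math (assuming the framerate
comes from the negotiated caps; the helper name is made up): in fields=all
mode every field becomes its own output buffer, so each buffer covers half
a frame period, e.g. 20ms per field at 25 fps.

    #include <gst/gst.h>

    /* field duration = (fps_d / fps_n) / 2 seconds */
    static GstClockTime
    calc_field_duration (gint fps_n, gint fps_d)
    {
      if (fps_n <= 0 || fps_d <= 0)
        return GST_CLOCK_TIME_NONE;

      return gst_util_uint64_scale (GST_SECOND, fps_d, (guint64) fps_n * 2);
    }
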
This may cause a few packets to be processed by the parser, but it's
better than never pushing out buffers from a slightly broken stream
where no marker bits are set.
QuickTime.h is no longer available on OS X 10.12 (Sierra),
and both the header and the framework seem unnecessary
for compilation - at least as of 10.11 (El Capitan).
https://bugzilla.gnome.org/show_bug.cgi?id=770526
To be able to cap the number of allowed streams for one session.
This is useful for preventing DoS attacks, where a sender could change
its SSRC for every buffer, effectively bringing rtpbin to a halt.
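
Usage sketch; the element and property name used here (rtpssrcdemux and
"max-streams") are assumptions for illustration, check the patch for the
actual names:

    #include <gst/gst.h>

    int
    main (int argc, char **argv)
    {
      GstElement *demux;

      gst_init (&argc, &argv);

      demux = gst_element_factory_make ("rtpssrcdemux", NULL);
      if (demux == NULL)
        return 1;

      /* refuse to demux more than 64 distinct SSRCs for this session */
      g_object_set (demux, "max-streams", 64, NULL);

      gst_object_unref (demux);
      return 0;
    }
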
https://bugzilla.gnome.org/show_bug.cgi?id=770292
Under certain conditions gst_rtp_buffer_get_payload() returns a copy of
the payload. In that case, modifications to the payload will not affect the
RTP buffer. So instead of modifying the payload buffer directly, we
should modify the buffer that actually gets pushed onto the adapter.
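
A sketch of the pattern (the function name and the patched byte are
illustrative): take the payload as a GstBuffer, make it writable, modify
it, and push that same buffer to the adapter, so the modification is
guaranteed to reach the aggregated data.

    #include <gst/rtp/gstrtpbuffer.h>
    #include <gst/base/gstadapter.h>

    static void
    push_patched_payload (GstRTPBuffer * rtp, GstAdapter * adapter)
    {
      GstBuffer *payload;
      GstMapInfo map;

      /* a buffer wrapping the packet's payload */
      payload = gst_rtp_buffer_get_payload_buffer (rtp);
      payload = gst_buffer_make_writable (payload);

      if (gst_buffer_map (payload, &map, GST_MAP_WRITE)) {
        if (map.size > 0)
          map.data[0] &= 0x7f;      /* hypothetical header fix-up */
        gst_buffer_unmap (payload, &map);
      }

      /* this buffer is what the adapter actually aggregates */
      gst_adapter_push (adapter, payload);
    }
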
The functionality of all the tests was kept exactly the same. Some tests
were renamed:
test_push_forward_seq -> test_rtxsend_rtxreceive
test_drop_one_sender -> test_rtxsend_rtxreceive_with_packet_loss
test_drop_multiple_sender -> test_multi_rtxsend_rtxreceive_with_packet_loss
test_rtxreceive_data_reconstruction was testing that the retransmitted
buffer produced by rtxsend was correctly transformed back into the original
buffer by rtxreceive. This is now checked in all the tests where both
rtxsend & rtxreceive are involved, which is why the test was removed.