A seek in a multi-sink pipeline typically leads to several seek events in a row,
which can result in several newsegments being sent in a row without intermediate
flushing. These then accumulate, distort the rendering times and thus make the
pipeline appear to 'hang'.
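A rough sketch of why that hurts, with made-up numbers (illustrative arithmetic
only, not code from this change): every non-flushed newsegment adds to the
accumulated base, so the same buffer timestamp maps to an ever later running
time and the sink keeps waiting.

#include <stdio.h>

int
main (void)
{
  double base = 0.0;          /* accumulated running time of closed segments */
  double segment_start = 0.0; /* start of the segment produced by the seek   */
  double buffer_ts = 0.5;     /* timestamp of the first buffer after seeking */

  for (int i = 1; i <= 3; i++) {
    base += 2.0;              /* assumed duration accounted to the previous segment */
    printf ("after newsegment %d the buffer renders at %.1f s running time\n",
        i, base + (buffer_ts - segment_start));
  }
  return 0;
}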
Use GstRTPBaseAudioPayload as the base class. This saves a lot of code and fixes
a number of problems that the base class already solves.
Fixes #853367
Don't make copies in the getter and setter for SDES in the RTPSource. This
avoids a couple of copies of the SDES structure when generating RTCP
packets.
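A minimal sketch of the pattern with assumed signatures (the real RTPSource
code has more fields and locking): the getter hands out a borrowed pointer
and the setter takes ownership, so no gst_structure_copy() is needed.

#include <gst/gst.h>

/* Hypothetical, simplified RTPSource just for this sketch. */
typedef struct
{
  GstStructure *sdes;
} RTPSource;

/* Returns a borrowed reference; the caller must not modify or free it. */
const GstStructure *
rtp_source_get_sdes_struct (RTPSource * src)
{
  return src->sdes;
}

/* Takes ownership of @sdes instead of copying it. */
void
rtp_source_set_sdes_struct (RTPSource * src, GstStructure * sdes)
{
  if (src->sdes)
    gst_structure_free (src->sdes);
  src->sdes = sdes;
}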
Add a new spspps-interval property to instruct the payloader to insert
SPS and PPS at periodic intervals in the stream.
Rework the SPS/PPS handling so that the bytestream and AVC sample code both use
the same code paths to handle sprop-parameter-sets. This also allows the AVC
code to insert SPS/PPS just like the bytestream code does.
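For illustration, application code could set the property like this (property
name as given above; the value 10 is just an example, see the property
documentation for its unit):

#include <gst/gst.h>

/* Hypothetical helper: ask the payloader to insert SPS/PPS periodically.
 * 10 is an arbitrary example value. */
static void
enable_periodic_sps_pps (GstElement * rtph264pay)
{
  g_object_set (rtph264pay, "spspps-interval", 10, NULL);
}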
Fixes #604913
For some reason the latest gcc/binutils accept movzxb here, while
movzbl would be correct and is the only form accepted by older
gcc/binutils.
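A stand-alone x86 illustration of the mnemonic in question (not the code
touched by this change; the inline assembly below assumes an x86/x86-64
target):

#include <stdint.h>
#include <stdio.h>

/* "movzbl" (zero-extend byte to 32 bits) is the classic AT&T spelling
 * accepted by old and new binutils alike; "movzxb" is only tolerated by
 * some newer toolchains. */
static uint32_t
zero_extend_byte (uint8_t b)
{
  uint32_t r;
  __asm__ ("movzbl %1, %0" : "=r" (r) : "q" (b));
  return r;
}

int
main (void)
{
  printf ("%u\n", zero_extend_byte (0xff));   /* prints 255 */
  return 0;
}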
Fixes bug #604679.
This provides another 7% speedup for the time-domain convolution and a 1.5%
speedup for the FFT convolution on mono input.
This optimization assumes that the compiler simplifies calculations
and conditions on constants and unrolls loops with a constant
number of iterations.
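A minimal sketch of the idea with illustrative names (not the actual filter
code): when the channel count is a literal constant at an inlined call site,
the compiler can unroll the inner loop and drop the per-channel bookkeeping.

/* Generic block processing with a run-time channel count ... */
static inline void
scale_block (float * samples, int n_frames, int channels)
{
  for (int i = 0; i < n_frames; i++)
    for (int c = 0; c < channels; c++)
      samples[i * channels + c] *= 0.5f;
}

/* ... specialised by calling it with constants, which the compiler can
 * constant-fold and unroll. */
static void
scale_mono (float * samples, int n_frames)
{
  scale_block (samples, n_frames, 1);
}

static void
scale_stereo (float * samples, int n_frames)
{
  scale_block (samples, n_frames, 2);
}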
This will always use time-domain convolution, which lowers the latency:
with FFT convolution the latency is always a multiple of the kernel length,
with time-domain convolution it is only the pre-latency of the filter kernel.
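For example, assuming a linear-phase kernel of 101 taps, the time-domain path
only adds the filter's pre-latency of (101 - 1) / 2 = 50 samples, while
block-based FFT convolution adds latency in multiples of the 101-sample
kernel length.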
This provides a great speedup, especially since the relationship between kernel
length and processing time is now logarithmic instead of linear. Below a
kernel length of 32 it is a bit slower, above that it is much faster:
  kernel length    before        after
             17    0.788000  ->  0.950000
             33    1.208000  ->  1.146000
             65    2.166000  ->  1.146000
            ...
           4097  107.444000  ->  1.508000
For kernel sizes smaller than 32 the normal time-domain convolution is chosen,
for larger sizes the FFT convolution is used automatically.
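A minimal sketch of that selection logic with hypothetical names (the real
element of course keeps more state):

/* Hypothetical dispatch mirroring the crossover point described above. */
#define FFT_KERNEL_THRESHOLD 32

typedef struct _Filter Filter;

void convolve_time_domain (Filter * f, const float * in, float * out, int n);
void convolve_fft (Filter * f, const float * in, float * out, int n);
int filter_kernel_length (Filter * f);

void
filter_process (Filter * f, const float * in, float * out, int n)
{
  if (filter_kernel_length (f) < FFT_KERNEL_THRESHOLD)
    convolve_time_domain (f, in, out, n);   /* O(n * kernel_length) */
  else
    convolve_fft (f, in, out, n);           /* roughly O(n * log kernel_length) */
}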
Fixes bug #594381.
Remove some redundant calculations, move comparisons out of
inner loops, etc.
This makes the convolution about 3 (!) times faster, but the
processing time is of course still proportional to the
filter size.
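A small before/after sketch of the kind of change meant here (illustrative
single-channel-output code, not the actual filter; the input must hold
n + kernel_length - 1 frames):

/* Before: the channel check is re-evaluated for every kernel tap. */
static void
convolve_naive (const float * in, const float * kernel, float * out,
    int n, int kernel_length, int channels)
{
  for (int i = 0; i < n; i++) {
    float sum = 0.0f;
    for (int j = 0; j < kernel_length; j++) {
      if (channels == 1)
        sum += in[i + j] * kernel[j];
      else
        sum += in[(i + j) * channels] * kernel[j];
    }
    out[i * channels] = sum;
  }
}

/* After: the loop-invariant comparison is hoisted out of the inner loop. */
static void
convolve_hoisted (const float * in, const float * kernel, float * out,
    int n, int kernel_length, int channels)
{
  if (channels == 1) {
    for (int i = 0; i < n; i++) {
      float sum = 0.0f;
      for (int j = 0; j < kernel_length; j++)
        sum += in[i + j] * kernel[j];
      out[i] = sum;
    }
  } else {
    for (int i = 0; i < n; i++) {
      float sum = 0.0f;
      for (int j = 0; j < kernel_length; j++)
        sum += in[(i + j) * channels] * kernel[j];
      out[i * channels] = sum;
    }
  }
}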