audioconvert's passthrough status can no longer be determined
strictly from input/output caps equality, as a mix matrix can
now be specified.
We now call gst_base_transform_set_passthrough() dynamically, based
on the return value of the new gst_audio_converter_is_passthrough()
API, which takes the mix matrix into account.
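As a rough sketch (not the element's actual code), the passthrough update
now looks roughly like this, assuming a configured GstAudioConverter called
'convert' is available wherever the caps or mix matrix change:

    #include <gst/gst.h>
    #include <gst/base/gstbasetransform.h>
    #include <gst/audio/audio.h>

    static void
    update_passthrough (GstBaseTransform * trans, GstAudioConverter * convert)
    {
      /* is_passthrough() also takes a configured mix matrix into account,
       * so equal input/output caps alone no longer imply passthrough */
      gboolean passthrough = gst_audio_converter_is_passthrough (convert);

      gst_base_transform_set_passthrough (trans, passthrough);
    }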
We need different export decorators for the different libs.
For now there is no functional change: just rename before the release,
and add prelude headers that define the new decorator to GST_EXPORT.
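Illustrative sketch of the prelude-header pattern (the audio library's
decorator is shown here; other libs get their own name):

    /* sketch of an audio prelude header */
    #ifndef __GST_AUDIO_PRELUDE_H__
    #define __GST_AUDIO_PRELUDE_H__

    #include <gst/gst.h>

    /* for now the new decorator simply maps to GST_EXPORT */
    #ifndef GST_AUDIO_API
    #define GST_AUDIO_API GST_EXPORT
    #endif

    #endif /* __GST_AUDIO_PRELUDE_H__ */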
Optimize LE<->BE conversion by adding a dedicated fast path instead of
using the generic converter. Implement a transform_ip function in order to do
the endian swap in place.
This avoids the buffer allocation for the intermediate format, works in place,
and performs the conversion in one step instead of unpack-convert-pack.
For all bit widths the naive algorithm is implemented, which provides the best
performance when compiled with -O3. ORC was considered but eventually removed,
as it requires a dedicated function for in-place conversion (due to the
"restrict" parameters).
A more complex algorithm for the 24-bit conversion, with an unrolled loop and
32-bit processing, is implemented in the #if 0 section. It performs better when
compiled with -O2; with -O3, however, the naive algorithm performs better.
https://bugzilla.gnome.org/show_bug.cgi?id=773073
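For illustration, the naive in-place swap for 16-bit samples is conceptually
as below; the function name and signature are made up here and do not match
the element's code:

    #include <glib.h>

    static void
    swap_16_ip (guint8 * data, gsize n_samples)
    {
      gsize i;

      /* naive per-sample byte swap; compilers vectorize this well at -O3 */
      for (i = 0; i < n_samples; i++) {
        guint8 tmp = data[2 * i];

        data[2 * i] = data[2 * i + 1];
        data[2 * i + 1] = tmp;
      }
    }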
Remove the consumed/produced output fields from the resampler and
converter. Let the caller specify the right number of input/output
samples so that we can work more efficiently.
Use just one function to update the converter configuration.
Simplify some things internally.
Make it possible to use writable input as temp space in audioconvert.
Pass flags to _converter_new() so that we can configure the converter
differently depending on the given options.
Rename SOURCE_WRITABLE -> IN_WRITABLE because the array is called 'in'.
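A minimal sketch of the construction-time flags, with hypothetical helper
and variable names:

    #include <gst/audio/audio.h>

    static GstAudioConverter *
    make_converter (GstAudioInfo * in_info, GstAudioInfo * out_info,
        gboolean in_writable)
    {
      GstAudioConverterFlags flags = GST_AUDIO_CONVERTER_FLAG_NONE;

      /* mark the 'in' array as writable so it can be used as temp space */
      if (in_writable)
        flags |= GST_AUDIO_CONVERTER_FLAG_IN_WRITABLE;

      return gst_audio_converter_new (flags, in_info, out_info, NULL);
    }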
Simplify the API: we don't need the consumed and produced output
arguments. The caller needs to use the _get_in_frames()/_get_out_frames() API
to check how much input is needed and how much output will be produced.
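Sketch of the caller side, assuming 'in' and 'out' are already set-up arrays
of plane pointers:

    #include <gst/audio/audio.h>

    static gboolean
    convert_block (GstAudioConverter * convert, gpointer in[], gsize in_frames,
        gpointer out[])
    {
      /* ask the converter how much output this much input will produce */
      gsize out_frames = gst_audio_converter_get_out_frames (convert, in_frames);

      /* exactly in_frames are consumed and out_frames produced, so no
       * consumed/produced return arguments are needed anymore */
      return gst_audio_converter_samples (convert, GST_AUDIO_CONVERTER_FLAG_NONE,
          in, in_frames, out, out_frames);
    }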
Rework the main processing loop. We now create an audio processing
chain from small core functions. This is very similar to how the
video-converter core works and allows us to statically calculate an
optimal allocation strategy for all possible combinations of operations.
Make sure we support non-interleaved data everywhere.
Add functions to calculate in and out frames and latency.
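Purely conceptual sketch of the chain idea (the types and names here are
hypothetical, not the library's internals): each step is a small core
function with scratch space decided up front, and the chain is walked per
block:

    #include <glib.h>

    typedef struct _AudioChainStep AudioChainStep;

    struct _AudioChainStep
    {
      /* one small core operation, e.g. unpack, mix, resample, quantize, pack */
      void (*process) (AudioChainStep * step, gpointer in, gpointer out,
          gsize samples);
      gpointer tmp;                /* scratch buffer planned at setup time */
      AudioChainStep *next;
    };

    static void
    run_chain (AudioChainStep * chain, gpointer in, gpointer out, gsize samples)
    {
      gpointer cur = in;

      for (; chain != NULL; chain = chain->next) {
        /* intermediate steps write to pre-planned scratch, the last one to out */
        gpointer dst = (chain->next != NULL) ? chain->tmp : out;

        chain->process (chain, cur, dst, samples);
        cur = dst;
      }
    }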