A correctly configured AVAudioSession is needed on iOS to:
- allow the application to capture microphone audio (in some cases)
- avoid playback being silenced in silent mode
Without this, initializing an AudioUnit for capture can fail on iOS 17 and newer (from my testing).
Since AVAudioSession has a lot of settings, in most cases its setup should be handled by the user/app.
However, just to cover a basic default scenario, let's configure the bare minimum ourselves,
and allow anyone to disable that behaviour by setting configure-session=false on the source/sink.
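An application that wants full control of the session could opt out like this (a minimal sketch;
it assumes the capture element is created by name, e.g. osxaudiosrc):

    /* Sketch: disable the built-in AVAudioSession setup so the app
     * can configure the session itself before starting the pipeline. */
    GstElement *src = gst_element_factory_make ("osxaudiosrc", NULL);
    g_object_set (src, "configure-session", FALSE, NULL);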
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7856>
Turns out AudioConvertHostTimeToNanos and AudioGetCurrentHostTime are macOS-only APIs, which prevents apps that use
GStreamer on iOS from being accepted into the App Store.
This commit replaces those functions with a manual version of what they do: mach_absolute_time() for the current time,
and the numer/denom ratio from mach_timebase_info(), queried once at startup, to convert host timestamps to nanoseconds.
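The conversion boils down to scaling host ticks by the timebase ratio. A minimal sketch of the idea
(not the exact code from this commit; a real implementation should scale with overflow protection,
e.g. something like gst_util_uint64_scale()):

    #include <mach/mach_time.h>
    #include <stdint.h>

    static mach_timebase_info_data_t timebase;

    static void
    init_timebase (void)
    {
      /* Query numer/denom once; their ratio converts host ticks to ns. */
      mach_timebase_info (&timebase);
    }

    static uint64_t
    host_time_to_ns (uint64_t host_time)
    {
      /* Multiplying first can overflow for very large tick counts;
       * production code should use 128-bit or split-scaling math. */
      return host_time * timebase.numer / timebase.denom;
    }

    static uint64_t
    get_current_host_time_ns (void)
    {
      return host_time_to_ns (mach_absolute_time ());
    }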
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6789>
When advancing the ringbuffer, store the processed CoreAudio sample
time, then interpolate from it in the _get_delay() calls to smooth
the clock. The latency CoreAudio reports is always a constant, so using
it directly makes the clock advance in latency-time-sized steps (a staircase).
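Conceptually, the smoothing looks something like the sketch below. All names here are illustrative,
not the actual osxaudio code: remember the sample time and host time at the last advance, then
extrapolate between advances.

    /* Hypothetical sketch of interpolating between ringbuffer advances. */
    typedef struct {
      uint64_t last_sample_time;   /* CoreAudio sample time at last advance */
      uint64_t last_host_time_ns;  /* host time (ns) at last advance */
      double   rate;               /* sample rate in Hz */
    } ClockState;

    /* Called each time the ringbuffer advances by a segment. */
    static void
    on_advance (ClockState *s, uint64_t sample_time, uint64_t host_now_ns)
    {
      s->last_sample_time = sample_time;
      s->last_host_time_ns = host_now_ns;
    }

    /* Called from _get_delay(): extrapolate from the last advance
     * instead of adding CoreAudio's constant latency figure. */
    static uint64_t
    interpolated_sample_time (const ClockState *s, uint64_t host_now_ns)
    {
      uint64_t elapsed_ns = host_now_ns - s->last_host_time_ns;
      return s->last_sample_time + (uint64_t) (s->rate * elapsed_ns / 1e9);
    }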
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5140>
Set the buffer frame size in CoreAudio so it delivers data
in ringbuffer segment units when recording. Otherwise, CoreAudio
provides data at whatever granularity it chooses, with no
relationship to the requested latency-time.
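On macOS this is typically done on the AUHAL unit with kAudioDevicePropertyBufferFrameSize;
whether this commit uses exactly that property is an assumption on my part. A sketch:

    #include <AudioToolbox/AudioToolbox.h>
    #include <CoreAudio/CoreAudio.h>

    /* Sketch: ask the audio unit to deliver segment-sized chunks.
     * audio_unit is an already-created AUHAL instance, and
     * frames_per_segment would be derived from the ringbuffer's
     * segment size / latency-time; 1024 is just a placeholder. */
    UInt32 frames_per_segment = 1024;
    OSStatus status = AudioUnitSetProperty (audio_unit,
        kAudioDevicePropertyBufferFrameSize, kAudioUnitScope_Global, 0,
        &frames_per_segment, sizeof (frames_per_segment));
    if (status != noErr) {
      /* The device kept its own granularity; warn and carry on. */
    }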
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5140>