On macOS, you always get your own 'timeline' for the AudioUnit session, so timestamps start from 0.
On iOS, however, AudioUnit seems to give you a 'shared' timeline, so timestamps start at a later, non-zero point in time.
Simply offsetting the timestamps so they start from zero seems to do the trick.
This was causing osxaudiosrc to not output any sound on iOS.
Regressed in 2df9283d3f
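
A minimal sketch of the offsetting idea (illustrative only, not the literal
patch; names are made up):

  #include <gst/gst.h>

  /* Remember the first sample time the AudioUnit reports and subtract it,
   * so the timestamps we hand out start from zero, as on macOS. */
  static GstClockTime first_ts = GST_CLOCK_TIME_NONE;

  static GstClockTime
  offset_timestamp (GstClockTime ts)
  {
    if (first_ts == GST_CLOCK_TIME_NONE)
      first_ts = ts;
    return ts - first_ts;
  }
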
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7856>
A correctly configured AVAudioSession is needed on iOS to:
- allow the application to capture microphone audio (in some cases)
- avoid playback being silenced in silent mode
Without this, initializing the AudioUnit for capture can fail on iOS >= 17 (in my testing).
Since AVAudioSession has a lot of settings, in most cases its setup should be handled by the user/app.
However, just to have a basic default scenario covered, let's configure the bare minimum ourselves,
and allow anyone to disable that behaviour by setting configure-session=false on the src/sink elements.
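
For example, an application that manages AVAudioSession itself could opt out
like this (a sketch; only the configure-session property comes from this
change, the rest is boilerplate):

  #include <gst/gst.h>

  int
  main (int argc, char **argv)
  {
    GstElement *src;

    gst_init (&argc, &argv);

    /* The app takes care of AVAudioSession on its own, so disable the
     * plugin's minimal default session setup. */
    src = gst_element_factory_make ("osxaudiosrc", NULL);
    g_object_set (src, "configure-session", FALSE, NULL);

    /* ... build and run the rest of the pipeline as usual ... */

    gst_object_unref (src);
    return 0;
  }
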
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7856>
Found that osxaudiosink could not be added standalone in a gst-full build
using -Dgst-full-elements=osxaudio:osxaudiosink, because element registration
was done at the plugin level. Now the src/sink elements and the device
provider have their own individual registration.
Copied/adapted from the alsa plugin.
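
Roughly, the registration now follows the usual per-element pattern (a sketch;
the type macros and ranks below are assumptions, not copied from the patch):

  #include <gst/gst.h>
  /* plus the plugin headers that provide the GST_TYPE_* macros */

  GST_ELEMENT_REGISTER_DEFINE (osxaudiosink, "osxaudiosink",
      GST_RANK_PRIMARY, GST_TYPE_OSX_AUDIO_SINK);
  GST_ELEMENT_REGISTER_DEFINE (osxaudiosrc, "osxaudiosrc",
      GST_RANK_PRIMARY, GST_TYPE_OSX_AUDIO_SRC);
  GST_DEVICE_PROVIDER_REGISTER_DEFINE (osxaudiodeviceprovider,
      "osxaudiodeviceprovider", GST_RANK_SECONDARY,
      GST_TYPE_OSX_AUDIO_DEVICE_PROVIDER);

  static gboolean
  plugin_init (GstPlugin * plugin)
  {
    gboolean ret = FALSE;

    /* plugin_init only aggregates the individual registrations, so a
     * gst-full build can link in a single element on its own. */
    ret |= GST_ELEMENT_REGISTER (osxaudiosink, plugin);
    ret |= GST_ELEMENT_REGISTER (osxaudiosrc, plugin);
    ret |= GST_DEVICE_PROVIDER_REGISTER (osxaudiodeviceprovider, plugin);

    return ret;
  }
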
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5419>
When advancing the ringbuffer, store the processed CoreAudio sample
time, then interpolate from it in the _get_delay() calls to smooth
the clock. CoreAudio's "latency" report is always a constant and
otherwise leads to the clock generating a latency-time staircase.
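
The smoothing amounts to something like the following rough sketch (it uses
the host monotonic clock in place of the stored CoreAudio sample time, and
the struct/field names are made up):

  #include <gst/gst.h>

  typedef struct
  {
    gint rate;                 /* sample rate */
    guint latency_samples;     /* constant latency reported by CoreAudio */
    GstClockTime last_advance; /* when the ringbuffer last advanced */
  } DelayState;

  /* Called whenever the ringbuffer advances, i.e. a chunk was processed. */
  static void
  mark_advance (DelayState * s)
  {
    s->last_advance = gst_util_get_timestamp ();
  }

  /* Called from _get_delay(): shrink the reported delay by the samples that
   * have played out since the last advance, instead of returning the constant
   * latency and letting the clock see a staircase. */
  static guint
  interpolated_delay (DelayState * s)
  {
    GstClockTime elapsed = gst_util_get_timestamp () - s->last_advance;
    guint64 played = gst_util_uint64_scale_int (elapsed, s->rate, GST_SECOND);

    if (played > s->latency_samples)
      played = s->latency_samples;
    return s->latency_samples - (guint) played;
  }
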
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5140>