For outputs with a high number of channels, macOS has a bug where
CoreAudio initially reports incorrect positions for all channels, but
once you run Audio MIDI Setup and configure the speaker layout there,
macOS will always report those configured channels as positioned, with
no option to revert that (other than deleting some internal files).
In such a scenario our code would simply ignore all the unpositioned
channels. Since macOS only lets you position a maximum of 16 channels,
any additional channels on your output device would be unusable.
This commit makes sure that in addition to the usual positioned layout
(if there is one), we will expose caps for a no-positions layout that
always has the maximum number of channels available.
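For illustration (this is not the plugin's exact code), such a
no-positions layout corresponds to caps with a channel-mask of 0, which
in GStreamer marks all channels as unpositioned; the channel count and
format below are just example values:
```
#include <gst/gst.h>

/* Caps for 64 unpositioned channels: channel-mask 0 together with
 * channels > 1 means "no positions" in GStreamer. */
static GstCaps *
make_unpositioned_caps (void)
{
  return gst_caps_new_simple ("audio/x-raw",
      "format", G_TYPE_STRING, "F32LE",
      "channels", G_TYPE_INT, 64,
      "channel-mask", GST_TYPE_BITMASK, (guint64) 0, NULL);
}
```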
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8311>
By default, for devices with a larger number of outputs, CoreAudio can
provide invalid channel labels/positions, simply by starting at 0 and
incrementing forward. For example, values 19 through 32 are not valid
according to the CoreAudioBaseTypes.h header, but if your device has >19
output channels, you will find CoreAudio using those values.
This is most likely a bug in CoreAudio, since in that case it should use
unpositioned labels (e.g. _Discrete_X) instead.
This commit aims to work around this by overriding all channels to be
unpositioned if the case above is detected.
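A rough sketch of the detection (the plugin's actual check may differ
in the details); it only applies when the layout carries explicit
channel descriptions:
```
#include <CoreAudio/CoreAudioTypes.h>
#include <stdbool.h>

/* Returns true if any channel uses one of the labels 19-32, which
 * CoreAudioBaseTypes.h does not define. */
static bool
layout_has_invalid_labels (const AudioChannelLayout *layout)
{
  for (UInt32 i = 0; i < layout->mNumberChannelDescriptions; i++) {
    AudioChannelLabel label = layout->mChannelDescriptions[i].mChannelLabel;
    if (label >= 19 && label <= 32)
      return true;
  }
  return false;
}
```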
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8311>
Mirroring the demux element and isomp4mux from gst-plugins-rs.
Tested against other GStreamer elements and MPV. Note that the latter
apparently does not show correct results for flipped values.
Can be tested with:
```
gst-launch-1.0 \
videotestsrc num-buffers=90 ! \
taginject tags="image-orientation=rotate-90" ! \
capsfilter caps=video/x-raw,width=640,height=480,max-framerate=30/1 ! \
videoconvert ! \
queue ! \
openh264enc ! \
queue ! \
h264parse ! \
mp4mux ! \
filesink location=./test.mp4
```
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8371>
With this patch, configure time is the same whether documentation is
enabled or not.
The configuration files also now contain explicitly-listed sources with
no wildcards.
For the four libraries where hotdoc needs to use clang to generate the
documentation (as opposed to the rest of the libraries where hotdoc uses
the GIR), the script will call pkg-config to determine the appropriate
C flags.
As a side effect, pkg-config files are now generated for the
gstadaptivedemux and gstopencv libraries.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8312>
In fit-down mode, only 1.0 rates are supported, and the element fits
the audio data in each buffer to the buffer's advertised duration.
This is useful in speech synthesis cases, where elements such as
awspolly generate audio data from text and assign the duration of the
input text buffers to their output buffers.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8149>
Incorrect parsing of these bits meant that the VP9 uncompressed
bitstream header was misparsed for some profiles, as the header's
length and format vary with the profile. Among various unintended
effects, this caused the width and height from the SS to be parsed
incorrectly and set in the caps.
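For context, here is a simplified sketch of the profile-dependent bits
at the start of the uncompressed header, following the VP9 bitstream
specification (illustrative only, not the parser's actual code):
```
#include <stddef.h>
#include <stdint.h>

typedef struct { const uint8_t *data; size_t pos; } BitReader;

/* Read n bits MSB-first, as the VP9 uncompressed header is coded. */
static uint32_t
read_bits (BitReader *br, unsigned n)
{
  uint32_t v = 0;
  while (n--) {
    v = (v << 1) | ((br->data[br->pos >> 3] >> (7 - (br->pos & 7))) & 1);
    br->pos++;
  }
  return v;
}

/* Profile 3 carries an extra reserved bit here, and profiles 2 and 3
 * later add a bit in color_config() selecting 10- or 12-bit depth, so
 * everything parsed after this point depends on the profile. */
static int
parse_profile (BitReader *br)
{
  int profile;

  read_bits (br, 2);                  /* frame_marker, must be 2 */
  profile = read_bits (br, 1);        /* profile_low_bit */
  profile |= read_bits (br, 1) << 1;  /* profile_high_bit */
  if (profile == 3)
    read_bits (br, 1);                /* reserved_zero */
  return profile;
}
```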
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8275>
On macOS, you always get your own 'timeline' for the AudioUnit session, so timestamps start from 0.
On iOS, however, AudioUnit seems to give you a 'shared' timeline, so timestamps start at a later, non-zero point in time.
Simply offsetting them seems to do the trick.
This was causing osxaudiosrc to not output any sound on iOS.
Regressed in 2df9283d3f
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7856>
At some point in iOS 17, this call started waiting for the first render callback (io_proc) to finish.
In our case, that callback also takes the ringbuf object lock by calling gst_audio_ring_buffer_set_timestamp(),
which results in a deadlock.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7856>
A correctly configured AVAudioSession is needed on iOS to:
- allow the application to capture microphone audio (in some cases)
- avoid playback being silenced in silent mode
Without this, initializing AudioUnit for capture can fail on iOS >=17 (from my testing).
Since AVAudioSession has a lot of settings, in most cases its setup should be handled by the user/app.
However, just to have a basic default scenario covered, let's configure the bare minimum ourselves,
and allow anyone to disable that behaviour by setting configure-session=false on src/sink.
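A minimal sketch of opting out from application code (osxaudiosrc shown
here; per the above, the same property is available on the sink):
```
#include <gst/gst.h>

int
main (int argc, char **argv)
{
  GstElement *src;

  gst_init (&argc, &argv);

  src = gst_element_factory_make ("osxaudiosrc", NULL);
  if (src != NULL) {
    /* Let the application manage AVAudioSession itself instead of
     * relying on the element's minimal default setup. */
    g_object_set (src, "configure-session", FALSE, NULL);
    /* ... add the element to a pipeline as usual ... */
    gst_object_unref (src);
  }
  return 0;
}
```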
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7856>
The v4l2 decoder does not put frames with the decode-only flag in the
output list. The corresponding decode-only frames are still kept in
the input list, which may cause a performance issue when the input
list is full. So release the decode-only frames, based on the
decode-only flag, once they have been processed by the decoder
driver.
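Simplified illustration of the idea (the real change lives in the v4l2
decoder's frame handling); once the driver has consumed a decode-only
frame it can be released right away:
```
#include <gst/video/gstvideodecoder.h>

/* Called after the decoder driver has processed `frame`: decode-only
 * frames never reach the output list, so release them here instead of
 * letting them pile up in the input list. */
static void
release_if_decode_only (GstVideoDecoder *decoder, GstVideoCodecFrame *frame)
{
  if (GST_VIDEO_CODEC_FRAME_IS_DECODE_ONLY (frame))
    gst_video_decoder_release_frame (decoder, frame);
}
```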
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8153>
The matrix multiplication makes some assumptions about the element
values in order to simplify the fixed-point math. Whether this is
allowed for the given matrices is now checked first.
The debug output for the matrix and a comment have also been fixed.
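Purely as an illustration of why such an assumption matters (the scale
factor and range below are examples, not the element's actual values):
a fast path that keeps intermediates in 32 bits is only safe if every
entry stays within a known range, which is what a check like this would
verify before multiplying.
```
#include <stdint.h>

#define FP_SHIFT 8   /* example Q8 fixed-point scale */

/* Bound chosen so three Q8 products summed still fit in 32 bits. */
static int
fits_assumed_range (double v)
{
  return v > -64.0 && v < 64.0;
}

/* 3x3 fixed-point multiply using 32-bit intermediates (the simplification). */
static void
matrix_multiply_q8 (int32_t d[3][3], const int32_t a[3][3],
    const int32_t b[3][3])
{
  for (int i = 0; i < 3; i++)
    for (int j = 0; j < 3; j++) {
      int32_t sum = 0;
      for (int k = 0; k < 3; k++)
        sum += a[i][k] * b[k][j];
      d[i][j] = sum >> FP_SHIFT;
    }
}
```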
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8127>