This makes sure that we only build files that need explicit SIMD support
with the relevant CFLAGS. This allows the rest of the code to be built
without them; the specific SSE* code is only called after runtime checks
for CPU features.
https://bugzilla.gnome.org/show_bug.cgi?id=729276
https://github.com/mesonbuild/meson
With contributions from:
Tim-Philipp Müller <tim@centricular.com>
Jussi Pakkanen <jpakkane@gmail.com> (original port)
Highlights of the features provided are:
* Faster builds on Linux (~40-50% faster)
* The ability to build with MSVC on Windows
* Generate Visual Studio project files
* Generate XCode project files
* Much faster builds on Windows (on par with Linux)
* Seriously fast configure and building on embedded
... and many more. For more details see:
http://blog.nirbheek.in/2016/05/gstreamer-and-meson-new-hope.html
http://blog.nirbheek.in/2016/07/building-and-developing-gstreamer-using.html
Building with Meson should work on both Linux and Windows, but may
need a few more tweaks on other operating systems.
Elements inherited from GstAudioDecoder that support PLC and introduce
delay produce invalid timestamps. A good example is opusdec with in-band FEC
enabled. After receiving a GAP event it delays the audio concealment until
the next buffer arrives. The next buffer will have the DISCONT flag set,
which makes GstAudioDecoder reset its internal state, thus forgetting
the timestamp of the GAP event. As a result the concealed audio will have the
timestamp of the next buffer (the one with the DISCONT flag) instead of the
timestamp from the event.
As stated in its documentation, GST_AUDIO_CHANNEL_POSITION_NONE is meant to
be used for "position-less channels, e.g. from a sound card that records 1024
channels; mutually exclusive with any other channel position".
But at the moment using such positions raises a
'g_return_if_reached' warning because gst_audio_get_channel_reorder_map()
rejects them.
Fix this by preventing any attempt to reorder in such a case, as that's not
what we want anyway.
https://bugzilla.gnome.org/show_bug.cgi?id=763799
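
For illustration, a minimal sketch of describing such position-less channels
(the helper name and format values are made up for this example):

    #include <gst/audio/audio.h>

    /* Sketch: describe a stream whose channels carry no positional meaning.
     * All positions must be GST_AUDIO_CHANNEL_POSITION_NONE; mixing NONE
     * with other positions is not allowed. */
    static void
    setup_positionless_info (GstAudioInfo * info, gint channels)
    {
      GstAudioChannelPosition *pos = g_new (GstAudioChannelPosition, channels);
      gint i;

      for (i = 0; i < channels; i++)
        pos[i] = GST_AUDIO_CHANNEL_POSITION_NONE;

      gst_audio_info_set_format (info, GST_AUDIO_FORMAT_S16, 48000,
          channels, pos);
      g_free (pos);
    }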
We currently don't log much about channel positions, making debugging
harder than it should be. This is a first step in my attempt to improve
this.
https://bugzilla.gnome.org/show_bug.cgi?id=763985
There is a small window of time where the audio ringbuffer thread
can access the parent thread variable before it's initialized
by the parent thread. The patch replaces this use of the variable with
g_thread_self().
https://bugzilla.gnome.org/show_bug.cgi?id=764865
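
A minimal sketch of the idea (the thread function name here is hypothetical):

    #include <glib.h>

    /* Sketch: instead of reading a 'parent thread' variable that the parent
     * may not have written yet, the ringbuffer thread identifies itself. */
    static gpointer
    ring_buffer_thread_func (gpointer data)
    {
      GThread *self = g_thread_self ();   /* always valid, no race */

      /* ... use 'self' wherever the parent-set variable was used before ... */
      return NULL;
    }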
Since the allocation query caps contain the memory size and the pad's caps
contain the stream size, an audio encoder or decoder might need to allocate
a different buffer size than the size negotiated in the caps.
This patch splits this logic for audiodecoder and audioencoder.
Thus, if the user needs different allocation caps, they should be set through
gst_audio_{encoder,decoder}_set_allocation_caps() before calling the negotiate()
vmethod. Otherwise the allocation_caps will be the same as the caps on the
src pad.
https://bugzilla.gnome.org/show_bug.cgi?id=764421
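
A rough sketch of how a decoder subclass might use this, assuming the
gst_audio_decoder_set_allocation_caps() API described above (the caps values
are placeholders):

    /* Sketch: request an allocation described by different caps than the
     * negotiated ones; must be set before calling negotiate(). */
    static gboolean
    my_decoder_negotiate_with_alloc_caps (GstAudioDecoder * dec)
    {
      GstCaps *alloc_caps = gst_caps_new_simple ("audio/x-raw",
          "format", G_TYPE_STRING, "S16LE",
          "rate", G_TYPE_INT, 48000,
          "channels", G_TYPE_INT, 2, NULL);

      gst_audio_decoder_set_allocation_caps (dec, alloc_caps);
      gst_caps_unref (alloc_caps);

      return gst_audio_decoder_negotiate (dec);
    }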
Store the filter in the desired sample format so that we can simply do a
linear or cubic interpolation to get the new filter instead of having to
go through gdouble and then convert.
Remove some unused variables from the inner product functions.
Make filter coefficients by interpolating if required.
Rename some fields.
Try hard to not recalculate filters when just changing the rate.
Add more properties to audioresample.
Rearrange the oversampled taps in memory to make it easier to use
SIMD instructions on them. This simplifies some SSE code.
Add some more optimizations.
Improve int16 resampling by using pmaddwd.
Use intrinsics to scale and pack int16 samples (see the sketch after these
notes).
Align the coefficients so that we can use aligned loads
Add padding to taps and samples so that we don't have to use partial
loads for the remainder of the loops.
Remove copy_n, we can reuse the plain copy function with some new
parameters.
Align and pad the sample array.
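
As an illustration of the pmaddwd idea mentioned above, a minimal sketch of
one accumulation step (not the actual resampler inner loop; the helper name
is made up):

    #include <glib.h>
    #include <emmintrin.h>      /* SSE2 */

    /* Sketch: _mm_madd_epi16 (pmaddwd) multiplies 8 pairs of int16 values
     * and adds adjacent products into 4 int32 accumulator lanes. */
    static __m128i
    madd_accumulate (__m128i acc, const gint16 * samples, const gint16 * taps)
    {
      __m128i s = _mm_loadu_si128 ((const __m128i *) samples);
      __m128i t = _mm_load_si128 ((const __m128i *) taps);  /* taps aligned */

      return _mm_add_epi32 (acc, _mm_madd_epi16 (s, t));
    }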
Remove the consumed/produced output fields from the resampler and
converter. Let the caller specify the right number of input/output
samples so we can be more optimal.
Use just one function to update the converter configuration.
Simplify some things internally.
Make it possible to use writable input as temp space in audioconvert.
If we don't have writable memory, make sure to make a copy of the input
samples into a temporary (writable) buffer, even if we are dealing with
a native intermediate format that we don't need to call the unpack
function for.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=761655
gst_pad_get_allowed_caps() will return NULL if the srcpad has no peer.
In that case, use gst_pad_peer_query_caps() with template caps as filter
to have negotiated output caps properly before forwarding GAP event.
https://bugzilla.gnome.org/show_bug.cgi?id=761218
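
Roughly, the fallback looks like this (a simplified sketch, not the exact
element code):

    /* Sketch: prefer the allowed caps, fall back to querying the peer with
     * the pad template caps as filter when there is no negotiated peer yet;
     * with no peer, gst_pad_peer_query_caps() returns the filter caps. */
    static GstCaps *
    get_output_caps (GstPad * srcpad)
    {
      GstCaps *caps = gst_pad_get_allowed_caps (srcpad);

      if (caps == NULL) {
        GstCaps *templ = gst_pad_get_pad_template_caps (srcpad);

        caps = gst_pad_peer_query_caps (srcpad, templ);
        gst_caps_unref (templ);
      }
      return caps;
    }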
It's useful enough already to be used in other elements for audio aggregation,
so let's give people the opportunity to use it and give it some API testing.
https://bugzilla.gnome.org/show_bug.cgi?id=760733
It's quite unexpected behaviour that various subclass settings are just
reset before set_format(). Unfortunately changing this now has the risk
of breaking existing code but we should reconsider this for 2.0.
When the input and output formats are the same and in a possible
intermediate format, avoid unpack and pack.
Never do passthrough channel mixing.
Only do dithering and noise shaping in S32 format
Add support for float and int16 mixing
Remove in-place processing, this simplifies things as we won't be using it.
Don't do clipping for float audio formats
Process as many samples as we can from the input and return the number
of processed samples from the chain. This simplifies some code.
Fix the IN_WRITABLE handling, don't overwrite the flags.
Pass flags in _converter_new() so that we can configure ourselves
differently depending on some options.
SOURCE_WRITABLE -> IN_WRITABLE because the array is called 'in'
Simplify the API, we don't need the consumed and produced output
arguments. The caller needs to use the _get_in_frames/get_out_frames API
to check how much input is needed and how much output will be produced.
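
A hedged sketch of how a caller drives the reworked converter, using the
public GstAudioConverter calls named above (buffer management simplified,
error handling omitted):

    #include <gst/audio/audio.h>

    /* Sketch: convert interleaved S16 to F32; the caller sizes the output
     * with _get_out_frames() instead of reading back produced counts. */
    static void
    convert_block (gpointer in_data, gsize in_frames)
    {
      GstAudioInfo in_info, out_info;
      GstAudioConverter *convert;
      gsize out_frames;
      gpointer in[1], out[1];

      gst_audio_info_set_format (&in_info, GST_AUDIO_FORMAT_S16, 48000, 2, NULL);
      gst_audio_info_set_format (&out_info, GST_AUDIO_FORMAT_F32, 48000, 2, NULL);

      /* IN_WRITABLE tells the converter it may use the input as temp space */
      convert = gst_audio_converter_new (GST_AUDIO_CONVERTER_FLAG_IN_WRITABLE,
          &in_info, &out_info, NULL);

      out_frames = gst_audio_converter_get_out_frames (convert, in_frames);
      out[0] = g_malloc (out_frames * GST_AUDIO_INFO_BPF (&out_info));
      in[0] = in_data;

      gst_audio_converter_samples (convert, GST_AUDIO_CONVERTER_FLAG_NONE,
          in, in_frames, out, out_frames);

      g_free (out[0]);
      gst_audio_converter_free (convert);
    }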
We did not take the sample size into account. Rearrange the tests to have more
conversion tests and an extra test case for passthrough operations.
Fixes #759890
Rename samples to num_samples, since we also have samples in chain, but that is
the data pointer. Always use gsize for num_samples. Make the log output a bit
more homogeneous.
Rework the main processing loop. We now create an audio processing
chain from small core functions. This is very similar to how the
video-converter core works and allows us to statically calculate an
optimal allocation strategy for all possible combinations of operations.
Make sure we support non-interleaved data everywhere.
Add functions to calculate in and out frames and latency.
Any latency query before this will not get the correct latency, so a new
latency query should be triggered once the audio sink knows its own latency.
Without this the initial latency query from the pipeline arrives too early
sometimes and the resulting latency is too short.
https://bugzilla.gnome.org/show_bug.cgi?id=758911
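
One way to trigger that new query is to post a LATENCY message once the
latency is known; a minimal sketch (whether the fix does exactly this is not
spelled out in this note):

    /* Sketch: ask the pipeline to redo the latency query by posting a
     * LATENCY message on the bus. */
    static void
    post_latency_update (GstElement * sink)
    {
      gst_element_post_message (sink,
          gst_message_new_latency (GST_OBJECT_CAST (sink)));
    }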
Commit ff6d1a2a25 changed sample's type from
gint to gsize (and renamed it to in_samples). gsize is an unsigned long,
which means it can never be a negative value and the check making sure that
in_samples is >= 0 is never going to be false. Removing it.
CID 1338689
Move the audio quantize code from audioconvert to the audio library.
work on making an audio converter helper function similar to the video
converter.
Fold fastrandom directly into the quantizer, add some ORC code to
optimize this later.
Rename _get_default_mask() to _get_fallback_mask() to make it more
clear that the function only provides a fallback if nothing else can be
done. Also clarify this in the documentation.
API: gst_audio_channel_get_fallback_mask()
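
A small sketch of using the renamed API (the helper name is made up):

    #include <gst/audio/audio.h>

    /* Sketch: derive default positions when nothing better is known; for 2
     * channels the fallback mask is FRONT_LEFT | FRONT_RIGHT. */
    static gboolean
    fill_fallback_positions (gint channels, GstAudioChannelPosition * pos)
    {
      guint64 mask = gst_audio_channel_get_fallback_mask (channels);

      if (mask == 0)
        return FALSE;           /* no usable fallback for this channel count */

      return gst_audio_channel_positions_from_mask (channels, mask, pos);
    }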
Add a TRUNCATE_RANGE flag for unpack functions to fill the least
significant bits with 0 (as did the old code). Also add functions
that don't truncate. Use the TRUNC flag in audioconvert for
backwards compatibility for now.
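
A hedged sketch of calling an unpack function with the flag (the helper name
is made up; S24 unpacks to S32 here):

    #include <gst/audio/audio.h>

    /* Sketch: unpack 'samples' S24LE samples into S32 storage; with
     * TRUNCATE_RANGE the low bits are zero-filled (old behaviour). */
    static void
    unpack_s24_truncated (gpointer in, gpointer out, gint samples)
    {
      const GstAudioFormatInfo *finfo =
          gst_audio_format_get_info (GST_AUDIO_FORMAT_S24LE);

      finfo->unpack_func (finfo, GST_AUDIO_PACK_FLAG_TRUNCATE_RANGE,
          out, in, samples);
    }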
No need to use G_GINT64_FORMAT for potentially negative values of
GstClockTimeDiff. Since 1.6 these can be handled with GST_STIME_ARGS.
Plus it creates more readable values in the logs.
https://bugzilla.gnome.org/show_bug.cgi?id=757480
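
A small sketch of the logging pattern this enables (debug category
initialization is assumed to happen elsewhere):

    #include <gst/gst.h>

    GST_DEBUG_CATEGORY_STATIC (example_debug);
    #define GST_CAT_DEFAULT example_debug

    /* Sketch: log a possibly-negative GstClockTimeDiff readably instead of
     * printing a raw gint64 via G_GINT64_FORMAT. */
    static void
    log_ts_diff (GstClockTime expected, GstClockTime actual)
    {
      GstClockTimeDiff diff = GST_CLOCK_DIFF (expected, actual);

      GST_DEBUG ("timestamp diff %" GST_STIME_FORMAT, GST_STIME_ARGS (diff));
    }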
If flush-start arrives during _eos_wait() in basesink,
the 'eos' flag is overwritten to TRUE after _eos_wait() returns.
To resolve this,
the subclass doing the _eos_wait() call should return the right value.
If the eos flag is set to TRUE again, it causes an error (entering the EOS
flow) in the following state change from PAUSED to PLAYING in basesink.
https://bugzilla.gnome.org/show_bug.cgi?id=754980
Before this we just merged everything ad hoc, in pretty much random
ways, instead of keeping state properly. In 0.10 that was
how it worked, but in 1.x the tag events sent should always
reflect the latest state and replace any previous tags.
So save the upstream (stream) tags, and save the tags set
by the decoder subclass with merge mode, and then update
the merged tags whenever either of those two changes.
This slightly changes the behaviour of gst_audio_decoder_merge_tags()
in case it is called multiple times, since now any call replaces
the previously-set tags. However, it leads to much more predictable
outcomes, and also we are not aware of any subclass which sets this
multiple times and expects all the tags set to be merged.
If more complex tag merging scenarios are required, we'll have
to add a new vfunc for that or the subclass has to intercept
the upstream tags itself and send merged tags itself.
https://bugzilla.gnome.org/show_bug.cgi?id=679768
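
For a subclass the call itself is unchanged; a hedged example showing the
replace-rather-than-accumulate semantics (the helper and tag values are made
up):

    /* Sketch: set decoder tags; with the new behaviour a second call
     * replaces whatever was set by the first one. */
    static void
    set_decoder_tags (GstAudioDecoder * dec, const gchar * codec_name)
    {
      GstTagList *tags =
          gst_tag_list_new (GST_TAG_AUDIO_CODEC, codec_name, NULL);

      gst_audio_decoder_merge_tags (dec, tags, GST_TAG_MERGE_REPLACE);
      gst_tag_list_unref (tags);
    }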
Apparently I forgot how GObject works; there is no need to expose
it directly as one can call it from the parent_class pointer.
This reverts commit 8a64592481.
Add gst_audio_decoder_set_use_default_pad_acceptcaps() to allow
subclasses to make audiodecoder use the default pad acceptcaps
handling instead of resorting to the caps query, which is usually
less efficient and unnecessary.
API: gst_audio_decoder_set_use_default_pad_acceptcaps
Subclasses can use it to select what queries they want to handle
and forward the rest to the default handling function.
API: gst_audio_decoder_sink_query_default
https://bugzilla.gnome.org/show_bug.cgi?id=753623
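
A minimal sketch of a subclass opting in from its instance init (the
MyAudioDec type is hypothetical):

    /* Sketch: let the base class answer ACCEPT_CAPS from the pad template
     * instead of issuing a (usually more expensive) caps query. */
    static void
    my_audio_dec_init (MyAudioDec * self)
    {
      gst_audio_decoder_set_use_default_pad_acceptcaps (
          GST_AUDIO_DECODER (self), TRUE);
    }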
POOL meta just means that this specific instance of the meta is related to a
pool; a copy should be made when reasonable, and the flag should just not be
set in the copy.
For alaw/mulaw we should also try to initialize the channel positions in the
ringbuffer's audio info. This allows pulsesink to directly use the channel
positions instead of using the default zero-initialized ones, which doesn't
work well.
https://bugzilla.gnome.org/show_bug.cgi?id=751144
This new clock slaving method allows for installing a callback that is
invoked during playback. Inside this callback, a custom slaving
mechanism can be used (for example, a control loop adjusting a PLL or an
asynchronous resampler). Upon request, it can skew the playout pointer
just like the "skew" method. This is useful if the clocks drifted apart
too much, and a quick reset is necessary.
Signed-off-by: Carlos Rafael Giani <dv@pseudoterminal.org>
https://bugzilla.gnome.org/show_bug.cgi?id=708362
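
A rough sketch of wiring up the new method, assuming the custom-slaving API
added here (the callback body is illustration only):

    #include <gst/audio/gstaudiobasesink.h>

    /* Sketch: the callback is invoked during playback with the external and
     * internal clock times; a custom control loop can act on their drift,
     * and *requested_skew can optionally be set to skew the playout pointer
     * just like the "skew" method would. */
    static void
    my_slaving_cb (GstAudioBaseSink * sink, GstClockTime etime,
        GstClockTime itime, GstClockTimeDiff * requested_skew,
        GstAudioBaseSinkDiscontReason reason, gpointer user_data)
    {
      /* feed (etime - itime) into a PLL or asynchronous resampler here */
    }

    static void
    enable_custom_slaving (GstAudioBaseSink * sink)
    {
      g_object_set (sink, "slave-method", GST_AUDIO_BASE_SINK_SLAVE_CUSTOM,
          NULL);
      gst_audio_base_sink_set_custom_slaving_callback (sink, my_slaving_cb,
          NULL, NULL);
    }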
We only get here if we don't have any srcpad caps, and we're going
to override the GstAudioInfo a few lines below anyway without ever
using it if for whatever reason we get caps here.
memcmp will blindly compare the reserved fields, as well as any
padding the compiler may choose to sprinkle in GstSegment.
Fixes valgrind complaints in unit tests, as well as some found via
https://bugzilla.gnome.org/show_bug.cgi?id=738216
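
A hedged illustration of a safer comparison; gst_segment_is_equal() only
looks at the meaningful fields, though whether the fix uses exactly this
helper is not stated here:

    #include <gst/gst.h>

    /* Sketch: compare only the meaningful segment fields instead of
     * memcmp(), which also reads padding and reserved fields. */
    static gboolean
    segments_differ (const GstSegment * a, const GstSegment * b)
    {
      return !gst_segment_is_equal (a, b);
    }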
When the ringbuffer is deactivated and then acquired, if the audio clock
provided by the sink gets reset to zero, we need to add an offset to the
clock to make sure that subsequent samples are written out at the right
times. While we need to leave this to derived classes to take care of
when they provide their own clock (since that clock may or may not be
reset to zero), we can do this ourselves if we know the provided clock
is our own (which does reset to zero on a re-acquire).
Make sure to update the output segment to track the segment
we're decoding in, but don't actually push it downstream until
after buffers are decoded.
https://bugzilla.gnome.org/show_bug.cgi?id=744806
If we have timestamps on input buffers and are in trickmode no-audio
mode, then don't pass anything to the subclass for decode and simply
send gap events downstream
Only for forward playback for now - reverse requires accumulating
GAP events and pushing out in reverse order.
https://bugzilla.gnome.org/show_bug.cgi?id=735666
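
A simplified sketch of the forward-playback case (not the actual base class
code; the helper name is made up):

    /* Sketch: in trickmode no-audio (GST_SEGMENT_FLAG_TRICKMODE_NO_AUDIO),
     * skip decoding and forward a GAP event covering the input buffer's
     * timestamps instead. */
    static GstFlowReturn
    push_gap_for_buffer (GstAudioDecoder * dec, GstBuffer * buf)
    {
      if (GST_BUFFER_PTS_IS_VALID (buf) && GST_BUFFER_DURATION_IS_VALID (buf)) {
        GstEvent *gap = gst_event_new_gap (GST_BUFFER_PTS (buf),
            GST_BUFFER_DURATION (buf));

        gst_pad_push_event (GST_AUDIO_DECODER_SRC_PAD (dec), gap);
      }
      return GST_FLOW_OK;
    }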
In trickmode no-audio mode, or when receiving a GAP buffer,
discard the contents and render as a GAP event instead.
Make sure when rendering a gap event that the ring buffer will
restart on PAUSED->PLAYING by setting the eos_rendering flag.
This mostly reverts commit 8557ee and replaces it. The problem
with the previous approach is that it hangs in wait_preroll()
on a PLAYING-PAUSED transition because it doesn't commit state
properly.
https://bugzilla.gnome.org/show_bug.cgi?id=735666
The decoder can fail to drain on EOS if there was only one gather
set, because it will never have sent the segment event downstream
and set the output segment, and so fails to detect that the rate is < 0.0.
Make sure to send pending events before sending all the gather data
for decode.