Remove some unused variables from the inner product functions.
Make filter coefficients by interpolating if required.
Rename some fields.
Try hard not to recalculate filters when just changing the rate.
Add more properties to audioresample.
Rearrange the oversampled taps in memory to make it easier to use
SIMD instructions on them. This simplifies some SSE code.
Add some more optimizations
Improve int16 resampling by using pmaddwd
Use intrinsics to scale and pack int16 samples
Align the coefficients so that we can use aligned loads
Add padding to taps and samples so that we don't have to use partial
loads for the remainder of the loops.
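A hedged sketch of the kind of int16 inner product these SIMD changes enable
(hypothetical function; assumes taps and samples are 16-byte aligned and padded
to a multiple of 8 samples, and omits the scaling/rounding of the accumulator):

  #include <glib.h>
  #include <emmintrin.h>

  static gint32
  inner_product_int16_sse2 (const gint16 * samples, const gint16 * taps, gint n)
  {
    __m128i sum = _mm_setzero_si128 ();
    gint i;

    for (i = 0; i < n; i += 8) {
      /* pmaddwd: multiply 8 int16 pairs and add adjacent products into 4 int32 */
      sum = _mm_add_epi32 (sum,
          _mm_madd_epi16 (_mm_load_si128 ((const __m128i *) (samples + i)),
              _mm_load_si128 ((const __m128i *) (taps + i))));
    }
    /* horizontal add of the 4 partial sums */
    sum = _mm_add_epi32 (sum, _mm_shuffle_epi32 (sum, _MM_SHUFFLE (2, 3, 0, 1)));
    sum = _mm_add_epi32 (sum, _mm_shuffle_epi32 (sum, _MM_SHUFFLE (1, 0, 3, 2)));
    return _mm_cvtsi128_si32 (sum);
  }
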
Remove copy_n, we can reuse the plain copy function with some new
parameters.
Align and pad the sample array.
Remove the consumed/produced output fields from the resampler and
converter. Let the caller specify the right number of input/output
samples so we can be more efficient.
Use just one function to update the converter configuration.
Simplify some things internally.
Make it possible to use writable input as temp space in audioconvert.
If we don't have writable memory, make sure to make a copy of the input
samples into a temporary (writable) buffer, even if we are dealing with
a native intermediate format that we don't need to call the unpack
function for.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=761655
gst_pad_get_allowed_caps() will return NULL if the srcpad has no peer.
In that case, use gst_pad_peer_query_caps() with template caps as filter
to have negotiated output caps properly before forwarding GAP event.
https://bugzilla.gnome.org/show_bug.cgi?id=761218
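A minimal sketch of that fallback (the helper name is illustrative, not the
actual audiodecoder code):

  #include <gst/gst.h>

  static GstCaps *
  get_negotiable_caps (GstPad * srcpad)
  {
    GstCaps *caps = gst_pad_get_allowed_caps (srcpad);

    if (caps == NULL) {
      /* no peer yet: querying through the peer pad with the template caps
       * as filter returns the template caps, so we still get usable caps */
      GstCaps *templ = gst_pad_get_pad_template_caps (srcpad);

      caps = gst_pad_peer_query_caps (srcpad, templ);
      gst_caps_unref (templ);
    }
    return caps;
  }
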
It's useful enough already to be used in other elements for audio aggregation;
let's give people the opportunity to use it and give it some API testing.
https://bugzilla.gnome.org/show_bug.cgi?id=760733
It's quite unexpected behaviour that various subclass settings are just
reset before set_format(). Unfortunately changing this now has the risk
of breaking existing code but we should reconsider this for 2.0.
When the input and output formats are the same and in a possible
intermediate format, avoid unpack and pack.
Never do passthrough channel mixing.
Only do dithering and noise shaping in S32 format
Add support for float and int16 mixing
Remove in-place processing, this simplifies things as we won't be using it.
Don't do clipping for float audio formats
Process as many samples as we can from the input and return the number
of processed samples from the chain. This simplifies some code.
Fix the IN_WRITABLE handling, don't overwrite the flags.
Pass flags in _converter_new() so that we can configure ourselves
differently depending on some options.
SOURCE_WRITABLE -> IN_WRITABLE because the array is called 'in'
Simplify the API, we don't need the consumed and produced output
arguments. The caller needs to use the _get_in_frames/get_out_frames API
to check how much input is needed and how much output will be produced.
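A hedged sketch of the intended calling pattern (assuming the
gst_audio_converter_* signatures used below; this is not the element code):

  #include <gst/audio/audio.h>

  static void
  convert_block (GstAudioConverter * convert, gpointer in[], gsize in_frames,
      gpointer out[])
  {
    /* ask up front how many output frames this much input produces */
    gsize out_frames = gst_audio_converter_get_out_frames (convert, in_frames);

    /* then convert exactly that much; no consumed/produced out arguments */
    gst_audio_converter_samples (convert, GST_AUDIO_CONVERTER_FLAG_NONE,
        in, in_frames, out, out_frames);
  }
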
We did not take the sample size into account. Rearrange the tests to have more
conversion tests and an extra test case for passthrough operations.
Fixes #759890
Rename samples to num_samples, since we also have samples in chain, but that is
the data pointer. Always use gsize for num_samples. Make the log output a bit
more homogeneous.
Rework the main processing loop. We now create an audio processing
chain from small core functions. This is very similar to how the
video-converter core works and allows us to statically calculate an
optimal allocation strategy for all possible combinations of operations.
Make sure we support non-interleaved data everywhere.
Add functions to calculate in and out frames and latency.
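Purely as an illustration of the chained-step idea (the names are hypothetical,
not the actual gstaudioconverter internals): each step pulls from the previous
one, so the temporary buffers each combination of operations needs can be
computed when the chain is built.

  #include <glib.h>

  typedef struct _AudioChainStep AudioChainStep;
  struct _AudioChainStep
  {
    AudioChainStep *prev;       /* where this step pulls its input from */
    gboolean allow_inplace;     /* whether the input memory can be reused */

    /* fill @dest (one pointer per plane) with @frames processed frames */
    void (*process) (AudioChainStep * step, gpointer * dest, gsize frames);
  };
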
Any latency query before this will not get the correct latency, so a new
latency query should be triggered once the audio sink knows its own latency.
Without this, the initial latency query from the pipeline sometimes arrives
too early and the resulting latency is too short.
https://bugzilla.gnome.org/show_bug.cgi?id=758911
Commit ff6d1a2a25 changed sample's type from
gint to gsize (and renamed it to in_samples). gsize is an unsigned long,
which means it can never be a negative value and the check making sure that
in_samples is >= 0 is never going to be false. Removing it.
CID 1338689
Move the audio quantize code from audioconvert to the audio library.
Work on making an audio converter helper function similar to the video
converter.
Fold fastrandom directly into the quantizer, add some ORC code to
optimize this later.
Rename _get_default_mask() to _get_fallback_mask() to make it more
clear that the function only provides a fallback if nothing else can be
done. Also clarify this in the documentation.
API: gst_audio_channel_get_fallback_mask()
Add a TRUNCATE_RANGE flag for unpack functions to fill the least
significant bits with 0 (as the old code did). Also add functions
that don't truncate. Use the TRUNC flag in audioconvert for
backwards compatibility for now.
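A hedged example of the flag in use (assuming the
GST_AUDIO_PACK_FLAG_TRUNCATE_RANGE enum name and the GstAudioFormatInfo unpack
signature; not the audioconvert code itself):

  #include <gst/audio/audio.h>

  static void
  unpack_s16 (const gint16 * src, gint32 * dest, gint num_samples,
      gboolean truncate)
  {
    const GstAudioFormatInfo *info =
        gst_audio_format_get_info (GST_AUDIO_FORMAT_S16);

    /* S16 unpacks to its wider unpack format (S32); with TRUNCATE_RANGE the
     * least significant bits are zero-filled, matching the old behaviour */
    info->unpack_func (info,
        truncate ? GST_AUDIO_PACK_FLAG_TRUNCATE_RANGE : GST_AUDIO_PACK_FLAG_NONE,
        dest, (gpointer) src, num_samples);
  }
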
No need to use G_GINT64_FORMAT for potentially negative values of
GstClockTimeDiff. Since 1.6 these can be handled with GST_STIME_ARGS.
Plus it creates more readable values in the logs.
https://bugzilla.gnome.org/show_bug.cgi?id=757480
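For example, inside element code where a debug category is in scope (the
variables are illustrative):

  GstClockTimeDiff diff = GST_CLOCK_DIFF (expected_ts, actual_ts);

  /* prints a signed, human-readable value such as -0:00:00.123456789 */
  GST_DEBUG_OBJECT (sink, "timestamp diff %" GST_STIME_FORMAT,
      GST_STIME_ARGS (diff));
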
If a flush-start arrives during _eos_wait() in basesink,
the 'eos' flag is overwritten to TRUE after _eos_wait() returns.
To resolve this, the subclass doing the _eos_wait() call should
return the right value. If the eos flag is set to TRUE again, it will
cause an error (entering the EOS flow) during the following state
change from PAUSED to PLAYING in basesink.
https://bugzilla.gnome.org/show_bug.cgi?id=754980
Before, we just merged everything ad hoc, in pretty much random
ways, instead of keeping state properly. In 0.10 that was
how it worked, but in 1.x the tag events sent should always
reflect the latest state and replace any previous tags.
So save the upstream (stream) tags, and save the tags set
by the decoder subclass with merge mode, and then update
the merged tags whenever either of those two changes.
This slightly changes the behaviour of gst_audio_decoder_merge_tags()
in case it is called multiple times, since now any call replaces
the previously-set tags. However, it leads to much more predictable
outcomes, and also we are not aware of any subclass which sets this
multiple times and expects all the tags set to be merged.
If more complex tag merging scenarios are required, we'll have
to add a new vfunc for that or the subclass has to intercept
the upstream tags itself and send merged tags itself.
https://bugzilla.gnome.org/show_bug.cgi?id=679768
Apparently I forgot how GObject works; there is no need to expose
it directly, as one can call it from the parent_class pointer.
This reverts commit 8a64592481.
Add gst_audio_decoder_set_use_default_pad_acceptcaps() to allow
subclasses to make audiodecoder use the default pad acceptcaps
handling instead of resorting to the caps query, which is usually
less efficient and unnecessary.
API: gst_audio_decoder_set_use_default_pad_acceptcaps
Subclasses can use it to select what queries they want to handle
and forward the rest to the default handling function.
API: gst_audio_decoder_sink_query_default
https://bugzilla.gnome.org/show_bug.cgi?id=753623
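A hedged sketch of how a subclass might use it (the
gst_audio_decoder_sink_query_default() signature is assumed from the
description; the custom-query helper is hypothetical):

  #include <gst/audio/gstaudiodecoder.h>

  /* hypothetical subclass helper */
  static gboolean my_dec_handle_custom_query (GstAudioDecoder * dec,
      GstQuery * query);

  static gboolean
  my_dec_sink_query (GstAudioDecoder * dec, GstQuery * query)
  {
    switch (GST_QUERY_TYPE (query)) {
      case GST_QUERY_CUSTOM:
        /* handle the queries this subclass cares about itself */
        return my_dec_handle_custom_query (dec, query);
      default:
        /* forward everything else to the default handler */
        return gst_audio_decoder_sink_query_default (dec, query);
    }
  }
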
POOL meta just means that this specific instance of the meta is related to a
pool; a copy should be made when reasonable, and the flag should just not be
set in the copy.
For alaw/mulaw we should also try to initialize the channel positions in the
ringbuffer's audio info. This allows pulsesink to directly use the channel
positions instead of using the default zero-initialized ones, which doesn't
work well.
https://bugzilla.gnome.org/show_bug.cgi?id=751144
This new clock slaving method allows for installing a callback that is
invoked during playback. Inside this callback, a custom slaving
mechanism can be used (for example, a control loop adjusting a PLL or an
asynchronous resampler). Upon request, it can skew the playout pointer
just like the "skew" method. This is useful if the clocks drifted apart
too much, and a quick reset is necessary.
Signed-off-by: Carlos Rafael Giani <dv@pseudoterminal.org>
https://bugzilla.gnome.org/show_bug.cgi?id=708362
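A heavily hedged sketch of the new slaving method in use (the setter, callback
type and enum value names here are assumptions based on the description above):

  #include <gst/audio/gstaudiobasesink.h>

  static void
  my_slaving_cb (GstAudioBaseSink * sink, GstClockTime etime, GstClockTime itime,
      GstClockTimeDiff * requested_skew,
      GstAudioBaseSinkDiscontReason discont_reason, gpointer user_data)
  {
    /* drift of the internal (device) time against the external (pipeline)
     * time; normally this would feed a control loop, PLL or asynchronous
     * resampler instead of being used directly */
    GstClockTimeDiff drift = GST_CLOCK_DIFF (itime, etime);

    /* only request a skew of the playout pointer when the drift got too big */
    if (drift > 20 * GST_MSECOND || drift < -20 * GST_MSECOND)
      *requested_skew = drift;
  }

  static void
  enable_custom_slaving (GstAudioBaseSink * sink)
  {
    gst_audio_base_sink_set_custom_slaving_callback (sink, my_slaving_cb,
        NULL, NULL);
    gst_audio_base_sink_set_slave_method (sink,
        GST_AUDIO_BASE_SINK_SLAVE_CUSTOM);
  }
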
We only get here if we don't have any srcpad caps, and we're going
to override the GstAudioInfo a few lines below anyway, without ever
using it, if for whatever reason we do get caps here.
memcmp will blindly compare the reserved fields, as well as any
padding the compiler may choose to sprinkle in GstSegment.
Fixes valgrind complaints in unit tests, as well as some found via
https://bugzilla.gnome.org/show_bug.cgi?id=738216
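A hedged sketch of a field-wise comparison that avoids the reserved fields and
padding (the exact set of fields the actual fix checks may differ):

  #include <gst/gst.h>

  static gboolean
  segment_equal (const GstSegment * a, const GstSegment * b)
  {
    /* compare only the meaningful GstSegment fields instead of memcmp() */
    return a->format == b->format && a->rate == b->rate
        && a->applied_rate == b->applied_rate && a->flags == b->flags
        && a->base == b->base && a->offset == b->offset
        && a->start == b->start && a->stop == b->stop
        && a->time == b->time && a->position == b->position
        && a->duration == b->duration;
  }
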
When the ringbuffer is deactivated and then acquired, if the audio clock
provided by the sink gets reset to zero, we need to add an offset to the
clock to make sure that subsequent samples are written out at the right
times. While we need to leave this to derived classes to take care of
when they provide their own clock (since that clock may or may not be
reset to zero), we can do this ourselves if we know the provided clock
is our own (which does reset to zero on a re-acquire).
Make sure to update the output segment to track the segment
we're decoding in, but don't actually push it downstream until
after buffers are decoded.
https://bugzilla.gnome.org/show_bug.cgi?id=744806
If we have timestamps on input buffers and are in trickmode no-audio
mode, then don't pass anything to the subclass for decode and simply
send gap events downstream
Only for forward playback for now - reverse requires accumulating
GAP events and pushing out in reverse order.
https://bugzilla.gnome.org/show_bug.cgi?id=735666
In trickmode no-audio mode, or when receiving a GAP buffer,
discard the contents and render as a GAP event instead.
Make sure when rendering a gap event that the ring buffer will
restart on PAUSED->PLAYING by setting the eos_rendering flag.
This mostly reverts commit 8557ee and replaces it. The problem
with the previous approach is that it hangs in wait_preroll()
on a PLAYING-PAUSED transition because it doesn't commit state
properly.
https://bugzilla.gnome.org/show_bug.cgi?id=735666
The decoder can fail to drain on EOS if there was only one gather
set, because it will never have sent the segment event downstream
and set the output segment, and will fail to detect that the rate is < 0.0.
Make sure to send pending events before sending all the gather data
for decode.
Don't render out silence samples to a buffer, just
start the clock running, since any buffer with the
GAP flag will be discarded in render() now anyway.
Make the base audio sink throw away buffers marked GAP, or all
incoming buffers when performing a trick play with
GST_SEGMENT_TRICKMODE_NO_AUDIO flag set, and make sure to start
the ringbuffer when that happens so the clock starts running.
Preserve the timing calculations when rendering, so state is all
updated the same, but just don't render samples.
https://bugzilla.gnome.org/show_bug.cgi?id=735666
Some audio sink sub-classes (pulsesink) don't start their clock
when the ringbuffer starts, but always have to on EOS. When we
explicitly need to start the ringbuffer, make sure sub-classes will
do it by (ab)using the existing eos_rendering flag.
Otherwise calls to get the clock time might change its internal state
and the internal/external times used for calibration can get unbalanced,
leading to a clock jump.
https://bugzilla.gnome.org/show_bug.cgi?id=740834
The same was done already in the decoder, and we cleaned some state just above
manually that would also be taken care of by reset().
This makes sure that the element is in the same state before start() is called
the very first time and every future call after the element was used already.
The implementation of that vfunc might want to use the object lock for
something too. It's generally not a good idea to keep the object lock while
calling any function implemented elsewhere.
Also the ringbuffer can only be NULL at this point, remove a useless if block.
And in the sink actually hold the object lock while setting the ringbuffer on
the instance. Code accessing this is expected to use the object lock, so do it
here ourselves too.
Allows subclasses to do custom caps query replies.
Also exposes the standard caps query handler so subclasses can just
extend on top of it instead of reimplementing the caps query proxying.
Allows decoders to proxy downstream restrictions on caps.
Also implements the accept-caps query to prevent regressions caused by the
new fields in the result of a caps query that would cause the accept-caps
check to fail, as it uses subset caps comparisons.
The spec mentions a version of the MPEG-2 frame with a base frame and
extension frame. I don't have IEC 13818-3 to figure out what that is,
and don't see any references in search results, so it's a FIXME for now.
https://bugzilla.gnome.org/show_bug.cgi?id=736797
When playing chained data the audio ringbuffer is released and
then acquired again. This makes it reset the segbase/segdone
variables, but the next sample will be scheduled to play in
the next position (right after the sample from the previous media)
and, as the segdone is at 0, the audiosink will wait the duration
of this previous media before it can write and play the new data.
What happens is this:
pointer at 0, write to 698-1564, diff 698, segtotal 20, segsize 1764, base 0
it will have to wait the length of 698 samples before being able to write.
In a regular sample playback it looks like:
pointer at 677, write to 696-1052, diff 19, segtotal 20, segsize 1764, base 0
In this case it will write to the next available position and it
doesn't need to wait or fill with silence.
This solution is borrowed from pulsesink that resets the clock to
start again from 0, which makes it reset the time_offset to the time
of the last played sample. This is used to correct the place of
writing in the ringbuffer to the new start (0 again)
https://bugzilla.gnome.org/show_bug.cgi?id=737055
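In code terms the idea is roughly this (a hedged fragment; where exactly it
runs in the acquire path is simplified):

  /* when the ringbuffer is re-acquired for the next chained stream, let the
   * provided audio clock start from 0 again; gst_audio_clock_reset() moves
   * its time_offset to the time of the last played sample, so writes land
   * at the right place in the ringbuffer */
  gst_audio_clock_reset (GST_AUDIO_CLOCK (sink->provided_clock), 0);
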
Move the assert to the error handling block at the end of the function so
the logging is still triggered. Reword the logging slightly and add another
comment to hint at what went wrong.
Fixes #737138
Issue:
During a PAUSED->PLAYING transition, when we render an audio buffer in AudioBaseSink
we make adjustments to the sink's provided clock, i.e. fix the clock calibration against
the external pipeline clock, in gst_audio_base_sink_sync_latency() in gstaudiobasesink.c.
For the calibration adjustment we need to get the sink clock time using gst_audio_clock_get_time().
But before calling gst_audio_clock_get_time() we acquire the object lock on the sink. If the sink is
a pulsesink, gst_audio_clock_get_time() internally calls gst_pulsesink_get_time(), which needs to
acquire the PulseAudio main loop lock before querying PulseAudio for its stream time using
pa_stream_get_time() (see gst_pulsesink_get_time() in pulsesink.c).
So the situation here is that we have acquired the object lock on the sink and need the PA main loop lock.
Meanwhile, the PulseAudio main thread itself might be in the process of posting a stream status
message after the PAUSED to PLAYING transition, which in turn acquires the PA main loop lock and
needs the object lock on the pulsesink. This causes a deadlock with the earlier render thread.
Fix:
Do not acquire the object lock on the sink before querying the time on the pulsesink clock; acquire
it after the get_time call, similar to the way we use get_time elsewhere in the code.
This way the PA main loop will be able to post its stream status message by acquiring the
sink object lock, and will eventually release its main loop lock, which gst_pulsesink_get_time()
needs to continue.
https://bugzilla.gnome.org/show_bug.cgi?id=736071
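In code terms the fix is roughly this reordering (a hedged fragment, not the
literal diff):

  /* query the audio clock first, without the sink object lock held ... */
  GstClockTime time =
      gst_audio_clock_get_time (GST_AUDIO_CLOCK (sink->provided_clock));

  /* ... and only take the object lock afterwards, so the PA main loop can
   * post its stream-status message in the meantime */
  GST_OBJECT_LOCK (sink);
  /* use 'time' for the calibration adjustment here */
  GST_OBJECT_UNLOCK (sink);
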
As was done for the base video decoder in commit 695675, don't
flush out the decoder on a new SEGMENT event. Segment events
may be a new segment, but are also often segment updates for
the current segment where the old data should be kept. For new
segments, a STREAM_START event will already trigger a drain, but
make sure to flush any remaining partial data then as well.
https://bugzilla.gnome.org/show_bug.cgi?id=734666
With most decoder libraries, and especially when accessing codecs via
OpenMAX or similar APIs, we don't have the ability to properly relate
the output buffers to a number of input samples, and could e.g. get
a fractional number of input buffers decoded at a time.
Previously this would in the end lead to an error message and stopped
playback. Change it to a warning message instead and try to handle it
gracefully. In theory the subclass can now get timestamp tracking
wrong if it completely misuses the API, but if on average it behaves
correctly (and gst-omx and others do) it will continue to work properly.
Also add a test for the new behaviour.
We don't change it in the encoder yet as that requires more internal logic
changes AFAIU and I'm not aware of a case where this was a problem so far.
They are very confusing for people, and more often than not
also just not very accurate. Seeing 'last reviewed: 2005' in
your docs is not very confidence-inspiring. Let's just remove
those comments.
We were returning in various places without unreffing the caps, and
we were also leaking (overwriting) the caps we got from _get_current_caps().
Spotted by Haakon Sporsheim in #gstreamer
Clock slaving can clip the start time to zero, giving us a shorter
duration than we originally got. To keep in sync, we must then
discard the samples falling before that zero timestamp.
This possibly fixes random distortion caused by constant PA
underflows which are never resynced.
Fixes a problem in audioconvert, which would end up using
a mixmatrix when converting between different mono formats,
because it thinks MONO positioning is different from
unpositioned channels, which is not the case in this
special case. The mixmatrix would end up being 0.0, so
audioconvert would convert to silence samples.
https://bugzilla.gnome.org/show_bug.cgi?id=724509
We call the _get_time function from the provided clock and we don't lock
the sink object for performance reasons. Make sure we only read and
check variables once so that they don't change while we are executing
the code.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=720661
For default caps generation when handling gap events that are sent
before any buffer, try to use caps that are closer to what upstream
provided to avoid fixating rate or channels to 1 as default.
So these are the steps (see the sketch after this list):
1) Try to set rate, channels and channel-mask from upstream if provided
2) Fixate the rate and channels to the default rate and channels from
the audio lib
3) Fixate the caps just to be sure everything is fixed
4) If no channel-mask was provided and channels > 2, use a default
channel-mask (taken from audioconvert code)
https://bugzilla.gnome.org/show_bug.cgi?id=722144
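A hedged sketch of steps 2-4 (assuming 'caps' still carries the rate/channels
ranges after step 1; gst_audio_channel_get_fallback_mask() is used here for
brevity even though the original change copied the mask table from
audioconvert):

  #include <gst/audio/audio.h>

  static GstCaps *
  fixate_default_gap_caps (GstCaps * caps)
  {
    GstStructure *s;
    gint channels;

    caps = gst_caps_make_writable (caps);
    s = gst_caps_get_structure (caps, 0);

    /* 2) prefer the audio lib defaults instead of fixating to 1 */
    gst_structure_fixate_field_nearest_int (s, "rate", GST_AUDIO_DEF_RATE);
    gst_structure_fixate_field_nearest_int (s, "channels",
        GST_AUDIO_DEF_CHANNELS);

    /* 3) fixate whatever is still left open */
    caps = gst_caps_fixate (caps);
    s = gst_caps_get_structure (caps, 0);

    /* 4) more than 2 channels need a channel-mask; fall back to a default
     * layout when upstream did not provide one */
    if (gst_structure_get_int (s, "channels", &channels) && channels > 2
        && !gst_structure_has_field (s, "channel-mask")) {
      gst_structure_set (s, "channel-mask", GST_TYPE_BITMASK,
          gst_audio_channel_get_fallback_mask (channels), NULL);
    }
    return caps;
  }
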
Before trying to generate a default fixated caps when handling a gap
event, make sure that the same strategy that is used when handling
a buffer has been attempted. Otherwise audiodecoder will ignore
upstream caps settings such as rate and channels and will likely
end up with caps having channels=1 and rate=1.
https://bugzilla.gnome.org/show_bug.cgi?id=722144
Port a change from audiobasesink from def07410, to ignore setcaps
when the caps don't actually change, and avoid a reconfiguration
and reset of the ringbuffer in that case.
And don't assume in other code that set_format() preserves any fields at
all. These assumptions were already made here for fields that were changed
by set_format().
If there are no caps from the audio decoder when handling a GAP
event - as when one is received right at the start on a DVD without
initial audio - then choose any default caps for downstream and
then send the GAP, so the audio sink has a configured format in
which to start the ringbuffer.
Also, make the audio sink reject a GAP without caps with a clearer
error message.
Fixes bug https://bugzilla.gnome.org/show_bug.cgi?id=603921
Fixes "Unitialized Scalar Variable" issues reported by Coverity.
Has the added advantage of detecting whether somebody *does* use those
fields (ending up with a invalid address).
https://bugzilla.gnome.org/show_bug.cgi?id=720810
So that it avoids sending the allocation query twice:
once from an early call to gst_audio_encoder_negotiate from a
subclass, then once from gst_audio_encoder_allocate_output_buffer.
This means that previously gst_audio_encoder_negotiate was not
clearing GST_PAD_FLAG_NEED_RECONFIGURE even on success.
Fixes bug https://bugzilla.gnome.org/show_bug.cgi?id=719684
Raise an error in case no frames are decoded before EOS and we
have input, meaning that data was received but it was somehow invalid.
Based on the videodecoder change, merged here for consistency.
https://bugzilla.gnome.org/show_bug.cgi?id=711094
Allows using -1 to make audiodecoder never post an error message
after decoding errors.
Based on the videodecoder change, merged here for consistency.
https://bugzilla.gnome.org/show_bug.cgi?id=711094