FEC may only be used when PLC is enabled on the audio decoder,
as it relies on empty buffers to generate audio from the next
buffer. Hooking into the gap events doesn't work, as the audio
decoder base class does not expect the subclass to output more
buffers than it was given.
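For context, enabling both from an application might look like this
(a minimal sketch; "use-inband-fec" as the opusdec property and "plc"
as the GstAudioDecoder base class property are assumptions here):

    #include <gst/gst.h>

    /* FEC only takes effect when PLC is enabled on the audio decoder
     * base class, so turn on both. */
    static void
    enable_fec (GstElement *opusdec)
    {
      g_object_set (opusdec,
          "use-inband-fec", TRUE,   /* use FEC data from the next packet */
          "plc", TRUE,              /* packet loss concealment */
          NULL);
    }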
The length of data to generate using FEC from the next packet
is determined by rounding the gap duration to the nearest multiple
of 2.5 milliseconds rather than rounding down. This ensures that
duration imprecision, which is common in live cases, does not
quantize the length to 2.5 milliseconds less than what is actually
available, which would make the Opus API fail to decode.
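A sketch of that rounding (2.5 ms being the Opus frame size
granularity; the helper name is made up):

    #include <gst/gst.h>

    /* Round a gap duration to the nearest multiple of 2.5 ms rather
     * than truncating, so a slightly short duration (common in live
     * cases) does not lose a whole 2.5 ms of decodable audio. */
    static GstClockTime
    round_gap_duration (GstClockTime gap)
    {
      const GstClockTime step = 2500 * GST_USECOND; /* 2.5 ms */

      return (gap + step / 2) / step * step;
    }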
The buffer to consider when determining the length of audio
to be decoded is the previous buffer when using FEC, and the
new buffer otherwise. In the FEC case this means the amount of
audio is determined from the previous buffer, whether it was
missing or not, and the data is taken either from that previous
buffer, or from the current one if the previous one was missing.
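Roughly, the selection boils down to something like this (names are
hypothetical):

    #include <gst/gst.h>

    /* With FEC we are reconstructing the previous packet's worth of
     * audio, so its duration decides how much to decode; the data comes
     * from the previous buffer, or from the current one if the previous
     * was lost. Without FEC we decode the buffer we just received. */
    static GstClockTime
    duration_to_decode (gboolean use_fec, GstClockTime prev_duration,
        GstClockTime cur_duration)
    {
      return use_fec ? prev_duration : cur_duration;
    }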
rename gst-launch --> gst-launch-1.0
replace old elements with new elements (ffmpegcolorspace -> videoconvert, ffenc_** -> avenc_**)
fix caps in examples
https://bugzilla.gnome.org/show_bug.cgi?id=759432
We always require the channel-mapping-field. If it's 0 we require
nothing else; otherwise channels, stream-count and coupled-count must
also be available.
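A sketch of such a check on the caps structure (field names taken from
the description above, assumed to match the actual caps):

    #include <gst/gst.h>

    /* channel-mapping-field is always required; if it is non-zero,
     * channels, stream-count and coupled-count must be present too. */
    static gboolean
    opus_caps_are_valid (const GstCaps *caps)
    {
      GstStructure *s = gst_caps_get_structure (caps, 0);
      gint mapping, channels, streams, coupled;

      if (!gst_structure_get_int (s, "channel-mapping-field", &mapping))
        return FALSE;

      if (mapping == 0)
        return TRUE;

      return gst_structure_get_int (s, "channels", &channels) &&
          gst_structure_get_int (s, "stream-count", &streams) &&
          gst_structure_get_int (s, "coupled-count", &coupled);
    }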
oggdemux is outputting the meta now, and only outputs it if it really
applies to the current buffer. Previously we would also skip N samples
if we started the decoder in the middle of the stream.
https://bugzilla.gnome.org/show_bug.cgi?id=757153
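If the meta in question is GstAudioClippingMeta (an assumption here),
the decoder side reads it roughly like this:

    #include <gst/audio/audio.h>

    /* Only clip when the demuxer attached clipping meta to this specific
     * buffer, instead of unconditionally skipping pre-skip samples when
     * decoding starts mid-stream. */
    static void
    handle_clipping (GstBuffer *buffer)
    {
      GstAudioClippingMeta *meta = gst_buffer_get_audio_clipping_meta (buffer);

      if (meta != NULL && meta->format == GST_FORMAT_DEFAULT) {
        /* meta->start samples to drop at the start of the buffer,
         * meta->end samples to drop at the end */
      }
    }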
Previously, PLC frames always had a length of 120ms, which caused audio
quality degradation and synchronization errors. Fix this by calculating an
appropriate length for the PLC frame.
The length must be a multiple of 2.5ms. Calculate a multiple of 2.5ms that
is nearest to the current PLC length. Any leftover PLC length that didn't
make it into this frame is accumulated for the next PLC frame.
https://bugzilla.gnome.org/show_bug.cgi?id=725167
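A sketch of that calculation (names made up; the leftover is kept as a
signed value since rounding to nearest can also borrow from the next
frame):

    #include <gst/gst.h>

    /* Produce a PLC duration that is a multiple of 2.5 ms, nearest to
     * the requested length plus whatever was carried over from the
     * previous concealed frame; the remainder is accumulated for the
     * next one. */
    static GstClockTime
    next_plc_duration (GstClockTime wanted, GstClockTimeDiff *leftover)
    {
      const gint64 step = 2500 * GST_USECOND; /* 2.5 ms */
      gint64 total = (gint64) wanted + *leftover;
      gint64 rounded;

      if (total < 0)
        total = 0;
      rounded = (total + step / 2) / step * step;

      *leftover = total - rounded;

      return (GstClockTime) rounded;
    }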
This way we let opusdec do the resampling if needed and don't carry
around buffers with an unnecessarily high sample rate when it's not
required. While Opus always runs at 48kHz internally, the sample rate
from the header tells us which frequencies can safely be dropped.
The max latency parameter is "the maximum time an element
synchronizing to the clock is allowed to wait for receiving all
data for the current running time" (docs/design/part-latency.txt).
https://bugzilla.gnome.org/show_bug.cgi?id=744338
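For illustration, an element that contributes latency of its own handles
the two values in a latency query like this (a generic sketch, not the
code touched here):

    #include <gst/gst.h>

    /* Max latency is how long a synchronizing element may wait for all
     * data of the current running time, so it must not be understated;
     * GST_CLOCK_TIME_NONE means "unlimited". */
    static void
    add_own_latency (GstQuery *query, GstClockTime own_latency)
    {
      GstClockTime min, max;
      gboolean live;

      gst_query_parse_latency (query, &live, &min, &max);

      min += own_latency;
      if (max != GST_CLOCK_TIME_NONE)
        max += own_latency;

      gst_query_set_latency (query, live, min, max);
    }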
The opus and jpegformat plugin builds fail when GStreamer is configured
with --disable-gst-debug, as they check the GST_DISABLE_DEBUG symbol
instead of GST_DISABLE_GST_DEBUG.
Signed-off-by: Peter Korsgaard <jacmet@sunsite.dk>
https://bugzilla.gnome.org/show_bug.cgi?id=683850
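The guard that works with --disable-gst-debug looks like this
(illustrative snippet, not the actual code being fixed):

    #include <gst/gst.h>

    /* Debug-only helpers must be guarded with GST_DISABLE_GST_DEBUG,
     * which is what --disable-gst-debug defines; guarding with
     * GST_DISABLE_DEBUG leaves code in the build that depends on the
     * compiled-out debug system, so the build fails. */
    #ifndef GST_DISABLE_GST_DEBUG
    static const gchar *
    mode_to_string (gint mode)
    {
      return (mode == 0) ? "mono" : "stereo";
    }
    #endif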
There are two channel mapping tables, unintuitively enough; the one
passed to the encoder should not be the one that gets written to the
file. The former maps the input to an ordering which puts paired
channels first, while the latter moves the channels to Vorbis order.
So add code to calculate both, and we now have properly paired
channels where appropriate.
https://bugzilla.gnome.org/show_bug.cgi?id=665078
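Conceptually each table is just a permutation of the input channels; a
generic sketch of applying one (made-up names, no claim about the exact
orderings involved):

    #include <glib.h>

    /* Reorder one frame of interleaved samples according to a mapping
     * table: output channel i is taken from input channel map[i]. The
     * encoder-facing table and the header (Vorbis order) table are two
     * different such maps. */
    static void
    reorder_channels (const gfloat *in, gfloat *out, const guint8 *map,
        guint channels)
    {
      guint i;

      for (i = 0; i < channels; i++)
        out[i] = in[map[i]];
    }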
Ideally this would be left to an rgvolume element, but we don't
control the pipeline. So do it by default, and allow disabling it via
a property, so that the correct volume is always output.
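Assuming the property ended up being named "apply-gain" on opusdec (an
assumption here), disabling it from an application would be:

    #include <gst/gst.h>

    /* Turn off decoder-side gain so that downstream volume handling
     * (e.g. an rgvolume element) can take care of it instead. */
    static void
    disable_decoder_gain (GstElement *opusdec)
    {
      g_object_set (opusdec, "apply-gain", FALSE, NULL);
    }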