* gst/qtdemux/qtdemux.c (gst_qtdemux_src_convert): Unref the qtdemux; we
weren't doing so before.
(gst_qtdemux_handle_src_event, gst_qtdemux_chain): Fix some error
cases which would leak a ref to the qtdemux.
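As a hedged illustration only (not the actual qtdemux code): in the 0.10-era pattern, gst_pad_get_parent() hands back a reference, and the leak happens whenever an early return skips the matching unref. A minimal sketch of the corrected shape:

    #include <gst/gst.h>

    /* Sketch only: every exit path must drop the ref returned by
     * gst_pad_get_parent(), including the error/early-return paths. */
    static gboolean
    handle_src_event_sketch (GstPad * pad, GstEvent * event)
    {
      GstElement *demux = GST_ELEMENT (gst_pad_get_parent (pad));
      gboolean res = FALSE;

      if (demux == NULL) {
        gst_event_unref (event);
        return FALSE;
      }

      switch (GST_EVENT_TYPE (event)) {
        case GST_EVENT_SEEK:
          /* ... handle the seek ... */
          res = TRUE;
          gst_event_unref (event);
          break;
        default:
          gst_event_unref (event);
          break;
      }

      gst_object_unref (demux);   /* the ref that used to be leaked */
      return res;
    }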
Extract MusicBrainz tags added by MusicBrainz's Picard
tagger application. These tags (esp. the album id) are
helpful for rhythmbox et al. to automatically download
cover art.
https://bugzilla.gnome.org/show_bug.cgi?id=642205
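For context, Picard stores these IDs as iTunes-style '----' freeform atoms. A hedged sketch of the kind of mapping this implies, using the MusicBrainz tag names from gst/tag/tag.h (the helper function and table are illustrative, not the actual qtdemux code):

    #include <gst/gst.h>
    #include <gst/tag/tag.h>

    /* Picard writes MusicBrainz IDs as '----' freeform atoms with
     * mean "com.apple.iTunes"; map their names to GStreamer tags. */
    static const struct {
      const gchar *atom_name;   /* name part of the '----' atom */
      const gchar *gst_tag;     /* corresponding GStreamer tag */
    } mb_tag_map[] = {
      { "MusicBrainz Track Id",        GST_TAG_MUSICBRAINZ_TRACKID },
      { "MusicBrainz Album Id",        GST_TAG_MUSICBRAINZ_ALBUMID },
      { "MusicBrainz Artist Id",       GST_TAG_MUSICBRAINZ_ARTISTID },
      { "MusicBrainz Album Artist Id", GST_TAG_MUSICBRAINZ_ALBUMARTISTID },
    };

    /* Hypothetical helper: add one extracted value to the tag list. */
    static void
    add_musicbrainz_tag (GstTagList * tags, const gchar * atom_name,
        const gchar * value)
    {
      guint i;

      for (i = 0; i < G_N_ELEMENTS (mb_tag_map); i++) {
        if (g_ascii_strcasecmp (atom_name, mb_tag_map[i].atom_name) == 0) {
          gst_tag_list_add (tags, GST_TAG_MERGE_REPLACE,
              mb_tag_map[i].gst_tag, value, NULL);
          break;
        }
      }
    }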
Images might have framerate=0/1 in the caps, which caused an
assertion in deinterlace. I don't know of any interlaced image
formats, but deinterlace might be hardcoded in some generic
pipelines and it shouldn't assert.
The fix was to set field_duration to 0 if the input has a framerate
with a 0 numerator.
This patch also adds checks for this situation to the unit tests.
https://bugzilla.gnome.org/show_bug.cgi?id=641400
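A small sketch of the guard described above, assuming fps_n/fps_d come from the caps' framerate field (names and helper are illustrative, not the real deinterlace code):

    #include <gst/gst.h>

    /* fps_n / fps_d come from the negotiated caps' framerate field. */
    static GstClockTime
    compute_field_duration (gint fps_n, gint fps_d)
    {
      if (fps_n == 0) {
        /* framerate=0/1 (e.g. still images): no meaningful field
         * duration, use 0 instead of asserting. */
        return 0;
      }

      /* two fields per frame: field duration = frame duration / 2 */
      return gst_util_uint64_scale (GST_SECOND, fps_d, fps_n * 2);
    }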
Commit 6c8268dbfd broke recording
from an interlaced v4l2 source (e.g. a typical TV capture card), since
V4L2_FIELD_SEQ_TB (with fields stored separately) does not map to the
currently defined interlaced format (fields stored interleaved).
Besides this mismatch, hardware quite likely does not support or
appreciate this field value, since querying supported formats mapped
_INTERLACED field formats to interlaced=true caps (so the latter should
not be mapped to a field value that is not known to be supported).
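As a sketch of the mapping this implies (helper name invented; the real code fills in the field member of struct v4l2_format before VIDIOC_S_FMT):

    #include <linux/videodev2.h>
    #include <glib.h>

    /* Pick the v4l2 field value for negotiated caps: interlaced=true caps
     * were produced from _INTERLACED field formats when enumerating, so
     * request the same (interleaved) layout back, not V4L2_FIELD_SEQ_TB. */
    static enum v4l2_field
    field_from_caps (gboolean interlaced)
    {
      return interlaced ? V4L2_FIELD_INTERLACED : V4L2_FIELD_NONE;
    }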
The element downstream of mp3enc might only accept certain sample rates or
channels; make sure we relay any restrictions that do exist upstream when it
does a get_caps() on the sink pad. That way upstream elements like
audioresample or audioconvert can pick a sample rate / channel configuration
that will be accepted, instead of just negotiating to the highest one, which
might then be rejected.
https://bugzilla.gnome.org/show_bug.cgi?id=641151
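A rough sketch, in the 0.10-era caps API, of the kind of getcaps proxying described (function name and details assumed, not the actual encoder code):

    #include <gst/gst.h>

    /* Build the sink-pad getcaps result by copying the rate/channels
     * restrictions of whatever is allowed downstream of the encoder's
     * src pad into the sink template caps. */
    static GstCaps *
    sink_getcaps_sketch (GstPad * sinkpad, GstPad * srcpad)
    {
      GstCaps *result, *allowed;
      guint i;

      result = gst_caps_copy (gst_pad_get_pad_template_caps (sinkpad));
      allowed = gst_pad_get_allowed_caps (srcpad);

      if (allowed != NULL && !gst_caps_is_empty (allowed) &&
          !gst_caps_is_any (allowed)) {
        GstStructure *a = gst_caps_get_structure (allowed, 0);
        const GValue *v;

        for (i = 0; i < gst_caps_get_size (result); i++) {
          GstStructure *s = gst_caps_get_structure (result, i);

          /* forward only the fields upstream can adjust to */
          if ((v = gst_structure_get_value (a, "rate")))
            gst_structure_set_value (s, "rate", v);
          if ((v = gst_structure_get_value (a, "channels")))
            gst_structure_set_value (s, "channels", v);
        }
      }

      if (allowed != NULL)
        gst_caps_unref (allowed);
      return result;
    }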
gstdirectsoundsink.c: In function 'gst_directsound_sink_write':
gstdirectsoundsink.c:557: error: implicit declaration of function '_swab'
gstdirectsoundsink.c:557: error: nested extern declaration of '_swab'
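One plausible way to silence an implicit-declaration error like this, assuming the Windows C runtime where _swab() is declared in <stdlib.h>, is to include that header where the byte swapping is done:

    /* _swab() byte-swaps pairs of bytes; on the Windows C runtime it is
     * declared in <stdlib.h>, so include it instead of relying on an
     * implicit declaration. */
    #include <stdlib.h>

    static void
    swap_bytes (char *src, char *dst, int len)
    {
      _swab (src, dst, len);
    }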
Theora can only use the last frame (or the keyframe) as a reference.
If we receive a buffer that references an unknown codebook, request
new headers; it probably means that the headers were lost.
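How headers are requested is element-specific; purely as a hedged illustration, an upstream element could be asked via a custom upstream event (the structure name below is made up, not an established GStreamer event):

    #include <gst/gst.h>

    /* Hypothetical sketch: on seeing a packet that references a codebook
     * we do not know, ask upstream (e.g. a depayloader) to resend the
     * headers. */
    static gboolean
    request_new_headers (GstPad * sinkpad)
    {
      GstStructure *s;
      GstEvent *event;

      s = gst_structure_new ("request-headers", NULL);   /* made-up name */
      event = gst_event_new_custom (GST_EVENT_CUSTOM_UPSTREAM, s);
      return gst_pad_push_event (sinkpad, event);
    }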
Functions that process the RTCP buffer could decide to keep a ref
on the buffer for further processing, so make the metadata writable
only after they are done.
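A sketch of the ordering this implies, in the 0.10 API (the processing hook is hypothetical):

    #include <gst/gst.h>

    /* Hypothetical hook that may keep its own ref on the buffer. */
    typedef void (*RtcpProcessFunc) (GstBuffer * buffer, gpointer user_data);

    static GstBuffer *
    process_rtcp_sketch (GstBuffer * buffer, RtcpProcessFunc process,
        gpointer user_data)
    {
      /* callbacks may gst_buffer_ref() the buffer for later use */
      process (buffer, user_data);

      /* only now make the metadata writable: if a callback kept a ref,
       * this creates a copy, so our metadata changes don't touch the
       * buffer they still hold */
      buffer = gst_buffer_make_metadata_writable (buffer);

      /* ... modify timestamps/caps on the now-writable metadata ... */
      return buffer;
    }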
Allowing larger chunks to be sent lowers PulseAudio's CPU usage.
This is especially important on low-end machines, where PulseAudio
can crash if packets come in at a higher rate than it can
process them.
Signed-off-by: David Henningsson <david.henningsson@canonical.com>
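As a general illustration only (values invented, not the actual pulsesink change): larger chunks correspond to larger tlength/minreq buffer attributes when connecting the playback stream:

    #include <pulse/pulseaudio.h>

    /* Illustration only: request larger chunks from PulseAudio by
     * raising tlength/minreq in the buffer attributes. */
    static int
    connect_playback_sketch (pa_stream * stream, const pa_sample_spec * spec)
    {
      pa_buffer_attr attr;

      attr.maxlength = (uint32_t) -1;               /* let the server pick */
      attr.tlength =
          (uint32_t) pa_usec_to_bytes (200 * 1000, spec);  /* ~200 ms target */
      attr.prebuf = (uint32_t) -1;
      attr.minreq =
          (uint32_t) pa_usec_to_bytes (50 * 1000, spec);   /* ~50 ms chunks */
      attr.fragsize = (uint32_t) -1;

      return pa_stream_connect_playback (stream, NULL, &attr,
          PA_STREAM_ADJUST_LATENCY, NULL, NULL);
    }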
In particular, this avoids missing the intended keyframe when first converting
from the frame's mov time to global segment time, and then back from global
time to mov time when activating the segment.
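To see why the round trip can land on the wrong position, note that each conversion truncates; a tiny standalone demonstration (timescale chosen arbitrarily, not taken from the actual qtdemux code):

    #include <gst/gst.h>

    int
    main (int argc, char **argv)
    {
      guint32 timescale = 90000;    /* arbitrary mov timescale */
      guint64 mov_time = 1;         /* keyframe position in mov units */
      guint64 global, back;

      gst_init (&argc, &argv);

      /* mov -> global (nanoseconds), truncating */
      global = gst_util_uint64_scale (mov_time, GST_SECOND, timescale);
      /* global -> mov again, truncating a second time */
      back = gst_util_uint64_scale (global, timescale, GST_SECOND);

      /* prints 1 -> 11111 -> 0: the round trip lands before the keyframe */
      g_print ("%" G_GUINT64_FORMAT " -> %" G_GUINT64_FORMAT " -> %"
          G_GUINT64_FORMAT "\n", mov_time, global, back);
      return 0;
    }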