When many packets fit on a page, we may not see a granpos for
a while, and granpos interpolation can wrap the 'frames since last
keyframe' part of the granpos, generating a granpos which is smaller
than it should be.
This is fixed by detecting keyframe packets (at least for Theora)
and updating the last keyframe granpos from them.
This may still generate wrong granpos for streams which have a
Theora-like granpos (keyframes, a max keyframe distance, and a count
of frames since the last keyframe) and which allow implicit granules
on packets. For such streams, a custom keyframe detection routine
should be plugged into their GstOggStream mapper.
https://bugzilla.gnome.org/show_bug.cgi?id=669164
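For reference, a minimal sketch of the Theora-style granpos split and
keyframe check described above; the function names are illustrative,
not the actual GstOggStream API:

    #include <glib.h>

    /* Theora-style granulepos: the top bits count keyframes, the low
     * 'granuleshift' bits count frames since that keyframe. */
    static gint64
    make_granpos (gint64 keyframe, gint64 frames_since, gint granuleshift)
    {
      return (keyframe << granuleshift) | frames_since;
    }

    /* A Theora packet is a keyframe when it is a data packet (0x80
     * clear) and the frame-type bit (0x40) is clear. */
    static gboolean
    theora_is_keyframe (const guint8 * data, gsize size)
    {
      return size > 0 && (data[0] & 0xC0) == 0;
    }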
It was matching non-header packets.
This fixes various leaks where buffers would be pushed onto a headers
list but never popped.
It might also fix corruption, as those buffers were silently dropped
from the output...
https://bugzilla.gnome.org/show_bug.cgi?id=669167
This is necessary in order to match what the caps strings in
video.h contain for 16-bit RGB formats, and also to match how
gst_video_format_parse_caps expects them.
https://bugzilla.gnome.org/show_bug.cgi?id=667681
After a PAUSED->READY change the sink pads are currently not set to
blocking state. When the element is set back to PAUSED, the change will
be done asynchronously, but since the _pad_blocked_cb() callback is
then never called, the state change never completes.
Fix that by setting the sink pads to blocking state on a PAUSED->READY
change, which ensures that _pad_blocked_cb() is called when needed
on any future READY->PAUSED change. The sink pads are already put into
blocking state on the NULL->READY change, so this behavior is consistent.
Fixes bug #668097.
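A minimal 0.10-style sketch of re-arming the block on the sink pads;
the callback body and the pad list stand in for the element's own
bookkeeping:

    #include <gst/gst.h>

    static void
    pad_blocked_cb (GstPad * pad, gboolean blocked, gpointer user_data)
    {
      /* In the real element this is where the asynchronous state
       * change is completed. */
    }

    /* On PAUSED->READY, block every sink pad again so that
     * pad_blocked_cb() fires on the next READY->PAUSED change. */
    static void
    block_sinkpads (GList * sinkpads)
    {
      GList *l;

      for (l = sinkpads; l != NULL; l = l->next)
        gst_pad_set_blocked_async (GST_PAD (l->data), TRUE,
            pad_blocked_cb, NULL);
    }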
In order to allow for proper functionality when a decoder only supports
one instance at a time (dsp), we must block the demuxer pads when they
get created if they are not part of the active group, preventing buffers
from being sent to the decoder (and initializing it through setcaps).
Then, after we switch to a new group, we unblock the demuxer pads for
the active group. In the callback for the unblock, we prune the old
groups, making sure the previous decoder instance is destroyed before
we push a buffer to the new instance.
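A sketch of the unblock-and-prune step under the same 0.10 API; the
Group type, its pad list, and prune_group() are hypothetical stand-ins
for the element's own structures:

    /* Called once a demuxer pad of the new group is unblocked; prune
     * the old group here so its decoder is destroyed before any
     * buffer reaches the new instance. */
    static void
    unblocked_cb (GstPad * pad, gboolean blocked, gpointer user_data)
    {
      Group *old_group = user_data;

      prune_group (old_group);
    }

    static void
    switch_to_group (Group * group, Group * old_group)
    {
      GList *l;

      for (l = group->demuxer_pads; l != NULL; l = l->next)
        gst_pad_set_blocked_async (GST_PAD (l->data), FALSE,
            unblocked_cb, old_group);
    }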
An ALSA sink may select a different rate (as we use the _set_rate_near
API, which is not guaranteed to set the exact target rate).
The rest of the code seems to handle this well already, as output
from an 88200 Hz file seems to have the correct pitch when a 96 kHz
rate is selected.
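A minimal sketch of the ALSA behavior in question; negotiate_rate()
is illustrative:

    #include <alsa/asoundlib.h>

    /* Returns the rate the device actually selected, which may
     * differ from the requested one. */
    static unsigned int
    negotiate_rate (snd_pcm_t * pcm, snd_pcm_hw_params_t * params,
        unsigned int requested)
    {
      unsigned int rate = requested;
      int dir = 0;

      /* _near: ALSA picks the closest supported rate and writes
       * the actual choice back into 'rate'. */
      snd_pcm_hw_params_set_rate_near (pcm, params, &rate, &dir);
      return rate;
    }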
For the USE_TREMOLO case, GstVorbisDec doesn't have
a vb member. Besides, Tremolo's vorbis_dsp_synthesis()
expects a vorbis_dsp_state, not a vorbis_block, as its
first argument.
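A sketch of the two call paths, assuming Tremolo's
vorbis_dsp_synthesis(vorbis_dsp_state *, ogg_packet *, int) signature;
the member names follow the description above:

    static int
    decode_packet (GstVorbisDec * dec, ogg_packet * packet)
    {
    #ifdef USE_TREMOLO
      /* Tremolo has no vorbis_block; synthesis takes the
       * vorbis_dsp_state directly (assumed signature). */
      return vorbis_dsp_synthesis (&dec->vd, packet, 1);
    #else
      if (vorbis_synthesis (&dec->vb, packet))
        return -1;
      return vorbis_synthesis_blockin (&dec->vd, &dec->vb);
    #endif
    }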
I'm not 100% sure this is valid on any X server other than mine,
but since the XFree call does not take the context as a parameter,
it seems pretty certain it's the right thing to do. I'll leave
this caveat here in case someone checks in the future.
Fix building of the libgstvideo module on Android by adding the
missing and needed $(DEFAULT_INCLUDES) to CFLAGS for the
androgenizer call in gst-libs/gst/video/Makefile.am.
Before this change, building failed because gst-plugins-base/
and gst-plugins-base/gst-libs/gst/video were left out of the
include path.
When I first implemented push-mode seeking, I removed the chain
freeing there since the chain could be used later. The current code
does not seem to do that though, so I'm restoring the previous
freeing, which plugs the leak while apparently not reintroducing
use of freed data with chained and normal files, both with
gst-launch playbin2 and Totem.
This reverts commit 5df30c1b90.
I must have dreamt the Valgrind logs; reverting this reintroduces
no leak, and gets rid of the test failures it introduced :S
Not that I have ever seen these in practice, but if they
can't happen we might just as well assign the new tag
list. Merge properly to be on the safe side, and also
avoid a useless tag list copy in the normal case where
there is no tag list yet.
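A minimal sketch of that logic using 0.10-era tag list calls;
update_tags() is illustrative and takes ownership of the new list:

    #include <gst/gst.h>

    static GstTagList *
    update_tags (GstTagList * current, GstTagList * new_tags)
    {
      GstTagList *merged;

      if (current == NULL)
        /* Normal case: no list yet, just assign, no copy needed. */
        return new_tags;

      /* Merge properly to be on the safe side. */
      merged = gst_tag_list_merge (current, new_tags,
          GST_TAG_MERGE_REPLACE);
      gst_tag_list_free (current);
      gst_tag_list_free (new_tags);
      return merged;
    }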
A first hang happened when trying to locate a page backwards,
where we'd sync forever on the same page.
With that fixed, a second hang would happen after preparing an EOS
event: with no chain created yet to send it to, the pipeline
would stay idle forever.
An element error is now emitted for this case.
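For illustration, the kind of element error this refers to; the
message strings are hypothetical:

    #include <gst/gst.h>

    static void
    report_no_chain (GstElement * demux)
    {
      /* Post a fatal stream error on the bus so the pipeline stops
       * instead of idling forever. */
      GST_ELEMENT_ERROR (demux, STREAM, DEMUX,
          ("Could not find a chain to deliver EOS to"), (NULL));
    }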