When many packets fit on a page, we may not see a granpos for
a while, and granpos interpolation can wrap the 'frames since last
keyframe' part of the granpos, generating a granpos which is smaller
than it should be.
This is fixed by detecting keyframe packets (at least for Theora)
and updating the last keyframe granpos accordingly.
This may still generate wrong granpos for streams which have a
Theora-like granpos (keyframes, a max keyframe distance and a count
of frames since the last keyframe), and which allow implicit granules
on packets. For these streams, a custom keyframe detection routine
should be plugged into their GstOggStream mapper.
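For reference, a minimal sketch of what such a keyframe check and a
Theora-style granpos look like; the helper names are illustrative and
this is not the actual oggdemux code (the granuleshift comes from the
stream's setup headers):

    #include <glib.h>

    /* Illustrative only: a Theora data packet is a keyframe when bit 0x40
     * of its first byte is clear; packets with bit 0x80 set are headers. */
    static gboolean
    theora_packet_is_keyframe (const guint8 * data, gsize size)
    {
      if (size == 0)
        return FALSE;               /* zero-length packets repeat the previous frame */
      if (data[0] & 0x80)
        return FALSE;               /* header packet, not a video frame */
      return (data[0] & 0x40) == 0; /* 0 means intra frame, i.e. a keyframe */
    }

    /* The low 'granuleshift' bits count frames since the last keyframe; if
     * interpolation never resets this count on a new keyframe, it wraps and
     * the resulting granpos goes backwards. */
    static gint64
    make_granpos (gint64 keyframe_frame, gint64 frames_since_keyframe,
        gint granuleshift)
    {
      return (keyframe_frame << granuleshift) | frames_since_keyframe;
    }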
https://bugzilla.gnome.org/show_bug.cgi?id=669164
It was matching non-header packets.
This fixes various leaks where buffers would be pushed onto a headers
list but never popped.
It might also fix corruption, as those buffers were silently dropped
from the output...
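As an illustration of the kind of check involved (a hedged sketch
assuming a Vorbis-style mapper, not the patched code): only packets
carrying the header magic should end up on the headers list.

    #include <glib.h>
    #include <string.h>

    /* Vorbis header packets start with type byte 1, 3 or 5 followed by the
     * "vorbis" magic; anything else is a data packet and must not be queued
     * as a header. Function name is illustrative. */
    static gboolean
    packet_is_vorbis_header (const guint8 * data, gsize size)
    {
      if (size < 7)
        return FALSE;
      if (data[0] != 0x01 && data[0] != 0x03 && data[0] != 0x05)
        return FALSE;
      return memcmp (data + 1, "vorbis", 6) == 0;
    }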
https://bugzilla.gnome.org/show_bug.cgi?id=669167
An ALSA sink may select a different rate (we use the _set_rate_near
API, which is not guaranteed to set the exact target rate).
The rest of the code already seems to handle this well, as output
from an 88200 Hz file seems to have the correct pitch when a
96 kHz rate is selected.
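A minimal sketch of the behaviour in question (illustrative code, not
the alsasink patch): snd_pcm_hw_params_set_rate_near() may pick a
nearby supported rate, so the caller has to read back the rate it
actually got.

    #include <alsa/asoundlib.h>

    static int
    configure_rate (snd_pcm_t * pcm, snd_pcm_hw_params_t * params,
        unsigned int requested)
    {
      unsigned int rate = requested;
      int err;

      /* may adjust 'rate' to the nearest rate the device supports */
      err = snd_pcm_hw_params_set_rate_near (pcm, params, &rate, NULL);
      if (err < 0)
        return err;

      if (rate != requested) {
        /* e.g. asking for 88200 Hz can yield 96000 Hz; everything downstream
         * must use 'rate', not 'requested' */
      }
      return (int) rate;
    }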
For the USE_TREMOLO case, GstVorbisDec doesn't have
a vb member. Besides, Tremolo's vorbis_dsp_synthesis()
expects a vorbis_dsp_state, not a vorbis_block, as its
first argument.
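Roughly, the two code paths differ as sketched below; the Tremolo
prototype and header path shown here are assumptions based on the
description above, not verified against the Tremolo sources.

    #ifdef USE_TREMOLO
    #include <Tremolo/ivorbiscodec.h>   /* assumed header location */

    static int
    decode_packet (vorbis_dsp_state * vd, ogg_packet * packet)
    {
      /* Tremolo decodes straight into the dsp state, no vorbis_block */
      return vorbis_dsp_synthesis (vd, packet, 1);
    }
    #else
    #include <vorbis/codec.h>

    static int
    decode_packet (vorbis_dsp_state * vd, vorbis_block * vb, ogg_packet * packet)
    {
      /* libvorbis synthesizes into a block first, then hands it to the dsp state */
      int ret = vorbis_synthesis (vb, packet);
      if (ret == 0)
        ret = vorbis_synthesis_blockin (vd, vb);
      return ret;
    }
    #endif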
When I first implemented push-mode seeking, I removed the chain
freeing there as the chain could be used later. The current code does
not seem to do that, though, so I'm restoring the previous freeing,
which plugs the leak while apparently not reintroducing use of
freed data with chained and normal files, both with gst-launch
playbin2 and Totem.
This reverts commit 5df30c1b90.
I must have dreamt the Valgrind logs: reverting this reintroduces
no leak and gets rid of the test failures it introduced :S
A first hang happened when trying to locate a page backwards:
we would sync forever on the same page.
With that fixed, a second hang happened after preparing an EOS
event: with no chain created yet to send it to, the pipeline
would stay idle forever.
An element error is now emitted in this case.
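A minimal sketch of the two ideas, with illustrative names rather than
the actual oggdemux functions: the backward scan must make strict
progress, and EOS with no chain to deliver it to is turned into an
element error.

    #include <gst/gst.h>

    /* 1) When scanning backwards for a page, always move the search offset
     *    strictly backwards, otherwise we can resync on the same page forever. */
    static gboolean
    scan_back_step (gint64 * offset, gint64 step)
    {
      if (*offset == 0)
        return FALSE;                 /* reached the start of the file, give up */
      *offset = MAX (0, *offset - step);
      return TRUE;
    }

    /* 2) If an EOS event was prepared but no chain exists to push it to,
     *    post an element error instead of leaving the pipeline idle. */
    static void
    handle_eos_without_chain (GstElement * demux)
    {
      GST_ELEMENT_ERROR (demux, STREAM, DEMUX, (NULL),
          ("EOS reached before any chain was found"));
    }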
Pads are initialized twice: when requesting pads and when
initializing collectpads. Avoid the double initialization by
checking, when creating request pads, whether collectpads are
still going to be initialized.
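A rough sketch of the idea (illustrative names, 0.10-style
gst_collect_pads_add_pad(); not the element's actual code): per-pad
data is only set up at request time if collectpads already exists,
otherwise the later collectpads initialization does it, so it never
happens twice.

    #include <gst/gst.h>
    #include <gst/base/gstcollectpads.h>

    static GstPad *
    create_request_pad (GstElement * element, GstCollectPads * collect,
        GstPadTemplate * templ)
    {
      GstPad *pad = gst_pad_new_from_template (templ, "sink_0");

      if (collect != NULL) {
        /* collectpads already set up: add the pad and init its data now */
        gst_collect_pads_add_pad (collect, pad, sizeof (GstCollectData));
      }
      /* else: the pad data will be initialized together with collectpads,
       * so it is not initialized twice */

      gst_element_add_pad (element, pad);
      return pad;
    }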