gcc on OS X complains about ret being used uninitialized in
this function, and it is right.
Don't leak the element ref when returning early because the
newsegment event is not in TIME format.
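A minimal sketch of the shape of both fixes, assuming a sink event handler
along these lines (the function and variable names are illustrative, not the
actual oggmux code):

    #include <gst/gst.h>

    /* Illustrative only, not the actual oggmux handler. */
    static gboolean
    handle_sink_event (GstPad * pad, GstEvent * event)
    {
      GstElement *element = gst_pad_get_parent_element (pad);
      gboolean ret = FALSE;     /* initialized up front, silencing the warning */
      gboolean update;
      gdouble rate;
      GstFormat format;
      gint64 start, stop, position;

      if (GST_EVENT_TYPE (event) == GST_EVENT_NEWSEGMENT) {
        gst_event_parse_new_segment (event, &update, &rate, &format,
            &start, &stop, &position);
        if (format != GST_FORMAT_TIME) {
          /* drop the element ref before the early return, or it leaks */
          gst_object_unref (element);
          gst_event_unref (event);
          return ret;
        }
      }

      ret = gst_pad_event_default (pad, event);
      gst_object_unref (element);
      return ret;
    }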
Remove the android/ top dir
Fix the Makefile.am to be androgenized
To build GStreamer for Android we now use androgenizer, which generates
the needed Android.mk files.
Androgenizer can be found here:
http://git.collabora.co.uk/?p=user/derek/androgenizer.git
Also always initialize it in TIME format. We require TIME segments
in oggmux anyway; newsegment events in other formats are dropped and
an open-ended segment starting at 0 is assumed.
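A sketch of the intended segment handling, assuming the parsed newsegment
values are already available and that 'segment' stands in for the muxer's
per-pad segment field:

    #include <gst/gst.h>

    /* Only TIME newsegments are applied; anything else resets the segment
     * to an open-ended TIME segment starting at 0. */
    static void
    apply_newsegment (GstSegment * segment, gboolean update, gdouble rate,
        GstFormat format, gint64 start, gint64 stop, gint64 position)
    {
      if (format == GST_FORMAT_TIME)
        gst_segment_set_newsegment (segment, update, rate, format,
            start, stop, position);
      else
        gst_segment_init (segment, GST_FORMAT_TIME);
    }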
Theora and vorbis use running time (which is correct) for calculating
the granulepos for their ogg packets. Oggmux, however, used
timestamps to order the received buffers.
This patch makes it use the running time to compare buffer times
and also to timestamp pushed buffers.
Some bits of the code still use timestamps, but they are only
used to calculate durations, so it should be fine.
https://bugzilla.gnome.org/show_bug.cgi?id=643775
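A sketch of the conversion, assuming a per-pad GstSegment in TIME format;
'segment' and 'buf' are placeholders for the muxer's own state. The value
returned here is what gets compared across pads and written as the output
buffer timestamp:

    #include <gst/gst.h>

    static GstClockTime
    buffer_running_time (GstSegment * segment, GstBuffer * buf)
    {
      GstClockTime ts = GST_BUFFER_TIMESTAMP (buf);

      if (!GST_CLOCK_TIME_IS_VALID (ts))
        return GST_CLOCK_TIME_NONE;

      /* use running time, not the raw timestamp, so the ordering matches
       * what theoraenc/vorbisenc use when computing granulepos */
      return gst_segment_to_running_time (segment, GST_FORMAT_TIME, ts);
    }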
Handle the case where the ogg mapper doesn't support one of the accepted
input formats (although it really should); this saves us error handling
for that case. Also log caps properly.
https://bugzilla.gnome.org/show_bug.cgi?id=629196
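For the logging part, GST_PTR_FORMAT serializes the caps into the debug
log instead of printing a pointer value; a minimal sketch (the element and
function names are placeholders):

    #include <gst/gst.h>

    static void
    log_negotiated_caps (GstElement * element, GstCaps * caps)
    {
      GST_DEBUG_OBJECT (element, "negotiated caps %" GST_PTR_FORMAT, caps);
    }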
Using the IN_CAPS flag for this is brittle, and will fail if either
vorbisparse or vorbistag (which is itself based on vorbisparse) is
inserted between oggdemux and oggmux. Possibly other elements
too (e.g. theoraparse).
Using oggstream ensures we Get It Right More Often Than Not.
https://bugzilla.gnome.org/show_bug.cgi?id=629196
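The idea is to look at the packet itself rather than trust a flag that
upstream elements may or may not preserve. A simplified stand-in for the
per-codec checks in oggstream, covering only vorbis and theora here:

    #include <gst/gst.h>
    #include <string.h>

    /* Vorbis header packets start with 1/3/5 followed by "vorbis";
     * theora header packets have the top bit of the first byte set. */
    static gboolean
    packet_looks_like_header (const guint8 * data, gsize size)
    {
      if (size >= 7 && memcmp (data + 1, "vorbis", 6) == 0 &&
          (data[0] == 1 || data[0] == 3 || data[0] == 5))
        return TRUE;            /* ident, comment or setup header */

      if (size >= 7 && memcmp (data + 1, "theora", 6) == 0 &&
          (data[0] & 0x80) != 0)
        return TRUE;            /* theora header packet */

      return FALSE;
    }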
Discontinuities are automatically signalled by oggdemux at the start
of a new stream. When oggmux is yet to output actual data pages,
do not signal these discontinuities in the ogg stream.
This patch may miss some actual discontinuities at the very start of
a stream, but avoids the spurious missing pages when encoding happens
normally.
A better fix might involve finding a way to distinguish between actual
data discontinuities and discontinuities merely marking the start of
a new stream.
Fixes an issue with ogg page numbering (would skip a number for no
reason, which then looks like a packet was lost somewhere) when
re-muxing an ogg stream, e.g. when re-tagging in rhythmbox.
https://bugzilla.gnome.org/show_bug.cgi?id=629196
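A sketch of the intended behaviour, where 'data_pushed' is an illustrative
per-pad flag and not the actual field name in gstoggmux.c:

    #include <gst/gst.h>

    typedef struct
    {
      gboolean data_pushed;     /* TRUE once a real data page went out */
    } OggPadState;

    static gboolean
    should_mark_discont (OggPadState * state, GstBuffer * buf)
    {
      if (!GST_BUFFER_FLAG_IS_SET (buf, GST_BUFFER_FLAG_DISCONT))
        return FALSE;

      /* oggdemux flags the first buffer of each stream as a discont;
       * before any data page has been produced that is not a real gap,
       * so don't skip a page number for it */
      return state->data_pushed;
    }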
This was causing keyframe_granule to be set to 0 for all streams
when seeking to the beginning of the stream, i.e., at the
beginning of playback. Fixes #619778.
Instead, use either 0 or 1, depending on the bitstream version, which
gives the correct result for streams which aren't cut off at the start.
This allows that function to not return negative granpos.
https://bugzilla.gnome.org/show_bug.cgi?id=638276
The offset part of the granpos is not a sign of the newer encoding.
Use the version number instead.
This fixes the criticals thrown by theoraparse, and (at last) the
remaining part of #553244.
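The version check amounts to this: per the Theora spec, bitstreams of
version 3.2.1 and newer count the first frame as 1 in the granulepos,
older ones as 0. A sketch, with the version fields taken from the parsed
identification header:

    #include <glib.h>

    static gint
    theora_granule_offset (guint8 maj, guint8 min, guint8 sub)
    {
      if (maj > 3 || (maj == 3 && (min > 2 || (min == 2 && sub >= 1))))
        return 1;
      return 0;
    }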
Allocate buffers using gst_buffer_new_and_alloc() instead of
gst_pad_alloc_buffer_and_set_caps(), as the latter can cause the pad
to block. We don't want that, since a block at start would prevent
subsequent pads from being fed, and all pads must be fed for playback
to start.
This fixes autoplugging of the tiger element and other things.
https://bugzilla.gnome.org/show_bug.cgi?id=637822
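A sketch of the replacement allocation path; the helper is illustrative
but uses only stock 0.10 API:

    #include <gst/gst.h>
    #include <string.h>

    /* Allocate the header buffer locally instead of asking downstream via
     * gst_pad_alloc_buffer_and_set_caps(), which can block. */
    static GstBuffer *
    make_header_buffer (const guint8 * data, guint size, GstCaps * caps)
    {
      GstBuffer *buf = gst_buffer_new_and_alloc (size);

      memcpy (GST_BUFFER_DATA (buf), data, size);
      gst_buffer_set_caps (buf, caps);  /* still carry the pad caps */
      return buf;
    }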
Oggdemux will currently try to pad alloc a buffer from the peer when it is
reading the headers. This is a relic from the time when we had an internal
parser and needs to be removed at some point.
The problem is that when there is no peer pad yet (which is normal while
collecting headers), we should still continue to parse all the packets of a
page instead of erroring out on NOT_LINKED.
Fixes #632167
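A sketch of the flow-return handling this implies; 'collecting_headers'
is an illustrative flag, not the actual field name:

    #include <gst/gst.h>

    /* While the headers of a chain are being collected there is no peer
     * yet, so NOT_LINKED from a push must not abort parsing the rest of
     * the page. */
    static GstFlowReturn
    combine_header_flow (gboolean collecting_headers, GstFlowReturn ret)
    {
      if (collecting_headers && ret == GST_FLOW_NOT_LINKED)
        return GST_FLOW_OK;
      return ret;
    }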
Only keep the last valid granulepos we see when scanning the last
pages. It is possible that the last page that we inspect has a -1 granulepos, in
which case we want to keep the previous valid time instead.
Fixes #631703
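A sketch of the scan step, using plain libogg calls; 'last_granpos' is
whatever the demuxer uses to remember the last usable value:

    #include <ogg/ogg.h>

    /* Remember the last granulepos that is not -1; the final page we
     * inspect may carry no granulepos at all. */
    static void
    update_last_granulepos (ogg_page * page, ogg_int64_t * last_granpos)
    {
      ogg_int64_t granpos = ogg_page_granulepos (page);

      if (granpos != -1)
        *last_granpos = granpos;
    }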
Files with a skeleton, or other files with a stream that ends before the
end of the chain, would start playing from the end of the chain when trying
to seek with a negative rate to a position between the end of any stream
and the end of the chain.
This is due to the loop in _do_seek() assuming that pages will be encountered
for all streams shortly after the place where we want to seek, as found by
do_binary_search().
In the first iteration of the loop, stream ends are now checked against the
time of the current page.
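A heavily simplified sketch of that first-iteration check; the inputs stand
in for values the real _do_seek() loop derives from each stream and from
the page found by the binary search:

    #include <gst/gst.h>

    static GstClockTime
    reverse_seek_start_time (GstClockTime stream_last_time,
        GstClockTime current_page_time)
    {
      /* if this stream already ended before the page we landed on, start
       * from the stream's own end rather than waiting for a page that
       * will never come */
      if (GST_CLOCK_TIME_IS_VALID (stream_last_time) &&
          stream_last_time < current_page_time)
        return stream_last_time;

      return current_page_time;
    }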