When calculating duration in push-mode we seek to a certain position
and discard any data until we get data from that requested position.
The problem is that relying solely on the offset to decide whether
we have reached the target position is wrong, since the source might
be fast enough to send us data at that target offset *before* it has
processed the requested seek.
This would end up in a situation where:
* We think we're done with duration estimate
* We fire a seek back to "0" in the loop thread
* We resume normal processing
* ... except that we're still getting data from too far ahead which
we decide to process.
* And we start doing totally wrong granule/time/duration calculations
  and pushing wrong data.
Instead of this confusion, wait until we receive data from the requested
seek. We do that by using the fact that the seqnum in
seek_event_drop_til will be non-zero until the SEGMENT corresponding
to the requested SEEK has been received.
Bonus: makes startup slightly faster
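A minimal sketch of the gating logic, assuming a seek_event_drop_til
field that holds the pending seek's seqnum (the surrounding struct and
function names are hypothetical):

    #include <gst/gst.h>

    /* Hypothetical demuxer state; the real field lives in GstOggDemux. */
    typedef struct {
      guint32 seek_event_drop_til;  /* seqnum of pending seek, 0 if none */
    } DemuxState;

    static void
    on_event (DemuxState *state, GstEvent *event)
    {
      if (GST_EVENT_TYPE (event) == GST_EVENT_SEGMENT &&
          gst_event_get_seqnum (event) == state->seek_event_drop_til)
        state->seek_event_drop_til = 0;   /* our seek was processed */
    }

    static GstFlowReturn
    on_buffer (DemuxState *state, GstBuffer *buffer)
    {
      if (state->seek_event_drop_til != 0) {
        /* The source hasn't handled our seek yet: this data is from
         * the old position even if its offset matches the target. */
        gst_buffer_unref (buffer);
        return GST_FLOW_OK;
      }
      /* ... normal processing ... */
      return GST_FLOW_OK;
    }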
Code using the push_loop_thread (used for sending seeks) assumes
that the thread was properly started, but this isn't always true:
the thread might not have completely started yet.
Instead, wait for the thread to properly start before doing anything
else.
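A minimal sketch of the startup handshake using a GMutex/GCond pair
(names are illustrative; lock and cond are assumed initialized with
g_mutex_init()/g_cond_init()):

    #include <gst/gst.h>

    typedef struct {
      GMutex lock;
      GCond cond;
      gboolean thread_started;
      GThread *push_loop_thread;
    } PushState;

    static gpointer
    push_loop (gpointer data)
    {
      PushState *state = data;

      /* Signal that the thread is really running before entering the
       * loop, so code using push_loop_thread can rely on it. */
      g_mutex_lock (&state->lock);
      state->thread_started = TRUE;
      g_cond_signal (&state->cond);
      g_mutex_unlock (&state->lock);

      /* ... handle seek events ... */
      return NULL;
    }

    static void
    start_push_loop (PushState *state)
    {
      state->thread_started = FALSE;
      state->push_loop_thread = g_thread_new ("ogg-push", push_loop, state);

      /* Don't return until the thread has actually started. */
      g_mutex_lock (&state->lock);
      while (!state->thread_started)
        g_cond_wait (&state->cond, &state->lock);
      g_mutex_unlock (&state->lock);
    }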
If we are going to return a (potentially) 64-bit integer, don't use
a 32-bit one for the calculation, otherwise we could exceed the
maximum value of a 32-bit int.
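An illustration of the overflow (values and names are made up):

    #include <glib.h>

    static gint64
    bad_total_samples (gint rate, gint seconds)
    {
      /* The 32-bit multiplication overflows *before* the result is
       * widened to 64 bit for the return value. */
      return rate * seconds;
    }

    static gint64
    good_total_samples (gint rate, gint seconds)
    {
      /* Promote to 64 bit first so the multiplication can't overflow. */
      return (gint64) rate * seconds;
    }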
For stream mappers that don't set a specific granuleshift, it will
have the default value of -1.
Guard against that case and return the granule value as-is.
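A sketch of the guard, assuming the usual granulepos >> granuleshift
split (function name is illustrative):

    #include <glib.h>

    static gint64
    granulepos_to_granule (gint64 granulepos, gint granuleshift)
    {
      if (granuleshift < 0)
        return granulepos;   /* mapper didn't set a granuleshift */
      return granulepos >> granuleshift;
    }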
Since the default value of GstOggPad.map.map was 0 ... we would
end up using the wrong functions from mappers() if the stream wasn't
initialized yet.
Instead, use a blank/empty first entry.
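A sketch of the table fix (names are illustrative): entry 0 is
deliberately blank, so a zero-initialized map.map resolves to NULL
functions instead of some real mapper's:

    #include <glib.h>

    typedef struct {
      const gchar *id;
      gboolean (*setup_func) (gpointer pad);
    } Mapper;

    gboolean setup_theora (gpointer pad);
    gboolean setup_vorbis (gpointer pad);

    static const Mapper mappers[] = {
      { NULL, NULL },            /* uninitialized streams land here */
      { "theora", setup_theora },
      { "vorbis", setup_vorbis },
      /* ... */
    };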
In some corner cases we end up with the building chain not being
properly tracked (and therefore not properly freed).
Add a FIXME so it can be addressed properly later, but for now just
fix the leak.
... as expected later on when the end time is used to determine the
end running time. Otherwise the latter is computed as NONE and the
resulting text buffer is then used indefinitely.
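A sketch of the dependency this fixes, using standard GStreamer
segment arithmetic (function name is illustrative):

    #include <gst/gst.h>

    static GstClockTime
    text_buffer_end_running_time (const GstSegment *segment, GstBuffer *buf)
    {
      /* Without a valid duration the end running time comes out as
       * GST_CLOCK_TIME_NONE, which is the failure mode being fixed. */
      if (!GST_BUFFER_DURATION_IS_VALID (buf))
        return GST_CLOCK_TIME_NONE;

      return gst_segment_to_running_time (segment, GST_FORMAT_TIME,
          GST_BUFFER_PTS (buf) + GST_BUFFER_DURATION (buf));
    }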
Fix various issues with reverse playback by clearing tracking
variables when working in reverse and, where possible, using the
timestamp interpolation code to generate timestamps for outgoing
buffers. Make sure to mark things as discontinuous only when looping
backward to a new position, and fix seeking to the next page when
starting.
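A sketch of the backward-jump handling, with hypothetical field names
for the tracking state:

    #include <gst/gst.h>

    typedef struct {
      gint64 current_granule;   /* tracking vars for interpolation */
      GstClockTime last_ts;
      gboolean discont;
    } TrackState;

    static void
    on_backward_jump (TrackState *state)
    {
      /* Reset tracking so timestamp interpolation restarts cleanly,
       * and mark only this jump as discontinuous. */
      state->current_granule = -1;
      state->last_ts = GST_CLOCK_TIME_NONE;
      state->discont = TRUE;
    }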
In gst_ogg_demux_do_seek(), when calculating the keyframe time,
account for a non-zero start-time.
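A sketch of the adjustment (names are illustrative):

    #include <gst/gst.h>

    static GstClockTime
    keyframe_seek_time (GstClockTime granule_time, GstClockTime start_time)
    {
      if (granule_time == GST_CLOCK_TIME_NONE)
        return GST_CLOCK_TIME_NONE;
      /* With a non-zero start-time the granule time alone is not the
       * stream time to seek to; add the start-time on top. */
      return granule_time + start_time;
    }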
Handle a discontinuous first packet in
gst_ogg_demux_setup_first_granule() because that's pretty
normal after a seek. Also differentiate between a genuinely
truncated first packet and just bailing out early, by not using
granule = -1 as an error code.
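A sketch of separating the two outcomes instead of overloading
granule = -1 (names are illustrative):

    typedef enum {
      FIRST_GRANULE_OK,         /* granule determined */
      FIRST_GRANULE_TRUNCATED,  /* genuinely truncated first packet */
      FIRST_GRANULE_AGAIN       /* bail out early, retry with more data */
    } FirstGranuleResult;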
Make the debug logs clearer about which timestamps are stream times
(PTS) and which are ogg timestamps.
This is a follow-up commit to b95725c37e
* Resetting the decoder should only happen when we get a new
  initialization header (0x01) and not on the other headers.
* The initialized variable only gets set to TRUE once all headers
  have been parsed. Also check that the vorbis_info struct has been
  properly reset. Failure to do that would cause vorbisdec to error
  out if it got two initialization headers in a row (the first would
  configure the underlying library and the second one would error out
  because it's already initialized).
https://bugzilla.gnome.org/show_bug.cgi?id=779515
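A sketch of the header handling, assuming the libvorbis packet layout
(0x01 identification, 0x03 comment, 0x05 setup) and that vi/vc were
initialized once at startup; the function and parameter names are
illustrative:

    #include <glib.h>
    #include <vorbis/codec.h>

    static gboolean
    handle_header_packet (vorbis_info *vi, vorbis_comment *vc,
        ogg_packet *packet, gint *headers_seen, gboolean *initialized)
    {
      guint8 type = packet->packet[0];

      if (type == 0x01) {
        /* New identification header: reset the library state first,
         * otherwise a second 0x01 in a row errors out because the
         * underlying library is already configured. */
        vorbis_info_clear (vi);
        vorbis_comment_clear (vc);
        vorbis_info_init (vi);
        vorbis_comment_init (vc);
        *headers_seen = 0;
        *initialized = FALSE;
      }

      if (vorbis_synthesis_headerin (vi, vc, packet) < 0)
        return FALSE;

      /* Only consider the decoder initialized once all three headers
       * have been parsed. */
      if (++(*headers_seen) == 3)
        *initialized = TRUE;
      return TRUE;
    }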