We only need to initialize the mutex/cond once when creating the
element and then release them when we are done with the element.
This avoids weird "mutex_clear called when still locked" issues.
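A minimal sketch of that lifecycle, with hypothetical type and field
names rather than the actual gstoggdemux.c code: init once at
creation, clear only when the object is freed.

  #include <glib.h>

  typedef struct {
    GMutex seek_event_mutex;   /* hypothetical names, not the real fields */
    GCond seek_event_cond;
  } SeekState;

  static SeekState *
  seek_state_new (void)
  {
    SeekState *state = g_new0 (SeekState, 1);

    /* initialize exactly once, when the element is created */
    g_mutex_init (&state->seek_event_mutex);
    g_cond_init (&state->seek_event_cond);
    return state;
  }

  static void
  seek_state_free (SeekState *state)
  {
    /* clear only when we are completely done; the mutex must not be
     * locked here, which is what clearing it from state-change paths
     * could not guarantee */
    g_mutex_clear (&state->seek_event_mutex);
    g_cond_clear (&state->seek_event_cond);
    g_free (state);
  }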
There were still some races going on where seeking events wouldn't
be properly intercepted/executed by this thread.
* Instead of always waiting for the GCond to be signalled, first just
check whether there is an event available
* Take ownership of the event *while* the lock is taken and not
after releasing/reacquiring it
* Finally, acquire the lock at the very top and release it at the end
to make it a bit more streamlined
This removes the remaining issues with seeks not being executed.
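A hedged sketch of that locking pattern, using placeholder names
rather than the real gstoggdemux.c fields:

  #include <gst/gst.h>

  typedef struct {
    GMutex seek_event_mutex;
    GCond seek_event_cond;
    GstEvent *seek_event;      /* set by another thread */
    gboolean running;
  } SeekThread;

  static void
  seek_thread_loop (SeekThread *st)
  {
    /* acquire the lock once at the very top ... */
    g_mutex_lock (&st->seek_event_mutex);

    while (st->running) {
      GstEvent *event;

      /* ... first check whether an event is already pending and only
       * wait on the GCond when there is none ... */
      while (st->running && st->seek_event == NULL)
        g_cond_wait (&st->seek_event_cond, &st->seek_event_mutex);

      if (!st->running)
        break;

      /* ... take ownership of the event while the lock is still held,
       * so nobody can swap it out between unlock and re-lock ... */
      event = st->seek_event;
      st->seek_event = NULL;

      g_mutex_unlock (&st->seek_event_mutex);
      /* the real code would send the seek upstream here */
      gst_event_unref (event);
      g_mutex_lock (&st->seek_event_mutex);
    }

    /* ... and release it only at the end */
    g_mutex_unlock (&st->seek_event_mutex);
  }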
The previous branch will release the lock in the call to
gst_ogg_demux_seek_back_after_push_duration_check_unlock().
Only unlock it if we didn't call that function.
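A sketch of that control flow, with a hypothetical helper standing in
for gst_ogg_demux_seek_back_after_push_duration_check_unlock():

  #include <glib.h>

  /* like the real helper, this releases the lock before returning */
  static void
  seek_back_and_unlock (GMutex *lock)
  {
    /* ... issue the seek back here ... */
    g_mutex_unlock (lock);
  }

  static void
  handle_duration_check (GMutex *lock, gboolean need_seek_back)
  {
    g_mutex_lock (lock);

    if (need_seek_back) {
      /* this branch gives up the lock inside the helper */
      seek_back_and_unlock (lock);
      return;
    }

    /* ... normal processing ... */

    /* only unlock on the branch that did not call the helper */
    g_mutex_unlock (lock);
  }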
When calculating duration in push-mode we seek to a certain position
and discard any data until we get data from that requested position.
The problem is that relying solely on the offset to determine whether
we have reached the target position is wrong, since the source might
be fast enough to send us data from that target position *before* it
has processed the requested seek.
This would end up in a situation where:
* We think we're done with duration estimate
* We fire a seek back to "0" in the loop thread
* We resume normal processing
* ... except that we're still getting data from too far ahead which
we decide to process.
* And we start doing totally wrong granule/time/duration calculation
and pushing wrong data.
Instead of this confusion, wait until we receive data from the requested
seek. We do that by using the fact that the seqnum in
seek_event_drop_til will be non-zero until the SEGMENT corresponding
to the requested SEEK has been received.
Bonus: makes startup slightly faster
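A hedged sketch of that seqnum handshake; seek_event_drop_til comes
from the description above, the surrounding names are illustrative:

  #include <gst/gst.h>

  typedef struct {
    guint32 seek_event_drop_til;  /* non-zero while our seek is pending */
  } PushState;

  /* when sending the duration-check seek upstream, remember its seqnum */
  static void
  send_duration_seek (PushState *st, GstPad *sinkpad, GstEvent *seek)
  {
    st->seek_event_drop_til = gst_event_get_seqnum (seek);
    gst_pad_push_event (sinkpad, seek);
  }

  /* stop dropping only once the SEGMENT answering our seek arrives */
  static void
  handle_segment (PushState *st, GstEvent *segment)
  {
    if (st->seek_event_drop_til != 0 &&
        gst_event_get_seqnum (segment) == st->seek_event_drop_til)
      st->seek_event_drop_til = 0;
  }

  /* in the chain function, discard data while the handshake is pending */
  static gboolean
  should_drop (PushState *st)
  {
    return st->seek_event_drop_til != 0;
  }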
Code using the push_loop_thread (used for sending seeks) assumes
that the thread was properly started, except that this isn't always
true and the thread might not have completely started.
Instead wait for the thread to properly start before doing anything
else.
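A minimal sketch, with placeholder names, of waiting for the push
loop thread to have actually started:

  #include <glib.h>

  typedef struct {
    GMutex lock;               /* assumed initialized with g_mutex_init() */
    GCond cond;                /* assumed initialized with g_cond_init() */
    gboolean thread_started;
    GThread *push_loop_thread;
  } PushThread;

  static gpointer
  push_loop (gpointer data)
  {
    PushThread *pt = data;

    /* tell the creator that the thread is now really running */
    g_mutex_lock (&pt->lock);
    pt->thread_started = TRUE;
    g_cond_broadcast (&pt->cond);
    g_mutex_unlock (&pt->lock);

    /* ... loop handling seek requests ... */
    return NULL;
  }

  static void
  start_push_thread (PushThread *pt)
  {
    pt->push_loop_thread = g_thread_new ("ogg-push", push_loop, pt);

    /* block until the thread has completely started before anything
     * else assumes it can be used for sending seeks */
    g_mutex_lock (&pt->lock);
    while (!pt->thread_started)
      g_cond_wait (&pt->cond, &pt->lock);
    g_mutex_unlock (&pt->lock);
  }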
In some corner cases we end up with the building chain not being
properly tracked (and therefore not properly freed).
Add a FIXME so it can later be fixed, but for now just fix the leak
Fix various issues with reverse playback by clearing tracking
vars when working in reverse, and where possible using the
timestamp interpolation code to generate timestamps for
outgoing buffers. Make sure to mark things as discontinuous
only when looping backward to a new position and fix seeking
to the next page when starting.
In gst_ogg_demux_do_seek() when calculating the
keyframe time, account for a non-zero start-time
Handle a discontinuous first packet in
gst_ogg_demux_setup_first_granule() because that's pretty
normal after a seek. Also differentiate between a genuinely
truncated first packet and just bailing out early, by not using
granule = -1 as an error code.
Make the debug logs clearer about which timestamps are stream times
(PTS) and which are ogg timestamps.
If we can't find a valid granule near the end of the file, we
disable seeking. This guards against the whole file then being read
and the pipeline never reaching PLAYING.
https://bugzilla.gnome.org/show_bug.cgi?id=770314
This workaround tried to avoid an EOS event when seeking to the
end of an Ogg stream in order to find its duration. At some point,
an EOS event there would cause any queue2 upstream to pause and
not restart on a seek back to the beginning. This no longer appears
to be the case, so the workaround can be removed.
https://bugzilla.gnome.org/show_bug.cgi?id=767689
If the duration is not known from the chain, it might be known from
the startup seek.
This fixes a failure to seek.
Merged with a patch from Tim-Philipp Müller <tim@centricular.com>
https://bugzilla.gnome.org/show_bug.cgi?id=768991
Dropping a buffer because we have a seek pending is normal,
and will now happen when we trigger a seek while going through
the packets in a page. So this should not be an error.
A low bitrate stream which can pack more than 2 seconds of audio
in a page would cause the stream's position not to be updated often
enough, which would trigger a spurious "jump" via a GAP event.
Instead, we update the stream position after calculating the new
overall segment position.
https://bugzilla.gnome.org/show_bug.cgi?id=764966
The granulepos does not have the pre-skip subtracted while timestamps do,
and the last granulepos will be shorter by the number of samples that
should be dropped because of padding at the end.
As such, extrapolating the granule of the beginning of the first frame
will lead to a negative value, which is not an error but intentional.
https://bugzilla.gnome.org/show_bug.cgi?id=757153
This reverts commit 76647f2710.
Avoiding pull mode activation is a feature regression, and
demuxers should always use pull mode where that is possible,
e.g. if there's an upstream queue2 with a ring buffer or
a download buffer.
This patch made reverse playback no longer possible over http.
If the goal is to minimise seeks, then that can still be done
by making the demuxer behave differently in pull mode if
the SEQUENTIAL flag is set. If there are bugs, like the demuxer
needlessly scanning the entire file on start-up in pull mode,
then those should be fixed instead.
https://bugzilla.gnome.org/show_bug.cgi?id=746010
gst_event_replace() takes its own reference on the event so we should drop
ours after creating and storing an event using it.
This fixes leaks which can be reproduced using the
validate.http.media_check.vorbis_theora_1_ogg scenario.
https://bugzilla.gnome.org/show_bug.cgi?id=748247
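For illustration, a hedged sketch of the pattern (the variable and
function names around it are made up; gst_event_replace() and
gst_event_new_seek() are the real API):

  #include <gst/gst.h>

  static void
  store_seek_event (GstEvent **stored_seek, gdouble rate, gint64 start,
      gint64 stop)
  {
    GstEvent *event;

    event = gst_event_new_seek (rate, GST_FORMAT_TIME, GST_SEEK_FLAG_FLUSH,
        GST_SEEK_TYPE_SET, start, GST_SEEK_TYPE_SET, stop);

    /* gst_event_replace() takes its own reference (and drops the one
     * on any previously stored event) ... */
    gst_event_replace (stored_seek, event);

    /* ... so the reference we got from gst_event_new_seek() must be
     * dropped here, otherwise the event leaks */
    gst_event_unref (event);
  }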
When a stream has a skeleton index, the stream time is taken from that
index. However, when only part of the stream has been captured, the
index is invalid as its offsets are now wrong. To avoid this, we ignore
the index when its last offset points beyond the end of the stream
(when the stream's byte length is known).
https://bugzilla.gnome.org/show_bug.cgi?id=744070
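A minimal sketch of that sanity check, using hypothetical names:

  #include <glib.h>

  /* returns FALSE when the index's last byte offset points past the
   * known length of what we actually have, i.e. the index belongs to
   * a longer (uncut) file and must be ignored */
  static gboolean
  skeleton_index_is_usable (guint64 last_index_offset, gint64 byte_length)
  {
    if (byte_length <= 0)
      return TRUE;             /* total length unknown, nothing to check */

    return last_index_offset <= (guint64) byte_length;
  }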
gstoggdemux.c:1233:11: error: format specifies type 'long' but the argument has type 'ogg_int64_t' (aka 'long long') [-Werror,-Wformat]
granule);
^~~~~~~
https://bugzilla.gnome.org/show_bug.cgi?id=746512
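The kind of fix this implies, sketched with an illustrative debug
statement: print the 64-bit ogg_int64_t with G_GINT64_FORMAT instead
of a plain %ld, whose width differs between platforms.

  #include <glib.h>
  #include <ogg/ogg.h>

  static void
  log_granule (ogg_int64_t granule)
  {
    /* G_GINT64_FORMAT expands to the right conversion for a 64-bit
     * signed integer on every platform */
    g_print ("granule %" G_GINT64_FORMAT "\n", (gint64) granule);
  }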
The code that was calculating the start granule from packet durations
was interpreting a negative value as an error, but this is actually a
valid case, indicating clipping of data at the start.
https://bugzilla.gnome.org/show_bug.cgi?id=743900
If we get EOS when we're trying to build a chain, we disable seeking
and continue instead of posting an error. This can happen in corner
cases such as a stream with a video track that stops before the end.
https://bugzilla.gnome.org/show_bug.cgi?id=745980