When the first segment has position != 0 and position > max-size-time,
it immediately causes the multiqueue to signal overrun.
This can happen easily with adaptive streams when switching bitrates
and starting a new group. The segment for this new group will have
a position that is much greater than 0 and will lead to this issue.
This is particularly harmful when the adaptive stream uses mpegts,
which doesn't emit no-more-pads: it can happen that only one of the
stream pads has been added by the time the multiqueue overruns and
gets the group ready for exposing, so the user only gets audio or
video.
The solution is to fall back to the sink segment while the source pad
has no segment.
https://bugzilla.gnome.org/show_bug.cgi?id=729124
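A minimal sketch of the fallback in C (the helper name and the idea of
passing both segments in are illustrative, not the actual multiqueue
code):

    #include <gst/gst.h>

    /* Pick the segment used for time-level accounting: fall back to the
     * sink segment while the source pad has not seen a segment yet, so a
     * large first position is not mistaken for a huge amount of queued
     * data. */
    static const GstSegment *
    segment_for_time_level (const GstSegment * sink_segment,
        const GstSegment * src_segment)
    {
      if (src_segment->format == GST_FORMAT_UNDEFINED)
        return sink_segment;
      return src_segment;
    }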
If the toplevel bin is not a pipeline, we place the bin in a
pipeline. Also make sure that we connect to the deep-notify of this new
pipeline because we will g_signal_handler_disconnect() from it later.
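A rough sketch of the intent (the callback and helper names are
hypothetical):

    #include <gst/gst.h>

    /* Hypothetical deep-notify handler, only here so there is something
     * to connect and later disconnect. */
    static void
    on_deep_notify (GstObject * object, GstObject * orig,
        GParamSpec * pspec, gpointer user_data)
    {
    }

    static GstElement *
    ensure_toplevel_pipeline (GstElement * bin, gulong * notify_id)
    {
      GstElement *pipeline = bin;

      if (!GST_IS_PIPELINE (bin)) {
        /* wrap the bin in a pipeline of our own */
        pipeline = gst_pipeline_new (NULL);
        gst_bin_add (GST_BIN (pipeline), bin);
      }

      /* connect to the object we will g_signal_handler_disconnect()
       * from later */
      *notify_id = g_signal_connect (pipeline, "deep-notify",
          G_CALLBACK (on_deep_notify), NULL);

      return pipeline;
    }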
This ensures that the lock of the internal pad is held while
referencing its peer (= the target pad), which guarantees that the
peer is not going to be unlinked/destroyed in the meantime.
https://bugzilla.gnome.org/show_bug.cgi?id=725809
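A minimal illustration of that pattern (simplified, not the actual
ghostpad code):

    #include <gst/gst.h>

    /* Take a reference to the internal pad's peer (= the target pad)
     * while holding the internal pad's lock, so the peer cannot be
     * unlinked/destroyed in the meantime. */
    static GstPad *
    get_target_locked (GstPad * internal)
    {
      GstPad *target;

      GST_OBJECT_LOCK (internal);
      target = GST_PAD_PEER (internal);
      if (target)
        gst_object_ref (target);
      GST_OBJECT_UNLOCK (internal);

      return target;
    }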
Make a method to get the seeking threshold. If data is further away
than this threshold, we want to perform a seek upstream.
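In essence the decision is just a distance check; a hedged sketch
(forward direction only, names are made up, not the actual queue2
code):

    #include <glib.h>

    /* Seek upstream only when the requested offset is further away from
     * the current write position than the seeking threshold; otherwise
     * keep reading. */
    static gboolean
    should_seek_upstream (guint64 offset, guint64 write_pos,
        guint64 seek_threshold)
    {
      return offset > write_pos + seek_threshold;
    }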
When the current downloaded range can merge with the next range,
actually include the data of the next range into the current range
instead of discarding it. Also decide if we seek to the write position
of the merged range or continue reading.
When the single queue size was just bumped by 1 to allow more buffers to
be added, the buffers limit could be reduced to the current level when
setting the max-size-buffers property. This would result in a stall
since the queue would not grow anymore at this point.
Prevent this by not reducing a single queue size below the current
number of buffers + 1.
https://bugzilla.gnome.org/show_bug.cgi?id=712597
events_foreach adds an extra ref when giving the event to the user
function. In case the user unreffed it, this extra ref disappeared,
but events_foreach should still unref again to drop its own ref
before removing the event from the array.
https://bugzilla.gnome.org/show_bug.cgi?id=722467
When prerolling/buffering, multiqueue has its buffers limit set to 0,
which means it can take an unlimited number of buffers.
When prerolling/buffering finishes, the limit is set back to 5, but
only if the current level is lower than 5. It (almost) never is, so
prerolling/buffering has to wait until the much higher hard bytes and
time limits are reached.
This can lead to a very long startup time. This patch fixes it by
setting the single queues to max(current, new_value) instead of
ignoring the new value and leaving the limit infinite (0).
https://bugzilla.gnome.org/show_bug.cgi?id=712597
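A sketch of the new clamping (hypothetical helper, not the actual
multiqueue code):

    #include <glib.h>

    /* When buffering finishes, restore the buffers limit to the
     * configured value, but never drop it below what the queue already
     * holds, otherwise it could never grow again. */
    static guint
    restore_max_size_buffers (guint configured_limit,
        guint cur_level_buffers)
    {
      return MAX (configured_limit, cur_level_buffers);
    }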
TIME segments are being ignored and a standard initialized segment is
used instead. This causes issues such as not properly detecting
reverse playback or not clipping output based on the segment.
This seems to be a regression from one of the GstSegment/GstEvent
redesigns during the 0.10 -> 1.0 transition.
Baseparse stores buffers for reverse playback to push on the next
DISCONT; the issue was that it would never check for a discont in
passthrough mode, as it skips all real parsing. This test was created
to verify this issue and prevent it from happening again.
https://bugzilla.gnome.org/show_bug.cgi?id=721941
If in passthrough mode during reverse playback, do not accumulate
buffers, as baseparse will never check for the DISCONT flag to push
those buffers. Instead, just push buffers downstream as if it were
forward playback.
https://bugzilla.gnome.org/show_bug.cgi?id=721941
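A simplified sketch of the idea (not the actual baseparse code):

    #include <gst/gst.h>

    /* In passthrough mode during reverse playback, push the buffer
     * right away instead of queueing it for the next DISCONT, which
     * passthrough would never detect. */
    static GstFlowReturn
    handle_frame (GstPad * srcpad, const GstSegment * segment,
        gboolean passthrough, GstBuffer * buffer, GList ** queued)
    {
      if (segment->rate >= 0.0 || passthrough)
        return gst_pad_push (srcpad, buffer);

      /* reverse playback with real parsing: accumulate until the next
       * DISCONT */
      *queued = g_list_prepend (*queued, buffer);
      return GST_FLOW_OK;
    }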
It wasn't required; baseparse was only using it to check the media
caps and identify whether it was handling audio or video.
The pending_segment was removed and replaced by a checked_media
boolean, which is a more accurate name.
https://bugzilla.gnome.org/show_bug.cgi?id=721350
A GAP event is handled as an empty buffer by sinks, and they expect
to receive start-up events before GAP events (like a segment).
This is especially important if there is a GAP at the beginning of a
stream (before any buffers), so that the segment event can be pushed
downstream before the GAP.
https://bugzilla.gnome.org/show_bug.cgi?id=721350
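A small sketch of the ordering (stream-start and caps events omitted
for brevity):

    #include <gst/gst.h>

    /* Make sure a segment event has gone downstream before pushing a
     * GAP at the start of a stream, since sinks treat a GAP like an
     * empty buffer. */
    static void
    push_initial_gap (GstPad * srcpad, GstClockTime start,
        GstClockTime duration)
    {
      GstSegment segment;

      gst_segment_init (&segment, GST_FORMAT_TIME);
      gst_pad_push_event (srcpad, gst_event_new_segment (&segment));
      gst_pad_push_event (srcpad, gst_event_new_gap (start, duration));
    }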
This allows blocking a pad, adding a new blocking probe, removing the
first probe and then having the second probe called, which could then
decide that data flow should actually continue instead of blocking.
https://bugzilla.gnome.org/show_bug.cgi?id=721289
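The scenario, sketched with the public probe API (callback names are
made up):

    #include <gst/gst.h>

    static GstPadProbeReturn
    first_block_cb (GstPad * pad, GstPadProbeInfo * info,
        gpointer user_data)
    {
      return GST_PAD_PROBE_OK;  /* keep the pad blocked */
    }

    static GstPadProbeReturn
    second_block_cb (GstPad * pad, GstPadProbeInfo * info,
        gpointer user_data)
    {
      /* decide that data flow should actually continue */
      return GST_PAD_PROBE_REMOVE;
    }

    static void
    reblock (GstPad * pad)
    {
      gulong first_id;

      first_id = gst_pad_add_probe (pad,
          GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM, first_block_cb, NULL, NULL);
      gst_pad_add_probe (pad,
          GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM, second_block_cb, NULL, NULL);

      /* removing the first probe must now trigger the second one
       * instead of silently unblocking the pad */
      gst_pad_remove_probe (pad, first_id);
    }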
It was used for pad-alloc in 0.10 but is now completely unused and
unnecessary. All pad access is protected by the tee object lock and
by keeping another reference to the current pad.
Also, only check the data types for non-IDLE probes; when we are
idle, we obviously have no data type.
Previously we were calling IDLE probes during data flow whenever
a non-blocking probe would be called. The pad was usually not idle
at that time.
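For reference, this is how an IDLE probe is typically used (sketch
only):

    #include <gst/gst.h>

    static GstPadProbeReturn
    idle_cb (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
    {
      /* no buffer or event here: an IDLE probe carries no data type */
      /* ... safe place to relink or reconfigure ... */
      return GST_PAD_PROBE_REMOVE;
    }

    static void
    wait_for_idle (GstPad * pad)
    {
      gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_IDLE, idle_cb,
          NULL, NULL);
    }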