The qlock is released between popping a buffer from the queue
and pushing it. When this buffer causes the sink to wait in
preroll, a query can see that the queue is empty, get pushed onto
the queue, and then wait to be serviced. However, this will not
happen until after preroll, so the query blocks.
If upstream was waiting on buffering to reach 100% before
switching to PLAYING, a deadlock would ensue.
We fix it by refusing the query when buffering, as per Wim's
recommendation on IRC.
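A rough illustration of the approach, assuming a hypothetical MyQueue
element type with an is_buffering flag (not queue2's actual code):

    static gboolean
    handle_sink_query (GstPad * pad, GstObject * parent, GstQuery * query)
    {
      MyQueue *q = MY_QUEUE (parent);   /* hypothetical element type */

      /* Refusing serialized queries while buffering avoids the deadlock:
       * downstream is waiting in preroll and would only service the query
       * once buffering reaches 100%. */
      if (GST_QUERY_IS_SERIALIZED (query) && q->is_buffering) {
        GST_DEBUG_OBJECT (pad, "refusing serialized query while buffering");
        return FALSE;
      }

      return gst_pad_query_default (pad, parent, query);
    }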
Use the pad as the object for logging to get more context. Use
gst_pad_store_sticky_event() instead of sending the event. This avoids
a warning, as the pad is not yet linked here and we don't actually want
to send the event anyway.
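For illustration, a minimal sketch of storing a sticky event on a
not-yet-linked pad; the pad and event here are placeholders, not the
element's real ones:

    /* event is (transfer none) for gst_pad_store_sticky_event(),
     * so we keep and drop our own reference */
    GstEvent *event = gst_event_new_stream_start ("example-stream-id");

    if (gst_pad_store_sticky_event (srcpad, event) != GST_FLOW_OK)
      GST_WARNING_OBJECT (srcpad, "could not store sticky event");
    gst_event_unref (event);

The event stays on the pad and is sent automatically once the pad is
linked and data flow starts.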
Use the last result as a default when pushing an item from a single queue,
otherwise the status gets reset to _OK when pushing events.
This causes problems when mistakenly activating a not-linked stream
that upstream is ignoring because it is not being used (adaptive
scenarios): the multiqueue would post a buffering message on a pad
that won't receive buffers.
https://bugzilla.gnome.org/show_bug.cgi?id=725917
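A sketch of the idea, with made-up names rather than multiqueue's actual
single-queue code:

    static GstFlowReturn
    push_item (GstPad * srcpad, GstMiniObject * item, GstFlowReturn last_ret)
    {
      /* Default to the last result so that pushing an event does not
       * silently reset a NOT_LINKED status back to GST_FLOW_OK. */
      GstFlowReturn result = last_ret;

      if (GST_IS_BUFFER (item))
        result = gst_pad_push (srcpad, GST_BUFFER (item));
      else if (GST_IS_EVENT (item))
        gst_pad_push_event (srcpad, GST_EVENT (item));

      return result;
    }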
Add a method to get the seeking threshold. If data is further away than
this threshold we want to perform a seek upstream.
When the current downloaded range can merge with the next range,
actually include the data of the next range into the current range
instead of discarding it. Also decide if we seek to the write position
of the merged range or continue reading.
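As a sketch (the helper name and fields are hypothetical), the decision
boils down to comparing the requested offset against the current write
position plus the threshold:

    /* seek upstream only if the data is farther ahead than the threshold,
     * otherwise keep reading and wait for the range to fill up */
    static gboolean
    should_seek_upstream (guint64 offset, guint64 write_pos, guint64 threshold)
    {
      return offset > write_pos + threshold;
    }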
If the segment event is allowed to be pushed to all pads it
leads to a 'sticky event misordering: segment received before
caps' assertion in case the pad-negotiation-mode is set to
'active' or 'none'.
This patch fixes it by making all sticky events follow the
property, just like the caps event, to prevent misordering warnings.
When a new pad is activated the current sticky events on the
sinkpad are forwarded to it in the proper order.
https://bugzilla.gnome.org/show_bug.cgi?id=723266
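A simplified sketch of forwarding the sinkpad's sticky events to a newly
activated srcpad; the surrounding element code is omitted and the names
are placeholders:

    static gboolean
    forward_sticky_event (GstPad * pad, GstEvent ** event, gpointer user_data)
    {
      GstPad *srcpad = GST_PAD (user_data);

      gst_pad_push_event (srcpad, gst_event_ref (*event));
      return TRUE;              /* continue with the next sticky event */
    }

    /* when a new pad becomes active: */
    gst_pad_sticky_events_foreach (sinkpad, forward_sticky_event, srcpad);

gst_pad_sticky_events_foreach() walks the stored events in their proper
order, so caps end up forwarded before the segment.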
When the single queue size was just bumped by 1 to allow more buffers to
be added, the buffers limit could be reduced to the current level when
setting the max-size-buffers property. This would result in a stall
since the queue would not grow anymore at this point.
Prevent this by not reducing a single queue size below the current
number of buffers + 1.
https://bugzilla.gnome.org/show_bug.cgi?id=712597
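A minimal sketch of the clamping idea (not the actual property setter):

    /* never shrink the limit below what the queue already holds + 1,
     * otherwise the queue can no longer grow and the pipeline stalls */
    static guint
    clamp_max_size_buffers (guint requested, guint current_level)
    {
      return MAX (requested, current_level + 1);
    }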
In the case where one singlequeue is full and all others are not linked, the
growing of the full queue does not work correctly. The result depends on
whether the full queue is last in the queue list or not.
https://bugzilla.gnome.org/show_bug.cgi?id=722891
It is already stored inside the GstSegment struct and
was only duplicating information. Also removed some
weird position if/else that could change the
segment that was going to be pushed downstream.
When prerolling/buffering, multiqueue has its buffers limit set
to 0, which means it can take an infinite number of buffers.
When prerolling/buffering finishes, its limit is set back to 5, but
only if the current level is lower than 5. It almost never is, so
prerolling/buffering has to wait until it reaches the hard bytes and
time limits, which are much higher.
This can lead to a very long startup time. This patch fixes this
by setting the single queues to max(current, new_value) instead
of simply ignoring the new value and leaving the limit infinite (0).
https://bugzilla.gnome.org/show_bug.cgi?id=712597
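Illustrative only, with made-up variable names: when buffering or
prerolling finishes, the limit is restored to the larger of the two
values instead of staying at 0 (infinite):

    max_size_buffers = MAX (current_level_buffers, new_buffers_limit);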
The offset can be -1 when we are configured in TIME format. Instead of
failing the seek and erroring out, do what an offset of -1 is supposed
to do and simply read from the current offset.
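A sketch of the intended behaviour, assuming a pull-mode read path with
placeholder variable names:

    /* an offset of -1 means "continue from the current position",
     * not an error */
    if (offset == (guint64) -1)
      offset = current_reading_pos;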
It was used for pad-alloc in 0.10 but is now completely unused
and unnecessary. All pad access is protected by the tee object lock
and by keeping another reference to the current pad.
This makes buffering stop in case a stream switch happens. This is
important for adaptive streams that can disable not-linked streams
to avoid consuming the network bandwidth.
https://bugzilla.gnome.org/show_bug.cgi?id=719575
After patch bda406c4, the state of the singlequeue was set to OK, but nothing
would then wake up the thread, as the other wakeup functions only look at
singlequeues that are marked as not-linked.
https://bugzilla.gnome.org/show_bug.cgi?id=708200
* add many missing declarations to sections
* GstController has been removed, update docs
* skip GstIndex when generating documentation
* rephrase so gtkdoc doesn't imagine a return value
* add missing argument description for gst_context_new()
* document GstOutputSelectorPadNegotiationMode and move to header-file
https://bugzilla.gnome.org/show_bug.cgi?id=719614
Use gap events to advance the selector's pad position.
This is relevant to keep sync_streams mode working when one of the
streams doesn't have data all the time.
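A rough sketch of the mechanism; the position pointer stands in for the
selector's per-pad state, which is not shown here:

    static void
    advance_position_from_gap (GstEvent * event, GstClockTime * position)
    {
      GstClockTime timestamp, duration;

      /* a GAP event carries a timestamp and duration, enough to move the
       * pad's running position forward even though no buffer arrived */
      gst_event_parse_gap (event, &timestamp, &duration);
      if (GST_CLOCK_TIME_IS_VALID (timestamp)) {
        if (GST_CLOCK_TIME_IS_VALID (duration))
          timestamp += duration;
        *position = timestamp;
      }
    }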
Since the refactoring of GstContext (commits
qc9fa2771b508e9aaeecc700e66e958190476f,
a7f5dc8b8a,
690326f906dc82e41ea58b81cdb2e3e88b754,
d367dc1b0d4ecb37f4d27267e03d7bf0c6c06a6, and
82d158aed3f2e8545e1e7d35085085ff58f18) I am no longer able to get
a shared context for an element that is used twice in a pipeline.
I used the documentation and eglglessink as my reference for
implementing the GstContext logic.
As the code was tied to a hardware decoder, I have ported the
GstContext code to fakesink to demonstrate the problem. With the old
API a single ExampleMgr instance is created, but with the new
API each element creates its own instance.
In some cases the wait for more data was happening without updating
the buffering state, meaning the API user would not be able to notice
that it should pause the pipeline and update the UI to indicate
buffering; the video would likely stutter instead.
https://bugzilla.gnome.org/show_bug.cgi?id=707648
If the multiqueue has automatically grown chances are good that
we will cause the pipeline to starve if the maximum level is reduced
below that automatically grown size.
https://bugzilla.gnome.org/show_bug.cgi?id=707156
When a buffering query is handled it uses the get_buffering_percent()
function to get some statistics. Unfortunately this function also
calculates whether the queue should be buffering and adapts the
global queue2 state on transitions from/to buffering (including
posting a buffering message on the bus!).
This means that there is a race which can cause buffering messages
to never be posted if the global state changes happen as a result of a
query instead of resulting from bytes flowing in/out.
Spotted by Sjoerd Simons.
Change to only query state in get_buffering_percent() and update
state only in update_buffering().
https://bugzilla.gnome.org/show_bug.cgi?id=705332
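A sketch of the separation; MyQueue, compute_percent() and the fields are
hypothetical stand-ins for the real queue2 code:

    /* read-only: no state transitions and no bus messages here */
    static gint
    get_buffering_percent (MyQueue * q)
    {
      return q->buffering_percent;
    }

    /* the only place that changes state and posts buffering messages */
    static void
    update_buffering (MyQueue * q)
    {
      gint percent = compute_percent (q);   /* hypothetical helper */

      if (percent != q->buffering_percent) {
        q->buffering_percent = percent;
        gst_element_post_message (GST_ELEMENT (q),
            gst_message_new_buffering (GST_OBJECT (q), percent));
      }
    }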
When in download buffering mode, queue2 didn't check if a requested
range offset was in an undownloaded range before the currently
in-progress range, causing seeks to an earlier offset to, well, take
a while.
When asked about the scheduling flags first check with upstream and
simply add the _SEEKABLE flag when using a temporary file as storage.
This enables the forwarding of _SEQUENTIAL and _BANDWIDTH_LIMITED from
sources if needed.
https://bugzilla.gnome.org/show_bug.cgi?id=704927
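Sketched roughly (not queue2's exact handler), the idea looks like this:

    static gboolean
    handle_scheduling_query (GstPad * sinkpad, GstQuery * query,
        gboolean using_temp_file)
    {
      GstSchedulingFlags flags;
      gint minsize, maxsize, align;

      /* ask upstream first so flags like _SEQUENTIAL and
       * _BANDWIDTH_LIMITED survive */
      if (!gst_pad_peer_query (sinkpad, query))
        return FALSE;

      gst_query_parse_scheduling (query, &flags, &minsize, &maxsize, &align);
      if (using_temp_file)
        flags |= GST_SCHEDULING_FLAG_SEEKABLE;  /* temp file makes us seekable */
      gst_query_set_scheduling (query, flags, minsize, maxsize, align);

      return TRUE;
    }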
A new active pad might not be notified in some cases, which results
in the current track number not being set in playbin.
The active-pad notification is only sent from the chain and sink_event
functions, and only when the buffer or event that triggered the active
pad selection came from the newly activated pad, so in any other case
the notification is never sent.
https://bugzilla.gnome.org/show_bug.cgi?id=704691
We must be certain that we don't cause a deadlock when blocking the serialized
queries. One such deadlock can happen when we are buffering and downstream is
blocked in preroll and a serialized query arrives. Downstream will not unblock
(and allow our query to execute) until we complete buffering, and buffering
will not complete until we can answer the query.
https://bugzilla.gnome.org/show_bug.cgi?id=702840
Otherwise we might get deadlocks caused by lock order inversion:
during the chain function the stream lock is taken first and then the
inputselector lock, while during pad release we first took the
inputselector lock and then deactivating the pad would take the stream
lock. There's no reason why the inputselector lock should be held while
deactivating and removing the pad; it's only needed before.
https://bugzilla.gnome.org/show_bug.cgi?id=704002
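A simplified sketch of the corrected ordering; the lock macro and names
are placeholders for the inputselector's own lock:

    GST_OBJECT_LOCK (sel);
    /* unlink and remove the pad from the internal lists while locked */
    GST_OBJECT_UNLOCK (sel);

    /* deactivating the pad takes its stream lock, so do it only after
     * our own lock has been released to avoid the inversion */
    gst_pad_set_active (pad, FALSE);
    gst_element_remove_pad (GST_ELEMENT (sel), pad);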
We must be certain that we don't cause a deadlock when blocking the serialized
queries. One such deadlock can happen when we are buffering and downstream is
blocked in preroll and a serialized query arrives. Downstream will not unblock
(and allow our query to execute) until we complete buffering, and buffering
will not complete until we can answer the query.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=702840
During FLUSH_START the query needs to be unblocked already, otherwise
it can lead to deadlocks if the FLUSH_START is the result of something
done from the streaming thread of the srcpad (the queue will never be
emptied!).
Fixes some deadlocks during flushing.
Also store queue items differently so we don't accidentally read
already-unreffed queries when flushing. Queries are owned by
upstream, not by us.
In the case where the source has no caps, caps must be sent before the
segment. This fixes a few unit tests that were failing due to the new
misordering warning.
https://bugzilla.gnome.org/show_bug.cgi?id=699968
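For illustration (the pad, caps and segment are placeholders), the
required ordering is simply:

    /* caps are sticky and must precede the segment */
    gst_pad_push_event (srcpad, gst_event_new_caps (caps));
    gst_pad_push_event (srcpad, gst_event_new_segment (&segment));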