There is no reason I can see to set mq->buffering = TRUE when
use_buffering is set; the code here also calls update_buffering(), which
will set mq->buffering = TRUE if this is warranted because of low buffer
levels.
https://bugzilla.gnome.org/show_bug.cgi?id=745937
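A minimal sketch of the resulting flow, using the field and function names from the message (everything else is hypothetical):

    if (mq->use_buffering) {
      /* no need to force mq->buffering = TRUE here:
       * update_buffering() below sets it when buffer levels
       * are actually low */
      update_buffering (mq, sq);
    }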
When using a negative rate (rate being segment.rate * segment.applied_rate),
we will end up reporting decreasing positions; therefore adjust the clamping
against the last reported value accordingly.
Fixes position reporting with applied_rate < 0.0.
https://bugzilla.gnome.org/show_bug.cgi?id=738092
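A minimal sketch of the direction-aware clamping, with hypothetical helper and variable names (only segment.rate and segment.applied_rate come from the message):

    #include <gst/gst.h>

    /* hypothetical helper: clamp a position against the last
     * reported value, taking the playback direction into account */
    static gint64
    clamp_reported_position (const GstSegment * segment, gint64 position,
        gint64 last_position)
    {
      gdouble rate = segment->rate * segment->applied_rate;

      if (rate > 0.0)
        return MAX (position, last_position);   /* must not decrease */
      else
        return MIN (position, last_position);   /* must not increase */
    }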
The documentation states that gst_element_send_event is to "send an event
to an element".
Therefore we *send* upstream events to a source pad and downstream events
to a sink pad.
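A usage sketch: a seek is an upstream event, so gst_element_send_event() dispatches it via a source pad (a downstream event such as EOS would go to a sink pad instead):

    #include <gst/gst.h>

    static gboolean
    seek_to_start (GstElement * element)
    {
      GstEvent *seek = gst_event_new_seek (1.0, GST_FORMAT_TIME,
          GST_SEEK_FLAG_FLUSH, GST_SEEK_TYPE_SET, 0,
          GST_SEEK_TYPE_NONE, GST_CLOCK_TIME_NONE);

      /* gst_element_send_event() takes ownership of the event */
      return gst_element_send_event (element, seek);
    }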
When comparing percentage values, compare on the 0-100 scale, as the value
has already been made relative to 0-high_percent. Otherwise we mark the
queue as not buffering and report 50% to the user. This leads to a
buffering stall: the user assumes the queue is still buffering, while the
queue thinks it isn't.
https://bugzilla.gnome.org/show_bug.cgi?id=736969
The percent value stored in multiqueue's queues is the percentage from 0
to 100 (of max-size-*) and should be compared with the requested limit
(high_percentage) set by the user, not with 100%, to check whether
buffering should stop. Otherwise we only stop buffering when the
queue gets completely full.
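A sketch of the two conventions side by side, with hypothetical names (high_percent/high_percentage are the user-set properties):

    #include <glib.h>

    /* queue2: the stored percent was already scaled against
     * high_percent, so the stop condition compares with 100 */
    static gboolean
    queue2_buffering_done (gint fill_percent, gint high_percent)
    {
      gint scaled = fill_percent * 100 / high_percent;
      return scaled >= 100;
    }

    /* multiqueue: the stored percent is the raw 0-100 fill level
     * of max-size-*, so compare with the requested limit */
    static gboolean
    multiqueue_buffering_done (gint fill_percent, gint high_percentage)
    {
      return fill_percent >= high_percentage;
    }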
Previously, dropping a query from a pad probe would deem the
query successful, and the caller might then assume the query's
results are valid and dereference an invalid object such as a
GstCaps.
We now assume dropped queries did not succeed. Dropped events
and buffers are still deemed a success.
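A sketch of the situation, assuming a probe installed with gst_pad_add_probe() for GST_PAD_PROBE_TYPE_QUERY_DOWNSTREAM:

    #include <gst/gst.h>

    /* drops every downstream query; with this change the dropped
     * query is reported as failed to the caller */
    static GstPadProbeReturn
    drop_queries (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
    {
      if (GST_PAD_PROBE_INFO_TYPE (info) & GST_PAD_PROBE_TYPE_QUERY_DOWNSTREAM)
        return GST_PAD_PROBE_DROP;
      return GST_PAD_PROBE_OK;
    }

    static void
    query_caps_safely (GstPad * pad)
    {
      GstQuery *query = gst_query_new_caps (NULL);

      if (gst_pad_query (pad, query)) {
        GstCaps *caps;

        /* only parse the result when the query really succeeded */
        gst_query_parse_caps_result (query, &caps);
      }
      gst_query_unref (query);
    }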
If a task thread is calling pause on itself and the
controlling/"main" thread stops the task, it could end up in a race
where gst_task_func loops and then checks for paused after the
controlling thread has just changed the task state to stopped.
Hence the task would actually call func again even though it was
both paused and stopped.
https://bugzilla.gnome.org/show_bug.cgi?id=740001
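A minimal repro sketch of the racing calls (the globals are just for brevity):

    #include <gst/gst.h>

    static GstTask *task;
    static GRecMutex task_lock;

    static void
    task_func (gpointer user_data)
    {
      /* the task thread pauses its own task */
      gst_task_pause (task);
    }

    static void
    run (void)
    {
      g_rec_mutex_init (&task_lock);
      task = gst_task_new (task_func, NULL, NULL);
      gst_task_set_lock (task, &task_lock);
      gst_task_start (task);

      /* controlling thread: before the fix, this stop could race with
       * gst_task_func re-checking "paused", so task_func could run
       * once more even though the task was paused and stopped */
      gst_task_stop (task);
      gst_task_join (task);
      gst_object_unref (task);
    }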
Fixes an 'Attempt to unlock mutex that was not locked'
warning with newer GLib versions when the sink is shut down in
certain situations. Sometimes triggered by the decodebin
test_reuse_without_decoders unit test in -base,
especially on slower machines.
The point of this example is to show how to set caps
on the source pad once they have been set on the sink pad.
So, in passthrough mode, the caps are simply copied to the
source pad.
https://bugzilla.gnome.org/show_bug.cgi?id=738153
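A sketch of what the example boils down to, with a hypothetical MyFilter type that owns a srcpad:

    #include <gst/gst.h>

    static gboolean
    my_filter_sink_event (GstPad * pad, GstObject * parent, GstEvent * event)
    {
      MyFilter *filter = MY_FILTER (parent);  /* hypothetical type */

      switch (GST_EVENT_TYPE (event)) {
        case GST_EVENT_CAPS:{
          GstCaps *caps;

          gst_event_parse_caps (event, &caps);
          /* passthrough: copy the sink pad caps to the source pad */
          gst_pad_set_caps (filter->srcpad, caps);
          gst_event_unref (event);
          return TRUE;
        }
        default:
          return gst_pad_event_default (pad, parent, event);
      }
    }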
With two threads running, one executing the timer and one unscheduling it,
the unscheduled status set by the second thread is sometimes overwritten
by the first.
https://bugzilla.gnome.org/show_bug.cgi?id=737999
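A sketch of the racing pattern (clock is assumed to be a valid GstClock):

    #include <gst/gst.h>

    static gpointer
    unschedule_thread (gpointer data)
    {
      gst_clock_id_unschedule ((GstClockID) data);
      return NULL;
    }

    static void
    race (GstClock * clock)
    {
      GstClockID id = gst_clock_new_single_shot_id (clock,
          gst_clock_get_time (clock) + GST_SECOND);
      GThread *thread = g_thread_new ("unschedule", unschedule_thread, id);

      /* this thread executes the timer; if the other thread wins,
       * the wait must return GST_CLOCK_UNSCHEDULED, and the timer
       * thread must not overwrite that status with its own result */
      GstClockReturn ret = gst_clock_id_wait (id, NULL);

      g_print ("wait returned %d\n", ret);
      g_thread_join (thread);
      gst_clock_id_unref (id);
    }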
If we are pushing a serialized query into a queue and the queue is
filled, we will end up in a deadlock. We need to release the lock before
pushing and acquire it again afterwards.
https://bugzilla.gnome.org/show_bug.cgi?id=737794
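A sketch of the unlock/relock pattern, with hypothetical types and a hypothetical blocking_enqueue():

    /* release the lock around the potentially blocking push; with the
     * lock held, the streaming thread could never drain the queue and
     * both threads would wait on each other forever */
    g_mutex_lock (&q->lock);
    /* ... decide that the serialized query must be queued ... */
    g_mutex_unlock (&q->lock);

    res = blocking_enqueue (q, query);      /* may block until drained */

    g_mutex_lock (&q->lock);
    /* re-check state: e.g. flushing may have started meanwhile */
    g_mutex_unlock (&q->lock);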
This might create deadlocks, so we need to avoid holding element-specific
locks while posting messages.
For example, a deadlock will happen if, while posting the message,
someone connected to the bus (sync) tries to DOT the pipeline.
https://bugzilla.gnome.org/show_bug.cgi?id=737102
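A sketch of how such a deadlock arises, assuming a synchronous bus handler installed with gst_bus_set_sync_handler():

    #include <gst/gst.h>

    /* dumping the pipeline graph takes element locks; if an element
     * posts a message while holding its own lock, handler and element
     * deadlock against each other */
    static GstBusSyncReply
    sync_handler (GstBus * bus, GstMessage * msg, gpointer user_data)
    {
      GST_DEBUG_BIN_TO_DOT_FILE (GST_BIN (user_data),
          GST_DEBUG_GRAPH_SHOW_ALL, "snapshot");
      return GST_BUS_PASS;
    }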
It might cause deadlocks to post messages while holding the queue2
lock. To avoid this, a new boolean flag is set whenever a new
buffering percent is found, and the message is posted after the lock
is released.
To make sure the buffering messages are posted in the right order, messages
are posted while holding a separate lock. This prevents two threads from
trying to post messages at the same time.
https://bugzilla.gnome.org/show_bug.cgi?id=736969
It might cause deadlocks to post messages while holding the multiqueue
lock. To avoid this, a new boolean flag is set whenever a new buffering
percent is found, and the message is posted after the lock is released.
To make sure the buffering messages are posted in the right order, messages
are posted while holding a separate lock. This prevents two threads from
trying to post messages at the same time.
https://bugzilla.gnome.org/show_bug.cgi?id=736295
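Both commits implement the same pattern; an illustrative sketch with hypothetical fields and helper names:

    #include <gst/gst.h>

    static void
    post_buffering_safely (GstElement * element, GMutex * object_lock,
        GMutex * posting_lock, gboolean * percent_changed, gint percent)
    {
      gboolean post;

      g_mutex_lock (object_lock);
      post = *percent_changed;          /* flag set under the object lock */
      *percent_changed = FALSE;
      g_mutex_unlock (object_lock);

      if (post) {
        /* the posting lock keeps two threads from delivering
         * buffering messages out of order */
        g_mutex_lock (posting_lock);
        gst_element_post_message (element,
            gst_message_new_buffering (GST_OBJECT (element), percent));
        g_mutex_unlock (posting_lock);
      }
    }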
Imagine the following 'pipeline'
              ---------------
           p1/| 'fullqueue' |--- 'laggy' downstream
 ---------  / |             |
-| demuxer |  | multiqueue  |
 ---------  \ |             |
           p2\| 'emptyqueue'|--- 'fast' downstream
              ---------------
In the case where the downstream of one single queue (fullqueue) has
(a lot of) latency (for example for reverse playback with video), we can
end up with the other SingleQueue (emptyqueue) emptied before fullqueue
gets unblocked. In the meantime, the demuxer tries to push on fullqueue
and blocks there.

In that case the current code will post a BUFFERING message on the bus when
emptyqueue gets emptied, which leads to the application setting the pipeline
state to PAUSED. So now we end up in a situation where the 'laggy' downstream
is prerolled and will not unblock anymore because the pipeline is set to
PAUSED, fullqueue never gets a chance to be emptied, and emptyqueue cannot
get filled anymore, so no more BUFFERING messages will be posted and the
pipeline is stuck in PAUSED for eternity.

Making sure that we do not try to "buffer" if one of the single queues
does not need buffering prevents this situation from happening, while still
leaving the opportunity for buffering in all other cases.
This implements a new logic where all single queues must need
buffering for the multiqueue to actually report that buffering is needed,
taking the maximum buffering level of the single queues as the reference
point.
https://bugzilla.gnome.org/show_bug.cgi?id=734412
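A hypothetical sketch of the new decision logic:

    #include <glib.h>

    typedef struct {
      gint percent;               /* 0-100 fill level */
      gboolean needs_buffering;
    } SingleQueue;

    /* only report buffering when every single queue needs it, and
     * report the maximum fill level among them */
    static gboolean
    multiqueue_needs_buffering (SingleQueue * queues, guint n_queues,
        gint * percent)
    {
      gint max_percent = 0;
      guint i;

      for (i = 0; i < n_queues; i++) {
        if (!queues[i].needs_buffering)
          return FALSE;           /* one queue is fine: don't stall */
        max_percent = MAX (max_percent, queues[i].percent);
      }

      *percent = max_percent;
      return TRUE;
    }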