This might create deadlocks and we need to avoid holding the
element-specific lock while posting messages.
For example, a deadlock will happen if, while posting the message,
someone connected to the bus (sync) tries to DOT the pipeline.
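A minimal sketch of the pattern, assuming a hypothetical element that
tracks a buffering percentage (the state struct and function names are
illustrative): the state is updated while holding the object lock, but
gst_element_post_message() is only called once the lock has been dropped.

    #include <gst/gst.h>

    /* Hypothetical per-element state; a real element keeps this in its
     * instance structure. */
    typedef struct
    {
      GstElement *element;
      gint last_percent;
    } MyState;

    static void
    update_and_maybe_post (MyState * state, gint percent)
    {
      gboolean do_post = FALSE;

      GST_OBJECT_LOCK (state->element);
      if (percent != state->last_percent) {
        state->last_percent = percent;
        do_post = TRUE;             /* remember it, do not post from here */
      }
      GST_OBJECT_UNLOCK (state->element);

      /* Post only after the lock is released: a sync bus handler that takes
       * the same lock (for example to DOT the pipeline) can then not
       * deadlock against this thread. */
      if (do_post)
        gst_element_post_message (state->element,
            gst_message_new_buffering (GST_OBJECT_CAST (state->element),
                percent));
    }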
https://bugzilla.gnome.org/show_bug.cgi?id=737102
It might cause deadlocks to post messages while holding the queue2
lock. To avoid this, a new boolean flag is set whenever a new
buffering percent is found. The message is posted after the lock
is released.
To make sure the buffering messages are posted in the right order, messages
are posted holding another lock. This prevents two threads from trying to
post messages at the same time.
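A rough sketch of that scheme (field and function names are illustrative;
the real code keeps these fields in the element's instance struct): the
flag is set under the queue lock, and the message is posted under a
dedicated posting lock so that two threads cannot reorder each other's
buffering messages on the bus.

    #include <gst/gst.h>

    typedef struct
    {
      GstElement *element;
      GMutex qlock;             /* protects the queue state/levels */
      GMutex post_lock;         /* serializes posting of buffering messages */
      gint percent;
      gboolean percent_changed;
    } BufferingState;

    static void
    update_buffering (BufferingState * s, gint new_percent)
    {
      g_mutex_lock (&s->qlock);
      if (new_percent != s->percent) {
        s->percent = new_percent;
        s->percent_changed = TRUE;    /* only remember it here */
      }
      g_mutex_unlock (&s->qlock);
    }

    static void
    post_buffering_if_needed (BufferingState * s)
    {
      gboolean changed;
      gint percent;

      /* Take the posting lock first so messages from two threads cannot
       * end up on the bus in the wrong order. */
      g_mutex_lock (&s->post_lock);

      g_mutex_lock (&s->qlock);
      changed = s->percent_changed;
      percent = s->percent;
      s->percent_changed = FALSE;
      g_mutex_unlock (&s->qlock);

      /* The queue lock is released here; only post_lock is still held. */
      if (changed)
        gst_element_post_message (s->element,
            gst_message_new_buffering (GST_OBJECT_CAST (s->element), percent));

      g_mutex_unlock (&s->post_lock);
    }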
https://bugzilla.gnome.org/show_bug.cgi?id=736969
It might cause deadlocks to post messages while holding the multiqueue
lock. To avoid this, a new boolean flag is set whenever a new buffering percent
is found. The message is posted after the lock is released.
To make sure the buffering messages are posted in the right order, messages
are posted holding another lock. This prevents two threads from trying to
post messages at the same time.
https://bugzilla.gnome.org/show_bug.cgi?id=736295
Imagine the following 'pipeline'
                --------------
            p1/| 'fullqueue'  |--- 'laggy' downstream
   --------- / |              |
 -| demuxer |  |  multiqueue  |
   --------- \ |              |
            p2\| 'emptyqueue' |--- 'fast' downstream
                --------------
In the case where downstream of one single queue (fullqueue) has (a lot of)
latency (for example for reverse playback with video), we can end up having
the other SingleQueue (emptyqueue) emptied before fullqueue gets
unblocked. In the meantime, the demuxer tries to push on fullqueue, and
is blocking there.
In that case the current code will post a BUFFERING message on the bus when
emptyqueue gets emptied, which leads to the application setting the pipeline
state to PAUSED. So now we end up in a situation where 'laggy downstream' is
prerolled and will not unblock anymore because the pipeline is set to
PAUSED, fullqueue does not get a chance to be emptied, and
emptyqueue cannot get filled anymore, so no more BUFFERING messages
will be posted and the pipeline is stuck in PAUSED for eternity.
Making sure that we do not try to "buffer" if one of the single queues
does not need buffering prevents this situation from happening, while still
leaving the opportunity for buffering in all other cases.
This implements a new logic where all singlequeues need to require buffering
for the multiqueue to actually state that buffering is needed,
taking the maximum buffering percentage of the single queues as the
reference point.
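A hedged sketch of that aggregation (the status type and function are
illustrative, not the actual multiqueue code): buffering is only reported
when every singlequeue needs it, and the highest per-queue percentage is
the value that would be posted.

    #include <glib.h>

    /* Hypothetical per-singlequeue view: whether it currently wants to
     * buffer and at which fill percentage it is. */
    typedef struct
    {
      gboolean needs_buffering;
      gint percent;               /* 0-100 */
    } SingleQueueStatus;

    /* Returns TRUE (and the percentage to report) only if *all* singlequeues
     * need buffering; otherwise no BUFFERING message should be posted. */
    static gboolean
    compute_buffering (const SingleQueueStatus * queues, guint n_queues,
        gint * percent_out)
    {
      gint max_percent = 0;
      guint i;

      if (n_queues == 0)
        return FALSE;

      for (i = 0; i < n_queues; i++) {
        if (!queues[i].needs_buffering)
          return FALSE;             /* one queue is fine: do not buffer */
        max_percent = MAX (max_percent, queues[i].percent);
      }

      *percent_out = max_percent;
      return TRUE;
    }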
https://bugzilla.gnome.org/show_bug.cgi?id=734412
Don't re-start the queue push task on the source pad when a
flush-stop event comes in and we're in the process of shutting
down, otherwise that task will never be stopped again.
When the element is set to READY state, the pads get de-activated.
The source pad gets deactivated before the queue's own activate_mode
function on the source pad gets called (which will stop the thread),
so checking whether the pad is active before re-starting the task on
receiving flush-stop should be fine. The problem would happen when the
flush-stop handler was called just after the queue's activate mode
function had stopped the task.
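A rough sketch of that check (the loop function and handler names are
hypothetical): on flush-stop, the task on the source pad is only restarted
if the pad is still active.

    #include <gst/gst.h>

    /* Hypothetical streaming loop run by the source pad's task. */
    static void
    my_queue_loop (GstPad * pad)
    {
      /* ... pop an item from the queue and push it downstream ... */
    }

    /* Hypothetical handler invoked when FLUSH_STOP is received. */
    static void
    handle_flush_stop (GstPad * srcpad)
    {
      /* If the element is shutting down, the source pad has already been
       * deactivated; do not spin the task up again, since nothing would
       * ever stop it afterwards. */
      if (gst_pad_is_active (srcpad))
        gst_pad_start_task (srcpad, (GstTaskFunction) my_queue_loop,
            srcpad, NULL);
    }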
Spotted and debugged by Linus Svensson <linux.svensson@axis.com>
https://bugzilla.gnome.org/show_bug.cgi?id=734688
Otherwise it would only be proxied for the active pad, which can lead
upstream to use incompatible caps for the downstream element.
Even if a reconfigure event is sent upstream when the pad is activated, this
will save the caps reconfiguration if it is already using acceptable caps.
Just remove one skip annotation that causes this:
** (g-ir-compiler:12458): ERROR **: Caught NULL node, parent=empty
with older g-i versions such as 1.32.1.
This function is not really bad or slow for the common case of requesting a
pad with the name of the template. It is only slower if you want to name your
pads directly instead of letting the element handle it.
Also there's no reason to deprecate it in favor of a more complicated function
for the common case.
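For that common case the call is just the pad template name; for example
(assuming a muxer-like element with a "sink_%u" request template):

    #include <gst/gst.h>

    static GstPad *
    get_mux_sinkpad (GstElement * mux)
    {
      /* Request a pad by the name of its pad template; the element picks
       * the actual pad name ("sink_0", "sink_1", ...). */
      return gst_element_get_request_pad (mux, "sink_%u");
    }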
After EOS there will be no further buffers that could propagate the
error upstream, so nothing is going to post an error message and
the pipeline just idles around.
SetEvent() seems to not call SetLastError(0) internally, so checking last
error after calling SetEvent() may return the error from an earlier W32 API
call. Fix this by calling SetLastError(0) explicitly.
Currently the WAKE_EVENT() code is crammed into a macro and doesn't look
entirely correct. In particular, it does not check the return value of
SetEvent(), only the thread-local W32 error value. It is likely that SetEvent()
actually just returns a non-zero value, but the code mistakenly thinks that the
call has failed, because GetLastError() seems to indicate so.
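A sketch of the corrected pattern (Win32 API; the wrapper name is
illustrative): clear the thread-local error before the call and rely on
SetEvent()'s return value, only consulting GetLastError() when it actually
failed.

    #include <windows.h>

    /* Returns TRUE if the event was signalled successfully. */
    static BOOL
    wake_event (HANDLE event)
    {
      /* A successful SetEvent() does not reset the last-error value, so
       * clear it first; otherwise a stale error from an earlier W32 call
       * could be misread as a failure of this call. */
      SetLastError (0);

      if (!SetEvent (event)) {
        /* Only now is GetLastError() meaningful. */
        DWORD err = GetLastError ();
        (void) err;                 /* report/log as appropriate */
        return FALSE;
      }

      return TRUE;
    }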
https://bugzilla.gnome.org/show_bug.cgi?id=733805
default_alloc_buffer() calls gst_buffer_new_allocate() but does not check for
failed allocation.
This patch makes default_alloc_buffer() return an error (GST_FLOW_ERROR) if
buffer allocation fails.
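A minimal sketch of the check (names are illustrative; the real
default_alloc_buffer() also takes the pool's configured allocator and
params into account):

    #include <gst/gst.h>

    static GstFlowReturn
    alloc_buffer_checked (gsize size, GstBuffer ** buffer)
    {
      *buffer = gst_buffer_new_allocate (NULL, size, NULL);

      /* gst_buffer_new_allocate() returns NULL when the allocation fails;
       * turn that into a flow error instead of handing out a NULL buffer. */
      if (*buffer == NULL)
        return GST_FLOW_ERROR;

      return GST_FLOW_OK;
    }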
https://bugzilla.gnome.org/show_bug.cgi?id=733974
If the current max-buffers limit is infinite and a finite value is
requested, switch to MAX (requested, current value) to set some
limit but not below what we know that we've needed so far.
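A sketch of that clamping, under the assumptions that 0 means "unlimited"
and that "current value" is the number of items currently queued (both
assumptions; names are illustrative):

    #include <glib.h>

    /* Returns the new max-buffers limit to apply. */
    static guint
    update_max_buffers (guint cur_limit, guint cur_queued, guint requested)
    {
      if (cur_limit == 0 && requested != 0) {
        /* Going from unlimited to a finite limit: never set it below the
         * level we already needed so far. */
        return MAX (requested, cur_queued);
      }

      return requested;
    }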
https://bugzilla.gnome.org/show_bug.cgi?id=733837
When EOS events are forwarded simultaneously from two sinkpads on
funnel, it does not forward the EOS to the source pad. The reason is that
sticky events are stored after the event callbacks have returned.
Therefore, while one sinkpad is about to store the sticky event on its
own pad, the other sinkpad starts checking for EOS events on all other
sinkpads and assumes EOS is not present yet.
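One way around that race, sketched below with illustrative types (not the
actual funnel code): mark EOS on the pad inside the event handler itself,
under the element's object lock, instead of relying on the sticky event
already being stored.

    #include <gst/gst.h>

    /* Hypothetical per-sinkpad flag; a real element would keep this in a
     * custom pad subclass or pad private data. */
    typedef struct
    {
      GstPad *pad;
      gboolean got_eos;
    } FunnelPadState;

    /* Called from the sink pad event handler, before checking the others. */
    static void
    mark_pad_eos (GstElement * element, FunnelPadState * pad)
    {
      GST_OBJECT_LOCK (element);
      pad->got_eos = TRUE;          /* visible to the other pads right away */
      GST_OBJECT_UNLOCK (element);
    }

    /* Returns TRUE only if every sinkpad has seen EOS. */
    static gboolean
    all_pads_eos (GstElement * element, FunnelPadState * pads, guint n_pads)
    {
      gboolean all_eos = TRUE;
      guint i;

      GST_OBJECT_LOCK (element);
      for (i = 0; i < n_pads; i++) {
        if (!pads[i].got_eos) {
          all_eos = FALSE;
          break;
        }
      }
      GST_OBJECT_UNLOCK (element);

      return all_eos;
    }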
https://bugzilla.gnome.org/show_bug.cgi?id=732851