Don't re-start the queue push task on the source pad when a
flush-stop event comes in and we're in the process of shutting
down, otherwise that task will never be stopped again.
When the element is set to READY state, the pads get deactivated.
The source pad gets deactivated before the queue's own activate_mode
function on the source pad gets called (which is what stops the thread),
so checking whether the pad is still active before re-starting the task
on receiving flush-stop should be fine. The problem would only happen
if the flush-stop handler were called just after the queue's activate_mode
function had stopped the task.
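A minimal sketch of the guard, assuming the task is restarted with the
queue's loop function (gst_queue_loop, as in gstqueue.c); locking and the
rest of the flush-stop handling are omitted:

  /* in the flush-stop handling on the sink pad */
  if (gst_pad_is_active (queue->srcpad)) {
    /* only restart the push task if the source pad has not been
     * deactivated yet, i.e. we are not shutting down */
    gst_pad_start_task (queue->srcpad,
        (GstTaskFunction) gst_queue_loop, queue->srcpad, NULL);
  }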
Spotted and debugged by Linus Svensson <linux.svensson@axis.com>
https://bugzilla.gnome.org/show_bug.cgi?id=734688
Otherwise it would only be proxied for the active pad, which can lead
upstream to use caps that are incompatible with the downstream element.
Even if a reconfigure event is sent upstream when a pad is activated,
this saves a caps reconfiguration if upstream is already using
acceptable caps.
Imagine the following 'pipeline':

                     --------------
                 p1/| 'fullqueue'  |--- 'laggy' downstream
     ---------    / |              |
    -| demuxer |    |  multiqueue  |
     ---------    \ |              |
                 p2\| 'emptyqueue' |--- 'fast' downstream
                     --------------
In the case where downstream of one single queue (fullqueue) has (a lot
of) latency (for example for reverse playback with video), we can end up
with the other SingleQueue (emptyqueue) being emptied before fullqueue
gets unblocked. In the meantime, the demuxer tries to push on fullqueue
and blocks there.

In that case the current code will post a BUFFERING message on the bus
when emptyqueue gets emptied, which leads to the application setting the
pipeline state to PAUSED. So now we end up in a situation where 'laggy
downstream' is prerolled and will not unblock anymore because the
pipeline is set to PAUSED, fullqueue never gets a chance to be emptied,
and emptyqueue cannot get filled anymore, so no more BUFFERING messages
will be posted and the pipeline is stuck in PAUSED for eternity.
Making sure that we do not try to "buffer" if one of the single queues
does not need buffering prevents this situation from happening, while
still leaving the opportunity for buffering in all other cases.

This implements new logic where all singlequeues must need buffering
for the multiqueue to actually report that buffering is needed, taking
the maximum buffering level of the single queues as the reference point.
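A rough sketch of that aggregation (the structure and field names here
are illustrative, not the actual gstmultiqueue.c code):

  #include <glib.h>

  /* illustrative stand-in for the per-singlequeue state */
  typedef struct {
    gboolean needs_buffering;   /* below the low watermark */
    gint percent;               /* current fill level, 0-100 */
  } SingleQueueInfo;

  static gboolean
  multi_queue_needs_buffering (GList * single_queues, gint * percent_out)
  {
    gint max_percent = 0;
    GList *l;

    for (l = single_queues; l != NULL; l = l->next) {
      SingleQueueInfo *sq = l->data;

      /* one queue that does not need buffering is enough to keep going */
      if (!sq->needs_buffering)
        return FALSE;

      max_percent = MAX (max_percent, sq->percent);
    }

    *percent_out = max_percent;
    return TRUE;
  }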
https://bugzilla.gnome.org/show_bug.cgi?id=734412
After EOS there will be no further buffers that could propagate the
error upstream, so nothing is going to post an error message and
the pipeline just idles around.
When EOS events are forwarded simultaneously from two sinkpads on
funnel, it does not forward the EOS to the source pad. The reason is
that sticky events are only stored after the event callbacks have
returned. Therefore, while one thread is about to store the sticky
event on its sinkpad, the other sinkpad is already checking for EOS
events on all other sinkpads and assumes EOS is not present yet.
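For illustration, a check along these lines (a sketch, not the exact
gstfunnel.c code) only sees EOS that has already been stored as a sticky
event, so it misses an EOS whose event callback has not returned yet:

  #include <gst/gst.h>

  static gboolean
  all_sinkpads_have_eos (GstElement * funnel)
  {
    gboolean all_eos = TRUE;
    GList *l;

    GST_OBJECT_LOCK (funnel);
    for (l = GST_ELEMENT_CAST (funnel)->sinkpads; l != NULL; l = l->next) {
      GstEvent *eos = gst_pad_get_sticky_event (l->data, GST_EVENT_EOS, 0);

      if (eos == NULL) {
        /* EOS may be in flight on this pad but not stored yet */
        all_eos = FALSE;
        break;
      }
      gst_event_unref (eos);
    }
    GST_OBJECT_UNLOCK (funnel);

    return all_eos;
  }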
https://bugzilla.gnome.org/show_bug.cgi?id=732851
Avoid relocations and refactor so that we don't calculate the maximum
string size, which is fixed and known at compile time, every time.
Also skip the mini object flags, which we are not going to print anyway.
The initial buffers (that were used for timestamping) might have PTS
and DTS set. In order to forward those properly, get the initial
PTS/DTS from the adapter and set them on the reconstructed output
buffer.
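A sketch of the idea using the public GstAdapter API (the surrounding
function is illustrative):

  #include <gst/gst.h>
  #include <gst/base/gstadapter.h>

  static GstBuffer *
  reconstruct_initial_buffer (GstAdapter * adapter)
  {
    gsize avail = gst_adapter_available (adapter);
    guint64 dist_pts = 0, dist_dts = 0;
    GstClockTime pts, dts;
    GstBuffer *outbuf;

    /* PTS/DTS seen at (or before) the start of the adapter data;
     * GST_CLOCK_TIME_NONE if the initial buffers carried none */
    pts = gst_adapter_prev_pts (adapter, &dist_pts);
    dts = gst_adapter_prev_dts (adapter, &dist_dts);

    /* assumes avail > 0 here */
    outbuf = gst_adapter_take_buffer (adapter, avail);
    GST_BUFFER_PTS (outbuf) = pts;
    GST_BUFFER_DTS (outbuf) = dts;

    return outbuf;
  }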
https://bugzilla.gnome.org/show_bug.cgi?id=733291
We always work in passthrough mode so there's no point in doing
something more clever in basetransform. Also the basetransform
code leads to problems with incomplete caps and downstream
elements that use GST_PAD_FLAG_ACCEPT_INTERSECT.
https://bugzilla.gnome.org/show_bug.cgi?id=732559
When no data is coming from the sinkpads and an EOS event arrives at
one of them, funnel forwards the EOS event downstream. It forwards
the EOS because the last sinkpad is NULL. Also, the funnel unit test
is not checking the correct behavior as it should: it should fail if
one of the sinkpads already has EOS present on it and we are trying
to push one more EOS.
https://bugzilla.gnome.org/show_bug.cgi?id=731716
The start and stop should represent the currently downloading region.
The estimated-total should represent the remaining time to download
the currently downloading region. This makes it a lot more useful
for applications because they can then use those values to update
the fill region and use the estimated time to delay playback.
Update the docs with this clarification.
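For example, an application can read these values from a buffering
query roughly like this (sketch):

  #include <gst/gst.h>

  static void
  update_fill_region (GstElement * pipeline)
  {
    GstQuery *query = gst_query_new_buffering (GST_FORMAT_PERCENT);

    if (gst_element_query (pipeline, query)) {
      gint64 start, stop, estimated_total;

      gst_query_parse_buffering_range (query, NULL, &start, &stop,
          &estimated_total);

      /* start/stop: the currently downloading region;
       * estimated_total: remaining download time for that region in
       * milliseconds, usable to decide how long to delay playback */
      g_print ("downloading %" G_GINT64_FORMAT " -> %" G_GINT64_FORMAT
          ", ~%" G_GINT64_FORMAT " ms left\n", start, stop,
          estimated_total);
    }

    gst_query_unref (query);
  }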
When the first segment has position != 0 and position > max-size-time
it will immediately cause the multiqueue to signal overrun.

This can happen easily with adaptive streams when switching bitrates
and starting a new group. The segment for this new group will have a
position that is much greater than 0 and will lead to this issue. This
is particularly harmful when the adaptive stream uses mpegts, which
doesn't emit no-more-pads: it might happen that only one of the stream
pads has been added when the multiqueue overruns and gets the group
ready for exposing, so the user will only get audio or video.

The solution is to fall back to the sink segment while the source pad
has no segment.
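A sketch of that fallback when computing the time level (names are
illustrative, not the actual gstmultiqueue.c code):

  #include <gst/gst.h>

  static GstClockTime
  get_running_time (const GstSegment * src_segment,
      const GstSegment * sink_segment, GstClockTime position)
  {
    const GstSegment *segment = src_segment;

    /* no segment has been pushed on the source pad yet: fall back to
     * the segment received on the sink pad */
    if (segment->format == GST_FORMAT_UNDEFINED)
      segment = sink_segment;

    return gst_segment_to_running_time (segment, GST_FORMAT_TIME, position);
  }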
https://bugzilla.gnome.org/show_bug.cgi?id=729124
They are very confusing for people, and more often than not
also just not very accurate. Seeing 'last reviewed: 2005' in
your docs is not very confidence-inspiring. Let's just remove
those comments.
The qlock is released between popping a buffer from the queue
and pushing it. When this buffer causes the sink to wait in
preroll, this lets a query see that the queue is empty, push
the query and then wait for it to be serviced. However, this
will not be done until after preroll, and this will thus block.
If upstream was waiting on buffering to reach 100% before
switching to PLAYING, a deadlock would ensue.
This had been fixed recently by failing queries while queue2
was buffering, but that happens to break another case
(playbin on a local http server and matroska), while
this patch works for both.
See https://bugzilla.gnome.org/show_bug.cgi?id=728345
This should never happen theoretically, but since a transient
failure would get us to silently read wrong data, it's worth
erroring out. And it silences this:
Coverity 206034
The qlock is released between popping a buffer from the queue
and pushing it. When this buffer causes the sink to wait in
preroll, this lets a query see that the queue is empty, push
the query and then wait for it to be serviced. However, this
will not be done until after preroll, and this will thus block.
If upstream was waiting on buffering to reach 100% before
switching to PLAYING, a deadlock would ensue.
We fix it by refusing the query when buffering, as per Wim's
recommendation on IRC.
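A sketch of the resulting behavior on the sinkpad's serialized query
path (the is_buffering flag is illustrative of queue2's internal state,
not copied from gstqueue2.c):

  #include <gst/gst.h>

  static gboolean
  handle_serialized_query (gboolean is_buffering, GstQuery * query)
  {
    /* refuse serialized queries while buffering: queueing them would
     * block the upstream thread until after preroll and deadlock */
    if (GST_QUERY_IS_SERIALIZED (query) && is_buffering)
      return FALSE;

    /* ... otherwise queue the query and wait for the streaming thread
     * to service it, as before ... */
    return TRUE;
  }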