When posting 100% buffering due to removing the last
buffering element, we still need to hold the posting
lock, to avoid any race with other elements that might
post a buffering message at that exact moment.
When the decodebin state change fails because of an error
message, we might not go through PAUSED->READY. Don't leak
a ref to decodebin pads due to pad blocking in that case.
This is because we return ASYNC going to PAUSED, and if
we fail before reaching PAUSED the only transition we'll
see is READY->NULL.
https://bugzilla.gnome.org/show_bug.cgi?id=775893
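A minimal sketch of the idea, with a hypothetical BlockedPad bookkeeping
struct (not decodebin's actual code): release the blocking probe and the
pad reference it protects from READY->NULL as well, so the ref is not
leaked when PAUSED was never reached.

    #include <gst/gst.h>

    /* Hypothetical bookkeeping: one blocking probe and one pad reference
     * are held per blocked pad until the pad is exposed or unblocked. */
    typedef struct
    {
      GstPad *pad;                  /* reference held while blocked */
      gulong probe_id;              /* id returned by gst_pad_add_probe() */
    } BlockedPad;

    /* Called from both the PAUSED->READY and the READY->NULL transitions,
     * so the reference is dropped even when the element never reached
     * PAUSED (the state change returned ASYNC and then failed). */
    static void
    blocked_pad_release (BlockedPad * bp)
    {
      if (bp->probe_id != 0) {
        gst_pad_remove_probe (bp->pad, bp->probe_id);
        bp->probe_id = 0;
      }
      if (bp->pad != NULL) {
        gst_object_unref (bp->pad);
        bp->pad = NULL;
      }
    }
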
When shutting down decodebin2 and parsebin, they set their
output pads to flushing, and there is a very small window
where an element might send a sticky event such as a tag event
(which silently fails due to flushing) and then send a buffer,
and the buffer push will return GST_FLOW_ERROR because the
sticky events can't be forwarded. The element will then post an
error message on the bus. This can also happen when elements send EOS
just as shutdown is happening. Since we're about to destroy all
the elements inside parsebin and decodebin anyway, just discard
error messages from them.
A nicer but more difficult fix for GStreamer 2.0 is to make
all event pushing / handling in core return a GstFlowReturn
like buffers do, so we can report a FLUSHING state cleanly.
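A minimal sketch of the discard-errors approach, assuming a GstBin
subclass with a hypothetical shutting_down flag (the real
parsebin/decodebin code differs).

    #include <gst/gst.h>

    typedef struct
    {
      GstBin parent;
      gboolean shutting_down;       /* set while tearing the chain down */
    } MyParseBin;

    static GstBinClass *parent_class;       /* set in class_init */

    /* GstBin::handle_message override: while shutting down, swallow ERROR
     * messages from the soon-to-be-destroyed children instead of
     * forwarding them to the application. */
    static void
    my_parse_bin_handle_message (GstBin * bin, GstMessage * msg)
    {
      MyParseBin *self = (MyParseBin *) bin;

      if (self->shutting_down && GST_MESSAGE_TYPE (msg) == GST_MESSAGE_ERROR) {
        GST_DEBUG_OBJECT (bin, "dropping %" GST_PTR_FORMAT " during shutdown",
            msg);
        gst_message_unref (msg);
        return;
      }

      GST_BIN_CLASS (parent_class)->handle_message (bin, msg);
    }
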
There are cases when there is no demuxer involved that could do the
buffering, e.g. HLS with raw MP3 or AAC. In this case we want to place
the buffering multiqueue after the parser.
Before this change, we considered the first element after the
adaptive streaming demuxer to be a parser. This is not always true, e.g.
id3demux. Instead we now wait until we actually have a parser (or
decoder).
Fixes playback on such HLS streams.
When connecting a demuxer through a multiqueue, make sure to copy the
sticky events, so that the following factory can be properly
checked for being functional.
https://bugzilla.gnome.org/show_bug.cgi?id=769580
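A minimal sketch of copying sticky events between pads; the helper names
are made up, the gst_pad_sticky_events_foreach / gst_pad_store_sticky_event
API is real.

    #include <gst/gst.h>

    /* Store one sticky event of the source pad on the destination pad. */
    static gboolean
    copy_sticky_event (GstPad * pad, GstEvent ** event, gpointer user_data)
    {
      GstPad *dest = GST_PAD (user_data);

      gst_pad_store_sticky_event (dest, *event);
      return TRUE;                  /* continue with the next sticky event */
    }

    /* After linking the demuxer src pad to the multiqueue, mirror its sticky
     * events (stream-start, caps, segment, tags, ...) onto the corresponding
     * multiqueue source pad so the next factory can be checked against them. */
    static void
    copy_sticky_events (GstPad * from, GstPad * to)
    {
      gst_pad_sticky_events_foreach (from, copy_sticky_event, to);
    }
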
When processing EOS for a pad, send a stream-group-done
for the pad in case downstream is waiting for more
data on this stream before it can process related
streams from the group.
https://bugzilla.gnome.org/show_bug.cgi?id=768995
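A minimal sketch of the behaviour, assuming the group id can be read from
the pad's sticky stream-start event (helper name made up).

    #include <gst/gst.h>

    /* Before sending EOS on a pad, announce that no more streams of its
     * stream group will show up, so downstream does not keep waiting for
     * related streams before processing this one. */
    static void
    push_group_done_and_eos (GstPad * pad)
    {
      GstEvent *stream_start;
      guint group_id = 0;

      stream_start = gst_pad_get_sticky_event (pad, GST_EVENT_STREAM_START, 0);
      if (stream_start != NULL) {
        if (gst_event_parse_group_id (stream_start, &group_id))
          gst_pad_push_event (pad, gst_event_new_stream_group_done (group_id));
        gst_event_unref (stream_start);
      }

      gst_pad_push_event (pad, gst_event_new_eos ());
    }
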
When we initialize an element in decodebin, we 1) set it to PAUSED and
push sticky events on its sinkpad to trigger negotiation 2) block its
src pad(s) to detect CAPS events. We can't block before 1) as that
would lead to a deadlock.
It's possible (and common) though that an element configures its srcpad
during 1) and before 2). Therefore, before this change, we would
typically block and expose an element's pad only once the element
had output its first buffer, causing sticky events to be resent. One
consequence of this behaviour is that it sometimes broke
renegotiation.
With this change, we now consider a pad ready to be exposed when it is
->blocked or has fixed caps (which were set before we could block it).
https://bugzilla.gnome.org/show_bug.cgi?id=765456
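A minimal sketch of the new readiness check, with a hypothetical DecodePad
struct standing in for decodebin's own pad bookkeeping.

    #include <gst/gst.h>

    typedef struct
    {
      GstPad *pad;
      gboolean blocked;             /* set by the blocking probe callback */
    } DecodePad;

    /* A pad can be exposed either once the blocking probe has triggered, or
     * when the element already configured fixed caps on it before we had a
     * chance to install the probe. */
    static gboolean
    decode_pad_is_exposable (DecodePad * dpad)
    {
      GstCaps *caps;
      gboolean exposable;

      if (dpad->blocked)
        return TRUE;

      caps = gst_pad_get_current_caps (dpad->pad);
      exposable = (caps != NULL && gst_caps_is_fixed (caps));
      if (caps != NULL)
        gst_caps_unref (caps);

      return exposable;
    }
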
If we are configured to use buffering and there is no demuxer in the chain, we
still want a multiqueue, otherwise we will ignore the use-buffering property.
In that case, we will insert a multiqueue after the parser or decoder - not
elsewhere, otherwise we won't have timestamps.
https://bugzilla.gnome.org/show_bug.cgi?id=764948
There's a small window between decodebin choosing a buffering level
to post and another thread choosing a different buffering level
where things can race. Close that window by holding a new lock
that's only for posting buffering messages - like what was done
in multiqueue.
https://bugzilla.gnome.org/show_bug.cgi?id=764020
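A minimal sketch of the posting-lock idea (struct and field names are
assumptions, not decodebin's actual layout): choosing the level and posting
it happen under one dedicated mutex, so a thread that computed 100% cannot
be overtaken by one that computed a lower value.

    #include <gst/gst.h>

    typedef struct
    {
      GstElement *element;          /* the bin posting buffering messages */
      GMutex buffering_post_lock;   /* serializes buffering message posting */
      gint last_posted_level;       /* last percentage posted on the bus */
    } BufferingState;

    static void
    post_buffering (BufferingState * state, gint percent)
    {
      g_mutex_lock (&state->buffering_post_lock);
      if (percent != state->last_posted_level) {
        state->last_posted_level = percent;
        gst_element_post_message (state->element,
            gst_message_new_buffering (GST_OBJECT (state->element), percent));
      }
      g_mutex_unlock (&state->buffering_post_lock);
    }
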
The check_upstream_seekable function returns FALSE even though
we have already determined the value of the seekable variable. This patch
makes check_upstream_seekable return the result stored in seekable instead.
https://bugzilla.gnome.org/show_bug.cgi?id=763975
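A simplified sketch of what such a check could look like: query upstream
seekability in BYTES and return what the query reported instead of an
unconditional FALSE.

    #include <gst/gst.h>

    static gboolean
    check_upstream_seekable (GstPad * pad)
    {
      GstQuery *query;
      gboolean seekable = FALSE;

      query = gst_query_new_seeking (GST_FORMAT_BYTES);
      if (gst_pad_peer_query (pad, query))
        gst_query_parse_seeking (query, NULL, &seekable, NULL, NULL);
      gst_query_unref (query);

      return seekable;              /* previously FALSE was returned here */
    }
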
Due to the transient locked state during autoplugging, some elements might be
ignored by GstBin::change_state() and might still be running. This could
then cause pad-added and similar callbacks to access decodebin state that
does not exist anymore, and crash.
https://bugzilla.gnome.org/show_bug.cgi?id=763625
In other places we lock it the other way around, leading to possible
deadlocks. Also this will deadlock if analyze_pad() causes a new element to be
autoplugged that adds new pads on itself when its state is changed.
https://bugzilla.gnome.org/show_bug.cgi?id=763491
When getting the caps of the decode chain in get_topology, the caps are
checked for being fixed or not. But get_topology will only be called when the
decode chain is being exposed, and hence the caps will always be fixed, so
remove the check for fixed caps. Also remove the gst_pad_get_current_caps
call for chain->pad, as get_pad_caps will call the same API again.
In addition, get_topology can return NULL if the pipeline is currently
shutting down, and passing that to the message creation function results in
an assertion error. Check that the topology is valid before using it.
https://bugzilla.gnome.org/show_bug.cgi?id=755918
analyze_new_pad() can return a new decode chain, which might have a new
GstDecodePad in the end. We should use those two for expose_pad() and not the
original ones that were passed to analyze_new_pad().
This fails when a demuxer element has raw pads immediately, or
when a decoder with raw caps comes after an adaptive demuxer.
https://bugzilla.gnome.org/show_bug.cgi?id=760949
This reverts commit 77dc09c3a9.
It can cause the FLUSH_START/STOP events to go to the sink elements, which
then causes state changes and various other problems. We shouldn't really
flush downstream here, the idea is to do *draining*.
Apart from that, the testcase for the original bug works without this
commit now.
Otherwise we'll remove that element while keeping its buffering message in our
list, and because of that never ever report buffering 100% as that element
will always be at a lower percentage.
This fixes e.g. seeking over Period boundaries in DASH and various other
issues when buffering happens between group switches.
Also use a new mutex for protecting the buffering messages. The object lock is
already used by gst_object_has_as_ancestor() and we need to use it now for
checking if the buffering message sender has the to-be-removed element as
ancestor.
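A minimal sketch of dropping the stale buffering messages when an element is
removed, assuming a hypothetical list of posted buffering messages protected
by the new dedicated mutex.

    #include <gst/gst.h>

    typedef struct
    {
      GMutex buffering_list_lock;   /* dedicated lock, not the object lock */
      GList *buffering_status;      /* GstMessage* buffering messages */
    } BufferingMessages;

    /* Forget any buffering message posted by the removed element or one of
     * its children; otherwise its stale (non-100%) level would keep the
     * aggregated buffering level below 100% forever. */
    static void
    drop_buffering_messages_for (BufferingMessages * state, GstElement * removed)
    {
      GList *walk, *next;

      g_mutex_lock (&state->buffering_list_lock);
      for (walk = state->buffering_status; walk != NULL; walk = next) {
        GstMessage *msg = walk->data;

        next = walk->next;
        if (gst_object_has_as_ancestor (GST_MESSAGE_SRC (msg),
                GST_OBJECT (removed))) {
          gst_message_unref (msg);
          state->buffering_status =
              g_list_delete_link (state->buffering_status, walk);
        }
      }
      g_mutex_unlock (&state->buffering_list_lock);
    }
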
Make sure that any access to the GstDecodeBin's decode_chain
field is protected using the EXPOSE_LOCK. Also add a simple
reference counter to the GstDecodeChain structure so that when
the type_found signal fires it can hold onto the decode chain
even while the EXPOSE_LOCK is not held. This should fix a
race condition if the type_found signal fires right in the
middle of a state change that messes with the same decode
chain.
https://bugzilla.gnome.org/show_bug.cgi?id=755260
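A minimal sketch of such a reference counter on the chain structure (names
illustrative), so the type_found handler can keep the chain alive outside
the EXPOSE_LOCK.

    #include <gst/gst.h>

    typedef struct
    {
      gint refcount;                /* accessed via g_atomic_int_*() only */
      /* ... the rest of the chain state ... */
    } DecodeChain;

    static DecodeChain *
    decode_chain_ref (DecodeChain * chain)
    {
      g_atomic_int_inc (&chain->refcount);
      return chain;
    }

    static void
    decode_chain_unref (DecodeChain * chain)
    {
      if (g_atomic_int_dec_and_test (&chain->refcount))
        g_free (chain);             /* last reference: free the chain */
    }

    /* The type_found handler can then do:
     *   EXPOSE_LOCK (dbin);
     *   chain = decode_chain_ref (dbin->decode_chain);
     *   EXPOSE_UNLOCK (dbin);
     *   ... work with chain ...
     *   decode_chain_unref (chain);
     */
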
Just setting the ghostpad as flushing wasn't enough. It needs to be
consistent on the internal proxypad also, otherwise you end up in
situations where:
* a pending buffer on the target pad triggers the sticky event
propagation
* the default implementation sees that the proxypad is not flushing,
so it tries to push it to the other pad (the actual ghostpad)
* the ghostpad is flushing, so returns FALSE
* the push_event function sees that pushing the event failed...
* ... and pending buffer push returns GST_FLOW_ERROR, instead of
GST_FLOW_FLUSHING
By using gst_pad_set_active(FALSE), we ensure that both the ghostpad
and the proxypad are flushing/deactivated. The situation above will
no longer occur, and a GST_FLOW_FLUSHING will be returned.
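A minimal sketch of the fix (the helper name is made up): deactivating the
ghost pad flushes both the ghost pad and its internal proxy pad.

    #include <gst/gst.h>

    static void
    deactivate_exposed_pad (GstElement * bin, GstPad * ghostpad)
    {
      /* Unlike only marking the ghost pad as flushing, deactivating it also
       * flushes the internal proxy pad, so a pending buffer push on the
       * target pad returns GST_FLOW_FLUSHING instead of GST_FLOW_ERROR. */
      gst_pad_set_active (ghostpad, FALSE);
      gst_element_remove_pad (bin, ghostpad);
    }
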
This makes sure that they will always get SEEK events, even if we're currently
in the middle of a group switch (i.e. switching to another
representation/bitrate/etc).
https://bugzilla.gnome.org/show_bug.cgi?id=606382
As stated in GST_PAD_PROBE_HANDLED's documentation, we are
supposed to unref the event before returning.
Fixes an event leak in the validate.hls.playback.play_15s.hls_bibbop
validate scenario.
https://bugzilla.gnome.org/show_bug.cgi?id=754459
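For reference, a minimal probe following that rule: when the callback
consumes the event itself and returns GST_PAD_PROBE_HANDLED, it must unref
the event (dropping EOS here is just an arbitrary example).

    #include <gst/gst.h>

    static GstPadProbeReturn
    drop_event_probe (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
    {
      GstEvent *event = GST_PAD_PROBE_INFO_EVENT (info);

      if (GST_EVENT_TYPE (event) == GST_EVENT_EOS) {
        /* we handled (consumed) the event ourselves, so we own it */
        gst_event_unref (event);
        return GST_PAD_PROBE_HANDLED;
      }

      return GST_PAD_PROBE_OK;
    }
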
The logic introduced by
[d50b713: decodebin: set the decode pad target before setting elements to PAUSED]
to expose pads would only ever be able to expose one (the last) pad per element.
Make it so that all exposable pads can be exposed, rather than just the
last pad returned by connect_element.
https://bugzilla.gnome.org/show_bug.cgi?id=742924
In the case of analyzing a demuxer chain, analyze_new_pad may create
a new GstDecodeChain. This was not propagated to the calling function, which
as of [d50b713f decodebin: set the decode pad target before setting elements to PAUSED]
needs it in order to expose the correct pad.
https://bugzilla.gnome.org/show_bug.cgi?id=742924
Otherwise caps and context queries will disappear into nothing and therefore
fail. With autoplug-query now actually working, users (such as playbin) can
proxy these queries to the selected video sink and be able to select a
more appropriate configuration.
https://bugzilla.gnome.org/show_bug.cgi?id=731204
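A minimal sketch of how a user such as playbin could wire this up: handle
the autoplug-query signal and run CAPS/CONTEXT queries against the chosen
video sink (sink selection and the audio/video distinction are left out
here).

    #include <gst/gst.h>

    static gboolean
    on_autoplug_query (GstElement * decodebin, GstPad * pad, GstElement * child,
        GstQuery * query, GstElement * video_sink)
    {
      switch (GST_QUERY_TYPE (query)) {
        case GST_QUERY_CAPS:
        case GST_QUERY_CONTEXT:
          /* let the sink answer, so the element picks a suitable config */
          return gst_element_query (video_sink, query);
        default:
          return FALSE;             /* not handled, default behaviour */
      }
    }

    /* g_signal_connect (decodebin, "autoplug-query",
     *     G_CALLBACK (on_autoplug_query), video_sink); */
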
It's only relevant for each group, and by storing it in the group
we have locking and everything else like for the other buffering-related
variables. Locking looks a bit fishy still, but it was like that for a long
time already so shouldn't be worse than before.
Overview:
Some interleaved streams locate the audio data a long way from the
corresponding video data, i.e. further away than the multiqueue size covers.
In this case the multiqueue overruns and the pipeline stalls.
To prevent this hang-like state, decodebin needs to adjust the queue size.
Cause:
The multiqueue size is not big enough, so the pipeline stays stalled
and decodebin cannot complete building the decode chain.
With such a file, decodebin has received neither the no_more_pads signal
nor any audio data yet.
Steps to Reproduce:
Play high-resolution (4K) files or some streaming media (push mode).
Actual Results:
There is no audio or subtitle; we only see video, or an infinite loading state.
Resolution:
Decodebin detects this problem and adds extra buffer size to the multiqueue.
With the larger multiqueue, the next data can be pushed to the downstream
element.
Additional Information:
The maximum extra pre-roll buffer size is set to 8MB, for a total pre-roll
buffer of 10MB. Only the first overrun callback adjusts the multiqueue size.
https://bugzilla.gnome.org/show_bug.cgi?id=733235
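A minimal sketch of the resolution, assuming an overrun handler on the
pre-roll multiqueue and a caller-provided flag so only the first overrun
grows the queue (the 8MB figure is the one mentioned above).

    #include <gst/gst.h>

    #define EXTRA_PREROLL_SIZE (8 * 1024 * 1024)    /* 8MB extra */

    static void
    on_multiqueue_overrun (GstElement * multiqueue, gpointer user_data)
    {
      gboolean *already_grown = user_data;
      guint max_bytes;

      if (*already_grown)
        return;                     /* only the first overrun resizes */
      *already_grown = TRUE;

      /* grow the queue so the far-away audio data can still be buffered
       * and the next data can be pushed to the downstream element */
      g_object_get (multiqueue, "max-size-bytes", &max_bytes, NULL);
      g_object_set (multiqueue, "max-size-bytes",
          max_bytes + EXTRA_PREROLL_SIZE, NULL);
    }
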
When an upstream element wants to flush downstream, we need to take
all chains/groups into consideration.
To that effect, when a FLUSH_START event is seen, after having it
sent downstream we mark all those chains/groups as "drained" (as if
they had seen an EOS event on the endpads).
When a FLUSH_STOP event is received, we check if we need to switch groups.
This is done by checking if there are next groups. If so, we will switch
over to the latest next_group. The actual switch will be done when
that group is blocked.
https://bugzilla.gnome.org/show_bug.cgi?id=606382
When upstream events/queries reach sinkpads of unlinked groups (i.e.
no longer linked to the upstream demuxer), this patch attempts to find
the linked group and forward them upstream through that group.
This is done by adding upstream event/query probes on new group sinkpads
and then:
* Checking if the pad is linked or not (has a peer or not)
* If there is a peer, just let the event/query follow through normally
* If there is no peer, we find a pad to proxy it through and return
GST_PAD_PROBE_HANDLED if that succeeded (allowing the event/query to be
properly returned to the initial caller)
Note that this is definitely not thread-safe for the time being
https://bugzilla.gnome.org/show_bug.cgi?id=606382
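A minimal sketch of the event half of such a probe (queries work
analogously); where the linked group's pad comes from is an assumption,
here it is simply passed as user_data.

    #include <gst/gst.h>

    static GstPadProbeReturn
    group_sinkpad_event_probe (GstPad * pad, GstPadProbeInfo * info,
        gpointer user_data)
    {
      GstPad *linked_pad = user_data;   /* sinkpad of the linked group */
      GstEvent *event = GST_PAD_PROBE_INFO_EVENT (info);
      GstPad *peer = gst_pad_get_peer (pad);

      if (peer != NULL) {
        /* still linked to the demuxer: let the event pass normally */
        gst_object_unref (peer);
        return GST_PAD_PROBE_OK;
      }

      /* unlinked group: proxy the event upstream through the linked group */
      if (gst_pad_push_event (linked_pad, gst_event_ref (event))) {
        gst_event_unref (event);        /* HANDLED means we own the event */
        return GST_PAD_PROBE_HANDLED;
      }

      return GST_PAD_PROBE_OK;
    }
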
deadend_details does not need to be returned when the pad is not a dead end.
Hence, check if res is TRUE and, if so, clear the string instead of
passing it on.
https://bugzilla.gnome.org/show_bug.cgi?id=753088
When switching to a new chain, it might be that this new chain
is not yet ready to be exposed, so check it before exposing.
This can happen with mpegts, which might delay adding pads or pushing data
until it has found the PMT/PAT/PCR, and that may take a while depending
on the stream.
It happened frequently with HLS:
http://vevoplaylist-live.hls.adaptive.level3.net/vevo/ch1/appleman.m3u8