It's only relevant per group, and by storing it in the group
we get locking and everything else, just like for the other buffering-related
variables. The locking still looks a bit fishy, but it has been like that for a
long time already, so it shouldn't be worse than before.
Overview:
Some interleaved streams have the audio data located far away,
further than the multiqueue size can cover.
In this case the multiqueue overruns and the pipeline stalls.
To prevent this hanging-like state, decodebin needs to adjust the queue size.
Cause:
Because the multiqueue size is not enough, the pipeline stays stalled and
decodebin cannot complete building the decode chain.
With such a file, decodebin has received neither the no_more_pads signal
nor any audio data yet.
Steps to Reproduce:
Play high-resolution (4K) files or some streaming media (push mode).
Actual Results:
There is no audio or subtitle.
We see only video, or loading never finishes.
Resolution:
Decodebin detects this problem and adds extra buffer size to the multiqueue.
With the larger multiqueue, the next data can be pushed to the downstream element.
Additional Information:
The max-preroll extra buffer size is set to 8 MB,
for a total pre-roll buffer of 10 MB.
Only the first overrun callback adjusts the multiqueue size.
https://bugzilla.gnome.org/show_bug.cgi?id=733235
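As an illustration only (the callback name and the bookkeeping flag are hypothetical, only the 8 MB figure comes from the description above), the first-overrun handling could look roughly like this:

    /* Hypothetical sketch: grow the multiqueue once, on the first overrun */
    #define EXTRA_BUFFER_SIZE (8 * 1024 * 1024)   /* 8 MB extra pre-roll */

    static void
    multiqueue_overrun_cb (GstElement * queue, gpointer user_data)
    {
      MyGroup *group = user_data;        /* hypothetical bookkeeping struct */
      guint max_bytes;

      if (group->overrun_extra_added)    /* only the first overrun adjusts */
        return;

      g_object_get (queue, "max-size-bytes", &max_bytes, NULL);
      g_object_set (queue, "max-size-bytes", max_bytes + EXTRA_BUFFER_SIZE, NULL);
      group->overrun_extra_added = TRUE;
    }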
When an upstream element wants to flush downstream, we need to take
all chains/groups into consideration.
To that effect, when a FLUSH_START event is seen, after having it
sent downstream we mark all those chains/groups as "drained" (as if
they had seen an EOS event on the endpads).
When a FLUSH_STOP event is received, we check if we need to switch groups.
This is done by checking if there are next groups. If so, we will switch
over to the latest next_group. The actual switch will be done when
that group is blocked.
https://bugzilla.gnome.org/show_bug.cgi?id=606382
When upstream events/queries reach sinkpads of unlinked groups (i.e.
no longer linked to the upstream demuxer), this patch attempts to find
the linked group and forward them upstream through that group.
This is done by adding upstream event/query probes on new group sinkpads
and then:
* Checking if the pad is linked or not (has a peer or not)
* If there is a peer, just let the event/query follow through normally
* If there is no peer, we find a pad to proxy it to and return
GST_PROBE_HANDLED if that succeeded (allowing the event/query to be properly
returned to the initial caller)
Note that this is definitely not thread-safe for the time being
https://bugzilla.gnome.org/show_bug.cgi?id=606382
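A rough sketch of such a probe; find_linked_group_pad() is a hypothetical helper standing in for the group lookup, and thread-safety is ignored as noted above:

    static GstPadProbeReturn
    group_sinkpad_probe_cb (GstPad * pad, GstPadProbeInfo * info,
        gpointer user_data)
    {
      GstPad *proxy;
      gboolean ok;

      /* Pad still has a peer: let the event/query follow the normal path */
      if (gst_pad_is_linked (pad))
        return GST_PAD_PROBE_OK;

      /* Hypothetical helper: find a sinkpad that is still linked upstream */
      proxy = find_linked_group_pad (pad, user_data);
      if (proxy == NULL)
        return GST_PAD_PROBE_OK;

      if (GST_PAD_PROBE_INFO_TYPE (info) & GST_PAD_PROBE_TYPE_QUERY_UPSTREAM)
        ok = gst_pad_peer_query (proxy, GST_PAD_PROBE_INFO_QUERY (info));
      else
        ok = gst_pad_push_event (proxy,
            gst_event_ref (GST_PAD_PROBE_INFO_EVENT (info)));

      gst_object_unref (proxy);

      /* HANDLED returns the result to the caller without the normal
       * (peer-less) processing */
      return ok ? GST_PAD_PROBE_HANDLED : GST_PAD_PROBE_OK;
    }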
deadend_details does not need to be returned when the pad is not a dead end.
Hence, if res is TRUE, clear the string instead of passing it on.
https://bugzilla.gnome.org/show_bug.cgi?id=753088
When switching to a new chain it might be that this new chain
is not yet ready to be exposed so check it before exposing.
This can happen with mpegts, which might delay adding pads or pushing data
until it has found the PMT/PAT/PCR, and that may take a while depending
on the stream.
It happened frequently with HLS:
http://vevoplaylist-live.hls.adaptive.level3.net/vevo/ch1/appleman.m3u8
When shutting down the chain, we can get a deadlock when removing
a pad if that chain was busy streaming but blocked (e.g. while
waiting for a queue to have free space).
https://bugzilla.gnome.org/show_bug.cgi?id=746480
This fixes a race where the use-buffering property on a multiqueue was
set before the queue depth was changed from its high preroll limits to
lower playback limits. This resulted in buffering messages being emitted
by the multiqueue in the short window between use-buffering being
set and the queue depth being reset.
https://bugzilla.gnome.org/show_bug.cgi?id=744308
When we modify a GList (via g_list_delete_link), always reassign the
new head to the original GList. Otherwise we end up with
filtered_errors being corrupt (the head might have been the element
that was removed).
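In other words, always write it as in this one-line illustration:

    /* Reassign: the deleted link may have been the head of the list */
    filtered_errors = g_list_delete_link (filtered_errors, link);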
This function is static, and only ever called with the expose lock
taken. It thus has no reason to take this lock itself.
This was introduced by one of my locking fixes from 741355.
https://bugzilla.gnome.org/show_bug.cgi?id=741355
Check if dbin->decode_chain is NULL before running drain_and_switch_chains()
because if it is, we shouldn't run that function or it will segfault.
CID #1271074
Otherwise if there are multiple parsers we would most likely break negotiation
of the stream-format/alignment wanted by the decoders as parsers generally
support all possible stream-formats and alignments.
If caps on a newly added pad are NULL, analyze_new_pad will try to
acquire the chain lock to add a probe to the pad so the chain can
be built later. This comes from the streaming thread, in response
to headers or other buffers causing this pad to be added, so the
stream lock is taken.
Meanwhile, another thread might be destroying the chain from a
downward state change. This will cause the chain to be freed with
the chain lock taken, and some elements are set to NULL here, which
can include the parser. This causes pad deactivation, which tries
to take the element's pad's stream lock, deadlocking.
Fix this by keeping track of which elements need setting to NULL,
and only do this after the chain lock is released. Only the chain
manipulation needs to be locked, not the elements' state changes.
https://bugzilla.gnome.org/show_bug.cgi?id=741355
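A simplified sketch of the idea with hypothetical names; the point is only that gst_element_set_state() happens after the chain lock has been dropped:

    GList *to_null = NULL, *l;

    CHAIN_MUTEX_LOCK (chain);
    /* ... remove elements from the chain here, but instead of setting them
     * to NULL immediately, remember them for later ... */
    to_null = g_list_prepend (to_null, gst_object_ref (element));
    CHAIN_MUTEX_UNLOCK (chain);

    /* State changes happen without the chain lock held, so the streaming
     * thread that needs that lock in analyze_new_pad() cannot deadlock us */
    for (l = to_null; l; l = l->next) {
      gst_element_set_state (GST_ELEMENT (l->data), GST_STATE_NULL);
      gst_object_unref (l->data);
    }
    g_list_free (to_null);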
There was a deadlock between a thread changing decodebin/demuxer
state from PAUSED to READY, and another thread pushing data
when starting.
From the stack trace at
https://bug741355.bugzilla-attachments.gnome.org/attachment.cgi?id=292471,
I deduce the following is happening, though I did not reproduce the
problem so I'm not sure this patch fixes it.
The streaming thread (thread 2 in that stack trace) takes the demuxer's
sink pad's stream lock in gst_ogg_demux_perform_seek_pull and will
activate a new chain. This ends up causing the expose lock being taken
in _pad_added_cb in decodebin.
Meanwhile, a state change is triggered on thread 1, which takes the
expose lock in decodebin in gst_decode_bin_change_state, then frees
the previous chain, which ends up calling gst_pad_stop_task on the
demuxer's task, which in turn takes the demuxer's sink pad's stream
lock, deadlocking as both threads are now waiting for each other.
https://bugzilla.gnome.org/show_bug.cgi?id=741355
Consider pipeline: gst-launch-1.0 playbin uri=http://example.com/a.ogg
Consider 128kbit audio stream.
As soon as uridecodebin detects the bitrate, it configures its input
queue2 max-size to 32000 bytes.
The 2MB buffer in multiqueue is nearly 2 orders of magnitude bigger.
This non-deterministically drives the queue2 buffer level anywhere from
100% to 0% until the multiqueue is filled.
This patch sets multiqueue size to 5 buffers early in no_more_pads_cb.
Partly reverts commit db771185ed.
https://bugzilla.gnome.org/show_bug.cgi?id=740689
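Schematically, the early setting amounts to something like this sketch (the real patch goes through decodebin's own queue-size handling):

    /* In no_more_pads_cb: limit the multiqueue to a handful of buffers so
     * that queue2 upstream stays in control of the buffering */
    g_object_set (multiqueue, "max-size-buffers", 5, NULL);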
Decodebin has already added the element to the bin and should only
select caps-compatible pads. It should disable the pad link checks
to avoid doing those checks again.
https://bugzilla.gnome.org/show_bug.cgi?id=742885
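In GStreamer terms that means linking with the checks disabled, roughly (the pad variables are placeholders):

    /* The element was already selected based on compatible caps, so skip
     * the caps compatibility checks when linking */
    gst_pad_link_full (srcpad, sinkpad, GST_PAD_LINK_CHECK_NOTHING);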
Before, we were setting them to PAUSED and (much) later connecting to
their source pad's caps notify signal.
There was a race where that demuxer pushed caps and later a buffer
on its source pad while we were not yet connected to its source pad's caps
notify signal, so decodebin missed the information and did not keep on
building the pipeline on the CAPS event; the demuxer then posted an ERROR
(not linked) message on the bus. This needs to be done for 'simple
demuxers' because those have one ALWAYS source pad, unlike usual demuxers
that have several dynamic source pads.
A "simple demuxer" is a demuxer that has one and only one ALWAYS source
pad.
https://bugzilla.gnome.org/show_bug.cgi?id=740693
There was a race where:
1) we would put the element to PAUSED
2) it would get data sent to it from upstream
3) it would thus send caps
4) caps_notify_cb would continue autoplugging
5) caps would flow downstream, the last pad would get exposed
6) we were still not done sending the sticky events
Taking the stream lock on the new element's sinkpad and only
releasing it when sticky events have all been sent prevents
the caps from reaching the source pad of the element before
we're all set.
https://bugzilla.gnome.org/show_bug.cgi?id=740694
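Schematically the fix looks like this sketch, where element, dbin and send_sticky_events() are placeholders for the real decodebin code:

    GstPad *sinkpad = gst_element_get_static_pad (element, "sink");

    /* Hold the sinkpad's stream lock so no buffer or caps can travel
     * through the element until all sticky events have been sent */
    GST_PAD_STREAM_LOCK (sinkpad);
    send_sticky_events (dbin, sinkpad);   /* hypothetical helper */
    GST_PAD_STREAM_UNLOCK (sinkpad);

    gst_object_unref (sinkpad);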
If there are two parser elements available for the same media format,
decodebin autoplugs an extra capsfilter and parser irrespective
of caps and rank. So restrict decodebin from autoplugging multiple parser
elements back to back in adjacent positions within a single DecodeChain
for the same media format.
https://bugzilla.gnome.org/show_bug.cgi?id=738416
If we had plugins but an error occurred, we only include the error message
caused by it; otherwise we include the codec description as generated
from the caps.
This allows detecting which exact codec was missing instead of getting a
generic "no suitable decoders found" error message.
Gracefully handle switching groups in which all pads are dead ends.
This can happen when quickly switching programs on mpegts: as the
output is unaligned, it can happen that not enough data was accumulated
at the parsers to generate any buffers, causing the stream to receive EOS
before any data could be decoded.
To handle this scenario, the _expose function now also gets if there is
any next group to be exposed along with the list of endpads. If there are
no endpads and there is another group to expose it will switch to this next
group and then retry exposing the streams.
Also, the requirement to only switch from the chain that has the endpad had
to be modified to account for the case where the drainpad is NULL.
https://bugzilla.gnome.org/show_bug.cgi?id=733169
otherwise we're going to
a) start Parser/Converter before they are linked to their capsfilter,
breaking their negotiation of a proper stream format
b) start demuxers without having connected to their pad-added signals. We
miss pads and in the worst case don't link any pads at all
... and if this fails for whatever reason we skip the element and instead
try with the next element. This allows us to handle elements that fail
when setting caps on them by just skipping to the next alternative element.
They might fail to go to PAUSED, and when connecting them further
we might already expose their srcpads on decodebin if we're unlucky.
This prevents us from handling failures going to PAUSED gracefully.
If the caps query returned us fixed caps this doesn't mean yet
that these caps are actually complete (fields might be missing).
It allows us to make some decisions, but the selection of the next
element should be delayed, as only complete caps allow proper selection
of the next element.
Otherwise we might try to continue autoplugging e.g. for a specific
stream-format although the parser could convert to something else, thus giving
us potentially fewer options for decoders.
Aggregate buffering messages to only post the lower value
to avoid setting the pipeline to playing while any multiqueue
is still buffering.
There are 3 scenarios where the entries should be removed from
the list:
1) When decodebin is set to READY
2) When an element posts a 100% buffering (already implemented)
3) When a multiqueue is removed from decodebin.
We don't need to handle item 3 because it should only happen either
when 1 is happening or when a chained file is being played, in which
case 2 should already have happened for the previous stream to finish.
https://bugzilla.gnome.org/show_bug.cgi?id=726423
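A sketch of the aggregation, assuming the pending buffering messages are kept in a GList (names are illustrative):

    /* Return the message with the lowest buffering percentage; that is the
     * only one worth posting to the application */
    static GstMessage *
    lowest_buffering_message (GList * buffering_status)
    {
      GstMessage *smallest = NULL;
      gint min_perc = G_MAXINT, perc;
      GList *l;

      for (l = buffering_status; l; l = l->next) {
        gst_message_parse_buffering (GST_MESSAGE (l->data), &perc);
        if (perc < min_perc) {
          min_perc = perc;
          smallest = GST_MESSAGE (l->data);
        }
      }
      return smallest;
    }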
2 seconds might be too small for some container formats, e.g.
MPEGTS with some video codec and AAC/ADTS audio with 700ms
long buffers. The video branch of multiqueue can run full while
the audio branch is completely empty, especially because there
are usually more queues downstream on the audio branch.
Usually these buffers are multiple seconds large, and with a maximum
of 5 buffers in the multiqueue this can use a lot of memory. Lower
this to 2 for adaptive streaming demuxers.
If we are using an adaptive stream demuxer, which outputs a non-container
stream, we are putting another multiqueue after the *parser* following
the adaptive stream demuxer. We do not want to add another instance of
the same parser right after this multiqueue.
Otherwise we will emit buffering messages not just from the last
multiqueue but also from previous multiqueues... confusing the
application with different percentages during pre-rolling.
For adaptive streaming demuxer we insert a multiqueue after
this demuxer. This multiqueue will get one fragment per buffer.
Now for the case where we have a container stream inside these
buffers, another demuxer will be plugged and after this second
demuxer there will be a second multiqueue. This second multiqueue
will get smaller buffers and will be the one emitting buffering
messages.
If we don't have a container stream inside the fragment buffers,
we'll insert a multiqueue right after the next element following
the adaptive streaming demuxer. This is going to be a parser or
decoder, and will output smaller buffers.
Change the way autoplug-select is accumulated so that it's possible to have
multiple handlers. The handlers keep getting called as long as they keep
returning GST_AUTOPLUG_SELECT_TRY.
One practical example of when this is needed is when hooking into playbin's
uridecodebin, which is perhaps not very elegant but the only way to influence
which streams playbin autoplugs/exposes.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=723096
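The accumulator behaves roughly like this sketch (the function name here is made up):

    /* Keep invoking handlers while the last one returned
     * GST_AUTOPLUG_SELECT_TRY; any other value stops the emission and
     * becomes the signal's result */
    static gboolean
    autoplug_select_accumulator (GSignalInvocationHint * ihint,
        GValue * return_accu, const GValue * handler_return, gpointer data)
    {
      GstAutoplugSelectResult res = g_value_get_enum (handler_return);

      g_value_set_enum (return_accu, res);

      return res == GST_AUTOPLUG_SELECT_TRY;
    }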
Otherwise we're going to deadlock forever because no autoplugging
happens without having caps, but caps can never be sent because
we're blocking.
Serialized queries before caps should never be sent unless really
necessary.
Otherwise we will only block after the serialized, non-sticky event
after the CAPS event or the first buffer. If we're waiting for another
pad to finish autoplugging after we got final caps on this pad, it
will mean that we will let the ALLOCATION query pass although the
pad is not exposed yet.
Remove the byte limit for adaptive http streaming. Because some fragments might
be very big, we might need a lot of buffering. I also suspect another problem
where data is actually missing and things go out of sync somehow.
When we disable buffering in the more upstream multiqueue elements,
we need to also update the queue limits. In particular, the max_size_time should
be set to 0 or else we might simply deadlock.
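In terms of multiqueue properties, that boils down to roughly:

    /* When turning buffering off on an upstream multiqueue, also lift its
     * time limit, otherwise a full time-limited queue can deadlock */
    g_object_set (multiqueue,
        "use-buffering", FALSE,
        "max-size-time", (guint64) 0,
        NULL);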
When we have a scenario of demuxers linked to demuxers, decodebin2
will create multiqueues at different levels of the pipeline. The problem
is that only the lowest multiqueues should do the buffering messaging,
as they are the ones handling the raw stream data.
When all multiqueues are doing buffering, the upper ones can handle
large buffers that easily fill them, moving from 0% to 100% from
buffer to buffer, causing too many buffering messages to be posted.
This hangs the pipeline unnecessarily and might lead to deadlocks.
Decodebin2's chains store a next_groups list that was being handled as
if it could only have a single element. This is true for most chained
stream scenarios, where streams do not change very often.
In more stressful scenarios, like adaptive streams, those changes can
happen very often and in short time intervals. This could confuse
decodebin2, as this list was always being used as a single-element list.
This patch makes it handle the list as a real list, iterating over it
instead of always picking the first element as the correct one.
Even if the chain hasn't been 'handled' in this switching round,
report it as drained so upper chains/groups know about it.
This makes switching happen on upper levels of the groups/chain
trees.
The _decode_bin_compare_factories_func() should return a negative
value if the ranks of both PluginFeatures are equal and the name of the
first PluginFeature comes before the second one (i.e. ascending order).
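For illustration, the intended ordering corresponds to a comparison like this sketch (not the literal function):

    /* Sort factories by rank, highest first; when ranks are equal fall
     * back to the factory name in ascending order */
    static gint
    compare_factories_sketch (GstPluginFeature * f1, GstPluginFeature * f2)
    {
      guint rank1 = gst_plugin_feature_get_rank (f1);
      guint rank2 = gst_plugin_feature_get_rank (f2);

      if (rank1 != rank2)
        return (rank2 > rank1) ? 1 : -1;

      /* Equal rank: negative when f1's name comes before f2's (ascending) */
      return g_strcmp0 (gst_plugin_feature_get_name (f1),
          gst_plugin_feature_get_name (f2));
    }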