Change the way autoplug-select is accumulated so that it's possible to have
multiple handlers. The handlers keep getting called as long as they keep
returning GST_AUTOPLUG_SELECT_TRY.
One practical example of when this is needed is when hooking into playbin's
uridecodebin, which is perhaps not very elegant but the only way to influence
which streams playbin autoplugs/exposes.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=723096
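For illustration, an application hooking into playbin's internal uridecodebin could chain a handler roughly like this. This is a sketch, not part of the patch itself: the GstAutoplugSelectResult values are redefined because gstplay-enum.h is not a public header, and matching the uridecodebin by object name is only one (hacky) way to find it.

    #include <gst/gst.h>

    /* These values mirror GstAutoplugSelectResult from gstplay-enum.h,
     * which is not installed, so applications typically redefine them. */
    typedef enum {
      GST_AUTOPLUG_SELECT_TRY,
      GST_AUTOPLUG_SELECT_EXPOSE,
      GST_AUTOPLUG_SELECT_SKIP
    } GstAutoplugSelectResult;

    static GstAutoplugSelectResult
    on_autoplug_select (GstElement * bin, GstPad * pad, GstCaps * caps,
        GstElementFactory * factory, gpointer user_data)
    {
      GstStructure *s = gst_caps_get_structure (caps, 0);

      /* Example policy: don't expose text streams. */
      if (g_str_has_prefix (gst_structure_get_name (s), "text/"))
        return GST_AUTOPLUG_SELECT_SKIP;

      /* TRY lets autoplugging continue; with the accumulator change, any
       * further connected handlers are called as well. */
      return GST_AUTOPLUG_SELECT_TRY;
    }

    static void
    on_element_added (GstBin * playbin, GstElement * element, gpointer user_data)
    {
      /* uridecodebin is created internally by playbin; matching on the
       * auto-generated object name is one way to get at it. */
      if (g_str_has_prefix (GST_OBJECT_NAME (element), "uridecodebin"))
        g_signal_connect (element, "autoplug-select",
            G_CALLBACK (on_autoplug_select), NULL);
    }

The second callback would be attached with g_signal_connect (playbin, "element-added", G_CALLBACK (on_element_added), NULL) before the pipeline starts.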
Otherwise we're going to deadlock forever because no autoplugging
happens without having caps, but caps can never be sent because
we're blocking.
Serialized queries before caps should never be sent unless really
necessary.
Otherwise we will only block after the serialized, non-sticky event
following the CAPS event, or after the first buffer. If we're waiting
for another pad to finish autoplugging after we got final caps on this
pad, we would let the ALLOCATION query pass even though the pad is not
exposed yet.
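For context, decodebin holds back not-yet-exposed pads with a blocking pad probe. A minimal sketch of the kind of probe behaviour described above (illustrative only, not the literal decodebin code):

    static GstPadProbeReturn
    block_probe_cb (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
    {
      if (GST_PAD_PROBE_INFO_TYPE (info) & GST_PAD_PROBE_TYPE_EVENT_DOWNSTREAM) {
        GstEvent *event = GST_PAD_PROBE_INFO_EVENT (info);

        /* Sticky events (STREAM_START, CAPS, ...) must go through,
         * otherwise autoplugging can never progress. */
        if (GST_EVENT_IS_STICKY (event))
          return GST_PAD_PROBE_PASS;
      } else if (GST_PAD_PROBE_INFO_TYPE (info) &
          GST_PAD_PROBE_TYPE_QUERY_DOWNSTREAM) {
        GstQuery *query = GST_PAD_PROBE_INFO_QUERY (info);

        /* Non-serialized queries can be forwarded, but serialized ones
         * (e.g. ALLOCATION) have to wait until the pad is exposed. */
        if (!GST_QUERY_IS_SERIALIZED (query))
          return GST_PAD_PROBE_PASS;
      }

      /* Block on buffers, serialized non-sticky events and serialized
       * queries. */
      return GST_PAD_PROBE_OK;
    }

    /* installed with:
     *   gst_pad_add_probe (pad,
     *       GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM |
     *       GST_PAD_PROBE_TYPE_QUERY_DOWNSTREAM,
     *       block_probe_cb, NULL, NULL);
     */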
Remove the byte limit for adaptive http streaming. Because some fragments might
be very big, we might need a lot of buffering. I also suspect another problem
where data is actually missing and things go out of sync somehow.
When we disable buffering in the more upstream multiqueue elements,
we need to also update the queue limits. In particular, the max_size_time should
be set to 0 or else we might simply deadlock.
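Conceptually, the adjustment for a non-last multiqueue boils down to something like the following (the property names are multiqueue's real ones, the helper function is just a sketch):

    /* Sketch: a multiqueue that should not do buffering messaging also
     * needs its time limit removed, otherwise it can fill up and
     * deadlock on very large fragments. */
    static void
    configure_upstream_multiqueue (GstElement * multiqueue)
    {
      g_object_set (multiqueue,
          "use-buffering", FALSE,
          "max-size-time", G_GUINT64_CONSTANT (0),  /* 0 == unlimited */
          NULL);
    }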
When we have a scenario of demuxers linked to demuxers, decodebin2
will create multiqueue at different levels of the pipeline. The problem
is that only the lowest multiqueues should do the buffering messaging,
as they are the ones handling the raw stream data.
When all multiqueues are doing buffering, the upper ones can handle
large buffers that easily fill them, moving from 0% to 100% from
buffer to buffer, causing too many buffering messages to be posted.
This hangs the pipeline unnecessarily and might lead to deadlocks.
Decodebin2's chains store a next_groups list that was being handled as
if it could only ever have a single element. This is true for most
chained-stream scenarios, where streams do not change very often.
In more stressful switching scenarios, like adaptive streaming, those
changes can happen very often and in short time intervals. This could
confuse decodebin2, as the list was always being treated as a
single-element list.
This patch makes it handle next_groups as a real list, iterating over
it instead of always picking the first element as the correct one.
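A simplified sketch of the shape of the change (names follow gstdecodebin2.c, but this is not the literal patch):

    /* Before, code effectively only ever looked at the head of the list:
     *   GstDecodeGroup *group = chain->next_groups->data;
     * Treating next_groups as a real list means iterating over it: */
    static gboolean
    chain_next_groups_drained (GstDecodeChain * chain)
    {
      GList *l;

      for (l = chain->next_groups; l != NULL; l = l->next) {
        GstDecodeGroup *group = l->data;

        if (!gst_decode_group_is_drained (group))  /* illustrative check */
          return FALSE;
      }

      return TRUE;
    }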
Even if the chain hasn't been 'handled' in this switching round,
report it as drained so upper chains/groups know about it.
This makes switching happen on the upper levels of the group/chain
trees.
The _decode_bin_compare_factories_func() should return a negative
value if the ranks of both PluginFeatures are equal and the name of the
first PluginFeature comes before that of the second one (== ascending order).
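A sketch of a compare function with that ordering (highest rank first, then factory name ascending); it mirrors what is described above rather than quoting the exact code:

    #include <string.h>

    static gint
    compare_factories_func (gconstpointer p1, gconstpointer p2)
    {
      GstPluginFeature *f1 = (GstPluginFeature *) p1;
      GstPluginFeature *f2 = (GstPluginFeature *) p2;
      gint diff;

      /* Higher rank sorts first. */
      diff = (gint) gst_plugin_feature_get_rank (f2) -
          (gint) gst_plugin_feature_get_rank (f1);
      if (diff != 0)
        return diff;

      /* Equal rank: ascending by name, i.e. a negative value when the
       * first feature's name comes before the second one's. */
      return strcmp (gst_plugin_feature_get_name (f1),
          gst_plugin_feature_get_name (f2));
    }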
There were two issues with the previous decodebin2 group switching algorithm:
Issue 1: It operated with no memory of what had been drained or not,
leading to repeated checks of chains/groups that were already drained.
Issue 2: When receiving an EOS, it only detected that a higher-level chain
was drained if it contained the pad receiving the EOS.
The following modifications have been applied:
- a new drained property has been added to GstDecodeChain
- both drained properties of chain/group are set as soon as they are detected
- the algorithm now tests against these values
See https://bugzilla.gnome.org/show_bug.cgi?id=685938
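Roughly, the shape of the change looks like this (heavily simplified; the real GstDecodeChain in gstdecodebin2.c has many more fields, and the helper below is hypothetical):

    struct _GstDecodeChain
    {
      /* ... existing fields ... */
      gboolean drained;  /* TRUE as soon as the chain is known drained */
    };

    static gboolean
    check_chain_drained (GstDecodeChain * chain)
    {
      /* Remember the result so later switching rounds don't re-check
       * chains/groups that are already known to be drained. */
      if (chain->drained)
        return TRUE;

      chain->drained = chain_endpads_got_eos (chain);  /* hypothetical helper */
      return chain->drained;
    }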
Should fix "cannot register existing type `GstPlaybinSelectorPad'" warnings
and subsequent errors when creating multiple players at the same time.
This allows the following use-cases to expose the group and pads
before an ALLOCATION query comes through:
* Single stream use-cases
* Multi stream use-cases where all streams sent the CAPS event before
the first ALLOCATION query
Some cases will still make the initial ALLOCATION query fail though,
which isn't optimal, but not fatal (it will recover when pads are
exposed, a RECONFIGURE event is sent upstream and elements can
re-send an ALLOCATION query which will reach downstream elements).
https://bugzilla.gnome.org/show_bug.cgi?id=680262
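For reference, the recovery relies on the standard RECONFIGURE mechanism; a generic sketch of that pattern (not decodebin-specific code) is:

    /* Downstream side: once the pad is linked/exposed, ask upstream to
     * renegotiate so the previously failed ALLOCATION query is retried. */
    static void
    request_renegotiation (GstPad * sinkpad)
    {
      gst_pad_push_event (sinkpad, gst_event_new_reconfigure ());
    }

    /* Upstream side: in the streaming code, notice the pending
     * reconfigure and redo negotiation, including re-sending the
     * ALLOCATION query. */
    static void
    maybe_renegotiate (GstPad * srcpad)
    {
      if (gst_pad_check_reconfigure (srcpad)) {
        /* renegotiate caps and re-send the ALLOCATION query here */
      }
    }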
A caps event is also used to establish that a stream has prerolled.
Without this, we end up allowing negotiation queries to fail, which
results in decoders (and other elements) not being configured with the
most optimal settings right from the start.