If a pipeline fails to preroll, the sinks may be put into READY
state by playbin's sink activation but never set on playsink; they
are then not managed by any GstBin and keep their READY state until
they are unreffed, leading to a warning.
Prevent this by always forcing them to NULL when deactivating a group.
https://bugzilla.gnome.org/show_bug.cgi?id=708789
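A minimal sketch of the idea (the helper name is ours, not the
actual playbin code): a sink that was activated but never handed to
playsink has no parent bin managing its state, so it must be shut
down explicitly.

    #include <gst/gst.h>

    /* Hypothetical helper for the group-deactivation path: force an
     * orphaned sink to NULL before dropping the reference so it does
     * not linger in READY. */
    static void
    shutdown_orphaned_sink (GstElement *sink)
    {
      if (sink == NULL)
        return;

      gst_element_set_state (sink, GST_STATE_NULL);
      gst_object_unref (sink);
    }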
Otherwise we will remove the bus that would proxy messages to playsink
and never set it again. If the sink is already in playsink, all failures
are fatal anyway as it's either a sink that worked before or one that
was set by the user.
https://bugzilla.gnome.org/show_bug.cgi?id=701997
playbin will now only activate the sinks in a single place and
will never change the states of any sinks that are owned by
playsink.
Also handle text-sinks the same way as audio/video sinks inside
playbin.
This makes sure the application gets any context related messages and
can do whatever is required to a) get the sink a context or b) share
the context with other elements in the pipeline.
The proxying is necessary because the sink is not a child element of
playbin, but instead will at a later point be a child of some bin
inside playsink.
https://bugzilla.gnome.org/show_bug.cgi?id=700967
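With the proxying in place, the application-side pattern looks
roughly like this (the context type string is a placeholder, not a
real GStreamer context type):

    #include <gst/gst.h>

    /* Answer a sink's NEED_CONTEXT message from the pipeline bus. */
    static gboolean
    bus_watch (GstBus *bus, GstMessage *msg, gpointer user_data)
    {
      if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_NEED_CONTEXT) {
        const gchar *ctx_type = NULL;

        gst_message_parse_context_type (msg, &ctx_type);
        if (g_strcmp0 (ctx_type, "gst.example.context") == 0) {
          GstContext *ctx = gst_context_new ("gst.example.context", TRUE);

          /* ... fill the context's structure with the shared resource ... */
          gst_element_set_context (GST_ELEMENT (GST_MESSAGE_SRC (msg)), ctx);
          gst_context_unref (ctx);
        }
      }
      return TRUE;
    }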
Otherwise we're going to deadlock forever: no autoplugging happens
without caps, but caps can never be sent because we're blocking.
Serialized queries before caps should never be sent unless really
necessary.
We found a case where untranslated values were being passed from the
proxy to the underlying channel, causing bad color balance values
in some setups.
Thanks to Sebastian Dröge for clarifying how the code works, and
suggesting the fix.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=701202
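The missing translation is a plain range mapping; a sketch using the
public GstColorBalanceChannel fields (the helper itself is ours):

    #include <gst/video/colorbalance.h>

    /* Map a value from the proxy channel's [min,max] range into the
     * underlying channel's [min,max] range (assumes non-empty ranges). */
    static gint
    translate_value (GstColorBalanceChannel *from,
        GstColorBalanceChannel *to, gint value)
    {
      gdouble rel = (gdouble) (value - from->min_value) /
          (from->max_value - from->min_value);

      return to->min_value +
          (gint) (rel * (to->max_value - to->min_value) + 0.5);
    }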
This allows choosing something other than input-selector for
multiple audio/video/text streams; e.g. an adder could be used
for audio.
This is needed, for example, to implement some of the more
advanced HTML5 video features.
https://bugzilla.gnome.org/show_bug.cgi?id=698851
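A minimal usage sketch (any element with request sink pads that
merges streams will do):

    /* Mix all audio streams with adder instead of selecting one
     * with the default input-selector. */
    static GstElement *
    make_playbin_with_mixed_audio (void)
    {
      GstElement *playbin = gst_element_factory_make ("playbin", NULL);
      GstElement *combiner = gst_element_factory_make ("adder", NULL);

      g_object_set (playbin, "audio-stream-combiner", combiner, NULL);

      return playbin;
    }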
Add the actual decoder/parser/etc caps at the very end to
make sure we don't cause empty caps to be returned, e.g.
if a parser asks us but a decoder is required after it
because no sink can handle the format directly.
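Roughly the intent (not the literal decodebin code; peer_caps and
factory_caps are stand-ins, and gst_caps_merge() appends the second
caps' structures after the first's):

    /* Keep the specific downstream caps first and append the
     * element's own caps last, so the result is never empty just
     * because no sink handles the format directly. */
    static GstCaps *
    build_result_caps (GstCaps *peer_caps, GstCaps *factory_caps)
    {
      return gst_caps_merge (gst_caps_ref (peer_caps),
          gst_caps_ref (factory_caps));
    }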
Otherwise we would only block on the first serialized, non-sticky
event after the CAPS event, or on the first buffer. If we're waiting
for another pad to finish autoplugging after this pad got its final
caps, we would let the ALLOCATION query pass even though the pad is
not exposed yet.
Otherwise we accumulate more and more queue2 elements, and each of
them starts a thread that does nothing but wait, every time
uridecodebin goes to PAUSED.
https://bugzilla.gnome.org/show_bug.cgi?id=699794
This makes it possible to take advantage of GSequence's O(log n)
lookups on the ~1000-element lists and to only iterate over the
<10-element lists. Previously the code iterated over the
~1000-element lists multiple times.
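A sketch of the pattern with GLib's GSequence (names are
illustrative):

    #include <string.h>
    #include <gst/gst.h>

    /* Keep factories sorted by name so GSequence can binary-search. */
    static gint
    cmp_factory_name (gconstpointer a, gconstpointer b, gpointer udata)
    {
      return strcmp (gst_plugin_feature_get_name ((GstPluginFeature *) a),
          gst_plugin_feature_get_name ((GstPluginFeature *) b));
    }

    static void
    add_factory (GSequence *factories, GstPluginFeature *factory)
    {
      /* O(log n) insertion into the ~1000-entry set */
      g_sequence_insert_sorted (factories, gst_object_ref (factory),
          cmp_factory_name, NULL);
    }

    static gboolean
    has_factory (GSequence *factories, GstPluginFeature *factory)
    {
      /* O(log n) lookup instead of an O(n) list walk */
      return g_sequence_lookup (factories, factory,
          cmp_factory_name, NULL) != NULL;
    }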
Autoplug the decoder elements and sink elements based on the
number of common caps features if the ranks are the same.
This also helps to autoplug the hardware decoder and hardware
renderer.
https://bugzilla.gnome.org/show_bug.cgi?id=698712
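Conceptually, the tie-break counts shared caps features;
count_common_features() below is a hypothetical helper, not
decodebin API:

    #include <gst/gst.h>

    /* Count how many of @caps' features also appear in @target. */
    static guint
    count_common_features (GstCaps *caps, GstCaps *target)
    {
      guint i, j, n = 0;

      for (i = 0; i < gst_caps_get_size (caps); i++) {
        GstCapsFeatures *f = gst_caps_get_features (caps, i);

        for (j = 0; j < gst_caps_get_size (target); j++) {
          if (gst_caps_features_is_equal (f,
                  gst_caps_get_features (target, j)))
            n++;
        }
      }
      return n;
    }

With equal ranks, the factory whose pad template caps share more
features with the stream caps is tried first.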
Remove the byte limit for adaptive HTTP streaming. Because some
fragments might be very big, we might need a lot of buffering.
I also suspect another problem where data is actually missing and
things go out of sync somehow.
When we disable buffering in the more upstream multiqueue elements,
we need to also update the queue limits. In particular, the max_size_time should
be set to 0 or else we might simply deadlock.
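In property terms the change amounts to something like this sketch:

    /* Lift the byte limit (big fragments) and the time limit
     * (potential deadlock) on an upstream multiqueue. */
    static void
    lift_multiqueue_limits (GstElement *multiqueue)
    {
      g_object_set (multiqueue,
          "max-size-bytes", (guint) 0,
          "max-size-time", (guint64) 0,
          NULL);
    }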
When we have a scenario of demuxers linked to demuxers, decodebin2
will create multiqueues at different levels of the pipeline. The
problem is that only the lowest multiqueues should do the buffering
messaging, as they are the ones handling the raw stream data.
When all multiqueues are doing buffering, the upper ones can receive
large buffers that easily fill them, jumping from 0% to 100% from
buffer to buffer and causing far too many buffering messages to be
posted. This hangs the pipeline unnecessarily and might lead to
deadlocks.
Decodebin2's chains store a next_groups list that was being handled
as if it could only have a single element. This is true for most
chained-stream scenarios, where streams change rather rarely.
In more stressful switching scenarios, like adaptive streaming, those
changes can happen very often and within short time intervals. This
could confuse decodebin2, as the list was always being used as a
single-element list.
This patch makes decodebin2 handle it as a real list, iterating over
it instead of always picking the first element as the correct one.
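The fix boils down to a proper iteration (sketch; process_group()
is a hypothetical helper):

    static void
    process_next_groups (GstDecodeChain *chain)
    {
      GList *l;

      /* Walk every pending group instead of assuming one entry. */
      for (l = chain->next_groups; l != NULL; l = l->next) {
        GstDecodeGroup *group = l->data;

        process_group (group);
      }
    }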
Even if the chain hasn't been 'handled' in this switching round,
report it as drained so upper chains/groups know about it.
This makes switching happen on upper levels of the group/chain
trees.
If a source element could be created for a URI but all elements
rejected the URI for some reason, propagate the error from the URI
handler instead of reporting a confusing 'no URI handler found for
protocol xyz' error. Fixes error reporting with dvb:// URIs when the
channel config file could not be found or parsed, or the channel
isn't listed.
https://bugzilla.gnome.org/show_bug.cgi?id=678892
Use a scheduling query to check whether the source element has
bandwidth limitations. If it does, on-disk buffering might be used.
If the source element doesn't handle the scheduling query, fall back
to checking the URI protocol against the hardcoded list of protocols
known to require buffering.
Fixes bug 693484.
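A sketch of the scheduling-query check (the helper name is ours):

    #include <gst/gst.h>

    static gboolean
    source_is_bandwidth_limited (GstElement *source)
    {
      GstQuery *query = gst_query_new_scheduling ();
      GstSchedulingFlags flags;
      gboolean limited = FALSE;

      if (gst_element_query (source, query)) {
        gst_query_parse_scheduling (query, &flags, NULL, NULL, NULL);
        limited = (flags & GST_SCHEDULING_FLAG_BANDWIDTH_LIMITED) != 0;
      }
      gst_query_unref (query);

      /* FALSE also when the query isn't handled; the caller then
       * falls back to the protocol list. */
      return limited;
    }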
The compare_factories_func() should return a negative value if the
ranks of both PluginFeatures are equal and the name of the first
PluginFeature sorts before that of the second one (== ascending
order).
Likewise, _decode_bin_compare_factories_func() should return a
negative value if the ranks of both PluginFeatures are equal and the
name of the first PluginFeature sorts before that of the second one
(== ascending order).
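The intended ordering, sketched (not the exact code of either
function):

    #include <string.h>
    #include <gst/gst.h>

    /* Higher rank first; on equal rank, ascending name order, i.e.
     * negative when @a's name sorts before @b's. */
    static gint
    compare_factories (gconstpointer a, gconstpointer b)
    {
      GstPluginFeature *fa = (GstPluginFeature *) a;
      GstPluginFeature *fb = (GstPluginFeature *) b;
      gint diff;

      diff = (gint) gst_plugin_feature_get_rank (fb) -
          (gint) gst_plugin_feature_get_rank (fa);
      if (diff != 0)
        return diff;

      return strcmp (gst_plugin_feature_get_name (fa),
          gst_plugin_feature_get_name (fb));
    }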