Also improve the waiting condition for stream switches, which previously
assumed that the condition variable only stops waiting once it is
signalled. The documentation says there can be spurious wakeups, so the
predicate has to be re-checked in a loop.
https://bugzilla.gnome.org/show_bug.cgi?id=736655
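A rough sketch of that corrected waiting pattern (the mutex, cond and
flag below are made-up names, not the element's actual fields):

  #include <glib.h>

  static GMutex lock;
  static GCond cond;
  static gboolean stream_switching = FALSE;

  /* Wait until the stream switch is done; loop to survive spurious
   * wakeups instead of assuming a single g_cond_wait() is enough. */
  static void
  wait_for_switch_done (void)
  {
    g_mutex_lock (&lock);
    while (stream_switching)
      g_cond_wait (&cond, &lock);
    g_mutex_unlock (&lock);
  }

  static void
  signal_switch_done (void)
  {
    g_mutex_lock (&lock);
    stream_switching = FALSE;
    g_cond_signal (&cond);
    g_mutex_unlock (&lock);
  }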
Change the GAP events that are currently sent from the chain function of
the current pad to all other EOS pads. They should instead be sent from
their own streaming threads.
https://bugzilla.gnome.org/show_bug.cgi?id=736655
Wait in the event function when EOS is received until all pads are EOS
and then forward the EOS event from each pad's own event function.
Also send a new GAP event for EOS pads from the event function whenever
going from PLAYING->PAUSED by shortly waking up the GCond. This is needed
to allow sinks to pre-roll again, as they have not received EOS yet
because we blocked it, but they will also never get data again.
https://bugzilla.gnome.org/show_bug.cgi?id=736655
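For illustration only, sending such a GAP event from a pad's own
streaming thread could look roughly like this ('pad' and 'position' are
placeholders; the real element derives the position from its current
segment):

  #include <gst/gst.h>

  /* Push a GAP event downstream so the sink can pre-roll although the
   * real EOS event is still being held back. */
  static void
  push_gap_for_eos_pad (GstPad * pad, GstClockTime position)
  {
    GstEvent *gap = gst_event_new_gap (position, GST_CLOCK_TIME_NONE);

    if (!gst_pad_push_event (pad, gap))
      g_warning ("failed to push GAP event");
  }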
Consider the pipeline gst-launch-1.0 playbin uri=http://example.com/a.ogg
with a 128 kbit/s audio stream.
As soon as uridecodebin detects the bitrate, it configures its input
queue2 max-size to 32000 bytes.
The 2MB buffer in multiqueue is nearly 2 orders of magnitude bigger.
This non-deterministically drives the queue2 buffer level anywhere from
100% to 0% until multiqueue is filled.
This patch sets multiqueue size to 5 buffers early in no_more_pads_cb.
Partly reverts commit db771185ed.
https://bugzilla.gnome.org/show_bug.cgi?id=740689
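A hedged sketch of that idea (the callback wiring and the multiqueue
lookup are simplified placeholders; only the property name is real):

  #include <gst/gst.h>

  /* Called once the source element has exposed all its pads: clamp the
   * multiqueue to a few buffers so queue2 upstream stays in control of
   * buffering. */
  static void
  no_more_pads_cb (GstElement * element, gpointer user_data)
  {
    GstElement *multiqueue = GST_ELEMENT (user_data);

    g_object_set (multiqueue, "max-size-buffers", 5, NULL);
  }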
Decodebin has already added the element to the bin and should only
select caps-compatible pads. It should disable the pad link checks
to avoid doing them again.
https://bugzilla.gnome.org/show_bug.cgi?id=742885
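A sketch of what skipping the checks looks like with the core API (the
pad variables are placeholders):

  #include <gst/gst.h>

  /* Caps compatibility was already verified when the element was
   * selected, so the extra checks done by gst_pad_link() can be
   * skipped. */
  static gboolean
  link_without_checks (GstPad * srcpad, GstPad * sinkpad)
  {
    GstPadLinkReturn ret;

    ret = gst_pad_link_full (srcpad, sinkpad, GST_PAD_LINK_CHECK_NOTHING);
    return GST_PAD_LINK_SUCCESSFUL (ret);
  }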
Create a function to do the pad cleanup of the GstSourceCombine struct
and use it so we do not forget to also clean up the sink pad, fixing a
memory leak.
https://bugzilla.gnome.org/show_bug.cgi?id=741198
Before, we were setting them to PAUSED and (much) later connecting to
the caps notify signal of their source pad.
There was a race where the demuxer pushed caps and then a buffer on its
source pad while we were not yet connected to that caps notify signal,
so decodebin missed the information and did not keep building the
pipeline on the CAPS event, and the demuxer posted an ERROR (not linked)
message on the bus. This needs to be done for 'simple demuxers' because
those have one ALWAYS source pad, unlike usual demuxers that have
several dynamic source pads.
A "simple demuxer" is a demuxer that has one and only one ALWAYS source
pad.
https://bugzilla.gnome.org/show_bug.cgi?id=740693
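Schematically (the pad name "src" and caps_notify_cb are placeholders
for decodebin's real code), the ordering is:

  #include <gst/gst.h>

  static void
  caps_notify_cb (GstPad * pad, GParamSpec * pspec, gpointer user_data)
  {
    /* continue autoplugging based on the new caps (omitted) */
  }

  /* For a 'simple demuxer' with a single ALWAYS source pad, connect to
   * the caps notify signal *before* the state change so no CAPS event
   * can be missed. */
  static void
  activate_simple_demuxer (GstElement * demuxer, gpointer user_data)
  {
    GstPad *srcpad = gst_element_get_static_pad (demuxer, "src");

    g_signal_connect (srcpad, "notify::caps",
        G_CALLBACK (caps_notify_cb), user_data);
    gst_object_unref (srcpad);

    gst_element_set_state (demuxer, GST_STATE_PAUSED);
  }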
There was a race where:
1) we would put the element to PAUSED
2) it would get data sent to it from upstream
3) it would thus send caps
4) caps_notify_cb would continue autoplugging
5) caps would flow downstream, the last pad would get exposed
6) we were still not done sending the sticky events
Taking the stream lock on the new element's sinkpad and only
releasing it when sticky events have all been sent prevents
the caps from reaching the source pad of the element before
we're all set.
https://bugzilla.gnome.org/show_bug.cgi?id=740694
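The stream lock of a GstPad is recursive, so a hedged sketch of the idea
(the pad and event list are placeholders) is simply:

  #include <gst/gst.h>

  /* Hold the sink pad's stream lock while the sticky events are sent;
   * the element cannot push caps or buffers out of its source pad
   * until the lock is released. */
  static void
  send_sticky_events_locked (GstPad * sinkpad, GList * sticky_events)
  {
    GList *l;

    GST_PAD_STREAM_LOCK (sinkpad);
    for (l = sticky_events; l != NULL; l = l->next)
      gst_pad_send_event (sinkpad, gst_event_ref (GST_EVENT (l->data)));
    GST_PAD_STREAM_UNLOCK (sinkpad);
  }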
Otherwise the following can happen:
1. set mute=true
2. play media1 (Ok)
3. play media without audio (audiochain removed)
4. play media2 (audiochain created, mute=*false*)
https://bugzilla.gnome.org/show_bug.cgi?id=740675
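A rough sketch of the idea behind the fix (the element and the cached
values are placeholders for playsink's real state):

  #include <gst/gst.h>

  /* When the audio chain is (re)created, re-apply the cached mute and
   * volume values instead of relying on the defaults of the freshly
   * created elements. */
  static void
  apply_cached_audio_settings (GstElement * volume_element,
      gboolean cached_mute, gdouble cached_volume)
  {
    g_object_set (volume_element,
        "mute", cached_mute, "volume", cached_volume, NULL);
  }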
If there are two parser elements available for the same media format,
then decodebin autoplugs an extra capsfilter and parser irrespective
of caps and rank. So restrict decodebin from autoplugging multiple parser
elements back to back in adjacent positions within a single DecodeChain
for the same media format.
https://bugzilla.gnome.org/show_bug.cgi?id=738416
The "iradio-mode" property used to have a default FALSE value in HTTP
source elements but now it should default to TRUE or just do not exist
as a property so it is not really needed to set it any more in
uridecodebin.
Apart from that this code could've never worked as uridecodebin looks for a
string-typed iradio-mode property, but it's a boolean in all sources.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=725383
audioresample and videoscale are something the application will have to
do if required, but we can at least help here by adding the
audioconvert/videoconvert elements.
https://bugzilla.gnome.org/show_bug.cgi?id=735748
When switching URI from about-to-finish, playbin starts decoding the new
URI and the queue2 inside uridecodebin starts emitting buffering messages
immediately. However, the queue(s) inside playsink still have buffers to
play and the pipeline doesn't need to pause for buffering, so we should
not send those buffering messages up to the application, otherwise there
is an audible glitch caused by pausing the pipeline for a very short time.
https://bugzilla.gnome.org/show_bug.cgi?id=727255
If we had plugins and an error occurred, we only include the error
message caused by that; otherwise we include the codec description as
generated from the caps.
This allows detecting which exact codec was missing instead of getting a
generic "no suitable decoders found" error message.
Otherwise we might change some capsfeatures from ANY to the specific
value from the filter and not filter those out in case the sink doesn't
support them.
https://bugzilla.gnome.org/show_bug.cgi?id=734822
Gracefully handle switching groups in which all pads are deadend.
This can happen when quickly switching programs on mpegts: as the output
is unaligned, it can happen that not enough data was accumulated at the
parsers to generate any buffers, causing the stream to receive EOS before
any data can be decoded.
To handle this scenario, the _expose function is now also told whether
there is any next group to be exposed, along with the list of endpads. If
there are no endpads and there is another group to expose, it switches to
this next group and then retries exposing the streams.
Also, the requirement to only switch from the chain that has the endpad
had to be modified to handle the case where the drainpad is NULL.
https://bugzilla.gnome.org/show_bug.cgi?id=733169
Unsetting the DISCONT flag means we need to copy the buffer. This copy
operation mandates that all GstMemory be copyable, which is not always
the case.
https://bugzilla.gnome.org/show_bug.cgi?id=727409
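A short sketch of why clearing the flag is not free:

  #include <gst/gst.h>

  /* The flag can only be changed on a writable buffer, and
   * gst_buffer_make_writable() may have to copy the buffer (and all of
   * its GstMemory) if it is shared. */
  static GstBuffer *
  clear_discont (GstBuffer * buffer)
  {
    buffer = gst_buffer_make_writable (buffer);
    GST_BUFFER_FLAG_UNSET (buffer, GST_BUFFER_FLAG_DISCONT);
    return buffer;
  }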
We now add all our elements to uridecodebin *after*
GstBin::change_state(READY->PAUSED), so we need to post async-start
and async-done messages ourselves if we want to work async.
https://bugzilla.gnome.org/show_bug.cgi?id=733495
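Posting those messages boils down to something like the following
sketch ('dec' standing in for the uridecodebin instance):

  #include <gst/gst.h>

  static void
  post_async_start (GstElement * dec)
  {
    gst_element_post_message (dec,
        gst_message_new_async_start (GST_OBJECT_CAST (dec)));
  }

  static void
  post_async_done (GstElement * dec)
  {
    gst_element_post_message (dec,
        gst_message_new_async_done (GST_OBJECT_CAST (dec),
            GST_CLOCK_TIME_NONE));
  }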
otherwise we're going to
a) start Parser/Converter before they are linked to their capsfilter,
breaking their negotiation of a proper stream format
b) start demuxers without having connected to their pad-added signals, so
we miss pads and in the worst case don't link any pads at all
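Schematically (the callback names are placeholders for decodebin's real
ones), the ordering is:

  #include <gst/gst.h>

  static void
  pad_added_cb (GstElement * element, GstPad * pad, gpointer user_data)
  {
    /* link the new pad (omitted) */
  }

  /* Connect the dynamic-pad signals first, then change state, so no
   * pads added while going to PAUSED are missed. */
  static void
  activate_demuxer (GstElement * demuxer, gpointer user_data)
  {
    g_signal_connect (demuxer, "pad-added",
        G_CALLBACK (pad_added_cb), user_data);

    gst_element_set_state (demuxer, GST_STATE_PAUSED);
  }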
... and if this fails for whatever reason we skip the element and instead
try with the next element. This allows us to handle elements that fail
when setting caps on them by just skipping to the next alternative element.
They might fail to go to PAUSED, and when connecting them further
we might already expose their srcpads on decodebin if we're unlucky.
This prevents us from handling failures going to PAUSED gracefully.
If the caps query returned fixed caps, this doesn't yet mean that these
caps are actually complete (fields might be missing).
It allows us to make some decisions, but the selection of the next
element should be delayed, as only complete caps allow proper selection
of the next element.
Otherwise we might try to continue autoplugging e.g. for a specific
stream-format although the parser could convert to something else, thus giving
us potentially fewer options for decoders.
We can't convert to ANY capsfeatures, they are only there so that we
can pass through whatever downstream can support... but we definitely
don't want to return them to upstream.
When playing RTSP streams there will be one decodebin per stream. If some of
them fail because of a missing plugin we should not fail completely but play
the supported streams at least.
https://bugzilla.gnome.org/show_bug.cgi?id=730868
Aggregate buffering messages to only post the lowest value,
to avoid setting the pipeline to playing while any multiqueue
is still buffering.
There are 3 scenarios where the entries should be removed from
the list:
1) When decodebin is set to READY
2) When an element posts a 100% buffering (already implemented)
3) When a multiqueue is removed from decodebin.
We don't need to handle item 3 because it should only happen when
either 1 is happening or when playing a chained file, in which case 2
should have happened for the previous stream to finish.
https://bugzilla.gnome.org/show_bug.cgi?id=726423
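A hedged sketch of such aggregation (the hash table keyed by message
source is a placeholder for decodebin's real bookkeeping):

  #include <gst/gst.h>

  static gint
  lowest_buffering_percent (GHashTable * percent_by_element)
  {
    GHashTableIter iter;
    gpointer key, value;
    gint lowest = 100;

    g_hash_table_iter_init (&iter, percent_by_element);
    while (g_hash_table_iter_next (&iter, &key, &value))
      lowest = MIN (lowest, GPOINTER_TO_INT (value));

    return lowest;
  }

  /* Remember the latest percentage per posting element and forward
   * only the lowest one, so the application never sees 100% while
   * another multiqueue is still filling. */
  static void
  handle_buffering_message (GstElement * bin, GstMessage * msg,
      GHashTable * percent_by_element)
  {
    gint percent;

    gst_message_parse_buffering (msg, &percent);
    g_hash_table_insert (percent_by_element,
        GST_MESSAGE_SRC (msg), GINT_TO_POINTER (percent));

    gst_element_post_message (bin,
        gst_message_new_buffering (GST_OBJECT_CAST (bin),
            lowest_buffering_percent (percent_by_element)));
  }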
Otherwise we might end up inside the callback without having stored
the probe id... then try to remove that probe (not!) from the callback
and wait forever for the pad to unblock.
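A sketch of how guarding the stored id with the same lock on both sides
avoids that race (the struct and names are made up; the callback runs
from the streaming thread for a BLOCK probe):

  #include <gst/gst.h>

  typedef struct
  {
    GMutex lock;
    gulong block_id;
  } Blocker;

  static GstPadProbeReturn
  block_cb (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
  {
    Blocker *self = user_data;

    /* Taking the lock guarantees block_id has been stored before the
     * callback acts on it. */
    g_mutex_lock (&self->lock);
    g_mutex_unlock (&self->lock);

    return GST_PAD_PROBE_OK;
  }

  static void
  block_pad (Blocker * self, GstPad * pad)
  {
    g_mutex_lock (&self->lock);
    self->block_id = gst_pad_add_probe (pad,
        GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM, block_cb, self, NULL);
    g_mutex_unlock (&self->lock);
  }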
They are very confusing for people, and more often than not
also just not very accurate. Seeing 'last reviewed: 2005' in
your docs is not very confidence-inspiring. Let's just remove
those comments.
This provides an audio-filter and video-filter property to allow
applications to set filter elements/bins. The idea is that these will be
applied if possible -- for non-raw sinks, the filters will be skipped.
If the application wishes to force the application of the filters, this
can be done by setting the new flag introduced on playsink -
GST_PLAY_FLAG_FORCE_FILTERS.
https://bugzilla.gnome.org/show_bug.cgi?id=679031
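From an application's point of view this could be used roughly as
follows (the filter element names are arbitrary examples):

  #include <gst/gst.h>

  static void
  set_playbin_filters (GstElement * playbin)
  {
    GstElement *audio_filter, *video_filter;

    audio_filter = gst_element_factory_make ("audiokaraoke", NULL);
    video_filter = gst_element_factory_make ("videoflip", NULL);

    g_object_set (playbin,
        "audio-filter", audio_filter, "video-filter", video_filter, NULL);

    /* To force the filters even for non-raw sinks, the application
     * would additionally OR GST_PLAY_FLAG_FORCE_FILTERS into playbin's
     * "flags" property (enum value not repeated here). */
  }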
2 seconds might be too small for some container formats, e.g.
MPEGTS with some video codec and AAC/ADTS audio with 700ms
long buffers. The video branch of multiqueue can run full while
the audio branch is completely empty, especially because there
are usually more queues downstream on the audio branch.
Usually these buffers are multiple seconds long, and with a maximum
of 5 buffers in the multiqueue this can use a lot of memory. Lower
this to 2 for adaptive streaming demuxers.
If we have the peer caps and a caps filter, return peer_caps +
intersect_first (filter, converter_caps) instead of
intersect_first (filter, peer_caps + converter_caps), which preserves
the downstream caps preference order.
https://bugzilla.gnome.org/show_bug.cgi?id=724893
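In terms of the core caps API, the described ordering roughly
corresponds to the following sketch (all caps variables are
placeholders; peer_caps is assumed to have been queried with the filter
already applied):

  #include <gst/gst.h>

  static GstCaps *
  combine_caps (GstCaps * peer_caps, GstCaps * converter_caps,
      GstCaps * filter)
  {
    GstCaps *filtered_converter;

    filtered_converter = gst_caps_intersect_full (filter, converter_caps,
        GST_CAPS_INTERSECT_FIRST);

    /* gst_caps_merge() takes ownership of both arguments and keeps the
     * order, so downstream's preferences stay first. */
    return gst_caps_merge (gst_caps_ref (peer_caps), filtered_converter);
  }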
If we are using an adaptive stream demuxer, which outputs a non-container
stream, we are putting another multiqueue after the *parser* following
the adaptive stream demuxer. We do not want to add another instance of
the same parser right after this multiqueue.
Otherwise we will emit buffering messages not just from the last
multiqueue but also from previous multiqueues... confusing the
application with different percentages during pre-rolling.
For adaptive streaming demuxer we insert a multiqueue after
this demuxer. This multiqueue will get one fragment per buffer.
Now for the case where we have a container stream inside these
buffers, another demuxer will be plugged and after this second
demuxer there will be a second multiqueue. This second multiqueue
will get smaller buffers and will be the one emitting buffering
messages.
If we don't have a container stream inside the fragment buffers,
we'll insert a multiqueue right after the next element following the
adaptive streaming demuxer. This is going to be a parser or decoder,
and it will output smaller buffers.