When playing RTSP streams there will be one decodebin per stream. If some of
them fail because of a missing plugin, we should not fail completely but at
least play the supported streams.
https://bugzilla.gnome.org/show_bug.cgi?id=730868
Aggregate buffering messages and only post the lowest value, to avoid
setting the pipeline to PLAYING while any multiqueue is still buffering.
There are 3 scenarios where the entries should be removed from
the list:
1) When decodebin is set to READY
2) When an element posts a 100% buffering (already implemented)
3) When a multiqueue is removed from decodebin.
Item 3 does not need to be handled explicitly, because it should only
happen either while 1 is happening or when playing a chained file, in
which case 2 should already have happened for the previous stream to
finish.
https://bugzilla.gnome.org/show_bug.cgi?id=726423
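A minimal sketch of the aggregation idea, with illustrative names rather
than the actual decodebin code: remember the last percentage posted by
each multiqueue and only forward the lowest one.

    #include <gst/gst.h>

    /* Illustrative only: one entry per multiqueue that has posted a
     * buffering message and has not yet reached 100%. */
    typedef struct
    {
      GstElement *element;
      gint percent;
    } BufferingEntry;

    /* The value to post to the application is the lowest of all
     * remembered percentages. */
    static gint
    lowest_buffering_percent (GList * entries)
    {
      gint lowest = 100;
      GList *l;

      for (l = entries; l != NULL; l = l->next) {
        BufferingEntry *entry = l->data;

        if (entry->percent < lowest)
          lowest = entry->percent;
      }
      return lowest;
    }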
Otherwise we might end up inside the callback before the probe id has been
stored, then fail to remove that probe from the callback and wait forever
for the pad to unblock.
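A rough sketch of the locking that avoids this race; MyStream, its lock
and block_id are made-up names for the example, not the actual decodebin
structures.

    #include <gst/gst.h>

    typedef struct
    {
      GMutex lock;
      gulong block_id;          /* id of the blocking pad probe */
    } MyStream;

    static GstPadProbeReturn
    block_cb (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
    {
      MyStream *stream = user_data;

      /* Taking the lock guarantees that gst_pad_add_probe() has returned
       * and block_id has been stored before we try to remove it. */
      g_mutex_lock (&stream->lock);
      gst_pad_remove_probe (pad, stream->block_id);
      stream->block_id = 0;
      g_mutex_unlock (&stream->lock);

      return GST_PAD_PROBE_OK;
    }

    static void
    block_pad (MyStream * stream, GstPad * pad)
    {
      /* Hold the lock while storing the id so the callback cannot run
       * against an unset block_id. */
      g_mutex_lock (&stream->lock);
      stream->block_id = gst_pad_add_probe (pad,
          GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM, block_cb, stream, NULL);
      g_mutex_unlock (&stream->lock);
    }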
They are very confusing for people, and more often than not
also just not very accurate. Seeing 'last reviewed: 2005' in
your docs is not very confidence-inspiring. Let's just remove
those comments.
The pointer was already NULL-checked in an early out, and as it is only
incremented by at most the size of the passed buffer, it could only
become NULL through an address wraparound.
While there, don't cast away const on a pointer.
Coverity 1139845
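A generic sketch of the pattern being described (not the actual code in
question): after the early out, the pointer is only advanced within the
buffer, so a second NULL check would be redundant.

    #include <glib.h>

    static gsize
    count_spaces (const guint8 * data, gsize size)
    {
      const guint8 *p;              /* stays const, nothing is cast away */
      gsize n = 0;

      if (data == NULL)             /* the early out; the only check needed */
        return 0;

      /* p only ever advances by at most 'size' bytes past 'data', so it
       * cannot become NULL short of an address wraparound. */
      for (p = data; p < data + size; p++) {
        if (*p == ' ')
          n++;
      }
      return n;
    }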
This provides audio-filter and video-filter properties to allow
applications to set filter elements/bins. The idea is that these will be
applied if possible -- for non-raw sinks, the filters will be skipped.
If the application wishes to force the application of the filters, this
can be done by setting the new flag introduced on playsink,
GST_PLAY_FLAG_FORCE_FILTERS.
https://bugzilla.gnome.org/show_bug.cgi?id=679031
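A rough usage sketch under a few assumptions: the properties are set on
playbin, 'audiokaraoke' stands in for any filter element, and the local
define mirrors the GST_PLAY_FLAG_FORCE_FILTERS bit added by this change
(GstPlayFlags is not installed as a public header).

    #include <gst/gst.h>

    /* Assumed to match GST_PLAY_FLAG_FORCE_FILTERS as introduced here */
    #define MY_PLAY_FLAG_FORCE_FILTERS (1 << 11)

    int
    main (int argc, char **argv)
    {
      GstElement *playbin, *filter;
      guint flags;

      gst_init (&argc, &argv);

      playbin = gst_element_factory_make ("playbin", NULL);
      filter = gst_element_factory_make ("audiokaraoke", NULL);

      /* Plug the filter into the raw audio path */
      g_object_set (playbin, "audio-filter", filter, NULL);

      /* Force the filter to be applied even for non-raw sinks */
      g_object_get (playbin, "flags", &flags, NULL);
      g_object_set (playbin, "flags", flags | MY_PLAY_FLAG_FORCE_FILTERS, NULL);

      gst_object_unref (playbin);
      return 0;
    }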
2 seconds might be too small for some container formats, e.g.
MPEGTS with some video codec and AAC/ADTS audio in 700ms long
buffers. The video branch of the multiqueue can run full while
the audio branch is completely empty, especially because there
are usually more queues downstream on the audio branch.
Usually these buffers are multiple seconds long, and keeping up to 5 of
them in the multiqueue can use a lot of memory. Lower the limit to 2
buffers for adaptive streaming demuxers.
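Illustrative of the limits being discussed (the 5 second figure is an
example, not taken from this text): for the multiqueue that follows an
adaptive streaming demuxer, each buffer is a whole fragment, so the
buffer-count limit is kept very low and a time limit does the real work.

    /* mq is the multiqueue plugged right after the adaptive streaming
     * demuxer; gst_init() has already been called. */
    GstElement *mq = gst_element_factory_make ("multiqueue", NULL);

    g_object_set (mq,
        "max-size-buffers", 2,              /* fragments are large; 2 is plenty */
        "max-size-bytes", 0,                /* no byte limit */
        "max-size-time", 5 * GST_SECOND,    /* 2s can starve e.g. MPEGTS + ADTS */
        NULL);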
The typefinder returns LIKELY for as little as one possible sync and
no bad sync (without even taking into account how much data was
looked at for that). It's generally not fit for purpose, so it should
never return anything as strong as LIKELY, even more so since it only
recognises one out of ten H263 files, and likes to mis-detect mp3s as
H263.
https://bugzilla.gnome.org/show_bug.cgi?id=700770
https://bugzilla.gnome.org/show_bug.cgi?id=725644
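An illustrative typefinder shape (not the real H263 typefinder): the
point is that a single possible sync should map to a weak suggestion,
never LIKELY; the thresholds below are made up.

    #include <gst/gst.h>

    static void
    example_h263_type_find (GstTypeFind * tf, gpointer user_data)
    {
      guint good_sync = 0, bad_sync = 0;

      /* ... scan the available data, counting good and broken syncs ... */

      if (good_sync > 0 && bad_sync == 0) {
        guint prob = GST_TYPE_FIND_POSSIBLE;

        /* Only many consecutive good syncs would justify a stronger answer */
        if (good_sync > 10)
          prob = GST_TYPE_FIND_LIKELY;

        gst_type_find_suggest_simple (tf, prob, "video/x-h263",
            "variant", G_TYPE_STRING, "itu", NULL);
      }
    }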
If we have the peer caps and a caps filter, return peer_caps +
intersect_first (filter, converter_caps) instead of
intersect_first (filter, peer_caps + converter_caps), preserving
downstream caps preference order.
https://bugzilla.gnome.org/show_bug.cgi?id=724893
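A sketch of the described ordering (the function and its arguments are
illustrative, using the names from the message above): the filter is
applied only to our own converter caps, and the peer caps are prepended
unchanged so downstream's preference order survives.

    #include <gst/gst.h>

    static GstCaps *
    combine_caps (GstCaps * peer_caps, GstCaps * converter_caps, GstCaps * filter)
    {
      GstCaps *filtered;

      /* Apply the filter only to the converter caps ... */
      filtered = gst_caps_intersect_full (filter, converter_caps,
          GST_CAPS_INTERSECT_FIRST);

      /* ... and prepend the (already filtered) peer caps unchanged. */
      return gst_caps_merge (gst_caps_copy (peer_caps), filtered);
    }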
If we are using an adaptive streaming demuxer that outputs a non-container
stream, we put another multiqueue after the *parser* that follows the
adaptive streaming demuxer. We do not want to add another instance of
the same parser right after this multiqueue.
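A sketch of the kind of check implied here (the helper name is made up):
a candidate factory is skipped if the element already plugged before the
multiqueue came from the same factory.

    #include <string.h>
    #include <gst/gst.h>

    static gboolean
    same_factory (GstElement * existing, GstElementFactory * candidate)
    {
      GstElementFactory *factory = gst_element_get_factory (existing);

      if (factory == NULL)
        return FALSE;

      return strcmp (gst_plugin_feature_get_name (GST_PLUGIN_FEATURE (factory)),
          gst_plugin_feature_get_name (GST_PLUGIN_FEATURE (candidate))) == 0;
    }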
Otherwise we will emit buffering messages not just from the last
multiqueue but also from previous multiqueues, confusing the
application with different percentages during pre-rolling.
For adaptive streaming demuxers we insert a multiqueue after the
demuxer. This multiqueue will get one fragment per buffer.
If there is a container stream inside these fragment buffers, another
demuxer will be plugged, and after this second demuxer there will be
a second multiqueue. This second multiqueue will get smaller buffers
and will be the one emitting buffering messages.
If there is no container stream inside the fragment buffers, we'll
insert the multiqueue further downstream, right after the next element
after the adaptive streaming demuxer. This is going to be a parser or
decoder, and will output smaller buffers.
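A minimal sketch of the effect described in the last two messages,
assuming it is driven through the multiqueue "use-buffering" property;
'new_mq' and 'prev_mq' are illustrative names, not the actual decodebin
bookkeeping.

    /* Whichever multiqueue is deepest in the chain posts the buffering
     * messages; when a further demuxer forces a new multiqueue
     * downstream, reporting moves to that new one. */
    g_object_set (new_mq, "use-buffering", TRUE, NULL);
    if (prev_mq != NULL)
      g_object_set (prev_mq, "use-buffering", FALSE, NULL);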