Fields related to stream handling (input_streams, output_streams,
slots, slot_id) were used completely unprotected until now.
This led to several races, especially when playing back RTSP streams.
To protect those fields, the OBJECT_LOCK cannot be used, as we
sometimes need to be able to post messages on the bus while holding
the lock.
decodebin3 already has a lock to manage stream selection, and in the end
it makes sense to protect all the stream management fields with the same
lock, which is why we reuse the SELECTION_LOCK here.
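A minimal sketch of the resulting pattern, assuming a
SELECTION_LOCK/SELECTION_UNLOCK macro pair around a GMutex; the type
and field names are illustrative rather than the exact decodebin3 code:

    /* Unlike the OBJECT_LOCK, this lock may be held while posting
     * messages on the bus. */
    #define SELECTION_LOCK(dbin)   g_mutex_lock (&(dbin)->selection_lock)
    #define SELECTION_UNLOCK(dbin) g_mutex_unlock (&(dbin)->selection_lock)

    static void
    add_input_stream (GstDecodebin3 * dbin, DecodebinInputStream * input)
    {
      SELECTION_LOCK (dbin);
      dbin->input_streams = g_list_append (dbin->input_streams, input);
      SELECTION_UNLOCK (dbin);
    }
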
https://bugzilla.gnome.org/show_bug.cgi?id=784012
decodebin3 checks the input streams and pushes EOS if all input streams
have reached EOS. If not, a fake EOS is pushed to the corresponding slot.
When adaptivedemux is used in a multi-track configuration, it never
pushes EOS to the non-selected tracks, because the streaming thread for
those slots stops with a not-linked flow return.
So decodebin3 should generate the EOS itself to finish playback.
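A rough sketch of that decision; the helper, field and structure names
are illustrative assumptions, not the actual decodebin3 code:

    static void
    check_and_send_eos (GstDecodebin3 * dbin, MultiQueueSlot * slot)
    {
      GList *tmp;
      gboolean all_eos = TRUE;

      for (tmp = dbin->input_streams; tmp; tmp = tmp->next) {
        DecodebinInputStream *input = tmp->data;
        if (!input->saw_eos) {
          all_eos = FALSE;
          break;
        }
      }

      if (all_eos) {
        /* Every input reached EOS: push the real EOS downstream. */
        gst_pad_push_event (slot->src_pad, gst_event_new_eos ());
      } else {
        /* Otherwise push a fake EOS to this slot only. */
        gst_pad_push_event (slot->src_pad,
            gst_event_new_custom (GST_EVENT_CUSTOM_DOWNSTREAM,
                gst_structure_new_empty ("decodebin3-fake-eos")));
      }
    }
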
https://bugzilla.gnome.org/show_bug.cgi?id=777735
The linked input of a slot can still be the old input, so urisourcebin
should check its EOS state to figure out whether it is the new one or
not. Otherwise urisourcebin never forwards EOS downstream at the end
of the presentation, because the old input is still there and never
gets removed.
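A sketch of the check, with purely illustrative slot/input names:

    /* An input that already saw EOS is the old one left in place; only
     * then should the new input take over this slot. */
    static gboolean
    slot_input_is_old (OutputSlotInfo * slot)
    {
      return slot->linked_info != NULL && slot->linked_info->saw_eos;
    }
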
https://bugzilla.gnome.org/show_bug.cgi?id=777735
The group-id in the stream-start event might be updated in
parse_chain_output_probe (). This causes the stream-start event to be
sent twice with identical stream-id and seqnum, where only the group-id
differs. Even though nothing else has changed, the duplicated
stream-start event will be followed by the first buffer.
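A sketch of one way to avoid sending the duplicate, assuming a pad
probe with illustrative per-output state:

    static GstPadProbeReturn
    drop_duplicate_stream_start (GstPad * pad, GstPadProbeInfo * info,
        gpointer user_data)
    {
      GstEvent *event = GST_PAD_PROBE_INFO_EVENT (info);
      OutputInfo *out = user_data;   /* hypothetical per-output state */
      const gchar *stream_id;

      if (GST_EVENT_TYPE (event) != GST_EVENT_STREAM_START)
        return GST_PAD_PROBE_OK;

      gst_event_parse_stream_start (event, &stream_id);
      if (out->last_seqnum == gst_event_get_seqnum (event)
          && !g_strcmp0 (out->last_stream_id, stream_id)) {
        /* Identical stream-id and seqnum: only the group-id changed,
         * so don't send the same stream-start twice. */
        return GST_PAD_PROBE_DROP;
      }

      out->last_seqnum = gst_event_get_seqnum (event);
      g_free (out->last_stream_id);
      out->last_stream_id = g_strdup (stream_id);
      return GST_PAD_PROBE_OK;
    }
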
https://bugzilla.gnome.org/show_bug.cgi?id=771088
This makes it possible for GstDiscoverer to work with sources such as
rtspsrc that have multiple source pads and hence trigger the creation
of multiple decodebin instances.
Based on the work of Vineeth TM <vineeth.tm@samsung.com>
https://bugzilla.gnome.org/show_bug.cgi?id=754178
Also, only set low-percent/high-percent if not using downloadbuffer,
just like the old uridecodebin did. Using the watermark-based buffering
with downloadbuffer causes playback to hang and never finish buffering.
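A sketch of that rule; the names are illustrative and the percentage
values are just examples:

    static void
    configure_buffering (GstElement * queue, gboolean do_download)
    {
      /* Only apply the watermark properties to queue2, never to
       * downloadbuffer, where buffering would never finish. */
      if (!do_download)
        g_object_set (queue, "low-percent", 10, "high-percent", 99, NULL);
    }
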
Those multiqueues are the ones dealing with adaptive demuxers. They
should have a time limit set so that they don't end up buffering too
much data. They were previously set up with no limits at all, which
would cause them to grow indefinitely until downstream blocks.
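A sketch of the intended configuration; the 2 second value is only an
example:

    static void
    configure_adaptive_multiqueue (GstElement * multiqueue)
    {
      /* Disable the byte and buffer limits and keep only a time limit,
       * so the multiqueue cannot grow without bound. */
      g_object_set (multiqueue,
          "max-size-bytes", 0, "max-size-buffers", 0,
          "max-size-time", 2 * GST_SECOND, NULL);
    }
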
When a clip has video, audio and subtitles, and we need to send gap
events to the audio and subtitle streams, we should make sure all of
them have been sent, so every stream needs to keep its own
send_gap_event flag.
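A sketch of the per-stream flag, with illustrative names:

    typedef struct
    {
      GstPad *pad;
      gboolean send_gap_event;  /* this stream still needs a gap event */
    } Stream;

    static void
    send_pending_gaps (GList * streams, GstClockTime position)
    {
      GList *l;

      for (l = streams; l; l = l->next) {
        Stream *stream = l->data;

        if (stream->send_gap_event) {
          gst_pad_push_event (stream->pad,
              gst_event_new_gap (position, GST_CLOCK_TIME_NONE));
          stream->send_gap_event = FALSE;
        }
      }
    }
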
https://bugzilla.gnome.org/show_bug.cgi?id=780429
When posting 100% buffering due to removing the last
buffering element, we still need to hold the posting
lock as well, to avoid any race with other elements
that might post a buffering message at that exact
moment.
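A minimal sketch of the pattern, assuming a buffering_post_lock GMutex
and illustrative field names:

    static void
    post_full_buffering (GstURISourceBin * urisrc)
    {
      GstMessage *msg;

      g_mutex_lock (&urisrc->buffering_post_lock);
      msg = gst_message_new_buffering (GST_OBJECT_CAST (urisrc), 100);
      gst_element_post_message (GST_ELEMENT_CAST (urisrc), msg);
      g_mutex_unlock (&urisrc->buffering_post_lock);
    }
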
Add locking, and handle EOS properly now that urisourcebin
uses custom events in place of real EOS events, so we
need to manually remove buffering messages and potentially
post 100% in that situation.
The expanded 4 second buffering was making radio streams, which are
delivered at real-time speed, too slow. We might need a better plan
for matching the queue2 size to the incoming bitrate in the absence
of tag information or timestamping.
uridecodebin used tags on the output of decodebin to adjust the queue2
buffering, but urisourcebin doesn't have that view - decodebin is
downstream from us.
The probe on a MultiQueue source pad might receive EOS twice: the first
is the fake EOS and the other is the actual EOS. The slot can be freed
on either of them if it has no input, and since slot freeing is
asynchronous, a double free is possible. So decodebin3 also needs to
remove the probe when freeing the slot.
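A sketch with illustrative field names: drop the probe before the
asynchronous slot teardown so it cannot trigger a second free:

    static void
    free_multiqueue_slot (GstDecodebin3 * dbin, MultiQueueSlot * slot)
    {
      if (slot->probe_id) {
        gst_pad_remove_probe (slot->src_pad, slot->probe_id);
        slot->probe_id = 0;
      }
      /* ... the rest of the (async) slot teardown ... */
    }
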
https://bugzilla.gnome.org/show_bug.cgi?id=777530
"requested_selection" list might be generated by select-streams event.
And memory of stream-id(s) in select-streams is independent from that of stream-collection.
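A sketch of copying the ids so requested_selection never points into
memory owned elsewhere; the helper name is illustrative:

    static GList *
    build_requested_selection (GstEvent * event)
    {
      GList *streams = NULL, *tmp, *requested = NULL;

      gst_event_parse_select_streams (event, &streams);
      for (tmp = streams; tmp; tmp = tmp->next)
        requested = g_list_append (requested, g_strdup (tmp->data));
      g_list_free_full (streams, g_free);

      return requested;
    }
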
https://bugzilla.gnome.org/show_bug.cgi?id=775553
When the decodebin state change fails because of an error
message, we might not go through PAUSED->READY. Don't leak
a ref to decodebin pads due to pad blocking in that case.
This is because we return ASYNC going to PAUSED, and if
we fail before reaching PAUSED the only transition we'll
see is READY->NULL.
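A sketch of the cleanup, with illustrative names; the same helper would
be run from both PAUSED->READY and READY->NULL so a failed start does
not leak the blocked pads:

    static void
    release_blocked_pads (GstURIDecodeBin * dec)
    {
      GList *tmp;

      for (tmp = dec->blocked_pads; tmp; tmp = tmp->next) {
        BlockedPad *bp = tmp->data;

        gst_pad_remove_probe (bp->pad, bp->probe_id);
        gst_object_unref (bp->pad);
        g_free (bp);
      }
      g_list_free (dec->blocked_pads);
      dec->blocked_pads = NULL;
    }
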
https://bugzilla.gnome.org/show_bug.cgi?id=775893
The state of urisourcebin (and all elements contained within) can
change at any point in time, including when setting up the typefind
element.
In order to avoid ending up with typefind starting without being fully
connected, lock the state and connect to the 'have-type' signal.
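A sketch of that pattern; the callback name is illustrative:

    static void
    setup_typefind (GstURISourceBin * urisrc, GstPad * srcpad)
    {
      GstElement *typefind = gst_element_factory_make ("typefind", NULL);
      GstPad *sinkpad;

      /* Keep typefind locked to its current state while setting it up,
       * so it cannot start before it is fully connected. */
      gst_element_set_locked_state (typefind, TRUE);
      gst_bin_add (GST_BIN_CAST (urisrc), typefind);

      g_signal_connect (typefind, "have-type",
          G_CALLBACK (have_type_cb), urisrc);

      sinkpad = gst_element_get_static_pad (typefind, "sink");
      gst_pad_link (srcpad, sinkpad);
      gst_object_unref (sinkpad);

      /* Only now let typefind follow the bin's state. */
      gst_element_set_locked_state (typefind, FALSE);
      gst_element_sync_state_with_parent (typefind);
    }
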
Due to the special nature of adaptivedemux, reconfiguration happens
frequently on seeks and track changes.
In very exceptional cases, the following sequence is possible:
* An EOS event is pushed to the queue element while buffers are still
  queued
* While draining the remaining buffers, a downstream reconfiguration
  happens due to a track switch
* The queue gets a not-linked flow return from downstream
* Because the sinkpad is EOS, the queue posts an error
  on the bus, causing the pipeline to fail
Avoid the sinkpad getting marked EOS in the first place, by using a
custom event in place of EOS.
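A sketch of the substitution; the structure name is an assumption for
illustration only:

    /* A custom sticky downstream event stands in for EOS, so the
     * queue's sinkpad never gets marked EOS. */
    static GstEvent *
    make_custom_eos_event (void)
    {
      return gst_event_new_custom (GST_EVENT_CUSTOM_DOWNSTREAM_STICKY,
          gst_structure_new_empty ("urisourcebin-custom-eos"));
    }

    /* On the receiving side it is detected by name. */
    static gboolean
    is_custom_eos_event (GstEvent * event)
    {
      return GST_EVENT_TYPE (event) == GST_EVENT_CUSTOM_DOWNSTREAM_STICKY
          && gst_event_has_name (event, "urisourcebin-custom-eos");
    }
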
https://bugzilla.gnome.org/show_bug.cgi?id=777009
When shutting down decodebin2 and parsebin, they set their
output pads to flushing, and there is a very small window
where elements might send a sticky event such as a tag event
(which silently fails due to flushing) and then send a buffer,
and the buffer push will return GST_FLOW_ERROR because the pad
can't forward sticky events. The element will then post an error
message on the bus. This can also happen when elements send EOS
just as shutdown is happening. Since we're about to destroy all
the elements inside parsebin and decodebin anyway, just discard
error messages from them.
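A sketch of the idea in a GstBin subclass; the "shutdown" flag, the
type names and the usual parent_class from G_DEFINE_TYPE are assumed:

    static void
    gst_decode_bin_handle_message (GstBin * bin, GstMessage * msg)
    {
      GstDecodeBin *dbin = (GstDecodeBin *) bin;
      gboolean drop = FALSE;

      if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_ERROR) {
        GST_OBJECT_LOCK (dbin);
        drop = dbin->shutdown;
        GST_OBJECT_UNLOCK (dbin);
      }

      if (drop)
        gst_message_unref (msg);
      else
        GST_BIN_CLASS (parent_class)->handle_message (bin, msg);
    }
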
A nicer but more difficult fix for GStreamer 2.0 is to make
all event pushing / handling in core return a GstFlowReturn
like buffers do, so we can report a FLUSHING state cleanly.
When plugging and then exposing a parser, don't fail
if it fails to send sticky events. The most likely
reason is that things were flushed due to the app
immediately doing a seek, but we can't detect flushing
separately from other error conditions without a
gst_pad_send_event_full() core function that returns
a GstFlowReturn.
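A sketch of tolerating the failure, with illustrative names:

    static void
    send_sticky_event (GstPad * srcpad, GstEvent * event)
    {
      if (!gst_pad_push_event (srcpad, gst_event_ref (event))) {
        /* Most likely flushing because the app seeked right away; we
         * cannot tell flushing apart from other errors here, so don't
         * treat this as fatal. */
        GST_DEBUG_OBJECT (srcpad, "Failed to push sticky event %"
            GST_PTR_FORMAT, event);
      }
    }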