When dealing with streams or content of long duration, it is
more user-friendly to show more detail in the high-order values
(hours or days) than in the microseconds.
This patch uses the following formatting schemes:
* Below 1 hour       : MM:SS.SSS
* Below 24 hours     : HHhMMmSSs
* 24 hours and above : DDdHHhMMm
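As a rough illustration, a formatter following this scheme could look like the
sketch below; the function name and exact printf layout are illustrative, not
necessarily the patch's code, and it assumes nanosecond GstClockTime durations.

#include <gst/gst.h>

/* Illustrative sketch of the scheme above; caller owns the returned string. */
static gchar *
format_duration (GstClockTime duration)
{
  guint64 hours = duration / (GST_SECOND * 60 * 60);
  guint64 days = hours / 24;

  if (hours < 1) {
    /* Below 1 hour: MM:SS.SSS */
    return g_strdup_printf ("%02" G_GUINT64_FORMAT ":%02" G_GUINT64_FORMAT
        ".%03" G_GUINT64_FORMAT,
        (duration / (GST_SECOND * 60)) % 60,
        (duration / GST_SECOND) % 60,
        (duration / GST_MSECOND) % 1000);
  } else if (hours < 24) {
    /* Below 24 hours: HHhMMmSSs */
    return g_strdup_printf ("%02" G_GUINT64_FORMAT "h%02" G_GUINT64_FORMAT
        "m%02" G_GUINT64_FORMAT "s",
        hours,
        (duration / (GST_SECOND * 60)) % 60,
        (duration / GST_SECOND) % 60);
  } else {
    /* 24 hours and above: DDdHHhMMm */
    return g_strdup_printf ("%02" G_GUINT64_FORMAT "d%02" G_GUINT64_FORMAT
        "h%02" G_GUINT64_FORMAT "m",
        days,
        hours % 24,
        (duration / (GST_SECOND * 60)) % 60);
  }
}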
decodebin3 checks its input streams and pushes EOS downstream if all
input streams have reached EOS. If not, a fake EOS is pushed to the
corresponding slot.
When adaptivedemux is used in a multi-track configuration, it never
pushes EOS to non-selected tracks, because the streaming thread for
those slots stops with a not-linked flow return.
So decodebin3 has to generate EOS itself to finish playback.
https://bugzilla.gnome.org/show_bug.cgi?id=777735
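For illustration, the pad-level mechanism such a fake EOS relies on is sketched
below; the helper name is hypothetical, and the real decodebin3 additionally
tags the event so it can recognize its own EOS later.

#include <gst/gst.h>

/* Hedged sketch: push an EOS event on a slot's sink pad when upstream
 * will never deliver one itself. */
static void
push_fake_eos (GstPad * slot_sinkpad)
{
  GstEvent *eos = gst_event_new_eos ();

  /* Sending the event to a sink pad makes it travel through the element
   * as if upstream had produced it. */
  if (!gst_pad_send_event (slot_sinkpad, eos))
    GST_WARNING_OBJECT (slot_sinkpad, "Failed to send EOS");
}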
The linked input of a slot can be an old input, so urisourcebin should
check its EOS state to figure out whether it is a new one or not.
Otherwise, urisourcebin never forwards EOS downstream at the end of the
presentation, because the old input is still there and never removed.
https://bugzilla.gnome.org/show_bug.cgi?id=777735
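A hedged sketch of how an input's EOS state can be tracked with a downstream
event probe; the flag is a plain global here for brevity, whereas the real
code would keep it in its per-input structure.

#include <gst/gst.h>

static gboolean input_is_eos = FALSE;

static GstPadProbeReturn
eos_probe_cb (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
{
  GstEvent *event = GST_PAD_PROBE_INFO_EVENT (info);

  if (GST_EVENT_TYPE (event) == GST_EVENT_EOS)
    input_is_eos = TRUE;
  else if (GST_EVENT_TYPE (event) == GST_EVENT_STREAM_START)
    input_is_eos = FALSE;       /* a new stream resets the flag */

  return GST_PAD_PROBE_OK;
}

/* Attached with:
 * gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_EVENT_DOWNSTREAM,
 *     eos_probe_cb, NULL, NULL);
 */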
The group-id in the stream-start event might be updated in
parse_chain_output_probe (). This causes the stream-start event to be
sent twice with identical stream-id and seqnum, differing only in
group-id. Even though nothing else has changed, that stream-start
event will be followed by the first buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=771088
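For context, a stream-start event carries a stream-id, a seqnum and optionally
a group-id; re-pushing it with only the group-id changed produces exactly the
duplication described above. A minimal sketch, with hypothetical pad and id
arguments:

#include <gst/gst.h>

static void
push_stream_start (GstPad * srcpad, const gchar * stream_id, guint group_id)
{
  GstEvent *event = gst_event_new_stream_start (stream_id);

  /* The group-id is set on the event before it is pushed downstream. */
  gst_event_set_group_id (event, group_id);
  gst_pad_push_event (srcpad, event);
}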
In GStreamer 1.12 and older, the GstBaseSrc live lock used to be held
while the create() virtual function was called. As appsrc pushes
serialized events in that virtual function, we ended up with a deadlock
while setting the state to NULL. This test simulates that situation.
https://bugzilla.gnome.org/show_bug.cgi?id=783301
This makes it possible for GstDiscoverer to work with sources such as
rtspsrc that have multiple source pads and hence trigger the creation
of multiple decodebin instances.
Based on the work of Vineeth TM <vineeth.tm@samsung.com>
https://bugzilla.gnome.org/show_bug.cgi?id=754178
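A minimal usage sketch of synchronous discovery on such a source; the RTSP URI
and the timeout are placeholders.

#include <gst/pbutils/pbutils.h>

int
main (int argc, char **argv)
{
  GError *err = NULL;
  GstDiscoverer *dc;
  GstDiscovererInfo *info;

  gst_init (&argc, &argv);

  dc = gst_discoverer_new (10 * GST_SECOND, &err);
  if (!dc) {
    g_printerr ("Failed to create discoverer: %s\n", err->message);
    return 1;
  }

  info = gst_discoverer_discover_uri (dc, "rtsp://example.org/stream", &err);
  if (info) {
    g_print ("Duration: %" GST_TIME_FORMAT "\n",
        GST_TIME_ARGS (gst_discoverer_info_get_duration (info)));
    g_object_unref (info);
  } else {
    g_printerr ("Discovery failed: %s\n", err ? err->message : "unknown");
  }

  g_clear_error (&err);
  g_object_unref (dc);
  return 0;
}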
... as expected later on when the end time is used to determine the end
running time. Otherwise the latter comes out as NONE and the resulting
text buffer is then used indefinitely.
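A hedged sketch of why the end time matters: the end running time can only be
computed when both PTS and duration are valid, otherwise it comes out as
GST_CLOCK_TIME_NONE, which is the failure mode described above.

#include <gst/gst.h>

static GstClockTime
buffer_end_running_time (GstBuffer * buf, const GstSegment * segment)
{
  GstClockTime end = GST_CLOCK_TIME_NONE;

  if (GST_BUFFER_PTS_IS_VALID (buf) && GST_BUFFER_DURATION_IS_VALID (buf))
    end = gst_segment_to_running_time (segment, GST_FORMAT_TIME,
        GST_BUFFER_PTS (buf) + GST_BUFFER_DURATION (buf));

  return end;
}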
The base class tries to align the processed data, but it ends up
removing the GstVideoMeta. That caused wrong results. Instead, just
copy from the process function with the appropriate alignment.
https://bugzilla.gnome.org/show_bug.cgi?id=781204
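A hedged sketch of an alignment-respecting copy using GstVideoFrame, which
honours the strides and offsets carried by GstVideoMeta instead of assuming a
default packed layout; the helper name is illustrative.

#include <gst/video/video.h>

static gboolean
copy_video_buffer (GstBuffer * dst, GstBuffer * src, GstVideoInfo * info)
{
  GstVideoFrame dframe, sframe;
  gboolean ret;

  if (!gst_video_frame_map (&sframe, info, src, GST_MAP_READ))
    return FALSE;
  if (!gst_video_frame_map (&dframe, info, dst, GST_MAP_WRITE)) {
    gst_video_frame_unmap (&sframe);
    return FALSE;
  }

  /* gst_video_frame_copy () copies plane by plane, honouring each
   * buffer's own strides and offsets. */
  ret = gst_video_frame_copy (&dframe, &sframe);

  gst_video_frame_unmap (&dframe);
  gst_video_frame_unmap (&sframe);
  return ret;
}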
And only set low-percent/high-percent if not using downloadbuffer, just
like in the old uridecodebin. Using the watermark-based buffering with
downloadbuffer causes playback to hang and never finish buffering.
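A hedged sketch of the two buffering modes; the property names are those of
queue2 and downloadbuffer, and the element pointer and values are placeholders.

#include <gst/gst.h>

static void
configure_buffering (GstElement * buffer_element, gboolean use_downloadbuffer)
{
  if (use_downloadbuffer) {
    /* downloadbuffer: progressive download to a temporary file */
    g_object_set (buffer_element,
        "temp-template", "/tmp/gst-download-XXXXXX", NULL);
  } else {
    /* queue2: watermark-based buffering */
    g_object_set (buffer_element,
        "use-buffering", TRUE,
        "low-percent", 10,
        "high-percent", 99,
        NULL);
  }
}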
With both audiorate and videorate, it seems more sensible to apply rate
adjustments after the first buffer appears. For example, with v4l2src,
there is often a small delay before the first video buffer turns up, and
this can cause a stuttery start because of videorate trying to ensure a
perfect stream.
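One way to get this behaviour, as a sketch, is the "skip-to-first" property
exposed by videorate (audiorate has an equivalent property of the same name),
which makes the element start from the first received buffer instead of
padding from the segment start.

#include <gst/gst.h>

static void
setup_rate_elements (GstElement * videorate, GstElement * audiorate)
{
  /* Don't produce buffers before the first one we receive. */
  g_object_set (videorate, "skip-to-first", TRUE, NULL);
  g_object_set (audiorate, "skip-to-first", TRUE, NULL);
}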
Those multiqueues are the ones dealing with adaptive demuxers. They should
have a time limit set so that they don't end up buffering too much data.
They would previously be set with no limits at all, which would cause them
to grow indefinitely until downstream blocks.
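A minimal sketch of the kind of limits meant here, using the standard
multiqueue properties; the 5 second value is illustrative.

#include <gst/gst.h>

static void
limit_adaptive_multiqueue (GstElement * multiqueue)
{
  /* Disable byte and buffer-count limits, keep only a time limit. */
  g_object_set (multiqueue,
      "max-size-bytes", 0,
      "max-size-buffers", 0,
      "max-size-time", 5 * GST_SECOND,
      NULL);
}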
And monitor no_more_pads.
With live sources such as rtsp, uridecodebin only creates its
child decodebins between PAUSED and PLAYING.
This means that the ASYNC_DONE it posts when getting NO_PREROLL
in its change_state method gets immediately propagated by the
GstBin parent class, as opposed to a situation where a
decodebin has been added to it already, and has posted ASYNC_START.
The proposed solution, instead of simply waiting for ASYNC_DONE and
thus finishing prematurely in that case, waits for three conditions
to be true:
* the uridecodebin needs to have emitted no_more_pads
* its current state must be PAUSED if not live, PLAYING otherwise
* there must be no "pending subtitle pads", i.e. pads where we haven't
  received tags yet.
All these conditions are checked in the bus message handler, as we
post custom messages on the bus when we get subtitle tags or
no_more_pads.
https://bugzilla.gnome.org/show_bug.cgi?id=783257
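A hedged sketch of the custom-message part: posting an application message on
the bus from the no-more-pads callback so that all three conditions can be
evaluated in one handler; the structure name is illustrative.

#include <gst/gst.h>

static void
no_more_pads_cb (GstElement * uridecodebin, gpointer user_data)
{
  GstBus *bus = GST_BUS (user_data);

  /* The message handler sees this alongside the regular bus messages and
   * can re-evaluate all completion conditions in one place. */
  gst_bus_post (bus,
      gst_message_new_application (GST_OBJECT (uridecodebin),
          gst_structure_new_empty ("discoverer-no-more-pads")));
}

/* Connected with:
 * g_signal_connect (uridecodebin, "no-more-pads",
 *     G_CALLBACK (no_more_pads_cb), bus);
 */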