Both versions are basically the same, but version 2.0 also allows
60000/1001 as a framerate and allows specifying the field and line number
for each payload.
Put the major version into the caps so that elements can restrict, via caps
negotiation, which versions they support.
audioconvert's passthrough status can no longer be determined
strictly from input / output caps equality, as a mix-matrix can
now be specified.
We now call gst_base_transform_set_passthrough dynamically, based
on the return from the new gst_audio_converter_is_passthrough()
API, which takes the mix matrix into account.
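A minimal sketch of that decision (illustrative only, not the actual
audioconvert code; the helper name is made up):

  static void
  update_passthrough (GstBaseTransform * trans, GstAudioConverter * convert)
  {
    /* Unlike a plain caps-equality check, gst_audio_converter_is_passthrough()
     * also takes a configured mix matrix into account. */
    gst_base_transform_set_passthrough (trans,
        gst_audio_converter_is_passthrough (convert));
  }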
Use the bitrate advertised by queue2 to determine the limits to
set across possibly multiple queue2/downloadbuffer elements, e.g.
with two queue2 elements, set max-bytes to the ratio
bitrate/cumulative_bitrate multiplied by the buffer_size set on urisourcebin.
This allows finer-grained control over the buffering done by all the queue
elements inside urisourcebin. Instead of a maximum of
n_streams*buffer_size being used, only buffer_size will be used; however,
we fall back to n_streams*buffer_size if one of the queue2 elements does
not have bitrate information.
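As a rough sketch of that sizing (illustrative only, all names made up; the
fallback branch corresponds to the n_streams*buffer_size behaviour described
above):

  static guint64
  queue_max_bytes (guint64 bitrate, guint64 cumulative_bitrate,
      guint64 buffer_size)
  {
    /* Without bitrate information we cannot do better than giving this
     * queue the full buffer_size (i.e. up to n_streams * buffer_size
     * across all queues). */
    if (bitrate == 0 || cumulative_bitrate == 0)
      return buffer_size;
    /* Otherwise give the queue its proportional share of the budget. */
    return gst_util_uint64_scale (buffer_size, bitrate, cumulative_bitrate);
  }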
https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/issues/60
This new property controls the synchronisation offset between the text and video
streams. Positive values make the text appear ahead of the video and negative
values make the text lag behind the video.
https://bugzilla.gnome.org/show_bug.cgi?id=797134
This new property controls the synchronisation offset between the text and video
streams. Positive values make the text appear ahead of the video and negative
values make the text lag behind the video.
https://bugzilla.gnome.org/show_bug.cgi?id=797134
When the playsink contains a text chain this property controls the
synchronisation of the subtitles and video by controlling the underlying
subtitleoverlay::subtitle-ts-offset property.
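For illustration only, assuming the property is exposed as "text-offset" and
takes nanoseconds (both are assumptions, by analogy with the existing
av-offset property):

  /* Make the subtitles appear 500 ms ahead of the video. */
  g_object_set (playsink, "text-offset", (gint64) (500 * GST_MSECOND), NULL);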
https://bugzilla.gnome.org/show_bug.cgi?id=797134
This removes the crossfade-ratio property and replaces it with an
operator property. Currently this implements the following operators:
- SOURCE: Copy over the source and don't look at the destination
- OVER: Default blending of the source over the destination
- ADD: Like OVER but simply adding the alpha instead
See the example for how to implement crossfading with this.
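A hedged sketch of selecting an operator on a compositor sink pad, assuming
the pad property is called "operator" and accepts the nicks listed above:

  GstPad *pad = gst_element_get_static_pad (compositor, "sink_1");
  /* Blend this pad's frames additively over the lower-zorder pads. */
  gst_util_set_object_arg (G_OBJECT (pad), "operator", "add");
  gst_object_unref (pad);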
https://bugzilla.gnome.org/show_bug.cgi?id=797169
The queue between the audiotee and the audio chain wasn't properly added to the
bin, leading to streamsynchronizer locks on EOS. Reconfiguration of the
visualization chain wasn't working as expected either. It is now possible to
dynamically enable/disable the audio visualization support.
https://bugzilla.gnome.org/show_bug.cgi?id=796553
255 will easily become 0 in the blending function as they expect
the maximum value to be 255.
Can be reproduced with
gst-launch-1.0 videotestsrc pattern=ball ! c.sink_0 \
videotestsrc pattern=snow ! c.sink_1 \
compositor name=c \
sink_0::zorder=0 sink_1::zorder=1 sink_0::crossfade-ratio=0.5 \
background=black ! \
videoconvert ! xvimagesink
crossfade-ratio +/- 0.001 makes it work correctly and the same happens
at e.g. 0.25, 0.75, N*0.0625
https://bugzilla.gnome.org/show_bug.cgi?id=796846
The formula 'offset = time / rate' is correct only if
the rate never changes. When the rate is changed,
the offset should be re-calculated based on the previous
offset.
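A sketch of the intended bookkeeping (variable names are illustrative):

  /* Accumulate from the previous offset so earlier rate changes are kept,
   * instead of recomputing the offset from absolute time. */
  offset = prev_offset + (time - prev_time) / rate;
  prev_offset = offset;
  prev_time = time;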
https://bugzilla.gnome.org/show_bug.cgi?id=791269
adder needs more than just trivial work to support planar buffers properly
because it currently reads sub-buffers from GstCollectPads in order for all
of them to have matching sizes. In planar mode, this means it would truncate
some channels and mix them up in strange ways. It only works if all input
buffers in all sink pads have matching sizes.
This moves all the conversion-related code to a single place, reduces
code duplication inside compositor and makes the glmixer code less
awkward. It's also the same pattern as used by GstAudioAggregator.
The aggregated_frame is now called prepared_frame and passed to the
prepare_frame and cleanup_frame virtual methods directly. For the
currently queued buffer there is a method on the video aggregator pad
now.
With the way caps negotiation works in encoders, the only way to ensure
that no downstream renegotiation is done in the encoder is to also lock
the upstream caps. In any case, with the current behaviour elements upstream
of encoders are *required* to handle any file format, so locking the upstream
format should be safe.
https://bugzilla.gnome.org/show_bug.cgi?id=795464
Otherwise decodebin won't get notified about STREAM_COLLECTION events coming
from the sources and thus will never be informed about them. Without
being informed about the stream collection, decodebin won't be able to
select any streams, and ends up not creating any output for the streams
defined from outside parsebin.
https://bugzilla.gnome.org/show_bug.cgi?id=795364
Instead, go backwards to before segment.stop based on the framerate or the
next buffer's end timestamp. Otherwise the first buffer will usually be
dropped for being outside the segment.
https://bugzilla.gnome.org/show_bug.cgi?id=781899
Buffering messages are only sent for the active group (in case there
is more than one).
If the inactive group posts buffering messages we keep the last one
around and will post it once it becomes the playing one.
In order to flush out multiqueue, we again send a STREAM_START and
then an EOS event.
The problem was that we might end up pushing out on the
output of multiqueue (and therefore decodebin3) a series of:
* EOS / STREAM_START / EOS
Apart from the ugliness of such output, if decodebin3 is used with
elements such as concat on its output, they might potentially
block on that second STREAM_START.
In order to make sure we don't end up in that situation we send
a custom STREAM_START event when refreshing multiqueue (which we
drop on the output) and we don't special case EOS events on streams
on which we already got EOS.
At worst we now end up sending at most two EOS on the output of
multiqueue (and decodebin3).
Similar in vein to the playbin2 architecture, except that the uridecodebin3
elements are prerolled much earlier and all streams of the same type are
fed through a 'concat' element.
This keeps the philosophy of having all elements connected as soon
as possible.
The 'about-to-finish' signal is emitted whenever one of the uridecodebin
elements is about to finish, allowing users to set the next uri/suburi.
The notion of a group being active has changed. It now means that the
uridecodebin3 has been activated, but doesn't mean it is the one
currently being output by the sinks (i.e. curr_group and next_group).
This is done via detecting GST_MESSAGE_STREAM_START emission by playsink
and figuring out which group is really playing.
When the current group changes, a new thread is started to deactivate
the previous one and optionally fire 'about-to-finish'.
Apologies for the big commit, but it wasn't really possible to split it
in anything smaller.
* Switch to uridecodebin3 instead of managing urisourcebin and decodebin3
ourselves. No major architectural change with this.
* Reconfigure sinks/outputs when needed. This is possible thanks to the
various streams-related API. Instead of blocking new pads and waiting
for a (fake) no-more-pads to decide what to connect, we instead reconfigure
playsink and the combiners to whatever types are currently selected. All of
this is done in reconfigure_output().
New pads are immediately connected to (combiners and) sinks, allowing
immediate negotiation and usage.
* Since elements are always connected, the "cached-duration" feature is gone
and queries can reach the target elements.
* The auto-plugging related code is currently disabled entirely until
we get the new proper API.
* Store collections at the GstSourceGroup level and not globally
* Added more comments throughout.
NOTE: gapless is still not functional, but this opens the way to be able
to handle it in a streams-aware fashion (where several uridecodebin3 can
be active at the same time).
With push-based sources, urisourcebin will emit this signal when
the stream has been fully consumed.
This signal can be used to know when the source is done providing
data.
With playbin the last subtitle chunk would not get displayed
if the last chunk was missing a newline at the end. This is
because streamsynchronizer will hold back the EOS event until
the audio and video streams are finished too, so subparse
would never forcefully push out the last chunk until the very
end when it is too late.
We get a STREAM_GROUP_DONE event from streamsynchronizer however,
so handle that like EOS and force out any remaining text then.
https://bugzilla.gnome.org/show_bug.cgi?id=771853
(yes, this has never worked since it was introduced, don't worry)
If we want to actually detect layer/channels/samplerate changes,
it would be better to:
* not reset the various prev_* variables at every iteration.
* and actually store the values when they change
CID #206079
CID #206080
CID #206081
To passthrough crop-meta, the converter would need to allocate and
convert buffers of the size of the originating buffer. This is currently
made difficult by GstBaseTransform since we cannot alter the caps passed
through the allocation query. We would also need to wait for the first
input buffer to be received in order to make the decision around that
size.
So the short and safe solution is just to stop pretending we can
pass through that meta.
https://bugzilla.gnome.org/show_bug.cgi?id=791412
If a select-streams event sent to playbin3 is missing any GstStream of an
elementary stream type (video, audio or text) from the collection, playbin
will access an invalid GstStream address due to an invalid index limit.
This caused a SIGSEGV.
https://bugzilla.gnome.org/show_bug.cgi?id=791638
The qt typefinder uses guint64 values for offset and size calculation
but the typefinder system only supports gint64 values.
Make sure we don't end up using potentially overflowing values.
The qt typefinder uses guint64 values for offset and size calculation
but the typefinder system only supports gint64 values.
Make sure we don't end up using potentially overflowing values.
n_frames could end up being quite big (potentially up to G_MAXINT64), which
would overflow 64 bits when multiplied by GST_SECOND.
Instead, move GST_SECOND to the num argument.
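An illustrative sketch of the overflow-safe form (names other than GST_SECOND
and gst_util_uint64_scale() are made up):

  /* n_frames * GST_SECOND / frames_per_second can overflow 64 bits for
   * large n_frames; passing GST_SECOND as the num argument lets the
   * scaling helper avoid the intermediate overflow. */
  duration = gst_util_uint64_scale (n_frames, GST_SECOND, frames_per_second);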
If we are shutting down, don't spawn a cleanup thread to cleanup old
groups and instead queue them to be cleaned up in the state change
thread.
This avoids (hopefully for good) having a race between the state change
thread and other threads trying to deactivate elements/pads.
Deactivating pads from two threads isn't 100% MT-safe. There is a
slim chance that the GstPadActivateFunc might be called twice with
the same values (in this case from the cleanup thread *and* from
the GstElement change_state function when going from PAUSED to READY).
In order to avoid that, call any existing cleanup function *before*
calling the parent change_state implementation on downwards state
changes.
When deactivating pads, we need to ensure that the streaming threads
going through the pads we wish to deactivate can cleanly return.
Failure to do that would result in the streaming locks of those
pads never being released. The end result would be a deadlock
when stopping decodebin2.
In order to avoid that situation, release the "dyn" lock around
the deactivation code. And refactor the code to cope with the
list of blocked pads having potentially changed when re-acquiring
the lock.
We have a dedicated one-shot thread to handle cleanup of old groups.
While this is a good idea, it's an even better idea to make sure
that thread has *completed* before the parsebin element to which
it is related is freed/gone.
* There can only be one cleanup thread happening at any point in time.
If there is already one, we wait for the previous one to finish.
* When shutting down (NULL=>READY) make sure the thread is finished
https://bugzilla.gnome.org/show_bug.cgi?id=790007
We have a dedicated one-shot thread to handle cleanup of old groups.
While this is a good idea, it's an even better idea to make sure
that thread has *completed* before the decodebin2 element to which
it is related is freed/gone.
* There can only be one cleanup thread happening at any point in time.
If there is already one, we wait for the previous one to finish.
* When shutting down (NULL=>READY) make sure the thread is finished
https://bugzilla.gnome.org/show_bug.cgi?id=790007
Instead of emitting 'drained' whenever every single chain is drained
(which would result in plenty of signal emission, and would also
occur when switching groups), only emit it when the top-level chain
is drained.
Furthermore, mark unknown (and therefore unexposed) pads as drained
since we'll never get EOS on them.
https://bugzilla.gnome.org/show_bug.cgi?id=787367
If we can expose the main chain, recheck whether we are shutting
down or not.
decodebin2 might have been set to READY/NULL during the attempt
to expose, which would cause it to fail ... but it is not a fatal
issue.
Via the select-streams event, the current implementation of decodebin3
supports deactivating an output stream (i.e. decoder element)
in reassign_slot(), but cannot activate a slot without a track change.
https://bugzilla.gnome.org/show_bug.cgi?id=778015
An application might choose only a specific type among all available types
using the select-streams event. In this case, playsink should be reconfigured
to clear the unused stream paths.
https://bugzilla.gnome.org/show_bug.cgi?id=778015
When an empty mix matrix is passed, audio-channel-mixer
will now generate a (potentially truncated) identity matrix;
this replicates the behaviour of audiomixmatrix in first-channels
mode.
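As a rough illustration of that behaviour (not the actual code), the generated
matrix simply maps input channel i to output channel i and leaves everything
else at zero:

  for (o = 0; o < out_channels; o++)
    for (i = 0; i < in_channels; i++)
      matrix[o][i] = (o == i) ? 1.0f : 0.0f;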
https://bugzilla.gnome.org/show_bug.cgi?id=788833
remove_format_info was a bit confusing to read; this removes
it in favor of standard gst_caps_map_in_place() calls.
This no longer simplifies the resulting caps, but I
consider that to be the job of basetransform.
https://bugzilla.gnome.org/show_bug.cgi?id=785471
Use the intended sequence for re-using elements:
* EOS
* STREAM_START if element is to be re-used
This avoids having elements (such as queue/multiqueue/queue2) not
properly resetting themselves.
When delaying EOS propagation (because we want to wait until all
streams of a group are done for example), we re-trigger them by
first sending the cached STREAM_START and then EOS (which will
cause elements to re-set themselves if needed and accept new
buffers/events).
https://bugzilla.gnome.org/show_bug.cgi?id=785951
It is forwarding messages to the playbin bus, thus forwarding messages
that contain a floating reference to the application. This generally
makes bindings unhappy, we must not leak floating references to them.
Crossfading is a bit more complex than just having two pads with the
right keyframes, as the blending is not exactly the same.
The difference is in the way we compute the alpha channel: in the case
of crossfading, we have to compute an additive operation between
the destination and the source (factored by both the input pad's alpha
property and the crossfading ratio), basically so
that the crossfade result of two opaque frames is also fully opaque at any
point in the crossfading process, avoiding bleed-through from the layer
blending.
Some rationale can be found in https://phabricator.freedesktop.org/T7773.
https://bugzilla.gnome.org/show_bug.cgi?id=784827
channels=1 is always mono; having it 'unpositioned' does not make
sense.
This fixes pipelines such as:
gst-validate-1.0 audiotestsrc ! audio/x-raw,channels=2,rate=44100,layout=interleaved ! audioconvert ! audioresample ! audio/x-raw, rate=44100, channels=1 ! avenc_mp2 ! fakesink
https://bugzilla.gnome.org/show_bug.cgi?id=785407
Do not remove another parsebin's input streams; doing so causes unexpected
removal of input streams in multi-parsebin use cases.
Basically, the purpose of blocking buffers is similar to checking
no-more-pads of a chain/group. That is, it gives a hint about when
to remove old (EOSed) streams of that parsebin and to add/reuse slots
for new input streams. But that doesn't mean we need to remove
another parsebin's EOSed streams. Each parsebin most likely has its
own streaming thread, so the EOS timing can differ a lot
(e.g. a subtitle-only parsebin can reach EOS much earlier).
https://bugzilla.gnome.org/show_bug.cgi?id=785120
Fields related to stream handling (input_streams,
output_streams, slots, guint slot_id) were used completely unprotected
until now.
This led to several races, especially when playing back RTSP streams.
To protect those fields, the OBJECT_LOCK cannot be used, as we sometimes
need to be able to post a message on the bus while holding it.
decodebin3 already has a lock to manage stream selection, and in the end
it makes sense to protect all the stream-management fields with the same
lock, which is why we reuse the SELECTION_LOCK here.
https://bugzilla.gnome.org/show_bug.cgi?id=784012
decodebin3 checks the input streams and pushes EOS if all input streams
are EOSed; if not, a fake EOS is pushed to the corresponding slot.
When adaptivedemux is used in a multi-track configuration,
it never pushes EOS to a non-selected track
because the streaming thread for that slot stops with a not-linked flow return.
So decodebin3 should generate EOS itself to finish playback.
https://bugzilla.gnome.org/show_bug.cgi?id=777735
The linked input of a slot can be an old input, so urisourcebin should check
the EOS state to figure out whether it's a new one or not.
Otherwise, urisourcebin never forwards EOS downstream at the end
of the presentation, because the old input is still there and never removed.
https://bugzilla.gnome.org/show_bug.cgi?id=777735
The group-id in the stream-start event might be updated in
parse_chain_output_probe(). This causes the stream-start event to be
sent twice with identical stream-id and seqnum, differing only in
group-id. Even though there is no actual change, the stream-start event
will be followed by the first buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=771088
This makes it possible for GstDiscoverer to work with sources such as rtspsrc
that have multiple source pads and hence trigger the creation of multiple
decodebin instances.
Based on the work of Vineeth TM <vineeth.tm@samsung.com>
https://bugzilla.gnome.org/show_bug.cgi?id=754178
The base class is trying to align the processed data, but it ends up
removing the GstVideoMeta, which caused wrong results. Instead, just copy
from the process function with the appropriate alignment.
https://bugzilla.gnome.org/show_bug.cgi?id=781204
Only set low-percent/high-percent if not using downloadbuffer, just
like in the old uridecodebin. Using watermark-based buffering with
downloadbuffer causes playback to hang and never finish buffering.
With both audiorate and videorate, it seems more sensible to apply rate
adjustments after the first buffer appears. For example, with v4l2src,
there is often a small delay before the first video buffer turns up, and
this can cause a stuttery start because of videorate trying to ensure a
perfect stream.
Those multiqueue are the ones dealing with adaptive demuxers. They should
have a time limit set so that they don't end up buffering too much data.
They would previously be set with no limits at all, which would cause them
to grow indefinitely until downstream blocks.
gst_video_rate_flush_prev() ensures that the pushed buffer is writable
by calling gst_buffer_make_writable() on videorate->prevbuf.
In drop-only mode we always push buffers directly when they are received
from GstBaseTransform (gst_video_rate_transform_ip()) and do not keep them
around. GstBaseTransform already ensures that those buffers are
writable so there is no need to do it twice.
This change saves us from copying buffers in drop-only mode, as we no longer
call gst_buffer_make_writable() on a buffer with a refcount of 2
(one ref owned by GstBaseTransform and one in videorate->prevbuf).
https://bugzilla.gnome.org/show_bug.cgi?id=780767
When the caps change while streaming, the new caps were getting processed
immediately in videoaggregator, but the next buffer in the queue that
corresponds to these new caps was not necessarily used immediately,
which sometimes resulted in using an old buffer with the new caps. Of course
there used to be a separate buffer_vinfo for mapping the buffer with its
own caps, but in compositor the GstVideoConverter was still using the wrong
info, resulting in invalid reads and corrupt output.
The approach here is safer: we delay using the new caps
until we actually select the next buffer in the queue for use.
This way we also eliminate the need for buffer_vinfo, since
pad->info is always in sync with the format of the selected buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=780682
Instead, go backwards to before segment.stop based on the framerate or the
next buffer's end timestamp. Otherwise the first buffer will usually be
dropped for being outside the segment.
https://bugzilla.gnome.org/show_bug.cgi?id=781899
When there are more than 64 channels, we don't want to exceed the
bounds of the ordering_map buffer, and in these cases we don't want to
remap at all. Here we avoid doing that.
Based on a patch originally for plugins-good/interleave in
https://bugzilla.gnome.org/show_bug.cgi?id=780331
HLS files can have arbitrary extra tags in them, and
those can be quite long lines. We need to search
further than 256 bytes sometimes just to get past the
first few lines of the file. Make the limit 4KB,
which matches a typical input block size and should
hopefully cover every crazy input.
https://bugzilla.gnome.org/show_bug.cgi?id=780559
The term stride is confusing here, since stride is always used
to signal the pixel row size of an image (including padding). Also,
a frame may have a single stride, which adds to the confusion. This
patch uses frame-size, which simply indicates the frame size in case
the images have some padding in between.
https://bugzilla.gnome.org/show_bug.cgi?id=780053
This allows using those properties through gst-launch-1.0. This type
gained a deserializer recently; the syntax is: <val1, val2, ...>.
Note that we also use the type int instead of uint to avoid having
to cast when specifying the values. The deserializer assumes
int by default.
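The same syntax can also be used from C via gst_parse_launch(); the element
and property names below are only meant to illustrate the quoting, not to
document a specific element:

  /* Assumes gst_init() has already been called. */
  GError *error = NULL;
  GstElement *pipeline = gst_parse_launch (
      "filesrc location=raw.yuv ! rawvideoparse width=320 height=240 "
      "format=i420 plane-strides=\"<320, 160, 160>\" ! videoconvert ! autovideosink",
      &error);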
https://bugzilla.gnome.org/show_bug.cgi?id=780053
When a clip has video, audio and subtitle streams and we need to send gap
events to the audio and subtitle streams, we should make sure all of them
have been sent, so every stream needs to keep its own send_gap_event.
https://bugzilla.gnome.org/show_bug.cgi?id=780429
When posting 100% buffering due to removing the last
buffering element, we still need to hold the posting
lock as well, to avoid any race with other elements
that might post a buffering message at that exact
moment
Add locking, and handle EOS properly now that urisourcebin
uses custom events in place of real EOS events, so we
need to manually remove buffering messages and potentially
post 100% in that situation
The expanded 4 second buffering was making radio streams that are
being delivered at real-time speeds too slow. We might need
a better plan for matching the queue2 size to incoming bitrate
in the absence of tag information or timestamping.
In uridecodebin, it used tags on the output of decodebin to
adjust the queue2 buffering, but urisourcebin doesn't have that
view - decodebin is downstream from us.
This adds a property to select the maximum number of threads to use for
conversion and scaling. During processing, each plane is split into
an equal number of consecutive lines that are then processed by each
thread.
During tests, this gave up to 1.8x speedup with 2 threads and up to 3.2x
speedup with 4 threads when converting e.g. 1080p to 4k in v210.
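For example, assuming the property is exposed as "n-threads" (name assumed
here):

  /* Let the converter split each plane across up to 4 worker threads. */
  g_object_set (videoconvert, "n-threads", 4, NULL);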
https://bugzilla.gnome.org/show_bug.cgi?id=778974
See https://bugzilla.gnome.org/show_bug.cgi?id=773666
This would ideally be solved in baseparse but that requires further
thought at this point, and in the meantime it would be good to have
rawbaseparse not assert on this but handle it gracefully instead.
The probe on a multiqueue source pad might receive EOS twice:
first the fake EOS and then the actual EOS.
The slot can be freed on fake EOS/EOS if the slot has no input.
Since slot freeing is asynchronous, a double free is possible,
so decodebin3 also needs to remove the probe when freeing the slot.
https://bugzilla.gnome.org/show_bug.cgi?id=777530
"requested_selection" list might be generated by select-streams event.
And memory of stream-id(s) in select-streams is independent from that of stream-collection.
https://bugzilla.gnome.org/show_bug.cgi?id=775553
The latency query originally had a fallthrough to the default
label at the end as fallback, but that got messed up when the
DURATION and POSITION queries were added, so it then fell through
to the duration query handler instead. Restore original behaviour.
https://bugzilla.gnome.org/show_bug.cgi?id=699077
Duration query would return TRUE and duration=-1. This
worked in the unit test because the unit test implementation
was a bit broken.
Both queries need to access rate with a lock.
Fix broken duration query test as well. It relied on broken
behaviour by the videorate query handler, and also it was
implemented as a downstream query rather than an upstream
query. And we must return HANDLED from the probe so that the
query we intercept actually returns TRUE.
https://bugzilla.gnome.org/show_bug.cgi?id=699077
When the decodebin state change fails because of an error
message, we might not go through PAUSED->READY. Don't leak
a ref to decodebin pads due to pad blocking in that case.
This is because we return ASYNC going to PAUSED, and if
we fail before reaching PAUSED the only transition we'll
see is READY->NULL.
https://bugzilla.gnome.org/show_bug.cgi?id=775893
This adds some extra options that affect pattern=ball mode, allowing the
animation to be synced to running time or wall-time clock for comparing
sync across different instances / pipelines / machines.
Also added is the ability to invert the rendering colours every second,
and some different ball motion patterns.
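A hedged example of using this for cross-pipeline sync comparison (the
property name "animation-mode" and the value "running-time" are assumptions):

  gst_util_set_object_arg (G_OBJECT (src), "pattern", "ball");
  /* Drive the ball from the running time so two pipelines sharing a base
   * time render the ball in the same position. */
  gst_util_set_object_arg (G_OBJECT (src), "animation-mode", "running-time");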
https://bugzilla.gnome.org/show_bug.cgi?id=740557
The state of urisourcebin (and all elements contained within) can
change at any point in time, including when setting up the typefind
element.
In order to avoid ending up with typefind starting without being fully
connected, lock the state and connect to the 'have-type' signal.
Due to the special nature of adaptivedemux, reconfigure happens
frequently with seek/track-change.
In very exceptional cases, the following sequence is possible:
* EOS event is pushed to queue element and still buffers are queued
* During draining remaining buffers, reconfiguration downstream
happens due to track switch.
* The queue gets a not-linked flow return from downstream
* Because the sinkpad is EOS, the queue registers an
error on the bus, causing the pipeline to fail.
Avoid the sinkpad getting marked EOS in the first place, by using a
custom event in place of EOS.
https://bugzilla.gnome.org/show_bug.cgi?id=777009
When shutting down decodebin2 and parsebin, they set their
output pads to flushing, and there is a very small window
where elements might send a sticky event such as a tag event
(which silently fails due to flushing) and then send a buffer,
and the buffer will return GST_FLOW_ERROR because it can't
forward sticky events. The element will then send an error
message on the bus. This can also happen when elements send EOS
just as shutdown is happening. Since we're about to destroy all
the elements inside parsebin and decodebin anyway, just discard
error messages from them.
A nicer but more difficult fix for GStreamer 2.0 is to make
all event pushing / handling in core return a GstFlowReturn
like buffers do, so we can report a FLUSHING state cleanly.
Make sure ticks start with an accumulator value of 0 by incrementing it
after filling in samples instead of before and by resetting the accumulator
every time a tick begins. This prevents it from being discontinuous at the
beginning of the tick.
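In pseudo-C, the ordering described above (all names made up):

  if (tick_is_starting)
    accumulator = 0;                  /* reset at the start of each tick */
  fill_samples (buffer, accumulator); /* generate this block of samples  */
  accumulator += samples_per_block;   /* increment only after filling    */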
https://bugzilla.gnome.org/show_bug.cgi?id=774050
When plugging and then exposing a parser, don't fail
if it fails to send sticky events. The most likely
reason is that things were flushed due to the app
immediately doing a seek, but we can't detect flushing
separately to other error conditions without a
gst_pad_send_event_full() core function that returns
a GstFlowReturn.
In some cases we might have an EncodingProfile defined in such a way
that, for example, if a Preset is not present, another
profile for that stream should be used.
A test is added showing the feature.
https://bugzilla.gnome.org/show_bug.cgi?id=776188
There are cases when there is no demuxer involved that could do the
buffering, e.g. HLS with raw MP3 or AAC. In this case we want to place
the buffering multiqueue after the parser.
Before this change, we considered the first element after the
adaptive streaming demuxer to be a parser. This is not always true, e.g.
id3demux. Instead we now wait until we actually have a parser (or
decoder).
Fixes playback on such HLS streams.
Compositor does not support it currently; it needs special support
to handle this correctly, which is rather non-trivial to implement for
all formats.
Playbin3 takes a lock when querying the duration and when handling the
stream-collection message. So, to be able to post the stream-collection
message, the duration query should be dropped when the input pad is being
unlinked.
https://bugzilla.gnome.org/show_bug.cgi?id=773341