Currently the decoder selection is very naive: the type with the highest
rank that matches the current caps is used. This works well for software
decoders, whose exact supported caps are always known and whose static caps
are defined accordingly.
With hardware decoders, e.g. vaapi, the situation is different. The decoder
may reject the caps later during a caps query. At that point, a new decoder
is created, but the same type is chosen again, and after several tries
decodebin fails.
To avoid this, do the caps query while adding the decoder and try again
with other decoder types if the query fails:
1. create the decoder from the next matching type
2. add and link the decoder
3. change the decoder state to READY
4. do the caps query; if it fails, remove the decoder again and go back to 1.
5. expose the source pad
6. sync the decoder state with the parent.
This way, the decoder is already part of the pipeline when the state change
to READY happens. So context handling should work as before.
Exposing the source pad only after the query succeeded is important:
otherwise the thread from the decoder source pad may block on a blocked pad
downstream in the playsink, waiting for other pads to be ready. The thread
that then tries to set the decoder state back to NULL blocks while holding
the SELECTION_LOCK, other streams block on the SELECTION_LOCK, and the
playsink never unblocks the pad. The result is a deadlock.
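A rough sketch of steps 1-6 above, assuming a helper with access to the
slot's source pad and the not-yet-exposed ghost pad; the names and error
handling are illustrative, not decodebin3's actual code:

```c
static gboolean
try_decoder_factories (GstElement * dbin, GList * factories,
    GstPad * slot_srcpad, GstPad * ghost_srcpad)
{
  GList *walk;

  for (walk = factories; walk != NULL; walk = walk->next) {
    GstElementFactory *factory = GST_ELEMENT_FACTORY (walk->data);
    GstElement *decoder;
    GstPad *sinkpad, *srcpad;
    GstCaps *filter, *caps;

    /* 1. create the decoder from the next matching type */
    decoder = gst_element_factory_create (factory, NULL);
    if (decoder == NULL)
      continue;

    /* 2. add and link the decoder */
    gst_bin_add (GST_BIN (dbin), decoder);
    sinkpad = gst_element_get_static_pad (decoder, "sink");
    gst_pad_link (slot_srcpad, sinkpad);

    /* 3. change the decoder state to READY (the decoder is already part of
     *    the pipeline, so context handling works as before) */
    gst_element_set_state (decoder, GST_STATE_READY);

    /* 4. do the caps query; if it fails, undo everything and try the
     *    next factory */
    filter = gst_pad_get_current_caps (slot_srcpad);
    caps = gst_pad_query_caps (sinkpad, filter);
    if (filter != NULL)
      gst_caps_unref (filter);

    if (caps == NULL || gst_caps_is_empty (caps)) {
      if (caps != NULL)
        gst_caps_unref (caps);
      gst_element_set_state (decoder, GST_STATE_NULL);
      gst_pad_unlink (slot_srcpad, sinkpad);
      gst_object_unref (sinkpad);
      gst_bin_remove (GST_BIN (dbin), decoder);
      continue;
    }
    gst_caps_unref (caps);
    gst_object_unref (sinkpad);

    /* 5. expose the source pad only now, after the query succeeded */
    srcpad = gst_element_get_static_pad (decoder, "src");
    gst_ghost_pad_set_target (GST_GHOST_PAD (ghost_srcpad), srcpad);
    gst_object_unref (srcpad);

    /* 6. sync the decoder state with the parent */
    gst_element_sync_state_with_parent (decoder);
    return TRUE;
  }

  return FALSE;
}
```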
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/1201>
uridecodebin assumes that the references to the decodebins stored in
pending_decodebins are floating, but that may not be true if the refcount of
a decodebin was touched elsewhere. To avoid relying on the floating
reference, hold a strong reference instead.
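A minimal sketch of the idea, assuming the pending decodebins are kept in a
GQueue (the container and helper names are illustrative):

```c
#include <gst/gst.h>

static void
store_pending_decodebin (GQueue * pending_decodebins, GstElement * decodebin)
{
  /* gst_object_ref_sink() turns a still-floating reference into a normal
   * one, or simply adds a reference if it was already sunk elsewhere, so we
   * always end up holding a strong reference. */
  g_queue_push_tail (pending_decodebins, gst_object_ref_sink (decodebin));
}

static GstElement *
take_pending_decodebin (GQueue * pending_decodebins)
{
  /* The caller now owns the strong reference and must eventually drop it
   * with gst_object_unref(). */
  return g_queue_pop_head (pending_decodebins);
}
```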
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/1113>
When adding inputs dynamically, we need to make sure the new parsebin is
added *and* activated by the same thread (by taking the state change lock).
The rationale is that the calling thread might be an upstream streaming
thread, and activating parsebin might call back upstream. If we don't use
the same thread (for example, when the application does a state change on
decodebin3 between the moment we add parsebin to decodebin3 and the moment
we synchronize its state), we would end up with different threads trying to
take upstream recursive locks.
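A sketch of the idea, with illustrative function names; the point is that
the add and the activation both happen on the calling thread, serialized by
the element's state lock:

```c
#include <gst/gst.h>

static void
add_and_activate_parsebin (GstElement * dbin, GstElement * parsebin)
{
  /* Serialize with any concurrent state change on decodebin3 so that both
   * steps are performed by the same (calling) thread. */
  GST_STATE_LOCK (dbin);
  gst_bin_add (GST_BIN (dbin), parsebin);
  /* Activating parsebin may call back upstream (queries, events), which is
   * why this must run on the upstream streaming thread that called us. */
  gst_element_sync_state_with_parent (parsebin);
  GST_STATE_UNLOCK (dbin);
}
```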
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/932>
Previously this was only taken care of if there was already a decoder.
However, if no decoder was needed before, the ghostpad would have been
linked directly to the slot's srcpad. Reconfiguring the slot requires
undoing that link so that linking can happen normally.
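A sketch of that undo step, with illustrative names (not the exact
decodebin3 code):

```c
#include <gst/gst.h>

static void
undo_direct_slot_link (GstPad * output_ghostpad, GstPad * slot_srcpad)
{
  GstPad *target = gst_ghost_pad_get_target (GST_GHOST_PAD (output_ghostpad));

  /* If the ghostpad was targeting the slot's srcpad directly (no decoder in
   * between), drop that target so a decoder can be linked in normally. */
  if (target == slot_srcpad)
    gst_ghost_pad_set_target (GST_GHOST_PAD (output_ghostpad), NULL);

  if (target != NULL)
    gst_object_unref (target);
}
```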
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/985>
Whenever a new collection is calculated, the internal `select_streams_seqnum`
variable is reset. This ensures that we reliably know whether a select-streams
event has been received for that new collection.
Use that to decide whether we should add previously unselected streams or
new streams to the current selection.
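A sketch of how the seqnum supports that decision, with illustrative names
(not decodebin3's actual fields):

```c
#include <gst/gst.h>

typedef struct
{
  guint32 select_streams_seqnum;
} SelectionState;

/* Called whenever a new stream collection is computed */
static void
on_new_collection (SelectionState * state)
{
  /* No select-streams event has been seen for this collection yet */
  state->select_streams_seqnum = GST_SEQNUM_INVALID;
}

/* Called when a GST_EVENT_SELECT_STREAMS event is received */
static void
on_select_streams (SelectionState * state, GstEvent * event)
{
  state->select_streams_seqnum = gst_event_get_seqnum (event);
}

/* TRUE if the application already selected streams for this collection */
static gboolean
collection_has_selection (const SelectionState * state)
{
  return state->select_streams_seqnum != GST_SEQNUM_INVALID;
}
```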
Fixes #784
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/733>
When linking source pads to decodebin, make sure we use the *specified* new
source pad and not some random one.
This avoids ending up with source pads being unlinked.
This was the main cause of random timeouts in the rtsp
change_state_intensive validate tests.
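For illustration, a hedged sketch of the safe pattern: link the pad handed
to the pad-added callback rather than one looked up afterwards (the callback
and element names are illustrative):

```c
#include <gst/gst.h>

static void
src_pad_added_cb (GstElement * src, GstPad * new_pad, gpointer user_data)
{
  GstElement *decodebin = GST_ELEMENT (user_data);
  GstPad *sinkpad = gst_element_get_static_pad (decodebin, "sink");

  /* Link the *specified* new pad; picking some other pad from the source
   * here could leave new_pad unlinked. */
  if (sinkpad != NULL) {
    if (!gst_pad_is_linked (sinkpad))
      gst_pad_link (new_pad, sinkpad);
    gst_object_unref (sinkpad);
  }
}
```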
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/687>
The main_input stream-id would not get reset when going to READY state.
This would cause warnings when trying to reuse the same decodebin3, since
you would get a new STREAM_START event with a new stream-id, which would
collide with the now-stale stream-id.
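A sketch of the fix's idea (the structure and field names are illustrative,
not decodebin3's real ones):

```c
#include <gst/gst.h>

/* Illustrative per-input state */
typedef struct
{
  gchar *main_input_stream_id;
} InputState;

/* Called from the element's change_state handler on PAUSED_TO_READY */
static void
reset_input_on_ready (InputState * input)
{
  /* Forget the stream-id so a reused element accepts the stream-id carried
   * by the next STREAM_START event instead of warning about a collision. */
  g_free (input->main_input_stream_id);
  input->main_input_stream_id = NULL;
}
```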
The default values were not being set on element initialization. This was a
problem for buffer_duration and buffer_size, since they would be
zero-initialized rather than being set to -1. This would cause the
underlying queue2 element to have no limits and, depending on the streamed
file, could cause queue2 to allocate massive amounts of memory.
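A sketch of the fix, with illustrative names (the real element and fields
may differ):

```c
#include <gst/gst.h>

/* Documented defaults: -1 means "no explicit limit configured here" rather
 * than a limit of 0, which queue2 interprets as unbounded. */
#define DEFAULT_BUFFER_DURATION ((gint64) -1)
#define DEFAULT_BUFFER_SIZE (-1)

typedef struct
{
  gint64 buffer_duration;
  gint buffer_size;
} MyBinSettings;

static void
my_bin_settings_init (MyBinSettings * settings)
{
  /* Without this, the zero-initialized values would be forwarded to queue2
   * as "no limits", potentially letting it buffer unbounded amounts. */
  settings->buffer_duration = DEFAULT_BUFFER_DURATION;
  settings->buffer_size = DEFAULT_BUFFER_SIZE;
}
```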
The `force-sw-decoders` boolean property is added to decodebin3 and
uridecodebin3. In uridecodebin3 it is only a proxy property that forwards
the value to decodebin3.
When decodebin3 has `force-sw-decoders` enabled, it filters out of its
decoder and decodable factories those elements within the 'Hardware' class
when reconfiguring the output stream.
playbin3 adds the GST_PLAY_FLAG_FORCE_SW_DECODERS flag, sets the
`force-sw-decoders` property on its internal uridecodebin accordingly, and
also filters out 'Hardware' class decoder elements during caps
negotiation.
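A sketch of the filtering, checking each candidate factory's klass metadata
for 'Hardware' (names are illustrative, not the actual decodebin3 code):

```c
#include <string.h>
#include <gst/gst.h>

/* Returns a new list containing only non-'Hardware' factories; the caller
 * owns the returned references. */
static GList *
filter_out_hw_factories (GList * factories)
{
  GList *walk, *result = NULL;

  for (walk = factories; walk != NULL; walk = walk->next) {
    GstElementFactory *factory = GST_ELEMENT_FACTORY (walk->data);
    const gchar *klass =
        gst_element_factory_get_metadata (factory, GST_ELEMENT_METADATA_KLASS);

    if (klass == NULL || strstr (klass, "Hardware") == NULL)
      result = g_list_prepend (result, gst_object_ref (factory));
  }

  return g_list_reverse (result);
}
```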
The `force-sw-decoders` boolean property is also added to decodebin2 and
uridecodebin. By default the property is %FALSE and the new code is
bypassed. Otherwise the factory list is filtered, removing decoders within
the 'Hardware' class.
uridecodebin sets the `force-sw-decoders` property on its internal
decodebin, and also filters out the Hardware class in its default
autoplug-factories signal handler.
playbin2 adds GST_PLAY_FLAG_FORCE_SW_DECODERS to its flags property and,
depending on it, sets the `force-sw-decoders` property on its internal
uridecodebin; it also filters out Hardware class decoder elements in its
autoplug-factories signal handler.
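A minimal usage sketch from an application's point of view (the URI is a
placeholder):

```c
#include <gst/gst.h>

int
main (int argc, char **argv)
{
  GstElement *dec;

  gst_init (&argc, &argv);

  dec = gst_element_factory_make ("uridecodebin", NULL);
  g_object_set (dec,
      "uri", "file:///path/to/media.mp4", /* placeholder URI */
      "force-sw-decoders", TRUE,          /* skip 'Hardware' class decoders */
      NULL);

  /* ... add to a pipeline, link pads from the "pad-added" signal, etc. ... */

  gst_object_unref (dec);
  return 0;
}
```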
When the playsink's sink is activated, its state is set to READY but it
remains unlinked. So, for decodebin3 to potentially reuse the context later
on, the whole playbin3 needs to store it internally.
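For illustration, a hedged sketch of how a bin can store contexts posted by
its children; this is generic GstContext handling, not playbin3's exact
code:

```c
#include <gst/gst.h>

/* Called from the bin's GstBinClass::handle_message implementation before
 * chaining up to the parent class. */
static void
cache_child_context (GstBin * bin, GstMessage * message)
{
  GstContext *context = NULL;

  if (GST_MESSAGE_TYPE (message) != GST_MESSAGE_HAVE_CONTEXT)
    return;

  gst_message_parse_have_context (message, &context);
  /* Storing the context on the bin keeps it around so it can later be
   * handed to elements (e.g. a re-created decoder) that need it. */
  gst_element_set_context (GST_ELEMENT (bin), context);
  gst_context_unref (context);
}
```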
There are (mp4) streams in the wild that basically contain no tracks but do
have redirect info[0]. In that case qtdemux cannot expose any pad (there are
no tracks) and so cannot post anything but an error on the bus, as:
- it cannot send EOS downstream, since it has no pad,
- posting an EOS message would be useless, as the PAUSED state cannot be
  reached and there is no sink in the pipeline, meaning GstBin will simply
  ignore it.
Currently the application could try to handle that, but it is pretty
complex: it gets the REDIRECT message on the bus, at which point it could
set the URL, but playbin will ignore it as it only applies to the next EOS.
The application thus needs to set the pipeline to NULL (READY won't do, as
it is already in READY at that point), and it also needs to figure out that
the following ERROR message on the bus should be ignored, which is not
really simple.
The approach here is to allow elements to add details to the ERROR message
with a `redirect-location` field, which elements like playbin can handle and
use right away.
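A sketch of both sides of that idea, using the error-details API; the
element, variable names, and exact messages are illustrative, not the actual
qtdemux/playbin code:

```c
#include <gst/gst.h>

/* Element side: post an ERROR message that carries the redirect target in
 * its details structure. */
static void
post_error_with_redirect (GstElement * demux, const gchar * redirect_uri)
{
  GError *err = g_error_new_literal (GST_STREAM_ERROR,
      GST_STREAM_ERROR_DEMUX, "This file contains no playable streams.");
  GstStructure *details = gst_structure_new ("details",
      "redirect-location", G_TYPE_STRING, redirect_uri, NULL);
  GstMessage *msg = gst_message_new_error_with_details (GST_OBJECT (demux),
      err, "no playable streams, but a redirect is available", details);

  gst_element_post_message (demux, msg);
  g_error_free (err);
}

/* Bus-handling side (playbin or an application): read the detail back and
 * act on it instead of failing. */
static const gchar *
get_redirect_location (GstMessage * msg)
{
  const GstStructure *details = NULL;

  gst_message_parse_error_details (msg, &details);
  if (details != NULL)
    return gst_structure_get_string (details, "redirect-location");
  return NULL;
}
```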
We could also use the element 'redirect' message in playbin, but the issue
with that approach is that the element will still emit the ERROR message on
the bus, leading to wrong behaviour. That can't be avoided since, in case
the app/parent pipeline does not handle the redirect instruction, the ERROR
message is necessary (and there is no way for the element emitting the
redirect to detect that the message has been "handled").
[0]: http://movietrailers.apple.com/movies/paramount/terminator-dark-fate/terminator-dark-fate-trailer-2_480p.mov