In order to flush out multiqueue, we send a STREAM_START again and
then an EOS event.
The problem was that we might end up pushing out on the
output of multiqueue (and therefore decodebin3) a series of:
* EOS / STREAM_START / EOS
Apart from the ugliness of such output, if decodebin3 is used with
elements such as concat on its output, they might potentially
block on that second STREAM_START.
In order to make sure we don't end up in that situation, we send
a custom STREAM_START event when refreshing multiqueue (which we
drop on the output) and we don't special-case EOS events on streams
on which we already got EOS.
At worst we now end up sending at most two EOS on the output of
multiqueue (and decodebin3).
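As a rough illustration of the mechanism (this is not the actual decodebin3
code; the "decodebin3-flushing" field and both function names are made up for
this sketch), the flushing STREAM_START can be tagged with a custom field so
that a probe on the multiqueue output recognises it and drops it:

  #include <gst/gst.h>

  /* Send a STREAM_START tagged with a custom field so it can be
   * identified (and dropped) later. Field/function names are
   * illustrative only. */
  static void
  push_flushing_stream_start (GstPad * mq_sinkpad, const gchar * stream_id)
  {
    GstEvent *event = gst_event_new_stream_start (stream_id);
    GstStructure *s = gst_event_writable_structure (event);

    gst_structure_set (s, "decodebin3-flushing", G_TYPE_BOOLEAN, TRUE, NULL);
    gst_pad_send_event (mq_sinkpad, event);
  }

  /* Probe on the multiqueue source pad: drop the tagged STREAM_START
   * so it never reaches the output of decodebin3. */
  static GstPadProbeReturn
  drop_flushing_stream_start (GstPad * pad, GstPadProbeInfo * info,
      gpointer user_data)
  {
    GstEvent *event = GST_PAD_PROBE_INFO_EVENT (info);

    if (GST_EVENT_TYPE (event) == GST_EVENT_STREAM_START) {
      const GstStructure *s = gst_event_get_structure (event);

      if (s && gst_structure_has_field (s, "decodebin3-flushing"))
        return GST_PAD_PROBE_DROP;
    }
    return GST_PAD_PROBE_OK;
  }

In such a sketch the probe would be attached with
GST_PAD_PROBE_TYPE_EVENT_DOWNSTREAM on the relevant multiqueue source pad.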
Similar in spirit to the playbin2 architecture, except that the
uridecodebin3 elements are prerolled much earlier and all streams of the
same type are fed through a 'concat' element.
This keeps the philosophy of having all elements connected as soon
as possible.
The 'about-to-finish' signal is emitted whenever one of the uridecodebin3
elements is about to finish, allowing the user to set the next uri/suburi.
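For context, a minimal sketch of how an application would typically use that
signal with playbin3 (the file URIs are placeholders):

  #include <gst/gst.h>

  /* Queue the next URI as soon as the current one is about to finish. */
  static void
  on_about_to_finish (GstElement * playbin, gpointer user_data)
  {
    g_object_set (playbin, "uri", "file:///path/to/next-track.ogg", NULL);
  }

  int
  main (int argc, char **argv)
  {
    GstElement *playbin;

    gst_init (&argc, &argv);
    playbin = gst_element_factory_make ("playbin3", NULL);
    g_object_set (playbin, "uri", "file:///path/to/first-track.ogg", NULL);
    g_signal_connect (playbin, "about-to-finish",
        G_CALLBACK (on_about_to_finish), NULL);

    gst_element_set_state (playbin, GST_STATE_PLAYING);
    g_main_loop_run (g_main_loop_new (NULL, FALSE));
    return 0;
  }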
The notion of a group being active has changed. It now means that the
uridecodebin3 has been activated, but doesn't mean it is the one
currently being output by the sinks (i.e. curr_group and next_group).
This is done by detecting GST_MESSAGE_STREAM_START emission by playsink
and figuring out which group is really playing.
When the current group changes, a new thread is started to deactivate
the previous one and optionally fire 'about-to-finish'.
Apologies for the big commit, but it wasn't really possible to split it
in anything smaller.
* Switch to uridecodebin3 instead of managing urisourcebin and decodebin3
ourselves. No major architectural change with this.
* Reconfigure sinks/outputs when needed. This is possible thanks to the
various streams-related APIs (see the sketch below). Instead of blocking new
pads and waiting for a (fake) no-more-pads to decide what to connect, we
reconfigure playsink and the combiners to whatever types are currently
selected. All of this is done in reconfigure_output().
New pads are immediately connected to (combiners and) sinks, allowing
immediate negotiation and usage.
* Since elements are always connected, the "cached-duration" feature is gone
and queries can reach the target elements.
* The auto-plugging related code is currently disabled entirely until
we get the new proper API.
* Store collections at the GstSourceGroup level and not globally
* And more comments a bit everywhere
NOTE: gapless is still not functional, but this opens the way to handling
it in a streams-aware fashion (where several uridecodebin3 can
be active at the same time).
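The streams-related API referred to above is the GstStream/GstStreamCollection
machinery. As a rough sketch (the function name is illustrative and error
handling is omitted) of how a stream collection is inspected and a subset of
streams requested, which is conceptually the kind of information
reconfigure_output() works from:

  #include <gst/gst.h>

  /* Called for GST_MESSAGE_STREAM_COLLECTION messages on the bus: pick
   * the first audio and first video stream and request them. */
  static void
  select_first_audio_and_video (GstElement * pipeline, GstMessage * msg)
  {
    GstStreamCollection *collection = NULL;
    GList *selected = NULL;
    gboolean have_audio = FALSE, have_video = FALSE;
    guint i;

    gst_message_parse_stream_collection (msg, &collection);

    for (i = 0; i < gst_stream_collection_get_size (collection); i++) {
      GstStream *stream = gst_stream_collection_get_stream (collection, i);
      GstStreamType type = gst_stream_get_stream_type (stream);

      if ((type & GST_STREAM_TYPE_AUDIO) && !have_audio) {
        have_audio = TRUE;
        selected = g_list_append (selected,
            (gchar *) gst_stream_get_stream_id (stream));
      } else if ((type & GST_STREAM_TYPE_VIDEO) && !have_video) {
        have_video = TRUE;
        selected = g_list_append (selected,
            (gchar *) gst_stream_get_stream_id (stream));
      }
    }

    gst_element_send_event (pipeline,
        gst_event_new_select_streams (selected));

    g_list_free (selected);
    gst_object_unref (collection);
  }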
With push-based sources, urisourcebin will emit this signal when
the stream has been fully consumed.
This signal can be used to know when the source is done providing
data.
Even if the input is monoscopic, the app might want to display
it in a different layout (for example side-by-side for VR), so if the app
changes the output-multiview-mode, always use that.
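A minimal sketch, assuming an element such as glviewconvert that exposes the
output-multiview-mode property:

  #include <gst/gst.h>
  #include <gst/video/video.h>

  /* Force a side-by-side output layout, regardless of the (possibly
   * monoscopic) input multiview mode. */
  static void
  force_side_by_side (GstElement * viewconvert)
  {
    g_object_set (viewconvert,
        "output-multiview-mode", GST_VIDEO_MULTIVIEW_MODE_SIDE_BY_SIDE, NULL);
  }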
We can receive an element that is either floating or not, and need to
accommodate that in the signal return values. Do so by removing the
floating flag.
https://bugzilla.gnome.org/show_bug.cgi?id=792597
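A minimal sketch of the pattern (the helper name is made up): whether or not
the element returned from the signal handler carries a floating reference,
sink it so ownership is unambiguous:

  #include <gst/gst.h>

  /* Take ownership of an element returned from a signal handler:
   * ref_sink() converts a floating reference into a normal one, or adds
   * a reference if the object was not floating. */
  static GstElement *
  take_element_from_signal (GstElement * element)
  {
    if (element == NULL)
      return NULL;

    return GST_ELEMENT (gst_object_ref_sink (element));
  }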
WARNING: Trying to compare values of different types (str, int).
The result of this is undefined and will become a hard error
in a future Meson release.
When the GstRTSPConnection class sends an RTSP-over-HTTP tunnelling
request, the HTTP Content-Type header is missing from the HTTP POST
request.
This isn't a problem with most servers, but there are servers that
reject the request unless it also carries a Content-Type header.
RFC 1945:
Any HTTP/1.0 message containing an entity body should include a
Content-Type header field defining the media type of that body.
Apple Dispatch 28:
QuickTime Streaming uses the "application/x-rtsp-tunnelled" MIME
type in both the Content-Type and Accept headers. This reflects
the data type that is expected and delivered by the client and server.
https://bugzilla.gnome.org/show_bug.cgi?id=793110
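A hedged sketch of the kind of fix involved (not the exact
gstrtspconnection.c change; the helper name is illustrative), using the
public GstRTSPMessage API:

  #include <gst/rtsp/gstrtspdefs.h>
  #include <gst/rtsp/gstrtspmessage.h>

  /* Make sure the tunnelled HTTP POST carries the Content-Type header
   * expected by stricter servers. */
  static void
  ensure_tunnel_content_type (GstRTSPMessage * post_msg)
  {
    gchar *value = NULL;

    if (gst_rtsp_message_get_header (post_msg, GST_RTSP_HDR_CONTENT_TYPE,
            &value, 0) != GST_RTSP_OK) {
      gst_rtsp_message_add_header (post_msg, GST_RTSP_HDR_CONTENT_TYPE,
          "application/x-rtsp-tunnelled");
    }
  }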
It does not time out anymore, even though it's a very slow test. For
context, this test runs routines for a fixed amount of time and prints
the throughput, which means the test takes more time every time a pixel
format is added. If that becomes a problem again, we should disable the
benchmarks by default.
The source offset (soff) was not incremented for each component, and then
each group of 3 components was inverted. This was causing a staircase
effect combined with some noise.
https://bugzilla.gnome.org/show_bug.cgi?id=789876