This removes the crossfade-ratio property and replaces it with an
operator property. Currently this implements the following operators:
- SOURCE: Copy over the source and don't look at the destination
- OVER: Default blending of the source over the destination
- ADD: Like OVER but simply adding the alpha instead
See the example for how to implement crossfading with this.
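As a rough sketch of the idea (pipeline and values illustrative, not the
shipped example), a 50/50 crossfade can be set up by blending the bottom
pad normally at half alpha and adding the top pad's half via ADD:

  #include <gst/gst.h>

  int
  main (int argc, char **argv)
  {
    GstElement *pipeline;

    gst_init (&argc, &argv);
    /* 50/50 crossfade: the bottom pad blends with the default OVER
     * operator at half alpha, the top pad contributes its half via ADD. */
    pipeline = gst_parse_launch (
        "videotestsrc pattern=ball ! c.sink_0  "
        "videotestsrc pattern=snow ! c.sink_1  "
        "compositor name=c background=black "
        "sink_0::alpha=0.5 sink_1::alpha=0.5 sink_1::operator=add ! "
        "videoconvert ! autovideosink", NULL);
    gst_element_set_state (pipeline, GST_STATE_PLAYING);
    g_main_loop_run (g_main_loop_new (NULL, FALSE));
    return 0;
  }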
https://bugzilla.gnome.org/show_bug.cgi?id=797169
The queue between the audiotee and the audio chain wasn't properly added to the
bin, leading to streamsynchronizer locks on EOS. Reconfiguration of the
visualization chain wasn't working as expected either. It is now possible to
dynamically enable/disable the audio visualization support.
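For instance, an application can now toggle visualization at runtime
through playbin's "flags" property (sketch; the 0x8 value is copied from
the GstPlayFlags enum, where it is GST_PLAY_FLAG_VIS):

  /* 'playbin' is an existing playbin instance. */
  guint flags;

  g_object_get (playbin, "flags", &flags, NULL);
  g_object_set (playbin, "flags", flags | 0x8, NULL); /* enable vis */
  /* clearing the bit again (flags & ~0x8) disables it */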
https://bugzilla.gnome.org/show_bug.cgi?id=796553
A value of 255 will easily become 0 in the blending functions, as they
expect the maximum value to be 255.
This can be reproduced with:
gst-launch-1.0 videotestsrc pattern=ball ! c.sink_0 \
videotestsrc pattern=snow ! c.sink_1 \
compositor name=c \
sink_0::zorder=0 sink_1::zorder=1 sink_0::crossfade-ratio=0.5 \
background=black ! \
videoconvert ! xvimagesink
Changing crossfade-ratio by +/- 0.001 makes it work correctly, and the
same problem happens at e.g. 0.25, 0.75 and N*0.0625.
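A minimal sketch of the failure mode (illustrative arithmetic, not the
actual blending code): at ratio 0.5 both contributions round to 128, and
their 8-bit sum wraps around:

  #include <stdint.h>
  #include <stdio.h>

  int
  main (void)
  {
    uint8_t src_alpha = (uint8_t) (255 * 0.5 + 0.5); /* rounds to 128 */
    uint8_t dst_alpha = (uint8_t) (255 * 0.5 + 0.5); /* rounds to 128 */
    uint8_t sum = src_alpha + dst_alpha;             /* 256 wraps to 0 */

    printf ("%u\n", sum); /* prints 0 */
    return 0;
  }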
https://bugzilla.gnome.org/show_bug.cgi?id=796846
The formula, 'offset = time / rate', is correct only if
the rate is never changed. When the rate is changed,
the offset should be re-calculated based on the previous
offset.
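A sketch of the corrected calculation (names illustrative, following the
commit's 'offset = time / rate' notation):

  /* Recompute the offset from the point where the rate last changed,
   * instead of from time zero. */
  static guint64
  recalc_offset (guint64 prev_offset, guint64 rate_change_time,
      guint64 now, guint64 rate)
  {
    return prev_offset + (now - rate_change_time) / rate;
  }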
https://bugzilla.gnome.org/show_bug.cgi?id=791269
adder needs more than just trivial work to support planar buffers properly
because it currently reads sub-buffers from GstCollectPads in order for all
of them to have matching sizes. In planar mode, this means it would truncate
some channels and mix them up in strange ways. It only works if all input
buffers in all sink pads have matching sizes.
This moves all the conversion-related code to a single place, reduces
code duplication inside compositor, and makes the glmixer code less
awkward. It is also the same pattern as used by GstAudioAggregator.
The aggregated_frame is now called prepared_frame and is passed directly
to the prepare_frame and cleanup_frame virtual methods. For the
currently queued buffer there is now a method on the video aggregator
pad.
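A rough sketch of what a subclass might look like under the reworked API
(the signatures are assumed from the description above and may differ
from the final API):

  static gboolean
  my_prepare_frame (GstVideoAggregator * vagg, GstVideoAggregatorPad * pad,
      GstBuffer * buffer, GstVideoFrame * prepared_frame)
  {
    /* Map the queued buffer into the frame the aggregate function reads. */
    return gst_video_frame_map (prepared_frame, &pad->info, buffer,
        GST_MAP_READ);
  }

  static void
  my_cleanup_frame (GstVideoAggregator * vagg, GstVideoAggregatorPad * pad,
      GstVideoFrame * prepared_frame)
  {
    gst_video_frame_unmap (prepared_frame);
  }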
With the way caps negotiation works in encoders, the only way to ensure
that no downstream renegotiation is done in the encoder is to also lock
the upstream caps. In any case, with the current behavior, elements
upstream of encoders are *required* to handle any file format, so
locking the upstream format should be safe.
https://bugzilla.gnome.org/show_bug.cgi?id=795464
Otherwise decodebin won't get notified about the STREAM_COLLECTION coming
from the sources and thus will never be informed about it. Without
being informed about the stream collection, decodebin won't be able to
select any streams. It ends up not creating any output for the streams
defined from outside parsebin.
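Conceptually the fix amounts to making sure the collection event travels
downstream (illustrative fragment, not the actual parsebin code):

  /* Forward the stream collection downstream so that decodebin3 learns
   * which streams exist and can select from them. */
  gst_pad_push_event (srcpad,
      gst_event_new_stream_collection (collection));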
https://bugzilla.gnome.org/show_bug.cgi?id=795364
Instead, go backwards to before segment.stop based on the framerate or
the next buffer's end timestamp. Otherwise the first buffer will usually
be dropped because it is outside the segment.
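For example (sketch, names illustrative), with a known framerate the
first buffer's timestamp can be placed one frame duration before
segment.stop so that it falls inside the segment:

  /* fps_n / fps_d is the negotiated framerate */
  GstClockTime frame_dur = gst_util_uint64_scale (GST_SECOND, fps_d, fps_n);
  GstClockTime ts = segment.stop > frame_dur ? segment.stop - frame_dur : 0;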
https://bugzilla.gnome.org/show_bug.cgi?id=781899
Buffering messages are only sent for the active group (in case there
is more than one).
If the inactive group posts buffering messages we keep the last one
around and will post it once it becomes the playing one.
In order to flush out multiqueue, we again send a STREAM_START and
then an EOS event.
The problem was that we might end up pushing out on the
output of multiqueue (and therefore decodebin3) a series of:
* EOS / STREAM_START / EOS
Apart from the ugliness of such output, if decodebin3 is used with
elements such as concat on its output, they might potentially
block on that second STREAM_START.
In order to make sure we don't end up in that situation we send
a custom STREAM_START event when refreshing multiqueue (which we
drop on the output) and we don't special case EOS events on streams
on which we already got EOS.
At worst we now end up sending at most two EOS on the output of
multiqueue (and decodebin3).
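One way to realize the marker idea (sketch only; the stream-id string is
hypothetical, not the actual decodebin3 implementation):

  /* A STREAM_START with a recognizable stream-id, sent while refreshing
   * multiqueue and dropped again on the multiqueue output. */
  GstEvent *ev = gst_event_new_stream_start ("decodebin3/flushing");
  gst_pad_send_event (mq_sinkpad, ev);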
Similar in vein to the playbin2 architecture, except that the
uridecodebin3 elements are prerolled much earlier and all streams of the
same type are fed through a 'concat' element.
This keeps the philosophy of having all elements connected as soon
as possible.
The 'about-to-finish' signal is emitted whenever one of the uridecodebin3
elements is about to finish, allowing users to set the next uri/suburi.
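Typical usage looks like this (URI illustrative):

  static void
  on_about_to_finish (GstElement * playbin, gpointer user_data)
  {
    /* Queue the next track before the current one drains. */
    g_object_set (playbin, "uri", "file:///path/to/next.ogg", NULL);
  }

  /* ... during setup: */
  g_signal_connect (playbin, "about-to-finish",
      G_CALLBACK (on_about_to_finish), NULL);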
The notion of a group being active has changed. It now means that the
uridecodebin3 has been activated, but doesn't mean it is the one
currently being outputted by the sinks (i.e. curr_group and next_group).
This is done by detecting the GST_MESSAGE_STREAM_START posted by playsink
and figuring out which group is really playing.
When the current group changes, a new thread is started to deactivate
the previous one and optionally fire 'about-to-finish'.
Apologies for the big commit, but it wasn't really possible to split it
into anything smaller.
* Switch to uridecodebin3 instead of managing urisourcebin and decodebin3
ourselves. No major architectural change with this.
* Reconfigure sinks/outputs when needed. This is possible thanks to the
various streams-related API. Instead of blocking new pads and waiting
for a (fake) no-more-pads to decide what to connect, we instead reconfigure
playsink and the combiners to whatever types are currently selected. All of
this is done in reconfigure_output().
New pads are immediately connected to (combiners and) sinks, allowing
immediate negotiation and usage.
* Since elements are always connected, the "cached-duration" feature is gone
and queries can reach the target elements.
* The auto-plugging related code is currently disabled entirely until
we get the new proper API.
* Store collections at the GstSourceGroup level and not globally
* Add more comments throughout
NOTE: gapless is still not functional, but this opens the way to
handling it in a streams-aware fashion (where several uridecodebin3 can
be active at the same time).
With push-based sources, urisourcebin will emit this signal when
the stream has been fully consumed.
This signal can be used to know when the source is done providing
data.
With playbin the last subtitle chunk would not get displayed
if the last chunk was missing a newline at the end. This is
because streamsynchronizer will hold back the EOS event until
the audio and video streams are finished too, so subparse
would never forcefully push out the last chunk until the very
end when it is too late.
However, we do get a STREAM_GROUP_DONE event from streamsynchronizer,
so handle that like EOS and force out any remaining text then.
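In pseudo-C, the event handler change amounts to the following (the
flush helper is hypothetical):

  case GST_EVENT_STREAM_GROUP_DONE:
  case GST_EVENT_EOS:
    /* Push out any pending subtitle text before forwarding the event,
     * since the real EOS may be held back by streamsynchronizer. */
    flush_remaining_text (self);
    break;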
https://bugzilla.gnome.org/show_bug.cgi?id=771853