The blend functions for alpha formats need to do a lot more work than a
plain memcpy, so we can just memcpy when we know that a blend is not
actually needed.
1080p AYUV ! compositor background=transparent ! fakesink - 56% faster
Specifically, when we don't draw the background and the first pad we
draw completely covers the output frame, we can just copy it as-is.
The rest of the pads (if any) will get composited on top normally.
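As a rough illustration of that fast path (the helper below is hypothetical
and simplified, not the actual compositor code), the copy only happens when
the output is still empty and the pad's frame lines up exactly with it:

  #include <gst/video/video.h>

  /* Hypothetical sketch, not the actual compositor code: when nothing has
   * been drawn into the output yet and this pad's frame lines up exactly
   * with it, a per-plane copy is enough and the (much slower) alpha blend
   * function can be skipped entirely. */
  static gboolean
  maybe_copy_instead_of_blend (GstVideoFrame * outframe,
      GstVideoFrame * padframe, gint xpos, gint ypos, gboolean output_is_empty)
  {
    if (output_is_empty && xpos == 0 && ypos == 0 &&
        GST_VIDEO_FRAME_FORMAT (padframe) == GST_VIDEO_FRAME_FORMAT (outframe) &&
        GST_VIDEO_FRAME_WIDTH (padframe) == GST_VIDEO_FRAME_WIDTH (outframe) &&
        GST_VIDEO_FRAME_HEIGHT (padframe) == GST_VIDEO_FRAME_HEIGHT (outframe)) {
      /* gst_video_frame_copy() boils down to a memcpy per plane */
      return gst_video_frame_copy (outframe, padframe);
    }

    return FALSE;               /* caller falls back to the blend function */
  }
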
If the background is transparent and obscured by a pad that may or may
not have alpha, we can still skip drawing it entirely.
AYUV 1080p ! compositor background=transparent ! fakesink - 75% faster
We don't need to waste time drawing the background when one of the
pads completely covers the output and there's no alpha on the pad or
in the video format. Speedups:
I420 1080p ! compositor ! fakesink - 72% faster
I420 1080p ! compositor background=black ! fakesink - 45% faster
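The background-skipping checks described in the last two changes boil down
to roughly the following (hypothetical helper, not the actual compositor
code):

  #include <glib.h>

  /* Hypothetical sketch, not the actual compositor code: the background can
   * be skipped when some pad's frame covers the whole output and either the
   * background is transparent anyway, or the pad is fully opaque (alpha of
   * 1.0 and a pixel format without an alpha channel). */
  static gboolean
  pad_obscures_background (gint xpos, gint ypos, gint width, gint height,
      gint out_width, gint out_height, gdouble pad_alpha,
      gboolean format_has_alpha, gboolean background_is_transparent)
  {
    gboolean covers_output = xpos <= 0 && ypos <= 0 &&
        xpos + width >= out_width && ypos + height >= out_height;

    if (!covers_output)
      return FALSE;
    if (background_is_transparent)
      return TRUE;              /* the background contributes nothing anyway */
    return pad_alpha == 1.0 && !format_has_alpha;
  }
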
This removes the crossfade-ratio property and replaces it with an
operator property. Currently this implements the following operators:
- SOURCE: Copy over the source and don't look at the destination
- OVER: Default blending of the source over the destination
- ADD: Like OVER but simply adding the alpha instead
See the example for how to implement crossfading with this.
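As a rough sketch (the exact property combination is an assumption here,
including the sink_N::operator pad property and its "add" nick; see the
referenced example for the canonical version), a crossfade keeps the bottom
pad at alpha = 1 - ratio and puts the top pad at alpha = ratio with the add
operator, e.g. frozen at the half-way point:

  gst-launch-1.0 videotestsrc pattern=ball ! c.sink_0 \
    videotestsrc pattern=snow ! c.sink_1 \
    compositor name=c background=black \
      sink_0::zorder=0 sink_0::alpha=0.5 \
      sink_1::zorder=1 sink_1::alpha=0.5 sink_1::operator=add ! \
    videoconvert ! xvimagesink

Animating sink_0::alpha from 1 to 0 and sink_1::alpha from 0 to 1 over time
gives the full fade; because the top pad's alpha is added rather than
blended, two opaque inputs stay opaque throughout.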
https://bugzilla.gnome.org/show_bug.cgi?id=797169
255 will easily become 0 in the blending functions, as they expect the
maximum value to be 255.
Can be reproduced with:
gst-launch-1.0 videotestsrc pattern=ball ! c.sink_0 \
  videotestsrc pattern=snow ! c.sink_1 \
  compositor name=c \
    sink_0::zorder=0 sink_1::zorder=1 sink_0::crossfade-ratio=0.5 \
    background=black ! \
  videoconvert ! xvimagesink
Setting crossfade-ratio +/- 0.001 makes it work correctly, and the same
happens at e.g. 0.25, 0.75 and N*0.0625.
https://bugzilla.gnome.org/show_bug.cgi?id=796846
This moves all the conversion-related code to a single place, reduces
code duplication inside compositor and makes the glmixer code less
awkward. It's also the same pattern as used by GstAudioAggregator.
The aggregated_frame is now called prepared_frame and passed to the
prepare_frame and cleanup_frame virtual methods directly. For the
currently queued buffer there is a method on the video aggregator pad
now.
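For a subclass pad, that looks roughly like the following (the prototypes
are paraphrased from the description above and may not match the final API
exactly; in current GStreamer releases the queued buffer is available via
gst_video_aggregator_pad_get_current_buffer()):

  #include <gst/video/video.h>
  #include <gst/video/gstvideoaggregator.h>

  /* Sketch of a pad subclass under the new scheme; the prototypes are an
   * assumption based on the description above, not copied from the header. */
  static gboolean
  my_pad_prepare_frame (GstVideoAggregatorPad * pad, GstVideoAggregator * vagg,
      GstBuffer * buffer, GstVideoFrame * prepared_frame)
  {
    /* The frame is handed in directly instead of living on the pad as
     * aggregated_frame; map (or convert) the queued buffer into it. */
    return gst_video_frame_map (prepared_frame, &pad->info, buffer,
        GST_MAP_READ);
  }

  static void
  my_pad_cleanup_frame (GstVideoAggregatorPad * pad, GstVideoAggregator * vagg,
      GstVideoFrame * prepared_frame)
  {
    gst_video_frame_unmap (prepared_frame);
  }
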
Crossfading is a bit more complex than just having two pads with the
right keyframes, as the blending is not exactly the same. The difference
is in the way we compute the alpha channel: in the case of crossfading,
we have to compute an additive operation between the destination and the
source (each factored by the input pad's alpha property and the
crossfading ratio), so that the crossfade of two fully opaque frames is
itself fully opaque at any point in the crossfading process, avoiding
bleed-through from the layers underneath.
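Illustrative arithmetic only (not the actual blending code): both
contributions to the output alpha are scaled and then summed, so two fully
opaque inputs sum back to full opacity at every ratio:

  #include <glib.h>

  /* Illustrative only: additive alpha for a crossfade at `ratio', with each
   * input also scaled by its pad's alpha property.  For two opaque inputs
   * (a0 == a1 == 255) and pad alphas of 1.0 the result is 255 for every
   * ratio, so nothing behind the crossfade bleeds through. */
  static guint8
  crossfade_alpha (guint8 a0, gdouble pad0_alpha,
      guint8 a1, gdouble pad1_alpha, gdouble ratio)
  {
    gdouble sum = a0 * pad0_alpha * (1.0 - ratio) + a1 * pad1_alpha * ratio;

    return (guint8) MIN (sum + 0.5, 255.0);
  }
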
Some rationale can be found in https://phabricator.freedesktop.org/T7773.
https://bugzilla.gnome.org/show_bug.cgi?id=784827
When the caps change while streaming, the new caps were getting
processed immediately in videoaggregator, but the next buffer in the
queue that corresponds to these new caps was not necessarily used
immediately, which sometimes resulted in using an old buffer with the
new caps. Of course there used to be a separate buffer_vinfo for mapping
the buffer with its own caps, but in compositor the GstVideoConverter
was still using the wrong info, which resulted in invalid reads and
corrupt output.
The approach here is safer: we delay using the new caps until we
actually select the next buffer in the queue for use. This way we also
eliminate the need for buffer_vinfo, since pad->info is always in sync
with the format of the selected buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=780682
Compositor does not support it currently; handling it correctly needs
special support and is rather non-trivial to implement for all formats.