This moves all the conversion-related code to a single place, reduces
code duplication inside compositor and makes the glmixer code less
awkward. It's also the same pattern as used by GstAudioAggregator.
This is only used for caching purposes and should never actually be in
the public API. If this ever becomes a bottleneck later, caching around
a class-private struct could be implemented.
The aggregated_frame is now called prepared_frame and is passed
directly to the prepare_frame and cleanup_frame virtual methods. For
the currently queued buffer there is now a method on the video
aggregator pad.
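As an illustration, a pad subclass's virtual methods now look roughly
like the sketch below. The exact signatures and the accessor name
(gst_video_aggregator_pad_get_current_buffer) are assumptions derived
from the description above, not verbatim API:

    #include <gst/video/video.h>

    /* Hedged sketch: the frame to prepare is handed in directly instead
     * of being stored on the pad as aggregated_frame. */
    static gboolean
    my_pad_prepare_frame (GstVideoAggregatorPad * pad,
        GstVideoAggregator * vagg, GstBuffer * buffer,
        GstVideoFrame * prepared_frame)
    {
      return gst_video_frame_map (prepared_frame, &pad->info, buffer,
          GST_MAP_READ);
    }

    static void
    my_pad_cleanup_frame (GstVideoAggregatorPad * pad,
        GstVideoAggregator * vagg, GstVideoFrame * prepared_frame)
    {
      if (prepared_frame->buffer)
        gst_video_frame_unmap (prepared_frame);
    }

    /* The currently queued buffer is reached through an accessor on the
     * pad, presumably along the lines of: */
    /* GstBuffer *buf = gst_video_aggregator_pad_get_current_buffer (pad); */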
We need different export decorators for the different libs.
For now this is no actual change, though: just the rename before the
release, plus prelude headers that define the new decorators to
GST_EXPORT.
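A minimal sketch of such a prelude header (hypothetical file name and
macro; the real headers may differ slightly): the per-library decorator
is simply mapped to GST_EXPORT for now, so nothing changes in practice
yet.

    #ifndef __GST_VIDEO_PRELUDE_H__
    #define __GST_VIDEO_PRELUDE_H__

    #include <gst/gst.h>

    #ifndef GST_VIDEO_API
    # define GST_VIDEO_API GST_EXPORT
    #endif

    #endif /* __GST_VIDEO_PRELUDE_H__ */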
The goal here is to minimize the work needed to bring all images
to a common format. A better criterion than the number of pads
with a given format is the number of pixels with a given format.
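A hypothetical sketch of that selection, with made-up helper and
variable names, weighting each format by the pixels that use it rather
than by how many pads use it:

    #include <gst/video/video.h>

    /* Score each input format by the number of pixels carried in that
     * format and pick the highest-scoring one as the common format, so
     * the total amount of conversion work is minimized. */
    static GstVideoFormat
    pick_best_format (GList * pads)     /* GstVideoAggregatorPad list */
    {
      GHashTable *scores = g_hash_table_new (NULL, NULL);
      GstVideoFormat best = GST_VIDEO_FORMAT_UNKNOWN;
      guint64 best_score = 0;
      GList *l;

      for (l = pads; l; l = l->next) {
        GstVideoAggregatorPad *pad = l->data;
        GstVideoFormat fmt = GST_VIDEO_INFO_FORMAT (&pad->info);
        guint64 pixels = (guint64) GST_VIDEO_INFO_WIDTH (&pad->info) *
            GST_VIDEO_INFO_HEIGHT (&pad->info);
        guint64 score = pixels +
            GPOINTER_TO_SIZE (g_hash_table_lookup (scores,
                GINT_TO_POINTER (fmt)));

        g_hash_table_insert (scores, GINT_TO_POINTER (fmt),
            GSIZE_TO_POINTER ((gsize) score));
        if (score > best_score) {
          best_score = score;
          best = fmt;
        }
      }
      g_hash_table_unref (scores);
      return best;
    }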
https://bugzilla.gnome.org/show_bug.cgi?id=786078
Crossfading is a bit more complex than just having two pads with the
right keyframes, as the blending is not exactly the same.
The difference is in the way we compute the alpha channel: in the case
of crossfading, we have to compute an additive operation between the
destination and the source (factored by both the input pad's alpha
property and the crossfading ratio), basically so that the crossfade
result of two opaque frames is itself fully opaque at any time in the
crossfading process, avoiding bleed-through during the layer blending.
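An illustrative sketch of that alpha math (made-up names, not the
actual compositor code):

    #include <glib.h>

    /* "ratio" is the crossfade position: 0.0 = only the first frame,
     * 1.0 = only the second frame. Both contributions are added, each
     * scaled by its pad alpha and by the ratio, so two fully opaque
     * inputs stay fully opaque for every value of the ratio and nothing
     * bleeds through from the layers below. */
    static gdouble
    crossfade_alpha (gdouble first_alpha, gdouble second_alpha,
        gdouble ratio)
    {
      return first_alpha * (1.0 - ratio) + second_alpha * ratio;
    }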
Some rationale can be found in https://phabricator.freedesktop.org/T7773.
https://bugzilla.gnome.org/show_bug.cgi?id=784827
When the pad has received EOS, its buffer may still be mixed any
number of times when the pad's framerate is lower than the output
framerate (for example, the last buffer of a 10 fps input keeps being
mixed into several frames of a 30 fps output).
This was introduced by my patch in
https://bugzilla.gnome.org/show_bug.cgi?id=782962; this patch
also correctly addresses the initial issue.
When the caps change while streaming, the new caps were getting
processed immediately in videoaggregator, but the next buffer in the
queue that corresponds to these new caps was not necessarily used
immediately, which sometimes resulted in using an old buffer with the
new caps. Of course, there used to be a separate buffer_vinfo for
mapping the buffer with its own caps, but in compositor the
GstVideoConverter was still using the wrong info, which resulted in
invalid reads and corrupt output.
The approach here is safer: we delay using the new caps until we
actually select the next buffer in the queue for use. This way we also
eliminate the need for buffer_vinfo, since pad->info is always in sync
with the format of the selected buffer.
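A hedged sketch of that deferred switch; the field and function names
below are illustrative, not the actual videoaggregator code:

    #include <gst/video/video.h>

    /* On a caps event, only remember the caps; pad->info is untouched. */
    static void
    pad_set_caps (GstVideoAggregatorPad * pad, GstCaps * caps)
    {
      gst_caps_replace (&pad->priv->pending_caps, caps);
    }

    /* Only when a buffer carrying the new caps is actually selected for
     * mixing do we update pad->info, so the info always matches the
     * buffer being used and no separate buffer_vinfo is needed. */
    static void
    pad_select_next_buffer (GstVideoAggregatorPad * pad, GstBuffer * buf)
    {
      if (pad->priv->pending_caps) {
        gst_video_info_from_caps (&pad->info, pad->priv->pending_caps);
        gst_caps_replace (&pad->priv->pending_caps, NULL);
      }
      gst_buffer_replace (&pad->buffer, buf);
    }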
https://bugzilla.gnome.org/show_bug.cgi?id=780682
When entering this code path, we know that:
- we received EOS on this pad;
- we consumed all its buffers.
In any case, we want to replace vaggpad->buffer with NULL,
otherwise we will end up mixing the same buffer twice.
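In code that boils down to something like the following one-liner
(simplified sketch of the idea, not the exact patch):

    /* Drop our reference so the same buffer cannot be selected again
     * and mixed into another output frame. */
    gst_buffer_replace (&vaggpad->buffer, NULL);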
https://bugzilla.gnome.org/show_bug.cgi?id=781037
If the input buffer starts after the end of the output buffer, then
waiting for more data won't help: we will never get an input buffer for
this point in time.
This fixes compositing of streams from rtspsrc.
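Roughly the following check (illustrative variable names, not the
exact patch):

    /* If the queued input starts at or after the end of the output
     * frame being produced, no amount of waiting will provide data for
     * this frame, so stop asking for more. */
    if (GST_CLOCK_TIME_IS_VALID (buf_start_running_time) &&
        buf_start_running_time >= output_end_running_time)
      need_more_data = FALSE;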
https://bugzilla.gnome.org/show_bug.cgi?id=766422