The fallback stream will usually have a lower latency than the main
stream, so too low a latency would be configured if the fallback is
active from the beginning.
This property allows overriding this without requiring a latency
reconfiguration.
Buffers are still forwarded until the timeout is reached, even if they're
too late, but if they have been continuously too late for longer than
the configured timeout then we switch to the fallback stream instead.
This also fixes a bug where the source wouldn't be restarted again if
we had switched to the fallback stream earlier and never switched back
to the normal stream, even briefly: there was no timeout covering that
case.
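A rough sketch of the intended switching logic (simplified names and
types, not the actual element code, which works with GStreamer clock
times):

    use std::time::{Duration, Instant};

    struct SwitchState {
        timeout: Duration,
        late_since: Option<Instant>,
    }

    impl SwitchState {
        // Called for every buffer on the main stream; returns true once
        // we should switch over to the fallback stream.
        fn handle_buffer(&mut self, too_late: bool, now: Instant) -> bool {
            if !too_late {
                // Buffer arrived in time, reset the "continuously late" window.
                self.late_since = None;
                return false;
            }

            // The buffer is late: it is still forwarded, but remember since
            // when buffers have been continuously late.
            let since = *self.late_since.get_or_insert(now);

            // Only switch once buffers have been late for longer than the
            // configured timeout.
            now.duration_since(since) > self.timeout
        }
    }
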
Based on a patch by Mathieu Duponchelle <mathieu@centricular.com>
Only 64k is allowed for the sum of all private instance structs in the
class hierarchy, and the same limit applies to the public instance
structs.
The CdgInterpreter itself is huge, and adding just two more integers to
GstVideoDecoderPrivate in libgstvideo already causes the limit to be
hit, so let's allocate it in a separate memory area.
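A minimal sketch of the idea, assuming the interpreter is the
cdg_renderer crate's CdgInterpreter and with the element struct reduced
to the relevant field:

    use std::sync::Mutex;

    // Boxing the interpreter means only a pointer-sized field lives in
    // the GObject instance struct and counts towards the 64k limit.
    struct CdgDec {
        // before: cdg_inter: Mutex<cdg_renderer::CdgInterpreter>,
        cdg_inter: Mutex<Box<cdg_renderer::CdgInterpreter>>,
    }
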
It's only used during construction of the internal bin, after all.
Keeping it readable would mean that e.g. creating a pipeline graph would
try to read it, which is not implemented.
Only two uses of unsafely setting the pad functions are left (the safe
builder pattern is sketched after the list):
- fallbacksrc for overriding the chain function of the proxy pad of a
ghost pad
- threadshare for overriding the pad functions after creation, which
probably needs some fixing at some point
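For reference, a rough sketch of the safe pattern used everywhere else:
the pad functions are installed through the builder, before the pad can
ever be activated (builder details vary between gstreamer-rs versions):

    use gstreamer as gst;

    fn make_sinkpad() -> gst::Pad {
        gst::Pad::builder(gst::PadDirection::Sink)
            .name("sink")
            .chain_function(|_pad, _parent, _buffer| {
                // handle or forward the buffer here
                Ok(gst::FlowSuccess::Ok)
            })
            .build()
    }
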
I thought I could save some bandwidth by letting renderers pick
the base row, but it turns out this triggers some unwanted behaviours
with compliant renderers.
Instead, we now follow the protocol laid out in EIA/CEA-608-B,
section B.8.1.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/355>
A useful complement to cea708overlay, which can only render native
708.
The element isn't an aggregator; it simply parses and renders
closed caption meta on its input video buffers.
No properties are exposed; the rendering is done using a monospace
font, over a 32 x 15 grid, with the font size fitted to fill as
much of the viewport as possible.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/343>
Since those use the clock for sync, they also need to provide a
clock. The reason is that even if downstream elements provide a
clock, we don't want that clock to be selected, because it might not
be running yet.
Pad{Src,Sink}[Ref] delegate some functions to their respective
Pad{Src,Sink}Inner. Since they act as smart pointers, we can
safely implement the Deref trait to simplify the implementations.
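A minimal sketch of the pattern (names simplified):

    use std::ops::Deref;
    use std::sync::Arc;

    struct PadSrcInner {
        name: String,
    }

    impl PadSrcInner {
        fn name(&self) -> &str {
            &self.name
        }
    }

    // PadSrc acts as a smart pointer around its shared inner state.
    struct PadSrc(Arc<PadSrcInner>);

    impl Deref for PadSrc {
        type Target = PadSrcInner;

        fn deref(&self) -> &Self::Target {
            &self.0
        }
    }

    // Callers can now write pad.name() without a hand-written
    // forwarding method on PadSrc.
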
In roll-up mode, the element expects input text without layout
(e.g. new lines), and the characters it outputs are displayed
immediately, without the double-buffering used in pop-on mode.
Once the last column is reached, the element simply outputs
a carriage return and the text scrolls up, potentially splitting
words with no hyphenation.
The main advantage of this mode is its simplicity and the near-zero
latency it introduces.
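A simplified sketch of that behaviour (not the actual element code,
which emits real CEA-608 control codes):

    const LAST_COLUMN: usize = 31; // 32-column grid

    enum Output {
        Char(char),
        CarriageReturn,
    }

    struct RollUp {
        column: usize,
    }

    impl RollUp {
        // Characters are emitted immediately; once the last column has
        // been written we emit a carriage return, the display scrolls up
        // and the next character starts a new row, wherever the word was.
        fn push_char(&mut self, c: char, out: &mut Vec<Output>) {
            if self.column > LAST_COLUMN {
                out.push(Output::CarriageReturn);
                self.column = 0;
            }
            out.push(Output::Char(c));
            self.column += 1;
        }
    }
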
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/347>
The current implementation only makes use of non-partial results,
which requires a crazy high latency.
With this mode, we use items from partial results when they're
older than latency - 2 * GRANULARITY_MS. Depending on the latency
that the user has set, this may result in reduced accuracy; the
default latency has been modified to a pretty conservative sweet
spot of 8 seconds.
This complicates the code a bit, as items aren't given identifiers by
AWS and their timings can change.
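A condensed sketch of that condition (times in milliseconds;
GRANULARITY_MS and the item layout are placeholders, not the element's
actual values):

    const GRANULARITY_MS: u64 = 100; // placeholder value

    struct Item {
        start_time_ms: u64,
    }

    // An item from a partial result is only used once it is older than
    // the configured latency minus twice the granularity, leaving AWS
    // some room to still adjust its content and timings.
    fn can_use_partial_item(item: &Item, now_ms: u64, latency_ms: u64) -> bool {
        let threshold_ms = latency_ms.saturating_sub(2 * GRANULARITY_MS);
        now_ms.saturating_sub(item.start_time_ms) > threshold_ms
    }
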
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/348>
This updates to Rusoto 0.43, which has moved from futures to
async/.await. As a result, we implement a utility function to convert
the async/streaming bits to blocking operations backed by a tokio
runtime.
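A rough sketch of such a helper, written against the tokio 1.x API; a
real implementation would typically share a single runtime rather than
building one per call:

    use std::future::Future;

    // Run an async operation to completion from blocking element code.
    fn wait<F: Future>(future: F) -> F::Output {
        let runtime = tokio::runtime::Builder::new_current_thread()
            .enable_all()
            .build()
            .expect("couldn't build tokio runtime");
        runtime.block_on(future)
    }
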
In the process, we also need to restructure s3sink a little, so that the
client is now part of the started state (as it is for s3src). This is
a better model than keeping the client separate, as it reflects the fact
that the client is only available in the started state.