The `get_mut()` function can return None when the memory is not writable, so
use `make_mut()` instead to make sure the memory is writable.
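A minimal sketch of the difference, assuming this refers to the gstreamer-rs
`Buffer` API (the function name `set_pts_writable` is illustrative only):

```rust
use gstreamer as gst;

// `get_mut()` only succeeds if the buffer is already writable, while
// `make_mut()` makes it writable (copying if needed) and always succeeds.
fn set_pts_writable(buffer: &mut gst::Buffer, pts: gst::ClockTime) {
    // buffer.get_mut() would return None here if the buffer is shared,
    // so use make_mut(), which guarantees a writable reference.
    let buffer_ref = buffer.make_mut();
    buffer_ref.set_pts(pts);
}
```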
Fixes #214
In roll-up mode, when no more timed text comes in, the closed
captions may remain displayed on screen indefinitely (unless the
decoder implements a timeout, but that is not mandatory).
Expose a property to erase the display memory after a configurable
amount of time has elapsed instead.
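A hedged usage sketch: the element type and the property name
("roll-up-timeout") below are assumptions, not confirmed by this message.

```rust
use gstreamer as gst;
use gst::prelude::*;

// Erase the display memory if no new timed text arrives for 10 seconds,
// assuming a guint64 nanosecond property named "roll-up-timeout".
fn set_caption_erase_timeout(element: &gst::Element) {
    element.set_property("roll-up-timeout", 10_000_000_000u64);
}
```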
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/754>
This new video filter is able to detect the dominant color in a video frame.
When the color has changed from the previous frame, the filter posts an Element
message on the bus; the associated structure is named `colordetect` and has two
fields:
* a string field named `dominant-color`
* a list field containing the whole color palette, stored as uint values sorted
  by dominance (most dominant colors first)
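A sketch of how an application might watch for these messages; only the
structure name and the `dominant-color` field are taken from the description
above, the rest is generic bus-handling code.

```rust
use gstreamer as gst;

fn handle_message(msg: &gst::Message) {
    if let gst::MessageView::Element(element_msg) = msg.view() {
        if let Some(s) = element_msg.structure() {
            // The filter posts its structure under the name "colordetect".
            if s.name() == "colordetect" {
                if let Ok(color) = s.get::<&str>("dominant-color") {
                    println!("dominant color is now {color}");
                }
            }
        }
    }
}
```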
There can be a small race where the transcription-bin is linked with the
tee but the state change of the transcription-bin has not finished yet,
while at the same time upstream pushes an event/buffer to the
transcription-bin. Do the state change first, then link, to avoid
the condition.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/716>
Zero-padding is not specified for the indices, but all time components
need to be zero-padded (3 digits for fractional seconds, 2 digits for
everything else).
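A minimal sketch of the padding rule; the field layout and the
fractional-second separator are assumptions, only the padding widths come
from the text above.

```rust
fn format_timestamp(hours: u64, minutes: u64, seconds: u64, millis: u64) -> String {
    // 2 digits for hours/minutes/seconds, 3 digits for fractional seconds.
    format!("{hours:02}:{minutes:02}:{seconds:02}.{millis:03}")
}

fn main() {
    assert_eq!(format_timestamp(1, 2, 3, 45), "01:02:03.045");
    // Indices are written without zero-padding.
    assert_eq!(format!("{}", 7), "7");
}
```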
If transcription runs slow or has issues, the queue can fill up and block
all audio processing. This gives the queue a sufficient buffer and allows
it to drop audio if it eventually fills up. This was most noticeable with
bad internet connections using the `awstranscriber`, where it would take
quite a while for the websocket to eventually time out and for the bin to
enter `passthrough=true`.
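A rough sketch of the idea with assumed values: give the queue generous
limits and let it leak (drop old audio) instead of blocking upstream when
transcription stalls.

```rust
use gstreamer as gst;
use gst::prelude::*;

fn make_audio_queue() -> Result<gst::Element, gst::glib::BoolError> {
    gst::ElementFactory::make("queue")
        // roughly 5 seconds of buffering before anything is dropped (assumed value)
        .property("max-size-time", 5_000_000_000u64)
        .property("max-size-buffers", 0u32)
        .property("max-size-bytes", 0u32)
        // drop the oldest data instead of blocking upstream
        .property_from_str("leaky", "downstream")
        .build()
}
```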
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/688>
By using this new property, the application can select an exclusive caption
source. There are three source types:
- Both: Inband and transcription captions are combined if both exist.
  This is the default behavior.
- Inband: Transcription buffers will be dropped.
- Transcription: The caption meta of each video buffer will be dropped.
In this version, transcriberbin doesn't provide any hint to help the
application decide on a caption source. That can be done by the
application's own strategy, for example based on the passthrough status
or by probing for inband caption meta.
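A hedged example of how an application could pick an exclusive source; the
property name "caption-source" and the nick "inband" are assumptions based
on the description above.

```rust
use gstreamer as gst;
use gst::prelude::*;

// `transcriberbin` is an already-created instance of the element.
fn use_inband_captions_only(transcriberbin: &gst::Element) {
    transcriberbin.set_property_from_str("caption-source", "inband");
}
```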
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/684>
Fix a race between the latency query handler and the setup_transcription()
method.
The locking order of setup_transcription() is
state lock -> setup_transcription() -> settings lock
so taking the state lock inside the settings lock in src_query()
can cause a deadlock.
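A generic sketch of the rule applied here (the struct and field names are
illustrative, not the element's real code): never take the state lock while
already holding the settings lock, so the "state lock -> settings lock"
order is the same everywhere.

```rust
use std::sync::Mutex;

struct Settings {
    latency_ms: u32,
}

struct State;

struct Element {
    settings: Mutex<Settings>,
    state: Mutex<State>,
}

impl Element {
    fn src_query_latency(&self) -> u32 {
        // Copy what we need out of the settings and drop that lock before
        // touching the state lock, instead of nesting them the other way round.
        let latency_ms = {
            let settings = self.settings.lock().unwrap();
            settings.latency_ms
        };
        let _state = self.state.lock().unwrap();
        latency_ms
    }
}
```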
As a side effect, this also allows us to handle errors more gracefully
and to reduce memory load by outputting decoded frames immediately.
The code was also changed a bit to reduce the number of redundant mutex
locks/unlocks.
This plugin takes I420 (YUV) input and appends an alpha plane to produce
YUVA/A420, rounding the corners analogous to border-radius in CSS. Other
video formats like NV12 are not supported yet. Support for other planar
formats will follow.
Not all ways of specifying border-radius as in CSS are implemented at
the moment. Currently, it can only be specified in pixels and it gets
applied uniformly to all corners.
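A hedged usage sketch: the element name "roundedcorners" and the property
name "border-radius-px" are assumptions based on the description above.

```rust
use gstreamer as gst;
use gst::prelude::*;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    gst::init()?;
    // I420 in, rounded corners applied uniformly in pixels, converted for display.
    let pipeline = gst::parse_launch(
        "videotestsrc ! video/x-raw,format=I420 ! \
         roundedcorners border-radius-px=32 ! \
         videoconvert ! autovideosink",
    )?;
    pipeline.set_state(gst::State::Playing)?;
    Ok(())
}
```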
I hadn't really tested the element with pop-on mode, and the row
for each line in the input text was hardcoded to 13, which was
clearly wrong.
Switch to incrementing it properly.
C.9 Automatic Caption Erasure (Preferred)
[...]
Some manufacturers have suggested building automatic timeout into their
decoders. They propose that if no data are received for the selected caption
channel within a given time, the decoder should automatically erase the
caption. Such erasure may supersede the intentions of the caption service
providers and institute one maximum display time for all captioning services.
If such a timeout is deemed necessary, however, the time limit should be no less
than 16 seconds, an amount of time said by caption service providers to be longer
than their most enduring caption. It is preferred, when automatic caption erasure
is used in a decoder, that only displayed memory be erased, since some caption
service providers may, contrary to recommended practice (see Section B.8.3), send
pop-on style caption data to non-displayed memory more than 16 seconds before
sending the EOC command which causes the caption to display.
In this mode, cues are output as soon as they are ready for
display, without a duration. This can be useful in live mode,
when downstream is OK with determining the duration after the
fact, through clear=True.
The consequence of this is that the current roll-up window will be
output repeatedly; it is up to downstream to deal with that however
it prefers.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/554>