There can be a small race where the transcription-bin is linked to the
tee but its state change has not finished yet, while at the same time
upstream pushes an event/buffer to the transcription-bin. Do the state
change first, then link, to avoid this condition.
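A minimal sketch of the fixed ordering, assuming the tee src pad and the
transcription-bin's sink pad are already at hand (names are illustrative):

    use gstreamer as gst;
    use gst::prelude::*;

    // Sketch: sync the bin's state with its parent *before* linking it to
    // the tee, so upstream can never push into an element that is not ready.
    fn attach_transcription_bin(
        transcription_bin: &gst::Bin,
        tee_src_pad: &gst::Pad,
        transcription_sink_pad: &gst::Pad,
    ) -> Result<(), Box<dyn std::error::Error>> {
        // Do the state change first ...
        transcription_bin.sync_state_with_parent()?;
        // ... then link, so events/buffers only start flowing once the bin is ready.
        tee_src_pad
            .link(transcription_sink_pad)
            .map_err(|err| format!("failed to link to tee: {:?}", err))?;
        Ok(())
    }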
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/716>
Requesting the first pad will emit the property notification because the
first pad then becomes the selected one. That causes the callback to be
called, which tries to take the same mutex that is already locked during
element setup and causes a deadlock.
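A minimal illustration of that deadlock with plain std types (the actual
element, property and callback are not shown; only the lock nesting matters):

    use std::sync::{Arc, Mutex};

    struct State {
        active_pad: Option<u32>,
    }

    // If this runs synchronously while setup() still holds the lock (as the
    // property notification callback did during element setup), the lock()
    // below blocks forever.
    fn on_pad_selected(state: &Arc<Mutex<State>>, pad: u32) {
        let mut state = state.lock().unwrap();
        state.active_pad = Some(pad);
    }

    fn setup(state: &Arc<Mutex<State>>) {
        let _guard = state.lock().unwrap();
        // Requesting the first pad here triggers on_pad_selected() -> deadlock.
        // Not triggering the notification (or requesting the pad) while this
        // lock is held breaks the cycle.
    }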
I give up on crossbeam_channel. For some reason some receivers are not
always unblocked and I was not able to reproduce this with simpler test
cases.
Use mpsc channels instead, which are more reliable.
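A minimal sketch of the replacement pattern, with a std mpsc channel handing
one result back to the waiting side:

    use std::sync::mpsc;
    use std::thread;

    fn wait_for_result() -> i32 {
        let (sender, receiver) = mpsc::channel::<i32>();
        thread::spawn(move || {
            // ... do some work ...
            let _ = sender.send(42);
        });
        // recv() reliably unblocks once a value is sent or the sender is dropped.
        receiver.recv().expect("sender dropped without sending")
    }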
fallbackswitch now supports multiple sink pads, and on a timeout of the
active pad, it will automatically switch to the next lowest priority pad
that has data available.
fallbackswitch sink pads follow the `sink_%u` template and have
`priority` as a pad property.
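A usage sketch, assuming a recent gstreamer-rs (builder API and
request_pad_simple()) and that `priority` is a u32 pad property; the values
below are illustrative:

    use gstreamer as gst;
    use gst::prelude::*;

    fn build_switch() -> Result<gst::Element, Box<dyn std::error::Error>> {
        gst::init()?;
        let switch = gst::ElementFactory::make("fallbackswitch").build()?;

        // Request two sink pads following the `sink_%u` template.
        let main_pad = switch.request_pad_simple("sink_%u").ok_or("no sink pad")?;
        let backup_pad = switch.request_pad_simple("sink_%u").ok_or("no sink pad")?;

        // Give each pad a priority (assumed u32; values are illustrative).
        main_pad.set_property("priority", 0u32);
        backup_pad.set_property("priority", 10u32);

        Ok(switch)
    }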
Co-authored-by: Vivia Nikolaidou <vivia.nikolaidou@ltnglobal.com>
Zero-padding is not specified for the indices, but all time components
need to be zero-padded (3 digits for fractional seconds, 2 digits for
everything else).
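A small illustration of the rule with a hypothetical helper (not the plugin's
actual code):

    // Indices are not zero-padded; time components are (2 digits each,
    // 3 digits for fractional seconds).
    fn format_cue_time(index: u32, h: u32, m: u32, s: u32, ms: u32) -> String {
        format!("{} {:02}:{:02}:{:02}.{:03}", index, h, m, s, ms)
    }

    fn main() {
        assert_eq!(format_cue_time(7, 1, 2, 3, 45), "7 01:02:03.045");
    }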
The previous version used Context::block_on_or_add_sub_task, which
spawns a full-fledged executor with a timer and io Reactor for no
reason when we just need to wait for a Receiver or JoinHandle.
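A sketch of the lighter-weight approach, assuming all we need is to block the
current thread on a single receiver (the futures crate is used here for
illustration):

    use futures::channel::oneshot;
    use futures::executor::block_on;
    use std::thread;

    fn wait_for_value() -> u32 {
        let (sender, receiver) = oneshot::channel::<u32>();
        thread::spawn(move || {
            let _ = sender.send(1);
        });
        // block_on() just parks this thread on the future; no timers and no
        // io reactor are involved.
        block_on(receiver).expect("sender dropped")
    }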
When the iteration loop is throttling, the call to `abort` on the
`loop_abort_handle` returns immediately, but the actual `Future`
for the iteration loop is aborted only when the scheduler throttling
completes. State transitions which require the loop to be aborted and
which are serialized at the pipeline level can incur long delays.
This commit makes sure the Task Context's scheduler is woken up as soon
as the task loop is aborted.
Keep the state mutex during the whole decodebin pad-added callback.
Fix a race where we checked whether state.waiting_for_ss_eos was set but
it was cleared before we actually processed the item.
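A sketch of the locking pattern, using the flag name from above (the
surrounding state struct is an assumption):

    use std::sync::{Arc, Mutex};

    use gstreamer as gst;
    use gst::prelude::*;

    struct State {
        waiting_for_ss_eos: bool,
    }

    // Take the state lock once for the whole pad-added handler so that
    // waiting_for_ss_eos cannot be cleared between the check and the handling.
    fn connect_pad_added(decodebin: &gst::Element, state: Arc<Mutex<State>>) {
        decodebin.connect_pad_added(move |_dbin, _pad| {
            let state = state.lock().unwrap();
            if state.waiting_for_ss_eos {
                // handle the pad while the flag is guaranteed to still be set
            }
            // the lock is only released once the whole handler is done
        });
    }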
Fix #184
warning: this expression borrows a value the compiler would automatically borrow
--> net/reqwest/tests/reqwesthttpsrc.rs:126:56
|
126 | async move { Ok::<_, hyper::Error>((&mut *http_func.lock().unwrap())(req)) }
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: change this to: `(*http_func.lock().unwrap())`
|
= note: `#[warn(clippy::needless_borrow)]` on by default
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_borrow
This implements a default timeout and retry duration for the remaining
S3 requests that could still block indefinitely. There are 3
classes of operations: multipart upload creation/abort (should not take
too long), uploads (duration depends on part size), multipart upload
completion (can take several minutes according to documentation).
We currently only expose the part upload times as configurable, and hard
code the rest. If it seems sensible, we can expose the other two sets of
parameters as well.
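A generic sketch of the timeout-and-retry shape described above (tokio is
used for illustration; names, signatures and durations are not the element's
actual ones):

    use std::future::Future;
    use std::time::Duration;

    use tokio::time::timeout;

    // Bound each attempt by `per_attempt` and retry up to `attempts` times;
    // None means every attempt timed out.
    async fn request_with_retry<T, E, Fut>(
        mut make_request: impl FnMut() -> Fut,
        attempts: usize,
        per_attempt: Duration,
    ) -> Option<Result<T, E>>
    where
        Fut: Future<Output = Result<T, E>>,
    {
        for _ in 0..attempts {
            match timeout(per_attempt, make_request()).await {
                Ok(result) => return Some(result), // completed (ok or err)
                Err(_elapsed) => continue,         // timed out, try again
            }
        }
        None
    }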
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/690>
Previously, the actual reading from the streaming body of a GetObject
request was not within the same timeout/retry path as the dispatch of
the HTTP request itself. We consolidate these two into a single async
block and create a sum type to encapsulate the rusoto and std library
error paths within that future.
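A sketch of such a sum type (the variant set and names are assumptions based
on the description):

    // One error type covering both the rusoto request path and the std io
    // path of reading the streaming body, so a single timeout/retry future
    // can surface either failure.
    enum GetObjectFailure<E> {
        Rusoto(rusoto_core::RusotoError<E>),
        Io(std::io::Error),
    }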
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/690>
It might still be blocked downstream for a while, e.g. in the clocksync.
Flushing does not cause any problems as fallbackswitch is not going to
forward it and will only unblock everything up to there.
We have seen a few cases where requests fail with `RequestExpired` errors even on machines that are in perfect time sync with a good source.
https://docs.aws.amazon.com/transcribe/latest/dg/CommonErrors.html
While not perfect, bumping to five minutes gives a better chance that the signed requests used to start streaming won't have expired.
If transcription runs slow or has issues, the queue can fill up and block
all audio processing. This gives the queue a sufficient buffer and allows
it to drop audio if it eventually fills up. This was most noticeable with
bad internet connections using the `awstranscriber`, where it would take
quite a while for the websocket to eventually time out and the bin to
enter `passthrough=true`.
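A hypothetical sketch of the idea: a generous time limit plus a leaky queue so
stalled transcription drops audio instead of blocking it (property values are
illustrative):

    use gstreamer as gst;
    use gst::prelude::*;

    fn make_transcription_queue() -> Result<gst::Element, Box<dyn std::error::Error>> {
        let queue = gst::ElementFactory::make("queue").build()?;
        queue.set_property("max-size-buffers", 0u32);
        queue.set_property("max-size-bytes", 0u32);
        queue.set_property("max-size-time", 5_000_000_000u64); // 5 s, illustrative
        queue.set_property_from_str("leaky", "downstream");
        Ok(queue)
    }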
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/688>
By using this new property, the application can select an exclusive
caption source. There are three source types:
- Both: Inband and transcription captions are combined if both exist.
  This is the default behavior.
- Inband: Transcription buffers will be dropped
- Transcription: Caption meta of each video buffer will be dropped
In this version, transcriberbin doesn't provide any hint to help the
application decide on a caption source. That can be done by the
application's own strategy, for example based on the passthrough status
or by probing inband caption meta.
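A hypothetical usage sketch; the property name and enum nick below are
assumptions derived from the description above:

    use gstreamer as gst;
    use gst::prelude::*;

    // Force transcription-only captions, dropping inband caption meta
    // (property and enum nick names are assumed here).
    fn select_transcription_only(transcriberbin: &gst::Element) {
        transcriberbin.set_property_from_str("caption-source", "transcription");
    }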
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/684>
Fix race between latency query handler and setup_transcription()
method.
The locking order of setup_transcription() is
state lock -> setup_transcription() -> settings lock,
so taking the state lock inside the settings lock in src_query()
can cause a deadlock.
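A sketch of the pattern that avoids the inverted lock order, assuming the
usual settings/state mutex pair on the element's imp struct (field names are
illustrative):

    use std::sync::Mutex;

    struct Settings {
        latency_ms: u32,
    }

    struct State {
        // ...
    }

    struct Imp {
        settings: Mutex<Settings>,
        state: Mutex<State>,
    }

    impl Imp {
        // Copy what we need out of the settings lock and drop it before taking
        // the state lock, so the state lock is never acquired while the
        // settings lock is held (matching setup_transcription()'s order).
        fn src_query_latency(&self) -> u32 {
            let latency_ms = {
                let settings = self.settings.lock().unwrap();
                settings.latency_ms
            };
            let _state = self.state.lock().unwrap();
            // ... answer the latency query using latency_ms and the state ...
            latency_ms
        }
    }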