Zero-padding is not specified for the indices, but all time components
need to be zero-padded (3 digits for fractional seconds, 2 digits for
everything else).
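A minimal illustration of the padding rule (the function and field names are hypothetical, not the actual code):

```rust
// Illustrative only: two digits for hours, minutes and seconds,
// three digits for the fractional part.
fn format_time(hours: u32, minutes: u32, seconds: u32, millis: u32) -> String {
    format!("{:02}:{:02}:{:02}.{:03}", hours, minutes, seconds, millis)
}
```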
The previous version used Context::block_on_or_add_sub_task, which
spawns a full-fledged executor with a timer and io reactor for no
reason when we just need to wait for a Receiver or a JoinHandle.
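A self-contained sketch of the idea (using the futures crate, not the actual threadshare code): waiting for a oneshot Receiver only needs a plain `block_on`, not a full executor with a timer and an io reactor.

```rust
use futures::channel::oneshot;
use futures::executor::block_on;

fn main() {
    let (sender, receiver) = oneshot::channel::<u32>();
    std::thread::spawn(move || {
        let _ = sender.send(42);
    });
    // Blocks the current thread until the value arrives, nothing more.
    let value = block_on(receiver).unwrap();
    assert_eq!(value, 42);
}
```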
When the iteration loop is throttling, the call to `abort` on the
`loop_abort_handle` returns immediately, but the actual `Future`
for the iteration loop is only aborted once the scheduler throttling
completes. State transitions which require the loop to be aborted and
which are serialized at the pipeline level can thus incur long delays.
This commit makes sure the Task Context's scheduler is woken up as soon
as the task loop is aborted.
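A self-contained sketch of the underlying behaviour (futures crate, not the actual scheduler code): `abort()` returns immediately, but the wrapped future only observes the abort the next time it is polled, which is why the scheduler needs to be woken up right after aborting.

```rust
use futures::executor::block_on;
use futures::future::{abortable, Aborted};

fn main() {
    let (fut, handle) = abortable(async { 42 });
    handle.abort(); // only flags the future as aborted, returns immediately
    // The abort actually takes effect here, when the future is polled.
    assert_eq!(block_on(fut), Err(Aborted));
}
```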
Keep the state mutex locked for the whole decodebin pad-added callback.
This fixes a race where we checked that state.waiting_for_ss_eos was set,
but it was cleared again before we actually processed the item.
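A minimal sketch of the pattern with hypothetical types (not the actual element code): hold the guard for the whole callback so the flag cannot be cleared between the check and the use.

```rust
use std::sync::Mutex;

struct State {
    waiting_for_ss_eos: bool,
}

struct Element {
    state: Mutex<State>,
}

impl Element {
    fn pad_added(&self) {
        // Lock once and keep the guard for the entire callback.
        let mut state = self.state.lock().unwrap();
        if state.waiting_for_ss_eos {
            state.waiting_for_ss_eos = false;
            // ... process the pending item while still holding the lock
        }
        // The guard is only dropped here, at the end of the callback.
    }
}
```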
Fix #184
This implements a default timeout and retry duration for the remaining
S3 requests that could still block indefinitely. There are three
classes of operations:
- multipart upload creation/abort (should not take too long)
- part uploads (duration depends on the part size)
- multipart upload completion (can take several minutes according to
  the documentation)
We currently only expose the part upload timeouts as configurable and
hard-code the rest. If it seems sensible, we can expose the other two
sets of parameters as well.
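As a rough sketch of that split (names and values are illustrative, not the element's actual properties), the part-upload parameters stay configurable while the other two classes get hard-coded defaults:

```rust
use std::time::Duration;

// Exposed as element properties.
struct PartUploadTimeouts {
    request_timeout: Duration,
    retry_duration: Duration,
}

// Hard-coded defaults for the other two classes of operations.
const CREATE_ABORT_TIMEOUT: Duration = Duration::from_secs(30); // should be quick
const COMPLETE_TIMEOUT: Duration = Duration::from_secs(10 * 60); // can take minutes
```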
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/690>
Previously, the actual reading from the streaming body of a GetObject
request was not within the same timeout/retry path as the dispatch of
the HTTP request itself. We consolidate these two into a single async
block and create a sum type to encapsulate the rusoto and std library
error paths within that future.
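A sketch of the shape of that sum type (hypothetical names; the real code wraps the concrete rusoto error types):

```rust
use std::io;

// One error type so the HTTP dispatch and the body read can share a single
// timeout/retry future.
#[derive(Debug)]
enum GetObjectReadError<E> {
    Dispatch(E),     // error from sending the GetObject request
    Read(io::Error), // error while draining the streaming body
}
```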
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/690>
It might still be blocked downstream for a while, e.g. in clocksync.
Flushing does not cause any problems, as fallbackswitch is not going to
forward it and will only unblock everything up to that point.
We have seen a few cases where requests fail with `RequestExpired` errors
even on machines that are in perfect time sync with a good source.
https://docs.aws.amazon.com/transcribe/latest/dg/CommonErrors.html
While not perfect, bumping the expiry to five minutes gives the signed
requests used to start streaming a better chance of not having expired.
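Illustrative only (the constant name is hypothetical): the change amounts to using a longer expiry when signing the streaming request.

```rust
use std::time::Duration;

// Five minutes instead of the previous, shorter expiry for signed requests.
const REQUEST_EXPIRY: Duration = Duration::from_secs(5 * 60);
```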
If transcription runs slowly or has issues, the queue can fill up and block
all audio processing. This gives the queue a sufficient buffer and allows
it to drop audio if it eventually fills up. This was most noticeable with
bad internet connections using `awstranscriber`, where it would take
quite a while for the websocket to eventually time out and the bin to
enter `passthrough=true`.
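A sketch of the kind of queue configuration this refers to (assuming the builder-style gstreamer-rs API; the property values are illustrative, not the ones used by the bin):

```rust
use gst::glib;
use gst::prelude::*;

fn make_transcription_queue() -> Result<gst::Element, glib::BoolError> {
    let queue = gst::ElementFactory::make("queue").build()?;
    // A few seconds of buffering so short transcriber hiccups don't block audio...
    queue.set_property("max-size-time", 5_000_000_000u64);
    queue.set_property("max-size-buffers", 0u32);
    queue.set_property("max-size-bytes", 0u32);
    // ...and leak so that old audio is dropped instead of blocking once full.
    queue.set_property_from_str("leaky", "downstream");
    Ok(queue)
}
```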
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/688>
By using this new property, an application can select an exclusive caption
source. There are three source types:
- Both: Inband and transcription captions are combined if both exist.
  This is the default behavior.
- Inband: Transcription buffers will be dropped.
- Transcription: Caption meta of each video buffer will be dropped.
In this version, transcriberbin doesn't provide any hint to help the
application decide on a caption source. That can be done by the
application's own strategy, based on the passthrough status or by
probing inband caption meta, for example.
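For illustration, assuming the new property is named `caption-source` and its enum values can be set by nick, an application could select a source like this (hypothetical usage, not taken from the commit):

```rust
use gst::prelude::*;

fn prefer_inband_captions(transcriberbin: &gst::Element) {
    // Drop transcription buffers and keep only the inband captions.
    transcriberbin.set_property_from_str("caption-source", "inband");
}
```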
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/684>
Fix a race between the latency query handler and the setup_transcription()
method.
The locking order around setup_transcription() is:
state lock -> setup_transcription() -> settings lock
So taking the state lock while holding the settings lock in src_query()
can cause a deadlock.
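A minimal sketch with hypothetical fields: copy what is needed out of the settings and release that lock before touching the state, so the order never inverts the state -> settings order used by setup_transcription().

```rust
use std::sync::Mutex;

struct Settings {
    latency_ms: u64,
}

struct State {}

struct Element {
    settings: Mutex<Settings>,
    state: Mutex<State>,
}

impl Element {
    fn query_latency(&self) -> u64 {
        // Take the settings lock only long enough to copy the value out.
        let latency_ms = {
            let settings = self.settings.lock().unwrap();
            settings.latency_ms
        }; // settings lock released here
        // Now the state lock can be taken without holding the settings lock.
        let _state = self.state.lock().unwrap();
        latency_ms
    }
}
```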
Rusoto does not implement timeouts or retries for any of its HTTP
requests. This is particularly problematic for the part upload stage of
multipart uploads, as a blip in the network could cause part uploads to
freeze for a long duration and eventually bail.
To avoid this, for part uploads, we add (a) (configurable) timeouts for
each request, and (b) retries with exponential backoff, up to a
configurable duration.
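As a rough sketch of (a) and (b) combined (assuming a tokio runtime; names are illustrative, not the element's actual code):

```rust
use std::future::Future;
use std::time::Duration;
use tokio::time::{sleep, timeout};

// Retry an operation with a per-attempt timeout and exponential backoff,
// giving up once the retry budget is spent. `Err(None)` means the last
// attempt timed out rather than failed.
async fn with_retry<T, E, F, Fut>(
    mut op: F,
    request_timeout: Duration,
    mut backoff: Duration,
    retry_budget: Duration,
) -> Result<T, Option<E>>
where
    F: FnMut() -> Fut,
    Fut: Future<Output = Result<T, E>>,
{
    let mut waited = Duration::ZERO;
    loop {
        match timeout(request_timeout, op()).await {
            Ok(Ok(val)) => return Ok(val),
            Ok(Err(err)) if waited >= retry_budget => return Err(Some(err)),
            Err(_timed_out) if waited >= retry_budget => return Err(None),
            _ => {
                sleep(backoff).await;
                waited += backoff;
                backoff *= 2; // exponential backoff
            }
        }
    }
}
```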
It is not clear if/how we want to do this for other types of requests.
The creation of a multipart upload should be relatively quick, but the
completion of an upload might take several minutes, so there is no
one-size-fits-all configuration, necessarily.
It would likely make more sense to implement some sensible hard-coded
defaults for these other sorts of requests.
... and replace the deprecated --all argument with --workspace.
Also use --all-targets so that code, unit tests, integration tests
and examples can all be compiled or tested in one go.