Rusoto does not implement timeouts or retries for any of its HTTP
requests. This is particularly problematic for the part upload stage of
multipart uploads, as a network blip could cause a part upload to stall
for a long time and eventually fail.
To avoid this, for part uploads we add (a) configurable timeouts for
each request, and (b) retries with exponential backoff, up to a
configurable duration.
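A rough sketch of that retry loop (assuming tokio; the function and
parameter names here are illustrative, not the actual API):

    use std::time::Duration;
    use tokio::time::{sleep, timeout};

    // Placeholder for the actual S3 UploadPart request.
    async fn upload_part() -> Result<(), String> {
        Ok(())
    }

    // Retry a single part upload with a per-request timeout and
    // exponential backoff, giving up once the backoff budget is spent.
    async fn upload_part_with_retry(
        request_timeout: Duration,
        retry_duration: Duration,
    ) -> Result<(), String> {
        let mut backoff = Duration::from_millis(100);
        let mut spent = Duration::ZERO;
        loop {
            match timeout(request_timeout, upload_part()).await {
                Ok(Ok(())) => return Ok(()),
                // The request failed or timed out: back off and retry,
                // unless the configured retry duration is exhausted.
                Ok(Err(_)) | Err(_) => {
                    if spent >= retry_duration {
                        return Err("part upload failed after retries".into());
                    }
                    sleep(backoff).await;
                    spent += backoff;
                    backoff *= 2;
                }
            }
        }
    }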
It is not clear if/how we want to do this for other types of requests.
The creation of a multipart upload should be relatively quick, but the
completion of an upload might take several minutes, so there is not
necessarily a one-size-fits-all configuration.
It would likely make more sense to implement some sensible hard-coded
defaults for these other sorts of requests.
A multipart upload should either be completed or aborted on error. In
the current state of things, a multipart upload would neither be
completed nor aborted, putting the onus on an external entity to finish
incomplete uploads, or relying on a sane bucket lifecycle policy
configured to abort incomplete multipart uploads. An incomplete
multipart upload continues to incur storage costs for as long as it
exists.
We introduce a property here to allow the user to select either aborting
or completing multipart uploads on error. Aborting the upload discards
all of the uploaded data, and the same upload ID can no longer be used
to upload further parts to the same object.
Completing an incomplete multipart upload can be useful in situations
like a streamable MP4, where one might want to complete the upload and
preserve the data that was already uploaded.
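Sketched out, the choice this property exposes looks roughly like the
following (the `OnError` type and the helpers named in comments are
illustrative, not the actual names):

    // Illustrative: what to do with the multipart upload on error.
    enum OnError {
        Abort,    // discard everything uploaded so far
        Complete, // keep the parts that were already uploaded
    }

    fn on_upload_error(mode: &OnError) {
        match mode {
            // AbortMultipartUpload: all uploaded data is discarded and
            // the upload ID can no longer be used for further parts.
            OnError::Abort => { /* abort_multipart_upload() */ }
            // CompleteMultipartUpload: whatever was uploaded so far is
            // preserved, e.g. the beginning of a streamable MP4.
            OnError::Complete => { /* complete_multipart_upload() */ }
        }
    }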
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/618>
The design of the element is based on the assumption that when
receiving a partial result, the following result will contain
at least as many items as there were stable items in the previous
result.
This patch adds a sanity check to make sure our "partial index" isn't
larger than the item count of the newly received result, and errors out
otherwise.
partial_index will eventually be reset to 0 once we receive a
new non-partial result.
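A sketch of that check, with illustrative names:

    struct TranscriptResult {
        items: Vec<String>,
        is_partial: bool,
    }

    // Error out if the new result breaks the assumption that it holds
    // at least as many items as we already treated as stable.
    fn check_partial_index(
        partial_index: &mut usize,
        result: &TranscriptResult,
    ) -> Result<(), String> {
        if *partial_index > result.items.len() {
            return Err("partial index exceeds the new result's item count".into());
        }
        if !result.is_partial {
            // A non-partial result closes the sequence; start over.
            *partial_index = 0;
        }
        Ok(())
    }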
For the region property the custom endpoint would be provided as
`region-name+https://region.end/point`, while for the URI it
unfortunately has to be base32 encoded to be usable as the host part of
the URI.
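A sketch of the encoding step, assuming the data-encoding crate (the
exact base32 variant the element uses may differ):

    use data_encoding::BASE32_NOPAD;

    // Base32-encode the custom endpoint (unpadded, so the output only
    // contains host-name-safe characters) for use as the URI host.
    fn endpoint_to_uri_host(endpoint: &str) -> String {
        BASE32_NOPAD.encode(endpoint.as_bytes())
    }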
The default behavior for the transcriber is to output text buffers
synchronized with the input stream, introducing a configurable
latency.
For use cases where synchronization is not crucial but latency is, the
lateness property can be used instead of, or in combination with, the
latency property, in order to introduce a configurable offset relative
to the input stream.
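A sketch of one way such an offset can be applied (std Duration
standing in for gst::ClockTime; the element's actual arithmetic may
differ):

    use std::time::Duration;

    // Stamp output `lateness` later than the corresponding input, so
    // buffers arriving late from the service are still on time
    // relative to their shifted timestamps, reducing required latency.
    fn apply_lateness(pts: Duration, lateness: Duration) -> Duration {
        pts + lateness
    }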
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/534>
As awstranscriber might in theory push out gap events without
any flow of input data, it needs to send its mandatory events
(stream-start, caps, segment) independently.
In addition, track a start time and use it to offset the 0-based
timestamps returned by AWS in order to output buffers timestamped
in the running-time domain, and perform item timing adjustment
only when dequeuing, instead of when queuing.
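A sketch of the offsetting, with std Duration standing in for
gst::ClockTime:

    use std::time::Duration;

    // AWS item times are 0-based; offsetting them by the tracked start
    // time yields timestamps in the running-time domain.
    fn to_running_time(start_time: Duration, aws_time: Duration) -> Duration {
        start_time + aws_time
    }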
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/525>
<https://aws.amazon.com/blogs/machine-learning/amazon-transcribe-now-supports-partial-results-stabilization-for-streaming-audio/>
Amazon seem to have realized the previous iteration of their API
made it difficult to identify items from one result to the next,
which made the element much more complicated than it should have
been. With that new "stability" option, we can enqueue items as
soon as they stabilize, and simply rely on the current index in
the transcript to output them exactly once.
This also means the "use_partial_results" property is now useless, as
there will be no difference in accuracy between a non-partial result
and any of its stable items that might have been pushed from previous
partial versions of the result.
The property is removed; instead, a new option is exposed to let
users control how fast results should stabilize.
This greatly simplifies the code, and also improves the output as
punctuation doesn't need to be randomly discarded anymore.
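The resulting logic is roughly (illustrative names):

    struct Item {
        content: String,
        stable: bool,
    }

    // Enqueue items as soon as they stabilize; `index` tracks our
    // position in the transcript so each item is output exactly once.
    fn drain_stable_items(
        items: &[Item],
        index: &mut usize,
        out: &mut Vec<String>,
    ) {
        while let Some(item) = items.get(*index) {
            if !item.stable {
                break;
            }
            out.push(item.content.clone());
            *index += 1;
        }
    }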
Implemented analogously to souphttpsrc for compatibility. Setting a
proxy prevents sharing the client between element instances.
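Roughly, client construction becomes per-instance once a proxy is
configured (a sketch, assuming reqwest):

    // Build a client for this element instance; with a proxy set, the
    // client cannot be a process-wide shared one.
    fn build_client(
        proxy: Option<&str>,
    ) -> Result<reqwest::Client, reqwest::Error> {
        let mut builder = reqwest::Client::builder();
        if let Some(proxy) = proxy {
            builder = builder.proxy(reqwest::Proxy::all(proxy)?);
        }
        builder.build()
    }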
Change-Id: I50d676fd55f0e1d7051d8cd7d5922b7be4f0c6e8
With the URI handler interface implemented, we can drop the old method
of specifying bucket, key and region. This also brings it in line with
how s3src works.
AWS offers the option of creating "vocabularies", lists of words
that are likely to be encountered. Those can be created through
the AWS console, and are given a name. That name can then be
specified when starting a transcription job.
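AWS's streaming WebSocket API takes the name as a `vocabulary-name`
query parameter; a sketch of wiring the property through (not the
element's actual request-signing code):

    // Append the configured vocabulary name to the query parameters of
    // the streaming transcription request.
    fn add_vocabulary(
        mut params: Vec<(String, String)>,
        vocabulary: Option<&str>,
    ) -> Vec<(String, String)> {
        if let Some(name) = vocabulary {
            params.push(("vocabulary-name".into(), name.into()));
        }
        params
    }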
cargo-c will produce a pkg-config file, making it easier to statically
link plugins.
Also add 'static' features for plugins depending on GStreamer < 1.14,
as 1.14 is the minimum version required for static linking because of
ABI changes in core.
This mutex is actually only ever used from a single thread, so use
AtomicRefCell instead. It provides the guarantees of a mutex but panics
instead of blocking.
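Usage-wise the switch is mechanical (a minimal sketch):

    use atomic_refcell::AtomicRefCell;

    #[derive(Default)]
    struct State {
        count: u64,
    }

    fn bump(state: &AtomicRefCell<State>) {
        // Unlike Mutex::lock(), a conflicting borrow panics instead of
        // blocking, immediately surfacing accidental cross-thread use.
        let mut guard = state.borrow_mut();
        guard.count += 1;
    }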
There is no way to dynamically ask Cargo to build a static or dynamic
lib, so we have to build both and pick the one we care about when doing
the meson processing.
Fix #88