Keeping the upload-part-retry-duration and complete-upload-retry-duration
properties would change their semantics compared to their usage in
rusotos3sink. Deprecate these two properties, make them no-ops and emit a
warning when they are set.
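Sketch of the intended behaviour (the debug category and helper name below
are illustrative, not the element's actual code):

  use once_cell::sync::Lazy;

  static CAT: Lazy<gst::DebugCategory> = Lazy::new(|| {
      gst::DebugCategory::new("rusotos3sink", gst::DebugColorFlags::empty(), Some("S3 sink"))
  });

  // Called from the property setter for the two deprecated properties: the
  // value is accepted for backwards compatibility but has no effect.
  fn warn_deprecated_property(name: &str) {
      gst::warning!(CAT, "Setting the deprecated property {} has no effect", name);
  }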
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/759>
`PadTemplate::caps()` now returns a reference to the caps instead of a new
strong reference, so the template has to be kept in scope for as long as the
caps reference is used.
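A hypothetical call site, for illustration (the "src" template name and the
surrounding function are made up):

  use gst::prelude::*;

  fn template_caps_example(element: &gst::Element) {
      // The template itself has to stay in scope now, since `caps()` hands
      // out a reference borrowed from it rather than a new strong reference.
      let templ = element.pad_template("src").expect("no such pad template");
      let caps: &gst::Caps = templ.caps();
      println!("template caps: {}", caps);
  }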
warning: this expression borrows a value the compiler would automatically borrow
--> net/reqwest/tests/reqwesthttpsrc.rs:126:56
|
126 | async move { Ok::<_, hyper::Error>((&mut *http_func.lock().unwrap())(req)) }
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: change this to: `(*http_func.lock().unwrap())`
|
= note: `#[warn(clippy::needless_borrow)]` on by default
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_borrow
This implements a default timeout and retry duration for the remaining
S3 requests that could still block indefinitely. There are three classes
of operations: multipart upload creation/abort (should not take too
long), part uploads (duration depends on part size), and multipart
upload completion (can take several minutes according to the
documentation).
We currently only expose the part upload times as configurable, and
hard-code the rest. If it seems sensible, we can expose the other two
sets of parameters as well.
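Purely as an illustration of that split (the type names are hypothetical and
no concrete default values are implied):

  use std::time::Duration;

  struct RequestTimeouts {
      // Timeout for a single request attempt.
      timeout: Duration,
      // Total time budget across retries of that request.
      retry_duration: Duration,
  }

  struct S3RequestTimeouts {
      // Exposed as element properties.
      upload_part: RequestTimeouts,
      // Hard-coded: creating/aborting a multipart upload should be quick.
      create_abort: RequestTimeouts,
      // Hard-coded: completing a multipart upload can take several minutes.
      complete: RequestTimeouts,
  }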
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/690>
Previously, the actual reading from the streaming body of a GetObject
request was not within the same timeout/retry path as the dispatch of
the HTTP request itself. We consolidate these two into a single async
block and create a sum type to encapsulate the rusoto and std library
error paths within that future.
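A sketch of that sum type, assuming the rusoto_core/rusoto_s3 crates (the
type and variant names here are made up):

  // Both failure modes of the consolidated future: the GetObject dispatch
  // itself, and reading from the streaming response body.
  enum GetObjectFailure {
      Request(rusoto_core::RusotoError<rusoto_s3::GetObjectError>),
      Read(std::io::Error),
  }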
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/690>
We have seen a few cases where, even on machines that are in perfect time
sync with a good source, requests fail with `RequestExpired` errors.
https://docs.aws.amazon.com/transcribe/latest/dg/CommonErrors.html
While not perfect, bumping the expiry to five minutes gives the signed
requests used to start streaming a better chance of not having expired.
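In other words, the requests are now signed with a five-minute validity
window (the constant name below is illustrative):

  use std::time::Duration;

  // Signed requests used to start streaming stay valid for five minutes.
  const REQUEST_EXPIRY: Duration = Duration::from_secs(5 * 60);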
Rusoto does not implement timeouts or retries for any of its HTTP
requests. This is particularly problematic for the part upload stage of
multipart uploads, as a blip in the network could cause part uploads to
freeze for a long duration and eventually bail.
To avoid this, for part uploads, we add (a) configurable timeouts for
each request, and (b) retries with exponential backoff, up to a
configurable duration.
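Roughly the shape of it, assuming tokio's timer and a caller-supplied closure
that issues one attempt of the request (a sketch, not the element's actual
code):

  use std::time::{Duration, Instant};

  // Retry `attempt` with a per-try timeout and exponential backoff until
  // `retry_duration` has elapsed; the last error (or a timeout) is returned.
  async fn retry_with_timeout<T, E, Fut>(
      mut attempt: impl FnMut() -> Fut,
      request_timeout: Duration,
      retry_duration: Duration,
  ) -> Result<T, Option<E>>
  where
      Fut: std::future::Future<Output = Result<T, E>>,
  {
      let deadline = Instant::now() + retry_duration;
      let mut backoff = Duration::from_millis(100);
      loop {
          // `None` marks a timed-out attempt, `Some(e)` a request error.
          let err = match tokio::time::timeout(request_timeout, attempt()).await {
              Ok(Ok(value)) => return Ok(value),
              Ok(Err(e)) => Some(e),
              Err(_elapsed) => None,
          };
          if Instant::now() + backoff >= deadline {
              return Err(err);
          }
          tokio::time::sleep(backoff).await;
          backoff *= 2;
      }
  }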
It is not clear if/how we want to do this for other types of requests.
The creation of a multipart upload should be relatively quick, but the
completion of an upload might take several minutes, so there is not
necessarily a one-size-fits-all configuration.
It would likely make more sense to implement some sensible hard-coded
defaults for these other sorts of requests.
A multipart upload should either be completed or aborted on error. In
the current state of things, a multipart upload would neither be
completed nor aborted, putting the onus on an external entity to take
care of finishing incomplete uploads or relying on a sane bucket
lifecycle policy configured to abort incomplete multipart uploads.
An incomplete multipart upload still contributes to the storage costs as
long as it exists.
We introduce a property here to allow the user to select either aborting
or completing multipart uploads on error. Aborting the upload causes
all of the uploaded data to be discarded, and the same upload ID is no
longer usable for uploading further parts.
Completing an incomplete multipart upload can be useful in situations
like a streamable MP4, where one might want to complete the upload and
preserve the part of the data which was already uploaded.
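For illustration, the choice could be modelled roughly like this (the names
are made up, not necessarily the property's actual values):

  // What to do with the in-progress multipart upload when the sink errors out.
  enum OnUploadError {
      // Discard all uploaded parts; the upload ID can no longer be used.
      Abort,
      // Stitch together whatever parts were uploaded so far
      // (useful e.g. for a streamable MP4).
      Complete,
  }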
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/618>
The design of the element is based on the assumption that when
receiving a partial result, the following result will contain
at least as many items as there were stable items in the previous
result.
This patch adds a sanity check to make sure our "partial index" isn't
larger than the number of items in the newly received result, and errors
out otherwise.
partial_index will eventually be reset to 0 once we receive a
new non-partial result.
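A minimal sketch of such a check, assuming a partial_index counter and an
items list on the received result, inside a function returning a Result with
a gst::ErrorMessage (not the element's exact code):

  // The previous partial result had `partial_index` stable items; a new
  // result with fewer items than that breaks the assumption above.
  if partial_index > result.items.len() {
      return Err(gst::error_msg!(
          gst::StreamError::Failed,
          [
              "New result has {} items, fewer than the current partial index {}",
              result.items.len(),
              partial_index
          ]
      ));
  }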