The threadshare executor was based on a modified version of tokio
which implemented the throttling strategy in the BasicScheduler.
The upstream tokio codebase has significantly diverged from what it
was when the throttling strategy was implemented, making it hard
to follow. In practice, we can hardly pull updates from the
upstream project, and when we cherry-pick fixes, we can't reflect
the upstream state in our fork's version. As a consequence,
tools such as cargo-deny can't check for RUSTSEC fixes in our fork.
The smol ecosystem makes it quite easy to implement and maintain
a custom async executor. This MR imports the smol parts that
need modifications to comply with the threadshare model and implements
a throttling executor in place of the tokio fork.
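For illustration, here is a minimal sketch of the throttling idea, under
the assumption that the scheduler batches wakeups (this is not the MR's
actual code, and `run_pending` is a hypothetical callback): all currently
ready tasks are polled in one go, then the loop sleeps away the remainder
of a fixed throttling period instead of waking up once per task.

```rust
use std::time::{Duration, Instant};

// Poll every ready task once per iteration, then sleep for whatever is
// left of the throttling period so wakeups are grouped together.
fn throttling_loop(wait: Duration, mut run_pending: impl FnMut() -> bool) {
    loop {
        let start = Instant::now();
        // `run_pending` polls the ready tasks and reports whether any
        // tasks remain scheduled on this thread.
        if !run_pending() {
            break;
        }
        if let Some(remaining) = wait.checked_sub(start.elapsed()) {
            std::thread::sleep(remaining);
        }
    }
}
```

The point of the throttling strategy is to trade a bounded amount of
extra latency for far fewer wakeups, which pays off with many concurrent
streams.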
Tokio-specific networking types are replaced with Async wrappers
in the spirit of [smol-rs/async-io]. Note however that the Async
wrappers needed modifications in order to use the per-thread
Reactor model. This means that higher-level upstream networking
crates such as [async-net] cannot be used with our Async
implementation.
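As a rough usage sketch, assuming the imported wrapper keeps async-io's
`Async` API surface (the example below uses upstream `async_io::Async`,
since the threadshare variant differs mainly in registering with the
per-thread Reactor):

```rust
use async_io::Async;
use std::net::{SocketAddr, UdpSocket};

// Await a single datagram; the future suspends on the Reactor
// (per-thread in the threadshare variant) until the socket is readable.
async fn recv_one(socket: &Async<UdpSocket>) -> std::io::Result<(usize, SocketAddr)> {
    let mut buf = [0u8; 1500];
    socket.recv_from(&mut buf).await
}
```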
Based on the example benchmark with ts-udpsrc, performance seems on par
with what we achieved using the tokio fork.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/issues/118
Related to https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/604
In `sink_chain`, when the Segment is pending, attempting to lock the
recording state could lead to a deadlock, because the stream's own
state is already locked while the main stream's state is not.
The function `check_and_update_stream_start` checks whether the other
streams have reached EOS. The stream being checked might already have
locked its own state; if it then goes on to check the other streams
too, this results in a deadlock.
The problem was due to the `main_state` guard being dropped while
handling the `StreamStart` event, when checking whether the main
stream is EOS:
```rust
let main_is_eos = if let Some(main_state) = main_state {
    main_state.eos
} else {
    false
};
```
In the above code, `main_state` is consumed, and the guard is dropped
right after evaluating `main_state.eos`.
This is also the case before handling the `Eos` event.
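As a minimal sketch of one way to avoid this (an assumption for
illustration, not necessarily the actual fix), the `Option` can be
borrowed instead of moved, so the guard stays alive past the check:

```rust
// Borrow instead of move: the guard held in `main_state` remains locked
// until it goes out of scope, instead of being dropped after the check.
let main_is_eos = main_state.as_ref().map_or(false, |state| state.eos);
```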
This revealed another deadlock when handling the `Eos` event, which is
under investigation.
A multipart upload should either be completed or aborted on error. In
the current state of things, a multipart upload would be neither
completed nor aborted, putting the onus on an external entity to take
care of finishing incomplete uploads, or relying on a sane bucket
lifecycle policy configured to abort incomplete multipart uploads.
An incomplete multipart upload still contributes to storage costs as
long as it exists.
We introduce a property here to allow the user to select either aborting
or completing the multipart upload on error. Aborting the upload causes
all of the uploaded data to be discarded, and the same upload ID cannot
be used to upload further parts.
Completing an incomplete multipart upload can be useful in situations
like a streamable MP4, where one might want to complete the upload
and preserve the part of the data that was uploaded.
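As a hedged sketch of selecting the behaviour from application code (the
`on-error` property name and its `complete` value are assumptions based
on the description above, not confirmed API; the current gstreamer-rs
builder API is used):

```rust
use gst::prelude::*;

fn main() {
    gst::init().unwrap();
    // Hypothetical property name and value, chosen for illustration:
    // ask the sink to complete (rather than abort) the multipart
    // upload when an error occurs.
    let sink = gst::ElementFactory::make("awss3sink").build().unwrap();
    sink.set_property_from_str("on-error", "complete");
}
```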
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/618>
The design of the element is based on the assumption that when
receiving a partial result, the following result will contain
at least as many items as there were stable items in the previous
result.
This patch adds a sanity check to make sure our "partial index"
isn't larger than the newly received result, and errors out otherwise.
`partial_index` will eventually be reset to 0 once we receive a
new non-partial result.
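A minimal sketch of the check (struct and field names here are
hypothetical, not the element's actual types):

```rust
// A received transcription result: its stable prefix has already been
// pushed downstream up to `partial_index` items.
struct TranscriptResult {
    items: Vec<String>,
    is_partial: bool,
}

// Error out if the stable prefix we already pushed is longer than the
// newly received result; reset the index on a final (non-partial) result.
fn update_partial_index(
    partial_index: &mut usize,
    result: &TranscriptResult,
) -> Result<(), String> {
    if *partial_index > result.items.len() {
        return Err(format!(
            "partial_index {} exceeds the {} items in the new result",
            *partial_index,
            result.items.len()
        ));
    }
    if !result.is_partial {
        *partial_index = 0;
    }
    Ok(())
}
```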
This plugin takes I420/YUV and appends an alpha plane to give YUVA/A420,
rounding the corners analogously to border-radius in CSS. Other video
formats like NV12 are not supported yet; support for other planar
formats will follow.
Not all ways of specifying border-radius as in CSS are implemented at
the moment. Currently, we only support specifying it in pixels and it
gets applied uniformly to all corners.
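For illustration, a minimal sketch (not the element's actual code) of
the per-pixel decision for a hard-edged, uniform radius applied to all
four corners:

```rust
// Alpha for pixel (x, y) in a width x height frame whose four corners
// are rounded by `radius` pixels: opaque inside, fully transparent
// outside the corner circles (no anti-aliasing).
fn alpha_at(x: i64, y: i64, width: i64, height: i64, radius: i64) -> u8 {
    // Locate the nearest corner-circle center; pixels outside the four
    // corner squares are always opaque.
    let cx = if x < radius {
        radius
    } else if x >= width - radius {
        width - radius - 1
    } else {
        return 255;
    };
    let cy = if y < radius {
        radius
    } else if y >= height - radius {
        height - radius - 1
    } else {
        return 255;
    };
    let (dx, dy) = (x - cx, y - cy);
    if dx * dx + dy * dy <= radius * radius {
        255
    } else {
        0
    }
}
```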
I hadn't really tested the element with pop-on mode, and the row
for each line in the input text was hardcoded to 13, which was
clearly wrong.
Switch to incrementing it properly.