State transition request functions hid the synchronization
details from the caller:
- If the transition was requested from a Context, a subtask was
  added, or the ack was not awaited at all if the new transition
  was requested from a running transition or an iteration function.
- If the transition was not requested from a Context, the current
  thread was blocked until the ack was received.
This strategy kept element code simple, but it suffered from
the following shortcomings:
- The `prepare` transition request didn't comply with the above
  strategy and would always return an `Async` form. This was
  designed to accommodate the `prepare` function of elements
  such as `ts-tcpclientsrc`, which takes time due to TCP
  socket connection delays. The idea was that the actual
  transition result would be available after calling `start`.
  This was a disadvantage for elements which would prefer to
  error immediately in the event of a preparation failure.
- Hiding the transition request synchronization from the caller
  meant that they had no option but to rely on the internal
  mechanism. E.g. it was not possible to `start` from another
  async runtime without blocking. It was also not possible
  to request a transition and choose not to await the ack.
This commit introduces a more flexible API for state
transition requests (see the usage sketch below):
- The transition request functions now return a `TransitionStatus`,
  which is a `Future`.
- When an error occurs immediately (e.g. the transition
  request is not authorized due to the current state of the Task),
  the `TransitionStatus` is resolved immediately and can be
  `check`ed for errors. This is useful for functions such as
  `prepare` in the case of `ts-tcpclientsrc` (see above).
  This is also useful for `pause` because, in the current design,
  that transition is always async. Note however that `pause` is
  foreseen to adhere to the same behaviour as the other transition
  requests in the near future [1].
- If the caller chooses to await the ack and doesn't know whether
  they are running on a ts Context (e.g. in `Pad{Src,Sink}`
  handlers), they can call `await_maybe_on_context`. This is mostly
  the same behaviour as the one that used to be performed internally.
- If the caller knows for sure they can't possibly block an async
  executor, they can call `block_on`, which is more explicit but
  will nonetheless make sure no ts Context is being blocked. This
  last check was introduced because it is low overhead and should
  help prevent misuse in cases where one of the above functions
  should be used instead.
[1]: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/793#note_1464400
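The sketch below illustrates the resulting call sites. Only the
`TransitionStatus` method names come from this change; the `Task`
handle, the `prepare` arguments and the `TransitionError` type are
placeholders.

```rust
// Hedged usage sketch: `task` stands for an element's runtime Task
// (exact type path and arguments are assumptions).
fn transition_sketch(task: &Task) -> Result<(), TransitionError> {
    // Resolved immediately on early errors (e.g. the transition is
    // not authorized from the current Task state); lets `prepare`
    // report failures right away:
    task.prepare(/* ... */).check()?;

    // Unknown execution context (e.g. a Pad{Src,Sink} handler):
    // awaits the ack, via a subtask if running on a ts Context.
    task.start().await_maybe_on_context()?;

    // Known to not block an async executor; `block_on` is explicit
    // and still verifies that no ts Context is being blocked:
    task.stop().block_on()?;

    Ok(())
}
```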
Task state machines used to execute in an executor from the futures
crate. State transition actions and iteration functions were then
spawned on the target threadshare Context.
This commit directly spawns the task state machine on the threadshare
Context. This simplifies code a bit and paves the way for the changes
described in [1].
This commit also introduces the struct `StateMachineHandle`, which
gathers the fields used to communicate and synchronize with the
StateMachine. `StateMachine::run` is renamed to `spawn` and now
returns a `StateMachineHandle`.
[1]: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/793#note_1464400
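A rough sketch of the handle's shape; the field names, channel types
and event variants below are assumptions for illustration, the real
ones are internal to the runtime:

```rust
use futures::channel::{mpsc, oneshot};

// Illustrative only: the actual triggering event type is internal.
enum TriggeringEvent { /* Prepare, Start, Pause, Stop, Unprepare, ... */ }

struct StateMachineHandle {
    // Awaited to synchronize with the state machine's termination.
    end_rx: oneshot::Receiver<()>,
    // Used to push triggering events (transition requests).
    triggering_evt_tx: mpsc::Sender<TriggeringEvent>,
}
```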
The scheduler is woken up when aborting a task loop, but not after
a triggering event is pushed. This can cause throttling to induce
long state transitions for pipelines with many streams.
Observed for Unprepare with:
GST_DEBUG=ts-benchmark:4 ../../target/debug/examples/benchmark 2000 ts-udpsrc 2 20 5000
The I/O handle was dropped prior to removing it from the reactor,
which caused `Poller::delete` to fail due to an invalid file
descriptor. This used to happen silently, unless the same fd was
added again, e.g. by changing states in the pipeline as follows:
Null -> Playing -> Null -> Playing, in which case `Poller::add`
failed due to an already existing file.
This commit makes sure the fd is removed from the reactor prior to
dropping the handle. In order to achieve this, a new task is spawned
on the `Context` on which the I/O was originally registered, allowing
it to access the proper `Reactor`. The I/O handle can then safely be
dropped. Because the handle is moved into the spawned future, this
solution requires adding `Send + 'static` bounds to the I/O handle
used within the `Async` wrapper. This doesn't appear too restrictive
for existing implementations though. Other approaches were
considered, but they would cause deadlocks.
This new approach also solves a potential race condition where an
fd could be re-registered in a `Reactor` before it was removed.
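The underlying ordering constraint can be reproduced with the
`polling` crate directly (a minimal sketch, assuming the polling 2.x
API): `delete` must be called while the fd is still valid, i.e.
before the handle is dropped.

```rust
use polling::{Event, Poller};
use std::net::UdpSocket;

fn main() -> std::io::Result<()> {
    let socket = UdpSocket::bind("127.0.0.1:0")?;
    let poller = Poller::new()?;
    poller.add(&socket, Event::readable(0))?;

    // Correct order: remove the fd from the poller first...
    poller.delete(&socket)?;
    // ...then drop the I/O handle. Reversing these two steps makes
    // `delete` fail on an invalid file descriptor.
    drop(socket);
    Ok(())
}
```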
The previous version used `Context::block_on_or_add_sub_task`, which
spawns a full-fledged executor with a timer and an io `Reactor` for
no reason when we just need to wait for a `Receiver` or a
`JoinHandle`.
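For instance, something as light as this is sufficient (a sketch
using the `futures` crate, not the actual code):

```rust
use futures::channel::oneshot;
use futures::executor::block_on;

// Waiting for an ack needs neither a timer nor an io Reactor.
fn wait_ack(ack_rx: oneshot::Receiver<()>) {
    let _ = block_on(ack_rx);
}
```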
When the iteration loop is throttling, the call to `abort` on the
`loop_abort_handle` returns immediately, but the actual `Future`
for the iteration loop is aborted only when the scheduler throttling
completes. State transitions which require the loop to be aborted
and which are serialized at the pipeline level can thus incur long
delays.
This commit makes sure the Task Context's scheduler is woken up as
soon as the task loop is aborted.
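The `AbortHandle` semantics can be seen with the `futures` crate
(illustrative only; the actual loop future is the crate's): `abort`
merely flags the future, an executor still has to wake up and poll
it to observe the abort.

```rust
use futures::executor::block_on;
use futures::future::{abortable, pending, Aborted};

fn main() {
    let (loop_fut, loop_abort_handle) = abortable(pending::<()>());
    // Returns immediately, but only marks the future as aborted:
    loop_abort_handle.abort();
    // The abort is observed when the executor polls the future,
    // hence the need to wake up a throttling scheduler right away.
    assert!(matches!(block_on(loop_fut), Err(Aborted)));
}
```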
The previous version relied on a plain loop / match / break because
I experimented with different strategies. The while variant is
better for the final solution.
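A generic illustration of the reshaping (not the actual state
machine code):

```rust
use std::sync::mpsc::Receiver;

// Before: a plain loop / match / break.
fn drain_loop(rx: Receiver<u32>) {
    loop {
        match rx.recv() {
            Ok(item) => println!("{item}"),
            Err(_) => break,
        }
    }
}

// After: the while variant reads more directly.
fn drain_while(rx: Receiver<u32>) {
    while let Ok(item) = rx.recv() {
        println!("{item}");
    }
}
```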
The function `enter` is executed in a blocking way from the caller's
point of view. This means that we can guarantee that the provided
function and its output will outlive the underlying Scheduler Task
execution. This requires an unsafe call to
`async_task::spawn_unchecked`. See:
https://docs.rs/async-task/latest/async_task/fn.spawn_unchecked.html
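A simplified sketch of why this is sound (the real `enter` dispatches
to the Scheduler; here the future completes in a single poll):

```rust
use futures::executor::block_on;

// Run `f` inside a task whose future is not 'static. Sound only
// because we block right here until the task completes, so `f` and
// its output outlive the task's execution.
fn enter<O>(f: impl FnOnce() -> O) -> O {
    // SAFETY: the future is polled to completion below, before
    // anything it borrows can go out of scope.
    let (runnable, task) = unsafe {
        async_task::spawn_unchecked(async move { f() }, |_runnable| {
            // Never called: the future completes on the first poll.
        })
    };
    runnable.run();
    block_on(task)
}
```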
The threadshare executor was based on a modified version of tokio
which implemented the throttling strategy in the BasicScheduler.
The upstream tokio codebase has significantly diverged from what it
was when the throttling strategy was implemented, making it hard
to follow. This means that we can hardly get updates from the
upstream project, and when we cherry-pick fixes, we can't reflect
the state of the upstream project on our fork's version. As a
consequence, tools such as cargo-deny can't check for RUSTSEC fixes
in our fork.
The smol ecosystem makes it quite easy to implement and maintain
a custom async executor. This MR imports the smol parts that
need modifications to comply with the threadshare model and implements
a throttling executor in place of the tokio fork.
tokio-specific networking types are replaced with `Async` wrappers
in the spirit of [smol-rs/async-io]. Note however that the `Async`
wrappers needed modifications in order to use the per-thread
Reactor model. This means that higher-level upstream networking
crates such as [async-net] cannot be used with our `Async`
implementation.
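Usage-wise, the wrappers look like upstream async-io. The sketch
below uses the upstream `async_io::Async` API; the threadshare
variant mirrors it while registering with the per-thread Reactor.

```rust
use async_io::Async;
use std::net::UdpSocket;

async fn recv_one(socket: &Async<UdpSocket>) -> std::io::Result<Vec<u8>> {
    let mut buf = vec![0u8; 1500];
    // Registers interest with the Reactor and awaits readability.
    let (len, _peer) = socket.recv_from(&mut buf).await?;
    buf.truncate(len);
    Ok(buf)
}
```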
Based on the example benchmark with ts-udpsrc, performance seems on
par with what we achieved using the tokio fork.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/issues/118
Related to https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/604
The time driver for the threadshare runtime assigns the timer
entries to the nearest throttling time frame so that the timer
fires as close as possible to the expected instant. This means
that the timer might fire before or after the expected instant
(at most `wait / 2` away).
In some cases, we don't want the timer to fire early. The new
function `delay_for_at_least` ensures that the timer is assigned
to the time frame after the expected instant.
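A sketch of when to pick which variant. The `runtime::time` module
path and the pre-existing `delay_for` counterpart are assumptions;
only `delay_for_at_least` is named by this change.

```rust
use std::time::Duration;

async fn wait_next(interval: Duration) {
    // May fire up to `wait / 2` before `interval` has elapsed:
    // runtime::time::delay_for(interval).await;

    // Never fires early: assigned to the throttling time frame
    // after the expected instant.
    runtime::time::delay_for_at_least(interval).await;
}
```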
Only two uses of unsafely setting the pad functions are left:
- fallbacksrc, for overriding the chain function of the proxy pad of
  a ghost pad
- threadshare, for overriding the pad functions after creation, which
  probably needs some fixing at some point
Pad{Src,Sink}[Ref] delegate some functions to their respective
Pad{Src,Sink}Inner. Since they act as smart pointers, we can
safely implement the Deref trait to simplify the implementations.
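A generic sketch of the pattern with simplified names (not the
actual types):

```rust
use std::ops::Deref;
use std::sync::Arc;

struct PadSrcInner {
    name: String,
}

impl PadSrcInner {
    fn name(&self) -> &str {
        &self.name
    }
}

// Acts as a smart pointer to PadSrcInner...
struct PadSrc(Arc<PadSrcInner>);

// ...so Deref replaces hand-written delegation methods.
impl Deref for PadSrc {
    type Target = PadSrcInner;

    fn deref(&self) -> &Self::Target {
        &self.0
    }
}

fn usage(pad: &PadSrc) -> &str {
    // Resolves through Deref to PadSrcInner::name.
    pad.name()
}
```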
StateMachines are spawned on a runtime::Context which uses a tokio
runtime. The StateMachine doesn't need all the features from tokio,
such as the IO and timer drivers.
This commit makes use of a light-weight futures executor to spawn
the StateMachines.
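For example, a futures `LocalPool` is enough to drive a state
machine future, with no io or timer driver involved (illustrative
sketch):

```rust
use futures::executor::LocalPool;
use futures::task::SpawnExt;

fn main() {
    let mut pool = LocalPool::new();
    pool.spawner()
        .spawn(async {
            // ... drive the state machine to completion ...
        })
        .expect("spawning the state machine");
    pool.run();
}
```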
When initializing the pad functions in `Pad{Src,Sink}`, we downgrade
the `Pad{Src,Sink}` and upgrade it when necessary. This was
implemented to avoid reference cycles:
`gst::Pad` -> pad function -> `Pad{Src,Sink}` -> `gst::Pad`.
Since `Pad{Src,Sink}` resets the pad functions when dropping, there
are no cycles, so we can use an `Arc<Pad{Src,Sink}>` in the pad
functions, thus saving an `upgrade`.
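A sketch of the change with simplified types (the real pad functions
take gst arguments):

```rust
use std::sync::Arc;

struct PadSrc;

// Before: store a Weak and upgrade on every pad call.
fn pad_fn_weak(pad: &Arc<PadSrc>) -> impl Fn() {
    let weak = Arc::downgrade(pad);
    move || {
        if let Some(_pad) = weak.upgrade() {
            // ... handle the pad call ...
        }
    }
}

// After: the pad functions are reset when PadSrc drops, so storing
// an Arc creates no cycle and saves the upgrade on every call.
fn pad_fn_arc(pad: &Arc<PadSrc>) -> impl Fn() {
    let pad = Arc::clone(pad);
    move || {
        let _pad = &*pad;
        // ... handle the pad call ...
    }
}
```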