Commit graph

171 commits

Author SHA1 Message Date
François Laignel
1be30b8ecc ts/scheduler: fix shutdown
A strong handle reference was held in the `block_on_priv` `Result`
handler in the thread for the `Scheduler::start` code path. This
led to the `Handler` strong count not dropping to 0 when it should,
so the shutdown request was never triggered.

Use an `Arc<AtomicBool>` instead of a oneshot channel for shutdown.
The main Future is always polled and never relies on a waker; a
`poll_fn` is cheap and does the job.

Unpark the scheduler after posting a request to shutdown.
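
A minimal sketch of the flag-based shutdown, assuming a simplified
scheduler loop (names and structure here are illustrative, not the
actual scheduler code):

    use std::sync::atomic::{AtomicBool, Ordering};
    use std::sync::Arc;
    use std::task::Poll;

    // Simplified model of the scheduler main loop: the main future is
    // polled on every iteration, so a waker-less `poll_fn` suffices to
    // observe the shutdown flag.
    async fn scheduler_loop(shutdown: Arc<AtomicBool>) {
        loop {
            let must_shutdown =
                futures::future::poll_fn(|_cx| Poll::Ready(shutdown.load(Ordering::SeqCst)))
                    .await;
            if must_shutdown {
                break;
            }
            // ... poll ready tasks and the reactor, then park (throttle) ...
        }
    }

    fn main() {
        let shutdown = Arc::new(AtomicBool::new(false));
        // Requesting shutdown: set the flag, then unpark the scheduler
        // thread so a parked scheduler notices the request immediately.
        shutdown.store(true, Ordering::SeqCst);
        futures::executor::block_on(scheduler_loop(shutdown));
    }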
2022-09-13 07:29:50 +00:00
François Laignel
ab327be9af ts/scheduler: improve tasks / io & timers polling balance
Set a limit on the number of tasks checked before checking the
reactor and the main future again.
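
Roughly, the balance could look as follows (a sketch; the limit's
name and value are assumptions, not the actual code):

    use std::collections::VecDeque;

    // Hypothetical cap on tasks run per slice before the reactor
    // (I/O & timers) and the main future get checked again.
    const MAX_TASKS_PER_SLICE: usize = 32;

    fn run_slice(ready: &mut VecDeque<Box<dyn FnOnce()>>) {
        for _ in 0..MAX_TASKS_PER_SLICE {
            match ready.pop_front() {
                Some(task) => task(),
                None => break, // queue drained
            }
        }
        // ... process reactor events & due timers, poll the main
        // future, then come back for the remaining tasks ...
    }

    fn main() {
        let mut ready: VecDeque<Box<dyn FnOnce()>> = (0..100)
            .map(|i| Box::new(move || println!("task {i}")) as Box<dyn FnOnce()>)
            .collect();
        while !ready.is_empty() {
            run_slice(&mut ready);
        }
    }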
2022-09-13 07:29:50 +00:00
François Laignel
d39aabe054 ts/Task: don't drain sub tasks after state transition and iteration
Subtasks are used when the current async processing needs to execute
a `Future` via a sync function (e.g. a call to a C function).
In this case, `Context::block_on` would block the whole `Context`,
leading to a deadlock.

The main use case for this is the `Pad{Src,Sink}` functions:
when we `PadSrc::push` and the peer pad is a `PadSink`, we want
`PadSrc::push` to complete after the async function on the
`PadSink` completes. In this case, the `PadSink` async function
is added as a subtask of the current scheduler task, and
`PadSrc::push` only returns once the subtask has been executed.

In `runtime::Task` (`Task` here is the execution Task with a
state machine, not a scheduler task), we used to spawn state
transition actions and the iteration loop (leading to a new
scheduler Task). At the time, it seemed convenient for the user
to automatically drain sub tasks after a state transition action
or an iteration: users wouldn't have to worry about this,
similarly to the `Pad{Src,Sink}` case.

In the current implementation, the `Task` state machine operates
directly on the target `Context`. State transition actions and
the iteration loop are no longer spawned. It now seems useless to
abstract the subtask draining from the user: either they
transitively use a mechanism such as `Pad{Src,Sink}`, which already
handles this automatically, or they add subtasks on purpose, in
which case they know better when subtasks must be drained.
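
A simplified sketch of the subtask mechanism referred to here (the
real types live in `runtime::executor` and differ in detail):

    use std::collections::VecDeque;
    use std::future::Future;
    use std::pin::Pin;

    type SubTask = Pin<Box<dyn Future<Output = ()> + Send>>;

    struct SchedulerTask {
        sub_tasks: VecDeque<SubTask>,
    }

    impl SchedulerTask {
        // Called from sync code running on a Context task (e.g. a C
        // callback): queue the async work instead of blocking the
        // whole Context on it.
        fn add_sub_task(&mut self, fut: impl Future<Output = ()> + Send + 'static) {
            self.sub_tasks.push_back(Box::pin(fut));
        }

        // Awaited explicitly by code which knows when the queued work
        // must have completed, e.g. before `PadSrc::push` returns.
        async fn drain_sub_tasks(&mut self) {
            while let Some(sub_task) = self.sub_tasks.pop_front() {
                sub_task.await;
            }
        }
    }

    fn main() {
        let mut task = SchedulerTask { sub_tasks: VecDeque::new() };
        task.add_sub_task(async { println!("sub task") });
        futures::executor::block_on(task.drain_sub_tasks());
    }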
2022-09-13 07:29:50 +00:00
François Laignel
af12bce141 ts/executor: clear the reactor instead of closing it...
... so that it can be reused on the current thread for subsequent
Scheduler instantiations (e.g. `block_on`) without the need to
reallocate internal data structures.
2022-09-13 07:29:50 +00:00
François Laignel
61c62ee1e8 ts/timers: multiple improvements
This commit improves the predictability of threadshare timers
by making better use of the current time slice.

Added a dedicated timer BTreeMap for after timers (those that
are guaranteed to fire no sooner than the expected instant) so
as to avoid the previous workaround, which added half the max
throttling duration. These timers can now be checked against
the reactor processing instant.

Oneshot timers only need to be polled as `Future`s, whereas
intervals are `Stream`s. This also reduces the size of oneshot
timers and makes users call `next` on intervals. Intervals can
also implement `FusedStream`, which can help when they are used
with macros such as `select!`.

Also drop the `time` module, which was kept for compatibility
when the `executor` was migrated from being tokio-based to
smol-like.
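
For instance, a fused interval can feed `select!` directly, since
the macro requires fused futures/streams (a sketch; the concrete
interval type is left abstract here):

    use futures::future::FusedFuture;
    use futures::stream::{FusedStream, StreamExt};
    use futures::{select, Future, FutureExt, Stream};

    // `interval.next()` yields a future which is fused whenever the
    // stream itself implements `FusedStream`, as `select!` requires.
    async fn tick_until(
        mut interval: impl Stream<Item = ()> + FusedStream + Unpin,
        mut quit: impl Future<Output = ()> + FusedFuture + Unpin,
    ) {
        loop {
            select! {
                _ = interval.next() => println!("tick"),
                _ = quit => break,
            }
        }
    }

    fn main() {
        let ticks = futures::stream::repeat(()).take(3);
        // `fuse()` turns any future into a `FusedFuture`.
        let quit = futures::future::ready(()).fuse();
        futures::executor::block_on(tick_until(ticks, quit));
    }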
2022-09-13 07:29:50 +00:00
François Laignel
235ded35fd ts: add feature to add counters for performance evaluation
Add a `tuning` feature which adds counters that help with performance
evaluation. The only counter added so far accumulates the duration a
Scheduler has been parked, which is a fairly accurate indication of
the Scheduler's CPU usage.
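
The parked-duration counter could look roughly like this (a sketch
with illustrative names; the real counter sits behind the `tuning`
feature):

    use std::sync::atomic::{AtomicU64, Ordering};
    use std::time::{Duration, Instant};

    static PARKED_NANOS: AtomicU64 = AtomicU64::new(0);

    // Wrap the scheduler's park call to accumulate time spent parked.
    fn park_and_account(park: impl FnOnce()) {
        let start = Instant::now();
        park(); // e.g. thread::park_timeout(max_throttling)
        PARKED_NANOS.fetch_add(start.elapsed().as_nanos() as u64, Ordering::Relaxed);
    }

    // A low parked duration over a wall-clock window means the
    // Scheduler kept busy, i.e. high CPU usage.
    fn parked_duration() -> Duration {
        Duration::from_nanos(PARKED_NANOS.load(Ordering::Relaxed))
    }

    fn main() {
        park_and_account(|| std::thread::sleep(Duration::from_millis(10)));
        println!("parked for {:?}", parked_duration());
    }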
2022-09-13 07:29:50 +00:00
François Laignel
72acbebff0 ts/standalone: multiple improvements
- Reworked buffer push.
- Reworked stats.
- Make logs for the first elements stand out. This makes it
  possible to follow what's going on in pipelines containing
  1000s of elements.
- Actually handle EOS.
- Use more significant defaults.
- Allow building without `clap` feature.
2022-09-13 07:29:50 +00:00
François Laignel
2355be1cef ts/jitterbuffer: extra robustness for Windows CI
The jitterbuffer tests sometimes crash on the Windows CI.
Activating logs showed time values which are probably not expected
in a regular environment, but which can happen there. Adding extra
robustness to the `next_wakeup` computation seems to fix the
problem, judging by the few runs I triggered.
2022-09-12 18:42:34 +00:00
Jordan Petridis
005fbafb30 threadshare: disable tests that can't work on windows
These depend on socket properties that are not available on Windows.
2022-09-05 11:47:20 +03:00
Sebastian Dröge
1a40186485 Update for GLib ParamSpec builder API changes 2022-09-05 11:45:47 +03:00
Sebastian Dröge
46dddaf31c Update minimum supported Rust version to 1.63 2022-09-04 21:31:55 +03:00
Thibault Saunier
664e2b75bd tsjitterbuffer: Fix latency type when getting property 2022-09-02 21:41:35 +00:00
Thibault Saunier
67e651f57c Allow "unused_doc_comments" as we use hotdoc and not rustdoc 2022-08-29 18:33:22 -04:00
Thibault Saunier
31a53bba8a Generate plugins documentation using hotdoc
Which will automatically be integrated in gstreamer documentation
2022-08-29 18:33:22 -04:00
Vivia Nikolaidou
5606111345 plugins: Simplify code using ParamSpecBuilder 2022-08-22 17:58:43 +03:00
François Laignel
2bb071a950 ts/runtime: slight optimizations for sub tasks related operations
Using callgrind with the standalone test showed opportunities for
improvement in sub task addition and draining.

All sub task additions were performed after making sure we were
operating on a Context Task, yet the Context and Task were checked
again when adding the sub task.

Draining sub tasks was performed in a loop at every call site,
checking first whether there were remaining sub tasks. This
commit implements the loop and checks directly in
`executor::Task::drain_subtasks`, saving one `Mutex` lock and
one `thread_local` access per iteration when there are sub
tasks to drain.

The `PadSink` function wrappers were performing redundant checks
on the `Context` presence and were adding the delayed Future only
when there were already sub tasks.
2022-08-18 18:42:18 +02:00
François Laignel
57da8e649d ts/examples: introduce a standalone pipeline test
Implement a test that initializes pipelines with minimalistic
threadshare src and sink elements. This can help with evaluating
changes to the threadshare runtime or element implementation
details. It makes it easy to run flamegraph or callgrind and to
focus on the threadshare runtime overhead.
2022-08-18 18:42:18 +02:00
Sebastian Dröge
374bb8323f Fix build after glib SignalBuilder::param_types() API change 2022-08-17 23:37:39 +03:00
François Laignel
21da753607 ts/udpsink: move sync on buffer to try_next
By moving the sync on the buffer timestamp to `try_next`, the
resulting delay can be cancelled when a state transition occurs.

To prevent item loss, this requires first peeking the incoming
item from the channel without popping it. After the delay has
elapsed, we can pop the item as the last await point in
`try_next`: either `try_next` is cancelled before popping, or the
popped item is passed on to `handle_item`.

Also add `flush`, which was missing from the `stop` and
`flush_start` transition actions.
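
A sketch of the peek-then-pop idea, assuming a small peekable
wrapper (the element's actual channel and timer types differ; the
timer wait is elided):

    use std::time::Instant;

    use futures::channel::mpsc;
    use futures::StreamExt;

    struct Item {
        render_time: Instant,
    }

    // The peeked item lives in the task struct, which survives the
    // cancellation of a pending `try_next` future, so no item is lost.
    struct Peekable<T> {
        rx: mpsc::UnboundedReceiver<T>,
        peeked: Option<T>,
    }

    impl<T> Peekable<T> {
        async fn peek(&mut self) -> Option<&T> {
            if self.peeked.is_none() {
                self.peeked = self.rx.next().await;
            }
            self.peeked.as_ref()
        }

        fn pop(&mut self) -> Option<T> {
            self.peeked.take()
        }
    }

    async fn try_next(queue: &mut Peekable<Item>) -> Option<Item> {
        // Peek without consuming: a state transition may cancel us
        // at any await point and the item stays buffered.
        let due = queue.peek().await?.render_time;
        let _delay = due.saturating_duration_since(Instant::now());
        // ... await the delay here (timer elided in this sketch) ...
        // Pop last: either we were cancelled before reaching this
        // point, or the item is now guaranteed to reach `handle_item`.
        queue.pop()
    }

    fn main() {
        let (tx, rx) = mpsc::unbounded();
        tx.unbounded_send(Item { render_time: Instant::now() }).unwrap();
        let mut queue = Peekable { rx, peeked: None };
        let item = futures::executor::block_on(try_next(&mut queue));
        assert!(item.is_some());
    }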
2022-08-13 13:03:43 +02:00
François Laignel
33e601d33e ts: migrate elements to try_next / handle_item
See previous commit for details.

Also switched to panicking for some programming errors.
2022-08-10 20:10:08 +02:00
François Laignel
8b54c3fed6 ts/Task: split iterate into try_next and handle_item
The previous Task iteration model suffered from the following
shortcomings:

- When an iteration was engaged, it could be cancelled at
  await points by Stop or Flush state transitions,
  which could lead to inconsistent states.
- When an iteration was engaged, it could not be cancelled
  by a Pause state transition, so as to prevent data loss.
  This meant we couldn't block on the Pause request because
  the mechanism couldn't guarantee Paused would be reached
  in a timely manner.

This commit splits the Task iteration into:

- `try_next`: this function returns a future that awaits the
  beginning of a new iteration. The regular use case is
  to return an item to process. The item can be left as
  `()` if `try_next` acts as a tick generator. It can
  also return an error. This function can be cancelled at
  await points when a state transition request occurs.
- `handle_item`: this function is called with the item
  returned by `try_next` and is guaranteed to run to
  completion even if a transition request is received.

Note that this model copes well with the common Future
cancellation pitfalls in Rust.
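
In trait form, the split looks roughly like this (a sketch using
today's async-fn-in-trait syntax for brevity; the real trait is
`runtime::TaskImpl` and its signatures differ):

    pub struct Error;

    pub trait TaskImpl {
        type Item;

        /// Awaits the beginning of the next iteration and produces
        /// the item to process; `Item` may be `()` for a plain tick
        /// generator. Cancellable at await points by state
        /// transition requests.
        async fn try_next(&mut self) -> Result<Self::Item, Error>;

        /// Processes the item; guaranteed to run to completion even
        /// if a transition request arrives meanwhile.
        async fn handle_item(&mut self, item: Self::Item) -> Result<(), Error>;
    }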
2022-08-10 20:02:53 +02:00
Sebastian Dröge
bdaa39e267 threadshare: Fix some new clippy beta warnings
warning: this expression borrows a value the compiler would automatically borrow
   --> generic/threadshare/src/runtime/executor/async_wrapper.rs:402:19
    |
402 |             match (&mut *self).get_mut().read(buf) {
    |                   ^^^^^^^^^^^^ help: change this to: `(*self)`
    |
    = note: `#[warn(clippy::needless_borrow)]` on by default
    = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_borrow
2022-08-10 12:58:28 +03:00
François Laignel
fb7929dda6 ts: update elements for new TransitionStatus
See previous commit
2022-08-09 19:48:06 +02:00
François Laignel
d4061774a4 ts/Task: return a future for state transitions
State transition request functions hid the synchronization
details from the caller:

- If the transition was requested from a Context, a subtask was
  added, or the transition ack was not awaited if the new transition
  was requested from a running transition or an iteration function.
- If the transition was not requested from a Context, the current
  thread was blocked until the ack was received.

This strategy facilitated code in elements, but it suffered from
the following shortcomings:

- The `prepare` transition request didn't comply with the above
  strategy and would always return an `Async` form. This was
  designed to accommodate the `prepare` function of elements
  such as `ts-tcpclientsrc`, which takes time due to TCP
  socket connection delays. The idea was that the actual
  transition result would be available after calling `start`.
  This was a disadvantage for elements which would prefer to
  error out immediately in the event of a preparation failure.
- Hiding the transition request synchronization from the caller
  meant that they had no option but to rely on the internal
  mechanism. E.g. it was not possible to `start` from another
  async runtime without blocking. It was also not possible
  to request a transition and choose not to await the ack.

This commit introduces a more flexible API for state
transition requests:

- The transition request functions now return a `TransitionStatus`,
  which is a Future.
- When an error occurs immediately (e.g. the transition
  request is not authorized due to the current state of the Task),
  the `TransitionStatus` is resolved immediately and can be
  `check`ed for errors. This is useful for functions such as
  `prepare` in the case of `ts-tcpclientsrc` (see above).
  This is also useful for `pause` because, in the current design,
  that transition is always async. Note however that `pause` is
  foreseen to adhere to the same behaviour as the other transition
  requests in the near future [1].
- If the caller chooses to await the ack and doesn't know
  whether they are running on a ts Context (e.g. in `Pad{Src,Sink}`
  handlers), they can call `await_maybe_on_context`. This is mostly
  the same behaviour as the one that used to be performed internally.
- If the caller knows for sure they can't possibly block an async
  executor, they can call `block_on`, which is more explicit but
  will nonetheless make sure no ts Context is being blocked. This
  last check was introduced since it is low overhead and should
  help prevent misuse in cases where the above functions should
  be used instead.

[1]: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/793#note_1464400
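
A toy model of `TransitionStatus`, heavily simplified to illustrate
the three call styles (here the status resolves immediately; the
real type also covers pending acks and the ts Context check):

    use futures::executor;

    #[derive(Clone, Debug)]
    pub struct TransitionError;

    pub struct TransitionStatus(Result<(), TransitionError>);

    impl TransitionStatus {
        /// Check errors which occurred immediately, without awaiting
        /// the ack (e.g. request not authorized in current state).
        pub fn check(&self) -> Result<(), TransitionError> {
            self.0.clone()
        }

        /// Await the ack; safe whether or not we run on a ts Context.
        pub async fn await_maybe_on_context(self) -> Result<(), TransitionError> {
            self.0
        }

        /// Block until the ack; the real version also makes sure no
        /// ts Context is being blocked.
        pub fn block_on(self) -> Result<(), TransitionError> {
            executor::block_on(self.await_maybe_on_context())
        }
    }

    fn main() -> Result<(), TransitionError> {
        // E.g. erroring out immediately on a failed `prepare`:
        TransitionStatus(Ok(())).check()?;
        Ok(())
    }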
2022-08-09 19:48:06 +02:00
François Laignel
625fce3934 ts/Task: spawn StateMachine on ts Context
Task state machines used to execute on an executor from the
`futures` crate. State transition actions and iteration functions
were then spawned on the target threadshare Context.

This commit directly spawns the task state machine on the threadshare
Context. This simplifies the code a bit and paves the way for the
changes described in [1].

Also introduce the struct `StateMachineHandle`, which gathers the
fields used to communicate and synchronize with the StateMachine.
Renamed `StateMachine::run` as `spawn`; it now returns a
`StateMachineHandle`.

[1]: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/793#note_1464400
2022-08-09 19:48:06 +02:00
François Laignel
0858dfedb4 ts-udpsrc: align default port with C counterpart
... and also with the default settings for ts-udpsink.
2022-08-09 17:42:53 +02:00
François Laignel
28a62e622e ts/scheduler: rename awake / wake_up as unpark 2022-08-09 13:17:21 +00:00
François Laignel
833331ab66 ts/Task: wake up after the triggering event is pushed
The scheduler is woken up when a task loop is aborted, but not
after a triggering event is pushed. This can cause throttling to
induce long state transitions for pipelines with many streams.

Observed for Unprepare with:

GST_DEBUG=ts-benchmark:4 ../../target/debug/examples/benchmark 2000 ts-udpsrc 2 20 5000
2022-08-09 13:17:21 +00:00
François Laignel
374671cb6f ts/udpsink: fix default clients not leading to socket configuration
During MR !793, the socket configuration mechanism was changed to
use commands passed to the Task via a channel. This worked properly
for user changes via settings and signals; however, the default
clients setting was not applied.

A simple solution could have been to send a command at initialization
to add the default clients, but it was considered a better solution
to just wait for Task preparation to configure the sockets based
on the value of settings.clients at that time, thus avoiding
unnecessary successive removals and additions of clients which could
have happened before preparation.

Of course, users can still add or remove clients as before, both
before and after Task preparation.

See also https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/793
2022-08-09 12:43:36 +00:00
Bilal Elmoussaoui
52973d975e Update per glib::SignalBuilder changes 2022-07-21 20:03:13 +02:00
François Laignel
907d89c998 ts/tests/pad: minor cleanups 2022-07-09 17:03:21 +00:00
François Laignel
d6a9106ffa ts/tcpclientsrc: reduce sync primitives in async hot path 2022-07-09 17:03:21 +00:00
François Laignel
7e826385c7 ts: Queue & Proxy: minor cleanups 2022-07-09 17:03:21 +00:00
François Laignel
5720faa808 ts/appsrc: reduce sync primitives in async hot path 2022-07-09 17:03:21 +00:00
François Laignel
a1b89c1fb9 ts/udpsrc: reduce sync primitives in async hot path
- Moved UdpSrcPadHandlerState and related functions to UdpSrcTask.
- Moved Socket preparation into UdpSrcTask. There is no longer
  any need for `Context::enter`.
2022-07-09 17:03:21 +00:00
François Laignel
885d3de7bb ts/udpsink: reduce sync primitives in async hot path
The way `runtime::Task` is implemented, UdpSinkTask is available
as a mutable ref in all the TaskImpl functions, which offers the
opportunity to avoid using Mutexes.

Main highlights:

- Removed the back and forth calls between UdpSinkPadHandler
  and UdpSinkTask.
- Udp sockets are now part of UdpSinkTask, which is also in
  charge of preparing them instead of leaving this to UdpSink.
  This removed the need for Context::enter since
  TaskImpl::prepare already operates under the target Context.
- In order for the clients list to be visible from the UdpSink,
  this list was maintained by UdpSinkPadHandler, which was also
  in charge of (un)configuring the UDP sockets. The sockets are
  now part of UdpSinkTask, which is also in charge of the
  (un)configuration. Add/remove/replace requests are passed as
  commands to the UdpSinkTask via a channel (see the sketch after
  this list).
- The clients list visible to the UdpSink is now part of the
  Settings (it is also a read/write property). Since the actual
  socket (un)configuration is handled asynchronously by the Task,
  the clients list is updated by the add/remove/replace signals
  and set_property("clients", ..). Should a problem occur during
  the async (un)configuration, and only in this case, the
  UdpSinkTask updates the clients list in Settings accordingly
  so that it stays consistent with the internal state.
- The function clear_clients was renamed replace_with_clients.
- clients is now based on a BTreeSet instead of a Vec. All the
  managing functions perform some sort of lookup prior to updating
  the collection. A set also eases the implementation.
- Removed the UdpSinkPadHandler RwLock. Using flume channels, we
  are able to clone the Receiver so it can be stored in UdpSink
  and reused when preparing the UdpSinkTask.
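
A sketch of the command channel pattern described above (simplified;
flume receivers are `Clone`, so `UdpSink` can keep one and hand a
clone to each new `UdpSinkTask`):

    use std::collections::BTreeSet;
    use std::net::SocketAddr;

    enum Command {
        Add(SocketAddr),
        Remove(SocketAddr),
        ReplaceWithClients(BTreeSet<SocketAddr>),
    }

    async fn task_loop(cmd_rx: flume::Receiver<Command>) {
        // A BTreeSet makes the add/remove/replace lookups simple.
        let mut clients = BTreeSet::<SocketAddr>::new();
        while let Ok(cmd) = cmd_rx.recv_async().await {
            match cmd {
                Command::Add(addr) => {
                    if clients.insert(addr) { /* configure socket for addr */ }
                }
                Command::Remove(addr) => {
                    if clients.remove(&addr) { /* unconfigure socket for addr */ }
                }
                Command::ReplaceWithClients(new_clients) => {
                    clients = new_clients; /* reconfigure sockets accordingly */
                }
            }
        }
    }

    fn main() {
        let (cmd_tx, cmd_rx) = flume::unbounded();
        cmd_tx.send(Command::Add(SocketAddr::from(([127, 0, 0, 1], 5004)))).unwrap();
        drop(cmd_tx); // closing the channel ends the loop
        futures::executor::block_on(task_loop(cmd_rx));
    }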
2022-07-09 17:03:21 +00:00
Sebastian Dröge
51c7d0652e Fix/silence a couple new clippy warnings 2022-06-30 16:07:32 +03:00
François Laignel
a45f944edd ts/async_wrapper: remove fd from reactor before dropping its handle
The I/O handle was dropped prior to being removed from the reactor,
which caused `Poller::delete` to fail due to an invalid file
descriptor. This used to happen silently, unless the same fd was
added again, e.g. by changing states in the pipeline as follows:

    Null -> Playing -> Null -> Playing,

in which case `Poller::add` failed due to an already existing file.

This commit makes sure the fd is removed from the reactor prior to
dropping the handle. In order to achieve this, a new task is spawned
on the `Context` on which the I/O was originally registered, allowing
it to access the proper `Reactor`. The I/O can then safely be dropped.

Because the I/O handle is moved to the spawned future, this solution
requires adding the `Send + 'static` bounds to the I/O handle used
within the `Async` wrapper. This doesn't appear too restrictive
for existing implementations, though. Other attempts were
considered, but they would have caused deadlocks.

This new approach also solves a potential race condition where a
fd could be re-registered in a `Reactor` before it was removed.
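
A structural sketch of the fix, with stub types standing in for the
runtime's `Context` and the `Async` wrapper (only the drop ordering
matters here):

    use std::future::Future;

    struct Context;

    impl Context {
        fn spawn<F: Future<Output = ()> + Send + 'static>(&self, _fut: F) {
            // The real Context queues the future on the scheduler
            // thread which owns the Reactor the fd was registered with.
        }
    }

    struct Async<T: Send + 'static> {
        io: Option<T>,
        context: Context,
    }

    impl<T: Send + 'static> Drop for Async<T> {
        fn drop(&mut self) {
            if let Some(io) = self.io.take() {
                // Moving `io` into the spawned future is what requires
                // the `Send + 'static` bounds mentioned above.
                self.context.spawn(async move {
                    // 1. remove the fd from this Context's Reactor
                    //    (`Poller::delete`) while the fd is still open;
                    // 2. only then drop the handle, closing the fd.
                    drop(io);
                });
            }
        }
    }

    fn main() {
        let socket = std::net::UdpSocket::bind("127.0.0.1:0").unwrap();
        drop(Async { io: Some(socket), context: Context });
    }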
2022-06-30 11:13:39 +00:00
François Laignel
06273ed628 ts: add test pipeline::socket_play_null_play 2022-06-30 11:13:39 +00:00
Sebastian Dröge
cb84206457 Fix a couple of new 1.62 clippy warnings 2022-06-28 14:52:20 +03:00
Mathieu Duponchelle
943a138d49 ts-jitterbuffer: set jbuf delay when instantiating it
The internal (C) jitterbuffer needs to know about the configured
latency when calculating a PTS, as it otherwise may consider that
the packet is too late, trigger a resync and cause the element to
discard the packet altogether.

I could not identify when this was broken, but the net effect was
that in the current state, ts-jitterbuffer was discarding up to
half of all the incoming packets.
2022-05-11 06:29:22 +00:00
Sebastian Dröge
2f16b5dd3e threadshare: Use into_glib_ptr() instead of into_ptr() 2022-05-08 13:31:10 +03:00
Tim-Philipp Müller
90c203857a threadshare: fix build on Windows 2022-04-27 00:13:46 +01:00
Sebastian Dröge
5af52f94a8 threadshare: Remove glib::SendUnique usage
It's being removed from the GLib bindings because it does not add much
value.
2022-04-09 08:41:58 +00:00
Sebastian Dröge
803e452889 Update minimum supported GStreamer version to 1.14 2022-04-07 12:41:54 +03:00
François Laignel
59ca466081 ts: log max throttling when creating Context 2022-03-28 08:47:32 +00:00
François Laignel
1ef9ae6398 ts/jitterbuffer: don't wake up immediately...
... when the next wakeup delay is shorter than the max throttling duration.

See https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/issues/169
2022-03-28 08:47:32 +00:00
François Laignel
8eb8ea0e7d ts/rt/Task: use lightweight executor when blocking on ack or join handle
The previous version used `Context::block_on_or_add_sub_task`, which
spawns a full-fledged executor with a timer and io Reactor for no
reason when we just need to wait for a Receiver or JoinHandle.
2022-03-28 08:47:32 +00:00
François Laignel
c1615d01e6 ts/rt/Task: wake up the iteration loop when it needs to be aborted
When the iteration loop is throttling, the call to `abort` on the
`loop_abort_handle` returns immediately, but the actual `Future`
for the iteration loop is aborted only when the scheduler throttling
completes. State transitions which require the loop to be aborted
and which are serialized at the pipeline level can incur long delays.

This commit makes sure the Task Context's scheduler is woken up as
soon as the task loop is aborted.
2022-03-28 08:47:32 +00:00
François Laignel
97985d6442 ts/examples: add rtp mode with jitter-buffer & trace stop duration 2022-03-28 08:47:32 +00:00