Required renaming threadshare/benchmark to threadshare/ts-benchmark
because 'benchmark' as a target name is reserved for meson's
`benchmark` target.
Disabled by default because cargo decides it has to rebuild
everything, which makes it really slow.
This also required adding --features to set the features required by
the examples.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1028>
... instead of forwarding them to a Task via a channel.
This improves CPU usage by 5% according to `udpsrc-benchmark-sender`
with the `tuning` feature using default audio test buffers and
400 streams on the same ts-context.
It is expected to improve latency significantly. This is inferred
from `ts-standalone`: latency shrinks from around 5ms to 1.5µs
using the `task` sink compared to the `async-mutex` sink.
The async Mutex is mandatory here as we need to hold the lock
across await points.
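A minimal sketch of why the lock must be async (types and names
hypothetical, not the actual sink code):

    use futures::lock::Mutex; // async Mutex: its guard may live across awaits

    struct State { count: u64 }

    async fn process(shared: &Mutex<State>) {
        let mut state = shared.lock().await;
        state.count += 1;
        // Holding the guard across this await point requires an async
        // Mutex: a `std::sync::Mutex` guard is not `Send` and would also
        // block the executor thread while held.
        futures::future::ready(()).await;
        state.count += 1;
    }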
This makes it easy to generate "listenable" signals and to evaluate
discontinuities.
When the `tuning` feature is activated and the `main-elem` property
is set, the element can log the parked duration as a percentage,
which is an indicator of the CPU usage for the ts-context.
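A hedged sketch of how such a figure can be derived (counter names
hypothetical):

    use std::time::Duration;

    // `parked` accumulates the time the Scheduler spent parked during
    // an `elapsed` observation window:
    fn parked_pct(parked: Duration, elapsed: Duration) -> f32 {
        100.0 * parked.as_secs_f32() / elapsed.as_secs_f32()
    }
    // CPU usage for the ts-context is then roughly `100.0 - parked_pct(...)`.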
This commit adds a test mode to `udpsrc-benchmark-sender` which
generates default audio buffers from `ts-audiotestsrc`. The `rtp`
mode is modified so that it uses `ts-audiotestsrc`.
Unlike the existing Task Sink, the Async and Sync Mutex Sinks
handle buffers in the `PadSinkHandler` directly. The Async Mutex
Sink uses an async Mutex for the `PadSinkHandlerInner` while the
Sync Mutex Sink uses... a sync Mutex.
All Sinks share the same settings and stats manager.
Use the `--sink` command line option to select the sink (default is
`sync-mutex`, since it allows evaluating the framework with as little
overhead as possible).
Also apply various fixes:
- Only keep the segment start instead of the full `Segment` (see the
  sketch after this list). This helps with cache locality (`Segment`
  is a plain struct with many fields) and avoids downcasting the
  generic `Segment` for each buffer.
- Box the `Stat`s. This should improve cache locality a bit.
- Fix EOS handling, which took ages for no benefit in this
  particular use case.
- Use a macro to raise log level in the main element.
- Move error handling during item processing into `handle_loop_error`.
  This function was designed precisely for this, and the move should
  reduce the size of `handle_item`'s Future.
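A hedged sketch of the segment-start idea (names hypothetical, not
the actual sink code):

    struct PadSinkHandlerInner {
        // Keep only what buffer handling actually needs instead of the
        // full `gst::Segment`:
        segment_start: Option<gst::ClockTime>,
    }

    fn on_segment(inner: &mut PadSinkHandlerInner, segment: &gst::Segment) {
        // Downcast once, when the SEGMENT event arrives, instead of for
        // every buffer:
        inner.segment_start = segment
            .downcast_ref::<gst::format::Time>()
            .and_then(|time_seg| time_seg.start());
    }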
... instead of the `Pad{Src,Sink}Ref` wrappers:
- In practice, only the `gst::Pad` is useful in these functions.
  Some of these functions do need a `Pad{Src,Sink}Ref`, but it's the
  one for the opposite stream direction; in those cases, it is
  accessed via the element's implementation.
- It saves a few `clone`s.
- The implementations usually use the `gst::Pad` for logging.
  They no longer need to access it via `pad.gst_pad()` (see the
  sketch below).
- They are either unit types or `Clone` (in which case they are implemented
as pointers).
- Internally, we already use an owned version, so there's no need to get a
reference.
- It facilitates implementation if the handler must be moved into a closure
or a `Future`.
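A hedged sketch of the resulting shape (simplified, not the exact
handler traits):

    use futures::future::BoxFuture;
    use once_cell::sync::Lazy;

    static CAT: Lazy<gst::DebugCategory> = Lazy::new(|| {
        gst::DebugCategory::new("sketch", gst::DebugColorFlags::empty(), Some("sketch"))
    });

    #[derive(Clone)]
    struct SketchSinkHandler; // unit type: cheap to take by value

    impl SketchSinkHandler {
        // The `gst::Pad` is received by value, so it can be moved into
        // the returned `Future` and used for logging directly, without
        // going through `pad.gst_pad()`:
        fn sink_chain(
            self,
            pad: gst::Pad,
            buffer: gst::Buffer,
        ) -> BoxFuture<'static, Result<gst::FlowSuccess, gst::FlowError>> {
            Box::pin(async move {
                gst::log!(CAT, obj: &pad, "Handling {buffer:?}");
                Ok(gst::FlowSuccess::Ok)
            })
        }
    }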
Commit 24b7cfc8 applied changes related to nullability as declared
by gir. One consequence was that some function signatures ended up
requiring users to pass `Some(val)` where they could pass `val`
before.
This commit applies changes to `gstreamer-rs` which, while honoring
the nullability, still allow users to pass `val` for the few affected
functions.
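The pattern, as a hedged sketch (the function and argument names are
hypothetical):

    // Callers may pass `val` or `Some(val)`, while `None` remains
    // expressible, so the nullability is still honored:
    fn set_stop(stop: impl Into<Option<gst::ClockTime>>) {
        match stop.into() {
            Some(stop) => println!("stop at {stop}"),
            None => println!("no stop"),
        }
    }

    // All of these compile:
    //   set_stop(gst::ClockTime::SECOND);
    //   set_stop(Some(gst::ClockTime::SECOND));
    //   set_stop(gst::ClockTime::NONE);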
This commit also fixes the signature for `Element::request_new_pad`
which was updated upstream.
This is a follow-up to commit 7ee4afac.
This commit cleans up the `Pad{Sink,Src}Handler` by:
- Keeping only the arguments which are strictly necessary.
- Passing arguments by value for the trait functions which return
a `Future`. The arguments which were previously passed by reference
were `clone`d internally and then `clone`d again in most
implementations.
There are unfortunate differences in trait function signatures
between those which return a `Future` and the sync functions. This
is due to the requirement for the arguments to be moved to the
resulting `Future`, whereas sync functions can rely on references.
One particularly notable difference is the use of the `imp` in sync
functions versus the `elem` in functions returning a `Future`.
Because the `imp` is not guaranteed to implement `Clone`, it can't
be moved into the resulting `Future`, so the `elem` is used instead.
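A hedged sketch contrasting the two shapes (simplified, not the
actual trait):

    use futures::future::BoxFuture;

    struct MyElementImpl; // stand-in for an element implementation type

    trait SketchHandler: Send + Sync + 'static {
        // Sync function: references work, so the `imp` can be passed.
        fn sink_event(&self, imp: &MyElementImpl, event: gst::Event) -> bool;

        // Future-returning function: the arguments must be moved into
        // the `Future`. Since the `imp` can't be assumed to be `Clone`,
        // the `elem` is passed by value instead.
        fn sink_chain(
            &self,
            elem: gst::Element,
            buffer: gst::Buffer,
        ) -> BoxFuture<'static, Result<gst::FlowSuccess, gst::FlowError>>;
    }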
This is no longer available because it could lead to building a
value which is defined in Rust but interpreted as undefined in C,
due to the sentinel `u64::MAX` representing `None`.
Use the constants (e.g. `ONE`, `K`, `M`, ...) and operations to build
a value and deref (`*`) to get the quantity as an integer.
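A hedged usage sketch, assuming `gst::format::Bytes` as one of the
affected quantity types:

    // Build a value from the constants and operations...
    let size = 2 * gst::format::Bytes::M + 64 * gst::format::Bytes::K;
    // ... and deref to get the quantity as an integer:
    let raw: u64 = *size;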
It is now guaranteed that each fragment is at most fragment-duration
long unless the one and only GOP of the fragment is longer than that.
The first (non-EOS) stream determines the duration of each fragment and
all other streams are drained to at most the fragment end timestamp
determined this way.
In addition, the next fragment's target time is now the end of the
previous fragment plus fragment-duration, instead of
first-fragment + N*fragment-duration
regardless of where fragments were actually split before.
That is, fmp4mux now uses the same strategy as used by splitmuxsink and
as is required e.g. by HLS with regard to the target duration.
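In pseudo-Rust, the change in target computation (a sketch, names
hypothetical):

    // Before: targets were anchored at the first fragment, regardless
    // of where fragments were actually split:
    //   target(n) = first_fragment_start + n * fragment_duration
    // Now: each target is relative to the end of the previous fragment:
    fn next_target(
        prev_fragment_end: gst::ClockTime,
        fragment_duration: gst::ClockTime,
    ) -> gst::ClockTime {
        prev_fragment_end + fragment_duration
    }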
A strong handle reference was held in the `block_on_priv` `Result`
handler in the thread for the `Scheduler::start` code path. This
prevented the `Handler` strong count from dropping to 0 when it
should have, so the shutdown request was never triggered.
Use an Arc<AtomicBool> instead of a oneshot channel for shutdown.
The main Future is always polled and never relies on a waker; a
`poll_fn` is cheap and does the job.
Unpark the scheduler after posting a request to shutdown.
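A hedged sketch of the flag-based shutdown (names hypothetical, not
the actual Scheduler code):

    use std::future::poll_fn;
    use std::sync::atomic::{AtomicBool, Ordering};
    use std::sync::Arc;
    use std::task::Poll;

    #[derive(Clone)]
    struct Shutdown(Arc<AtomicBool>);

    impl Shutdown {
        // Checked via a cheap `poll_fn`: the main Future is always
        // polled, so no waker bookkeeping is needed.
        async fn is_requested(&self) -> bool {
            poll_fn(|_cx| Poll::Ready(self.0.load(Ordering::SeqCst))).await
        }

        fn request(&self) {
            self.0.store(true, Ordering::SeqCst);
            // The actual code also unparks the scheduler at this point.
        }
    }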
Subtasks are used when the current async processing needs to execute
a `Future` via a sync function (e.g. a call to a C function).
In this case `Context::block_on` would block the whole `Context`,
leading to a deadlock.
The main use case for this is the `Pad{Src,Sink}` functions:
when we `PadSrc::push` and the peer pad is a `PadSink`, we want
`PadSrc::push` to complete after the async function on the
`PadSink` completes. In this case the `PadSink` async function
is added as a subtask of the current scheduler task and
`PadSrc::push` only returns once the subtask has been executed.
In `runtime::Task` (`Task` here is the execution Task with a
state machine, not a scheduler task), we used to spawn state
transition actions and the iteration loop (leading to a new
scheduler task). At the time, it seemed convenient to
automatically drain subtasks after a state transition action
or an iteration: users wouldn't have to worry about this,
similarly to the `Pad{Src,Sink}` case.
In the current implementation, the `Task` state machine operates
directly on the target `Context`. State transition actions and
the iteration loop are no longer spawned. It now seems useless to
hide the subtask draining from the user: either they
transitively use a mechanism such as `Pad{Src,Sink}`, which already
handles this automatically, or they add subtasks on purpose, in
which case they know better when the subtasks must be drained.
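A hedged illustration of the mechanism itself (a toy implementation,
not the `runtime::Context` API):

    use std::cell::RefCell;
    use futures::future::{BoxFuture, FutureExt};

    thread_local! {
        // Hypothetical per-scheduler-task subtask list:
        static SUB_TASKS: RefCell<Vec<BoxFuture<'static, ()>>> =
            RefCell::new(Vec::new());
    }

    // Queue a `Future` instead of blocking the whole `Context` on it:
    fn add_sub_task(fut: impl std::future::Future<Output = ()> + Send + 'static) {
        SUB_TASKS.with(|t| t.borrow_mut().push(fut.boxed()));
    }

    // Drained on purpose, when the caller knows the subtasks must have
    // completed (e.g. before `PadSrc::push` returns):
    async fn drain_sub_tasks() {
        let tasks: Vec<_> = SUB_TASKS.with(|t| t.borrow_mut().drain(..).collect());
        for fut in tasks {
            fut.await;
        }
    }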
... so that it can be reused on the current thread for subsequent
Scheduler instantiations (e.g. block_on) without the need to
reallocate internal data structures.
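A hedged sketch of the idea (names hypothetical):

    use std::cell::RefCell;

    struct Reactor { /* timers, I/O sources, ... */ }

    thread_local! {
        // The Reactor is kept around after the Scheduler stops, so a
        // subsequent `block_on` on this thread reuses it instead of
        // reallocating its internal data structures:
        static CURRENT_REACTOR: RefCell<Option<Reactor>> = RefCell::new(None);
    }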
This commit improves the predictability of threadshare timers
by making better use of the current time slice.
Added a dedicated timer BTreeMap for "after" timers (those
that are guaranteed to fire no sooner than the expected
instant) so as to avoid the previous workaround, which added
half the max throttling duration. These timers can now
be checked against the reactor processing instant.
Oneshot timers only need to be polled as `Future`s, whereas
intervals are `Stream`s. This also reduces the size of
oneshot timers and makes users call `next` on intervals.
Intervals can also implement `FusedStream`, which can help
when they are used in constructs such as `select!`.
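A hedged usage sketch (module path and function names approximate):

    use futures::stream::StreamExt; // `next` on the interval `Stream`
    use std::time::Duration;

    async fn timers() {
        // A oneshot timer is a plain `Future`:
        timer::delay_for(Duration::from_millis(20)).await;

        // An interval is a `Stream`; the user calls `next` on it:
        let mut interval = timer::interval(Duration::from_millis(20));
        while interval.next().await.is_some() {
            // handle the tick
        }
    }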
Also drop the `time` module, which was kept for
compatibility when the `executor` was migrated from being
tokio-based to smol-like.
Add a `tuning` feature which adds counters that help with performance
evaluation. The only counter added so far accumulates the duration a
Scheduler has been parked, which is a pretty accurate indication of
the Scheduler's CPU usage.
- Reworked buffer push.
- Reworked stats.
- Make the first elements' logs stand out. This makes it possible to
  follow what's going on with pipelines containing 1000s of
  elements.
- Actually handle EOS.
- Use more meaningful defaults.
- Allow building without `clap` feature.