Always first try draining queued data in the loop and only start waiting
if there's nothing to drain right now. Otherwise data that needs to be
drained right now might sit in the queue while we wait, and nothing will
ever wake up the source pad task again.
Also make sure not to wait multiple times on the same gst::ClockId, but
instead unset it after waiting on it if no new one was scheduled in the
meantime. Future waits on the same ClockId would return immediately;
instead we should wait on the condvar if no new ClockId is available.
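A minimal sketch of the intended loop shape (hypothetical names and a simplified state struct, not the actual element code): always drain first, take the ClockId so it is waited on at most once, and only fall back to the condvar when neither data nor a ClockId is available.

use gstreamer as gst;
use std::collections::VecDeque;
use std::sync::{Condvar, Mutex};

struct State {
    queue: VecDeque<gst::Buffer>,   // queued data
    clock_id: Option<gst::ClockId>, // pending wait, if any
}

fn src_task_iteration(state: &Mutex<State>, cond: &Condvar) {
    loop {
        let mut guard = state.lock().unwrap();

        // 1. Always try draining queued data first.
        if let Some(buffer) = guard.queue.pop_front() {
            drop(guard);
            push_buffer(buffer);
            continue;
        }

        // 2. Nothing to drain: if a ClockId was scheduled, take it out so
        //    it is waited on exactly once, then re-check the queue.
        if let Some(clock_id) = guard.clock_id.take() {
            drop(guard);
            let _ = clock_id.wait();
            continue;
        }

        // 3. No data and no ClockId: wait on the condvar until new data is
        //    queued or a new ClockId is scheduled (flush handling omitted).
        let _guard = cond.wait(guard).unwrap();
    }
}

fn push_buffer(_buffer: gst::Buffer) {
    // push downstream, omitted
}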
https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/698
raised the dependency by bumping the dav1d Rust crate used, but didn't add
the corresponding requirement at the meson level. Thus, with the dav1d
option set to auto or enabled, configuration would pass even with an older
dav1d, but the compilation phase would then fail with:
--- stderr
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: PkgConfig(`"pkg-config" "--libs" "--cflags" "dav1d" "dav1d >= 1.0.0"` did not exit successfully: exit status: 1
error: could not find system library 'dav1d' required by the 'dav1d-sys' crate
--- stderr
Package dependency requirement 'dav1d >= 1.0.0' could not be satisfied.
Package 'dav1d' has version '0.8.2', required version is '>= 1.0.0'
)', /home/user/.cargo/registry/src/github.com-1ecc6299db9ec823/dav1d-sys-0.5.0/build.rs:80:10
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
warning: build failed, waiting for other jobs to finish...
hlssink can be built by default because it has no dependencies.
tutorial and rsfile should not be built by default because they are
not very useful, and flavors should not be built by default because
it's very incomplete.
Follow-up to 23c07d3c
When we trigger an update pipeline, the build jobs are replaced by
update-* jobs. In order for gitlab not to complain, add both jobs
to the needs array but mark them as optional, so either of them can
be used depending on the use case.
When call_timeout is triggered, the request will fail
irrespective of the retry setting: call_timeout defines the
maximum time a request can take, retries included.
This can be solved by either setting call_timeout to
retry * call_attempt_timeout, or by not setting call_timeout at all.
As per the linked thread, the call_attempt_timeout and retry settings are enough.
https://github.com/awslabs/aws-sdk-rust/issues/558
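For illustration, the timeout arithmetic from above as a plain std sketch; the names mirror the settings mentioned in this message and are not the aws-sdk-rust configuration API:

use std::time::Duration;

fn main() {
    // Hypothetical per-attempt budget and attempt count.
    let call_attempt_timeout = Duration::from_secs(5);
    let attempts: u32 = 3; // initial attempt + retries

    // The overall call timeout must cover all attempts, otherwise it fires
    // first and the configured retries never get a chance to run.
    let call_timeout = call_attempt_timeout * attempts;
    assert_eq!(call_timeout, Duration::from_secs(15));
}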
A strong handle reference was held in the `block_on_priv` `Result`
handler in the thread for the `Scheduler::start` code path, which
led to the `Handler` strong count not dropping to 0 when it
should, so the shutdown request was never triggered.
Use an Arc<AtomicBool> instead of a oneshot channel for shutdown.
The main Future is always polled and never relies on a waker, so a
`poll_fn` is cheap and does the job.
Unpark the scheduler after posting a shutdown request.
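A minimal sketch of that shutdown shape, with hypothetical, simplified types (the real Scheduler/Handle internals differ):

use std::future::Future;
use std::pin::Pin;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::task::Poll;
use std::thread::Thread;

struct Handle {
    must_shutdown: Arc<AtomicBool>,
    scheduler_thread: Thread,
}

impl Handle {
    fn request_shutdown(&self) {
        self.must_shutdown.store(true, Ordering::SeqCst);
        // The scheduler may be parked waiting for work or timers:
        // unpark it so it notices the flag right away.
        self.scheduler_thread.unpark();
    }
}

// Scheduler side: the main future is driven through a `poll_fn` which
// checks the flag on every poll, so no waker is needed for shutdown.
async fn run_main<F: Future + Unpin>(
    mut fut: F,
    must_shutdown: Arc<AtomicBool>,
) -> Option<F::Output> {
    std::future::poll_fn(|cx| {
        if must_shutdown.load(Ordering::SeqCst) {
            return Poll::Ready(None);
        }
        Pin::new(&mut fut).poll(cx).map(Some)
    })
    .await
}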
Subtasks are used when the current async processing needs to execute
a `Future` via a sync function (e.g. a call to a C function).
In this case `Context::block_on` would block the whole `Context`,
leading to a deadlock.
The main use case for this is the `Pad{Src,Sink}` functions:
when we `PadSrc::push` and the peer pad is a `PadSink`, we want
`PadSrc::push` to complete after the async function on the
`PadSink` completes. In this case the `PadSink` async function
is added as a subtask of the current scheduler task, and
`PadSrc::push` only returns once the subtask has been executed.
In `runtime::Task` (`Task` here is the execution Task with a
state machine, not a scheduler task), we used to spawn state
transition actions and the iteration loop (leading to a new
scheduler Task). At the time, it seemed convenient for the user
to automatically drain subtasks after a state transition action
or an iteration: users wouldn't have to worry about this, similarly
to the `Pad{Src,Sink}` case.
In the current implementation, the `Task` state machine operates
directly on the target `Context`. State transition actions and
the iteration loop are no longer spawned. It now seems pointless to
abstract the subtask draining away from the user: either they
transitively use a mechanism such as `Pad{Src,Sink}` which already
handles this automatically, or they add subtasks on purpose, in
which case they know better when subtasks must be drained.
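A very simplified sketch of the subtask idea (hypothetical types, not the actual runtime code): each scheduler task keeps a FIFO of boxed futures, and the caller decides when to drain it.

use std::collections::VecDeque;
use std::future::Future;
use std::pin::Pin;

type SubTask = Pin<Box<dyn Future<Output = ()> + Send>>;

#[derive(Default)]
struct TaskContext {
    sub_tasks: VecDeque<SubTask>,
}

impl TaskContext {
    // E.g. PadSrc::push would add the PadSink async function here ...
    fn add_sub_task(&mut self, fut: impl Future<Output = ()> + Send + 'static) {
        self.sub_tasks.push_back(Box::pin(fut));
    }

    // ... and drain before returning, so the push only completes once the
    // peer's async function has actually run.
    async fn drain_sub_tasks(&mut self) {
        while let Some(sub_task) = self.sub_tasks.pop_front() {
            sub_task.await;
        }
    }
}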
... so that it can be reused on the current thread for subsequent
Scheduler instantiations (e.g. block_on) without needing to
reallocate internal data structures.
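Sketch of the reuse idea with a hypothetical `Scheduler` placeholder: keep the instance in a thread_local so a later block_on on the same thread picks it up instead of reallocating.

use std::cell::RefCell;

#[derive(Default)]
struct Scheduler {
    // queues, timers, ... stay allocated between uses
}

thread_local! {
    static CURRENT_SCHEDULER: RefCell<Option<Scheduler>> = RefCell::new(None);
}

fn with_current_scheduler<R>(f: impl FnOnce(&mut Scheduler) -> R) -> R {
    CURRENT_SCHEDULER.with(|cell| {
        let mut slot = cell.borrow_mut();
        // Reuse the Scheduler stored by a previous run on this thread,
        // or create one on first use.
        let scheduler = slot.get_or_insert_with(Scheduler::default);
        f(scheduler)
    })
}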
This commit improves threadshare timers' predictability
by making better use of the current time slice.
Added a dedicated timer BTreeMap for "after" timers (those
that are guaranteed to fire no sooner than the expected
instant) so as to avoid the previous workaround, which added
half the max throttling duration. These timers can now
be checked against the reactor processing instant.
Oneshot timers now only need to be polled as `Future`s,
while intervals are `Stream`s. This also reduces the size of
oneshot timers and makes the user call `next` on intervals.
Intervals can also implement `FusedStream`, which can help
when they are used in macros such as `select!`.
Also drop the `time` module, which was kept for
compatibility when the `executor` was migrated from being tokio
based to smol-like.
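A rough sketch of the "after" timer storage (hypothetical, heavily simplified): a BTreeMap keyed by deadline lets the reactor fire every timer whose deadline is not later than the instant at which it started the current iteration, without the former half-max-throttling margin.

use std::collections::BTreeMap;
use std::task::Waker;
use std::time::Instant;

#[derive(Default)]
struct AfterTimers {
    // Several timers may share a deadline, hence the Vec.
    by_deadline: BTreeMap<Instant, Vec<Waker>>,
}

impl AfterTimers {
    fn insert(&mut self, deadline: Instant, waker: Waker) {
        self.by_deadline.entry(deadline).or_default().push(waker);
    }

    // Fire everything due at `processing_instant`, i.e. the instant the
    // reactor started this iteration; later timers stay in the map and
    // are guaranteed to fire no sooner than their deadline.
    fn fire_due(&mut self, processing_instant: Instant) {
        while let Some(entry) = self.by_deadline.first_entry() {
            if *entry.key() > processing_instant {
                break;
            }
            for waker in entry.remove() {
                waker.wake();
            }
        }
    }
}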
Add a `tuning` feature which adds counters that help with performance
evaluation. The only counter added so far accumulates the duration a
Scheduler has been parked, which is a fairly accurate indication of the
Scheduler's CPU usage.
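For instance, something along these lines (hypothetical names, only gated behind the `tuning` feature in the real code):

use std::sync::atomic::{AtomicU64, Ordering};
use std::time::{Duration, Instant};

#[derive(Default)]
struct SchedulerStats {
    parked_nanos: AtomicU64,
}

impl SchedulerStats {
    // Wrap the park call so the time spent parked is accumulated.
    fn park(&self) {
        let start = Instant::now();
        std::thread::park();
        self.parked_nanos
            .fetch_add(start.elapsed().as_nanos() as u64, Ordering::Relaxed);
    }

    fn parked_duration(&self) -> Duration {
        Duration::from_nanos(self.parked_nanos.load(Ordering::Relaxed))
    }
}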
- Reworked buffer push.
- Reworked stats.
- Make the first elements' logs stand out. This makes it possible to
  follow what's going on with pipelines containing 1000s of
  elements.
- Actually handle EOS.
- Use more meaningful defaults.
- Allow building without the `clap` feature.
jitterbuffer tests sometimes crash on Windows CI. Activating logs
showed time values which are probably not expected in a regular
environment, but which can happen there. Adding extra robustness
to the `next_wakeup` computation seems to fix the problem, judging by
the few runs I triggered.
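The kind of extra robustness meant here, as a hypothetical helper (not the actual jitterbuffer code): never let an out-of-order or already-elapsed wakeup time produce a negative duration.

use std::time::{Duration, Instant};

fn next_wakeup_delay(next_wakeup: Option<Instant>, now: Instant) -> Duration {
    match next_wakeup {
        // saturating_duration_since returns a zero Duration instead of
        // panicking when `next_wakeup` is already in the past.
        Some(next_wakeup) => next_wakeup.saturating_duration_since(now),
        // Nothing scheduled: zero as a placeholder policy for this sketch.
        None => Duration::ZERO,
    }
}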
Instead of having a matrix of jobs, use a single job running
all the tests like the linux jobs do.
This improves cache hits, as most of the deps are shared
between the builds.
In gst-rs we build each crate on its own, since not all
crates share the same features and some conflict with each other.
However, that currently isn't the case in plugins-rs, so we can
build all the crates with the same flags and simplify
the script.
Closes #241