It is now guaranteed that each fragment is at most fragment-duration
long unless the one and only GOP of the fragment is longer than that.
The first (non-EOS) stream determines the duration of each fragment and
all other streams are drained to at most the fragment end timestamp
determined this way.
In addition, the next fragment's target time is now the end of the
previous fragment plus fragment-duration, instead of
first-fragment + N*fragment-duration
regardless of where earlier fragments were actually split.
That is, fmp4mux now uses the same strategy as splitmuxsink, and the
one required e.g. by HLS with regard to the target duration.
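For illustration, a minimal sketch of the two strategies in plain
nanosecond arithmetic (not the actual fmp4mux code; the function and
parameter names are made up for the example):

    // Old behaviour: fragments target a fixed grid derived from the
    // first fragment, regardless of where earlier fragments were
    // actually split.
    fn old_target(first_fragment_start: u64, n: u64, fragment_duration: u64) -> u64 {
        first_fragment_start + n * fragment_duration
    }

    // New behaviour: the next target is the end of the previous
    // fragment plus fragment-duration, as splitmuxsink does and as
    // HLS target-duration handling expects.
    fn new_target(previous_fragment_end: u64, fragment_duration: u64) -> u64 {
        previous_fragment_end + fragment_duration
    }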
Always try draining queued data first in the loop and only start
waiting if there's nothing to drain right now. Otherwise data that
needs to be drained right now would be left waiting, and nothing would
ever wake up the source pad task again.
Also make sure not to wait multiple times on the same gst::ClockId but
instead unset it after waiting on it if no new one was scheduled in
the meantime. Future waits on the same ClockId would return
immediately; instead we should wait on the condvar if no new ClockId
is available.
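A generic sketch of the resulting loop (WaitHandle stands in for
gst::ClockId and State for the actual element state; this is not the
element code itself):

    use std::collections::VecDeque;
    use std::sync::{Condvar, Mutex};

    struct WaitHandle;
    impl WaitHandle {
        fn wait(&self) { /* blocks until the scheduled clock time */ }
    }

    struct State {
        queue: VecDeque<u64>,            // data queued for draining
        wait_handle: Option<WaitHandle>, // scheduled clock wait, if any
        flushing: bool,
    }

    fn push_downstream(_item: u64) { /* push on the source pad */ }

    fn src_task_iteration(state_lock: &Mutex<State>, cond: &Condvar) {
        let mut state = state_lock.lock().unwrap();
        loop {
            // Always drain whatever can be drained right now before
            // waiting, otherwise nothing would ever wake up the source
            // pad task again.
            while let Some(item) = state.queue.pop_front() {
                drop(state);
                push_downstream(item);
                state = state_lock.lock().unwrap();
            }

            if state.flushing {
                return;
            }

            if let Some(handle) = state.wait_handle.take() {
                // Wait on the scheduled clock wait exactly once, without
                // holding the lock. Taking it out of the state means we
                // never wait on the same handle twice; a new one has to
                // be scheduled for the next clock wait.
                drop(state);
                handle.wait();
                state = state_lock.lock().unwrap();
            } else {
                // Nothing scheduled: wait on the condvar until new data
                // is queued or a new clock wait is scheduled.
                state = cond.wait(state).unwrap();
            }
        }
    }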
https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/698
raised the dav1d requirement by bumping the dav1d Rust crate used, but
didn't add a matching requirement at the meson level. Thus, with the
dav1d option set to automatic or enabled, configuration would pass
with an older dav1d version, but the compilation phase would then fail
with:
--- stderr
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: PkgConfig(`"pkg-config" "--libs" "--cflags" "dav1d" "dav1d >= 1.0.0"` did not exit successfully: exit status: 1
error: could not find system library 'dav1d' required by the 'dav1d-sys' crate
--- stderr
Package dependency requirement 'dav1d >= 1.0.0' could not be satisfied.
Package 'dav1d' has version '0.8.2', required version is '>= 1.0.0'
)', /home/user/.cargo/registry/src/github.com-1ecc6299db9ec823/dav1d-sys-0.5.0/build.rs:80:10
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
warning: build failed, waiting for other jobs to finish...
hlssink can be built by default because it has no dependencies.
tutorial and rsfile should not be built by default because they are
not very useful, and flavors should not be built by default because
it's very incomplete.
Follow-up to 23c07d3c
When we trigger an update pipeline, we replace the build jobs with
update-* jobs. In order for GitLab not to complain, add both jobs to
the needs array but mark them as optional, so that either of them can
be used depending on the use case.
When call_timeout is triggered, the request fails irrespective of the
retry setting: call_timeout defines the maximum time the request may
take, retries included.
This can be solved by either setting call_timeout to
retry * call_attempt_timeout or by not setting call_timeout at all.
As per the thread below, the call_attempt_timeout and retry settings
are enough.
https://github.com/awslabs/aws-sdk-rust/issues/558
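Purely as an illustration of the constraint (the values and variable
names are made up; the actual settings are the timeout/retry options
on the AWS SDK client):

    use std::time::Duration;

    fn main() {
        let max_attempts: u32 = 3;                          // retry setting
        let call_attempt_timeout = Duration::from_secs(10); // per-attempt timeout

        // Either leave call_timeout unset, or make it large enough to
        // cover every attempt, otherwise it cancels the request before
        // the retries had a chance to run.
        let call_timeout = call_attempt_timeout * max_attempts;
        println!("call_timeout must be at least {:?}", call_timeout);
    }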
A strong handle reference was held in the `block_on_priv` `Result`
handler in the thread for the `Scheduler::start` code path, which
kept the `Handler` strong count from dropping to 0 when it should,
so the shutdown request was never triggered.
Use an Arc<AtomicBool> instead of a oneshot channel for shutdown.
The main Future is always polled and never relies on a waker; a
`poll_fn` is cheap and does the job.
Unpark the scheduler after posting the shutdown request.
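A minimal sketch of this pattern (the names are illustrative, not the
actual threadshare scheduler types):

    use std::future::poll_fn;
    use std::sync::atomic::{AtomicBool, Ordering};
    use std::sync::Arc;
    use std::task::Poll;
    use std::thread;

    struct SchedulerHandle {
        must_shutdown: Arc<AtomicBool>,
        thread: thread::Thread,
    }

    impl SchedulerHandle {
        fn request_shutdown(&self) {
            self.must_shutdown.store(true, Ordering::SeqCst);
            // Unpark the scheduler so the request is noticed even if
            // the scheduler thread is currently parked waiting for work.
            self.thread.unpark();
        }
    }

    // The scheduler's main Future is polled on every iteration anyway,
    // so a plain `poll_fn` without any waker registration is enough to
    // observe the flag.
    fn shutdown_requested(flag: Arc<AtomicBool>) -> impl std::future::Future<Output = ()> {
        poll_fn(move |_cx| {
            if flag.load(Ordering::SeqCst) {
                Poll::Ready(())
            } else {
                Poll::Pending
            }
        })
    }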
Subtasks are used when the current async processing needs to execute
a `Future` via a sync function (e.g. a call to a C function).
In this case `Context::block_on` would block the whole `Context`,
leading to a deadlock.
The main use case for this is the `Pad{Src,Sink}` functions:
when we `PadSrc::push` and the peer pad is a `PadSink`, we want
`PadSrc::push` to complete after the async function on the
`PadSink` completes. In this case the `PadSink` async function
is added as a subtask of the current scheduler task and
`PadSrc::push` only returns once the subtask has been executed.
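As a rough sketch of the idea (not the actual runtime API; the names
are made up here):

    use std::cell::RefCell;
    use std::future::Future;
    use std::pin::Pin;

    thread_local! {
        // Futures that must complete before the calling sync function
        // returns, queued instead of being `block_on`'d (which would
        // block the whole Context).
        static SUBTASKS: RefCell<Vec<Pin<Box<dyn Future<Output = ()>>>>> =
            RefCell::new(Vec::new());
    }

    fn add_subtask(fut: impl Future<Output = ()> + 'static) {
        SUBTASKS.with(|q| q.borrow_mut().push(Box::pin(fut)));
    }

    async fn drain_subtasks() {
        // Everything queued so far is executed here; e.g. `PadSrc::push`
        // would drain before returning so that the peer `PadSink`'s
        // async function has completed.
        let pending = SUBTASKS.with(|q| std::mem::take(&mut *q.borrow_mut()));
        for fut in pending {
            fut.await;
        }
    }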
In `runtime::Task` (`Task` here is the execution Task with a
state machine, not a scheduler task), we used to spawn state
transition actions and the iteration loop (leading to a new
scheduler Task). At the time, it seemed convenient for the user
to automatically drain subtasks after a state transition action
or an iteration. The user wouldn't have to worry about this,
similar to the `Pad{Src,Sink}` case.
In the current implementation, the `Task` state machine operates
directly on the target `Context`. State transition actions and
the iteration loop are no longer spawned. It now seems unnecessary
to abstract the subtask draining away from the user. Either they
transitively use a mechanism such as `Pad{Src,Sink}`, which already
handles this automatically, or they add subtasks on purpose, in
which case they know better when the subtasks must be drained.