The internal (C) jitterbuffer needs to know about the configured
latency when calculating a PTS, as it otherwise may consider that
the packet is too late, trigger a resync and cause the element to
discard the packet altogether.
I could not identify when this was broken, but the net effect was
that in the current state, ts-jitterbuffer was discarding up to
half of all the incoming packets.
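As a rough illustration of the missing term (nanosecond values, not the actual C jitterbuffer code), the lateness check conceptually has to include the latency budget:

    // Illustrative sketch only: without adding the configured latency,
    // packets that are still within the latency budget look "too late",
    // trigger a resync and end up discarded.
    fn is_too_late(pts: u64, now: u64, latency: u64) -> bool {
        pts + latency < now
    }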
The previous version used Context::block_on_or_add_sub_task, which
spawns a full-fledged executor with a timer and an io Reactor for no
reason when we just need to wait for a Receiver or a JoinHandle.
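For illustration, waiting on a Receiver only needs a minimal executor; a sketch using the futures crate (not the actual threadshare code):

    use futures::channel::oneshot;

    // block_on drives this single future on the current thread;
    // no timer or io Reactor is set up.
    fn wait_for_ack(receiver: oneshot::Receiver<u32>) -> Option<u32> {
        futures::executor::block_on(receiver).ok()
    }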
When the iteration loop is throttling, the call to `abort` on the
`loop_abort_handle` returns immediately, but the actual `Future`
for the iteration loop is aborted only when the scheduler throttling
completes. State transitions which require the loop to be aborted and
which are serialized at the pipeline level can incur long delays.
This commit makes sure the Task Context's scheduler is woken up as soon
as the task loop is aborted.
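Conceptually (hypothetical names, modelling the throttling sleep as a parked thread; not the actual scheduler code), the change amounts to:

    use futures::future::AbortHandle;
    use std::thread::Thread;

    struct ThrottlingScheduler {
        // Handle to the thread running the task loop, which sleeps out
        // the throttling duration, e.g. via park_timeout.
        thread: Thread,
    }

    impl ThrottlingScheduler {
        fn wake_up(&self) {
            // Interrupt the throttling sleep so the abort is noticed now.
            self.thread.unpark();
        }
    }

    fn abort_task_loop(loop_abort_handle: &AbortHandle, scheduler: &ThrottlingScheduler) {
        // Marks the Future as aborted, but it is only dropped the next
        // time the scheduler polls it...
        loop_abort_handle.abort();
        // ...so wake the scheduler up instead of waiting for the
        // throttling period to elapse.
        scheduler.wake_up();
    }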
The previous version relied on a plain loop / match / break because
I experimented with different strategies. The while variant is better
for the final solution.
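For reference, the shape of the change (illustrative loop body):

    // Before: plain loop with an explicit match / break.
    fn drain_before(items: &mut Vec<u32>) {
        loop {
            match items.pop() {
                Some(item) => println!("{}", item),
                None => break,
            }
        }
    }

    // After: the same control flow expressed with `while let`.
    fn drain_after(items: &mut Vec<u32>) {
        while let Some(item) = items.pop() {
            println!("{}", item);
        }
    }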
The function `enter` is executed in a blocking way from the caller's
point of view. This means that we can guarantee that the provided
function and its output will outlive the underlying Scheduler Task
execution. This requires an unsafe call to
`async_task::spawn_unchecked`. See:
https://docs.rs/async-task/latest/async_task/fn.spawn_unchecked.html
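A minimal sketch of the pattern (assuming async-task 4.x and a trivial run queue standing in for the Scheduler; not the actual `enter` implementation):

    use std::collections::VecDeque;
    use std::sync::{Arc, Mutex};

    fn enter_sketch(value: &mut i32) {
        // Trivial single-threaded run queue standing in for the Scheduler.
        let queue: Arc<Mutex<VecDeque<async_task::Runnable>>> =
            Arc::new(Mutex::new(VecDeque::new()));

        // The future borrows from the caller's stack, so it is not 'static.
        let future = async {
            *value += 1;
        };

        // SAFETY: the task is driven to completion below, before this
        // function returns, so the borrowed `value` outlives the task.
        let (runnable, task) = unsafe {
            let queue = Arc::clone(&queue);
            async_task::spawn_unchecked(future, move |runnable| {
                queue.lock().unwrap().push_back(runnable);
            })
        };
        runnable.schedule();

        // This future completes on its first poll, so one pass is enough.
        let pending: Vec<_> = queue.lock().unwrap().drain(..).collect();
        for runnable in pending {
            runnable.run();
        }
        drop(task);
    }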
The threadshare executor was based on a modified version of tokio
which implemented the throttling strategy in the BasicScheduler.
The upstream tokio codebase has significantly diverged from what it
was when the throttling strategy was implemented, making it hard
to follow. This means that we can hardly get updates from the
upstream project, and when we cherry-pick fixes, we can't reflect
the state of the project in our fork's version. As a consequence,
tools such as cargo-deny can't check for RUSTSEC fixes in our fork.
The smol ecosystem makes it quite easy to implement and maintain
a custom async executor. This MR imports the smol parts that
need modifications to comply with the threadshare model and implements
a throttling executor in place of the tokio fork.
Tokio-specific networking types are replaced with Async wrappers
in the spirit of [smol-rs/async-io]. Note however that the Async
wrappers needed modifications in order to use the per-thread
Reactor model. This means that higher level upstream networking
crates such as [async-net] cannot be used with our Async
implementation.
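For context, the surface of these wrappers mirrors async-io; a usage sketch against upstream async_io::Async (the threadshare variant differs in registering with the per-thread Reactor):

    use std::net::UdpSocket;

    async fn receive_one(socket: &async_io::Async<UdpSocket>) -> std::io::Result<usize> {
        let mut buffer = [0u8; 1500];
        // Readiness-based IO: the Reactor wakes the task when the
        // underlying non-blocking socket becomes readable.
        let (len, _addr) = socket.recv_from(&mut buffer).await?;
        Ok(len)
    }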
Based on the example benchmark with ts-udpsrc, performance seems on par
with what we achieved using the tokio fork.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/issues/118
Related to https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/604
The time driver for the threadshare runtime assigns the timer
entries to the nearest throttling time frame so that the timer
fires as close as possible to the expected instant. This means
that the timer might fire before or after the expected instant
(at most `wait / 2` away).
In some cases, we don't want the timer to fire early. The new
function `delay_for_at_least` ensures that the timer is assigned
to the time frame after the expected instant.
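A sketch of the two assignment strategies, with `expires` and `wait` (the throttling period) as durations from the driver's start (illustrative, not the actual driver code):

    use std::time::Duration;

    // delay_for: round to the nearest time frame, so the timer can fire
    // up to wait / 2 before or after the requested instant.
    fn nearest_frame(expires: Duration, wait: Duration) -> u64 {
        let wait = wait.as_nanos() as u64;
        (expires.as_nanos() as u64 + wait / 2) / wait
    }

    // delay_for_at_least: round up to the next time frame, so the timer
    // never fires before the requested instant.
    fn frame_at_least(expires: Duration, wait: Duration) -> u64 {
        let wait = wait.as_nanos() as u64;
        (expires.as_nanos() as u64 + wait - 1) / wait
    }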
cargo-c will produce a pkg-config file, making it easier to statically
link plugins.
Also add 'static' features for plugins depending on GStreamer < 1.14, as
1.14 is the minimal version required to use static linking because of ABI
changes in core.
There is no way to dynamically ask Cargo to build a static or a dynamic
lib, so we have to build both and pick the one we care about when doing
the meson processing.
Fix #88
error[E0412]: cannot find type `Error` in this scope
--> generic\threadshare\src\socket.rs:237:56
|
237 | pub fn set_tos(&self, qos_dscp: i32) -> Result<(), Error> {
| ^^^^^ not found in this scope
Otherwise we can deadlock because of a lock order issue:
- render() is called with the sink pad handler lock and takes the
settings lock
- clearing clients takes the sink pad handler lock
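In other words, the two paths can take the same locks in opposite orders. A generic sketch of the hazard and one way around it (hypothetical types, not the actual fix):

    use std::sync::Mutex;

    struct Sink {
        handler: Mutex<Vec<String>>, // stand-in for the sink pad handler state
        settings: Mutex<String>,     // stand-in for the element settings
    }

    impl Sink {
        fn clear_clients(&self) {
            // Read the settings and drop that guard before locking the
            // handler, so both locks are never held in the opposite order
            // to the render() path (handler -> settings).
            let default_client = self.settings.lock().unwrap().clone();
            let mut clients = self.handler.lock().unwrap();
            clients.clear();
            clients.push(default_client);
        }
    }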
Only two uses of unsafely setting the pad functions are left:
- fallbacksrc for overriding the chain function of the proxy pad of a
ghost pad
- threadshare for overriding the pad functions after creation, which
probably needs some fixing at some point
Since those are using the clock for sync, they need to also
provide a clock for good measure. The reason is that even if
downstream elements provide a clock, we don't want to have
that clock selected because it might not be running yet.
Pad{Src,Sink}[Ref] delegate some functions to their respective
Pad{Src,Sink}Inner. Since they act as smart pointers, we can
safely implement the Deref trait to simplify the implementations.
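The pattern, in a simplified form (fields and methods are illustrative):

    use std::ops::Deref;
    use std::sync::Arc;

    struct PadSrcInner {
        name: String,
    }

    impl PadSrcInner {
        fn name(&self) -> &str {
            &self.name
        }
    }

    struct PadSrc(Arc<PadSrcInner>);

    impl Deref for PadSrc {
        type Target = PadSrcInner;

        fn deref(&self) -> &Self::Target {
            &self.0
        }
    }

    fn log_pad(pad: &PadSrc) {
        // Resolves to PadSrcInner::name() through auto-deref, without a
        // hand-written delegation method on PadSrc.
        println!("pad {}", pad.name());
    }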
StateMachines are spawned on a runtime::Context which uses a tokio
runtime. The StateMachine doesn't need all the features from tokio
such as the IO and timer drivers.
This commit makes use of a light-weight futures executor to spawn
the StateMachines.
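A sketch of the idea with the futures crate's LocalPool (illustrative; the executor actually used may differ):

    use futures::executor::LocalPool;
    use futures::task::LocalSpawnExt;

    fn run_state_machine() {
        let mut pool = LocalPool::new();

        pool.spawner()
            .spawn_local(async {
                // ... await the state machine transitions here ...
            })
            .expect("failed to spawn the state machine");

        // Drive the spawned futures on the current thread; no IO or timer
        // driver is involved.
        pool.run();
    }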
When initializing Pad functions in `Pad{Src,Sink}`, we downgrade the
`Pad{Src,Sink}` and upgrade it when necessary. This was implemented
to avoid reference cycles:
`gst::Pad` -> pad function -> `Pad{Src,Sink}` -> `gst::Pad`.
Since `Pad{Src,Sink}` reset the pad functions when dropping, there are
no cycles, so we can use an `Arc<Pad{Src,Sink}>` in the pad functions,
thus saving an `upgrade`.
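Simplified, the change looks like this (types reduced to the bare minimum):

    use std::sync::{Arc, Weak};

    struct PadSrc {
        name: String,
    }

    // Before: keep a Weak and upgrade it on every pad function invocation.
    fn make_handler_weak(pad: &Arc<PadSrc>) -> impl Fn() {
        let weak: Weak<PadSrc> = Arc::downgrade(pad);
        move || {
            if let Some(pad) = weak.upgrade() {
                println!("handling on {}", pad.name);
            }
        }
    }

    // After: capture the Arc directly; no upgrade is needed, and no
    // reference cycle remains because the pad functions are reset on drop.
    fn make_handler_strong(pad: &Arc<PadSrc>) -> impl Fn() {
        let pad = Arc::clone(pad);
        move || println!("handling on {}", pad.name)
    }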
This is similar to what the standard jitterbuffer does, and
is necessary to avoid deadlocks with the rtpbin session lock,
if the user calls into any API that requires us to take the
state lock at the wrong time (e.g. setting the latency property,
clearing the pt map).
They're actually used as a hackish intrusive list around GQueue so using
a different allocator is rather dangerous. Or rather, this is generally
dangerous and shouldn't be done but ...
Also make sure to free items manually in finalize() so that any
contained buffers can also be unreffed.
We don't want to run it every time a strong reference is dropped but
only at the very end. Otherwise dropping the socket stream will cause a
panic because the socket itself is still running.
This can only happen if the receiver is dropped, which only happens when
the task is stopped. As such, Flushing should be returned instead of
panicking.
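Schematically (simplified types, assuming an unbounded mpsc channel from the futures crate and the gstreamer crate imported as gst):

    use futures::channel::mpsc;
    use gstreamer as gst;

    fn queue_buffer(
        sender: &mpsc::UnboundedSender<gst::Buffer>,
        buffer: gst::Buffer,
    ) -> Result<gst::FlowSuccess, gst::FlowError> {
        // unbounded_send only fails if the receiver was dropped, which
        // only happens when the task is stopped: report Flushing rather
        // than unwrapping and panicking.
        sender
            .unbounded_send(buffer)
            .map_err(|_| gst::FlowError::Flushing)?;
        Ok(gst::FlowSuccess::Ok)
    }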
This should make navigating the tree a little easier to start
with, and we can then move on to allowing building specific groups of
plugins as well.
The plugins are moved into the following hierarchy:
audio
/ gst-plugin-audiofx
/ gst-plugin-claxon
/ gst-plugin-csound
/ gst-plugin-lewton
generic
/ gst-plugin-file
/ gst-plugin-sodium
/ gst-plugin-threadshare
net
/ gst-plugin-reqwest
/ gst-plugin-rusoto
utils
/ gst-plugin-fallbackswitch
/ gst-plugin-togglerecord
video
/ gst-plugin-cdg
/ gst-plugin-closedcaption
/ gst-plugin-dav1d
/ gst-plugin-flv
/ gst-plugin-gif
/ gst-plugin-rav1e
gst-plugin-tutorial
gst-plugin-version-helper