The scheduler is woken up when aborting a task loop, but not after
a triggering event is pushed. This can cause throttling to induce
long state transitions for pipelines with many streams.
Observed for Unprepare with:
GST_DEBUG=ts-benchmark:4 ../../target/debug/examples/benchmark 2000 ts-udpsrc 2 20 5000
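For context, a minimal sketch of the idea behind the fix, with
hypothetical names that are not the actual scheduler API:

    // Hypothetical sketch: wake the scheduler when a triggering
    // event is pushed, not only when a task loop is aborted.
    use std::collections::VecDeque;
    use std::sync::{Condvar, Mutex};

    struct TriggeringEvent;

    struct Scheduler {
        pending: Mutex<VecDeque<TriggeringEvent>>,
        wake_up: Condvar,
    }

    impl Scheduler {
        fn push_triggering_event(&self, event: TriggeringEvent) {
            self.pending.lock().unwrap().push_back(event);
            // Waking the scheduler here avoids waiting for the
            // current throttling period to elapse before the event
            // is handled.
            self.wake_up.notify_one();
        }
    }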
During MR !793, the socket configuration mechanism was changed to
use commands passed to the Task via a channel. This worked properly
for user changes via settings and signals; however, the default
clients setting was not used.
A simple solution could have been to send a command at initialization
to add the default clients, but it was deemed better to just wait
for Task preparation to configure the sockets based on the value of
settings.clients at that time, thus avoiding unnecessary successive
removals and additions of clients which could otherwise have happened
before preparation.
Of course, users can still add or remove clients as before, both
prior to and after Task preparation.
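Schematically, the chosen approach could look like this (a simplified
sketch with hypothetical names; the actual configuration happens
during ts-udpsink's Task preparation):

    // Simplified sketch, hypothetical names.
    use std::net::SocketAddr;

    struct Settings {
        clients: Vec<SocketAddr>,
    }

    struct UdpSinkTask {
        clients: Vec<SocketAddr>,
    }

    impl UdpSinkTask {
        // Configure the sockets from the value of settings.clients
        // at preparation time: changes that happened before
        // preparation don't each translate into a socket
        // (un)configuration.
        fn prepare(&mut self, settings: &Settings) {
            for client in &settings.clients {
                self.configure_client(client);
            }
            self.clients = settings.clients.clone();
        }

        fn configure_client(&mut self, _client: &SocketAddr) {
            // e.g. join a multicast group, set TTL, ...
        }
    }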
See also https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/793
This reverts commit 402500f79c.
The reverted commit introduced race conditions. It was intended to
solve an issue with some pipelines whose queues were filling up,
causing the streams to stall. It is reverted because this solution
is considered a workaround for another issue.
See discussion in:
https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/803
Due to the way runtime::Task is implemented, UdpSinkTask is available
as a mutable reference in all the TaskImpl functions, which offers
the opportunity to avoid using Mutexes.
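For illustration, a simplified sketch of that shape (not the exact
trait from the runtime):

    // Simplified sketch: every TaskImpl hook receives `&mut self`,
    // so the task state is plain owned data and no Mutex is needed.
    use std::collections::BTreeSet;
    use std::net::SocketAddr;

    struct UdpSinkTask {
        clients: BTreeSet<SocketAddr>, // owned directly, no locking
    }

    trait TaskImpl {
        fn prepare(&mut self) {}
        fn iterate(&mut self) {}
        fn unprepare(&mut self) {}
    }

    impl TaskImpl for UdpSinkTask {
        fn iterate(&mut self) {
            // State is mutated directly from within the hook.
            self.clients.insert("127.0.0.1:5004".parse().unwrap());
        }
    }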
Main highlights:
- Removed the back-and-forth calls between UdpSinkPadHandler
and UdpSinkTask.
- The UDP sockets are now part of UdpSinkTask, which is also in
charge of preparing them instead of leaving this to UdpSink.
This removed the need for Context::enter since
TaskImpl::prepare already operates under the target Context.
- In order for the clients list to be visible from the UdpSink,
this list was maintained by UdpSinkPadHandler, which was also
in charge of (un)configuring the UDP sockets. The sockets are
now part of UdpSinkTask, which is also in charge of the
(un)configuration. Add/remove/replace requests are passed as
commands to the UdpSinkTask via a channel (see the sketch
after this list).
- The clients list visible to the UdpSink is now part of the
Settings (it is also a read/write property). Since the actual
socket (un)configuration is handled asynchronously by the Task,
the clients list is updated by the add/remove/replace signals
and set_property("clients", ..). Should a problem occur during
the async (un)configuration, and only in this case, the
UdpSinkTask would update the clients list in Settings
accordingly so that it stays consistent with the internal state.
- The function clear_clients was renamed to replace_with_clients.
- clients is now based on a BTreeSet instead of a Vec. All the
managing functions perform some sort of lookup prior to updating
the collection. It also eases the implementation.
- Removed the UdpSinkPadHandler RwLock. Using flume channels, we
are able to clone the Receiver so it can be stored in UdpSink
and reused when preparing the UdpSinkTask.
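A simplified sketch of the command channel and BTreeSet-based
clients list described above (illustrative names; flume is the
crate actually used):

    // Simplified, illustrative sketch of the command channel.
    use std::collections::BTreeSet;
    use std::net::SocketAddr;

    enum Command {
        Add(SocketAddr),
        Remove(SocketAddr),
        Replace(BTreeSet<SocketAddr>),
    }

    struct UdpSink {
        cmd_sender: flume::Sender<Command>,
        // flume's Receiver is Clone, so it can be stored in UdpSink
        // and handed out again each time a UdpSinkTask is prepared.
        cmd_receiver: flume::Receiver<Command>,
    }

    impl UdpSink {
        fn new() -> Self {
            let (cmd_sender, cmd_receiver) = flume::unbounded();
            Self { cmd_sender, cmd_receiver }
        }

        // Called from the add/remove/replace signals and
        // set_property("clients", ..); the socket (un)configuration
        // itself happens asynchronously in UdpSinkTask.
        fn add_client(&self, addr: SocketAddr) {
            let _ = self.cmd_sender.send(Command::Add(addr));
        }
    }

    struct UdpSinkTask {
        cmd_receiver: flume::Receiver<Command>,
        clients: BTreeSet<SocketAddr>, // cheap lookups before updates
    }

    impl UdpSinkTask {
        fn process_pending_commands(&mut self) {
            while let Ok(cmd) = self.cmd_receiver.try_recv() {
                match cmd {
                    Command::Add(addr) => {
                        self.clients.insert(addr);
                    }
                    Command::Remove(addr) => {
                        self.clients.remove(&addr);
                    }
                    Command::Replace(new_clients) => {
                        self.clients = new_clients;
                    }
                }
            }
        }
    }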
The I/O handle was dropped prior to removing it from the reactor,
which caused `Poller::delete` to fail due to an invalid file
descriptor. This used to happen silently unless the same fd was
added again, e.g. by changing states in the pipeline as follows:
Null -> Playing -> Null -> Playing,
in which case `Poller::add` failed because the file already existed.
This commit makes sure the fd is removed from the reactor prior to
dropping the handle. In order to achieve this, a new task is spawned
on the `Context` on which the I/O was originally registered, allowing
it to access the proper `Reactor`. The I/O can then safely be dropped.
Because the I/O handle is moved into the spawned future, this
solution requires adding the `Send + 'static` bounds to the I/O
handle used within the `Async` wrapper. This doesn't appear too
restrictive for existing implementations, though. Other attempts
were considered, but they would cause deadlocks.
This new approach also solves a potential race condition where an
fd could be re-registered in a `Reactor` before it had been removed.
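Schematically, with stub types standing in for the actual runtime:

    // Hypothetical sketch; stub types stand in for the runtime.
    use std::future::Future;

    struct Context;
    struct AsyncIo; // stands in for the `Async` I/O wrapper

    impl Context {
        // Spawns a future on this Context's executor; the Reactor
        // the fd was registered with is only reachable from there.
        fn spawn(&self, fut: impl Future<Output = ()> + Send + 'static) {
            let _ = fut; // executor details elided
        }
    }

    impl AsyncIo {
        fn remove_from_reactor(&self) {
            // i.e. Poller::delete on a still-valid fd
        }
    }

    fn drop_io(origin_context: &Context, io: AsyncIo) {
        // Moving `io` into the spawned future is what requires the
        // `Send + 'static` bounds on the I/O handle.
        origin_context.spawn(async move {
            io.remove_from_reactor(); // fd is still valid here
            drop(io);                 // only now is the fd closed
        });
    }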
Durations might accumulate rounding errors and streams might not
actually start at the same time. For that reason, also start with
the stream that has the lowest timestamp.
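For illustration, with hypothetical types:

    // Illustrative sketch: pick the stream with the lowest
    // timestamp as the starting point instead of assuming all
    // streams start together.
    struct Stream {
        name: &'static str,
        first_ts: u64, // first buffer timestamp, in nanoseconds
    }

    fn starting_stream(streams: &[Stream]) -> Option<&Stream> {
        streams.iter().min_by_key(|s| s.first_ts)
    }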
The internal (C) jitterbuffer needs to know about the configured
latency when calculating a PTS, as it may otherwise consider the
packet too late, trigger a resync and cause the element to discard
the packet altogether.
I could not identify when this was broken, but the net effect was
that, in its current state, ts-jitterbuffer was discarding up to
half of all the incoming packets.
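Conceptually, the lateness check has to account for the configured
latency. A simplified, illustrative sketch (the actual logic lives
in the internal C jitterbuffer):

    // Simplified, illustrative arithmetic only.
    fn is_too_late(packet_pts: u64, next_output_pts: u64, latency: u64) -> bool {
        // Without adding the configured `latency`, packets that are
        // still within the buffering window would be flagged as
        // late, triggering a resync and getting discarded.
        packet_pts + latency < next_output_pts
    }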