Whenever a new sticky event arrives, we must make sure to forward it
downstream before the next buffer.
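A minimal sketch of that ordering, assuming gstreamer-rs, with a
hypothetical pending_sticky slot holding the newly arrived event:

  use gstreamer as gst;
  use gstreamer::prelude::*;

  // Hypothetical helper: forward a newly arrived sticky event before
  // pushing the buffer that follows it downstream.
  fn push_next(
      srcpad: &gst::Pad,
      pending_sticky: &mut Option<gst::Event>,
      buffer: gst::Buffer,
  ) -> Result<gst::FlowSuccess, gst::FlowError> {
      if let Some(event) = pending_sticky.take() {
          let _ = srcpad.push_event(event);
      }
      srcpad.push(buffer)
  }
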
Also make sure to unlock all our mutexes when they're not needed
anymore.
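For example, a guard can be dropped explicitly as soon as the
protected data is no longer needed, instead of holding the lock
across later work:

  use std::sync::Mutex;

  fn example(state: &Mutex<Vec<u8>>) {
      let guard = state.lock().unwrap();
      let len = guard.len();
      // Unlock before doing unrelated or potentially blocking work.
      drop(guard);
      println!("queued bytes: {}", len);
  }
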
Instead of directly forwarding the list, handle each buffer separately
for now. Previously we would forward the lists from any pad,
including inactive ones, directly downstream.
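A sketch of the per-buffer path, assuming gstreamer-rs's
BufferList::iter_owned(); handle_buffer() is a hypothetical stand-in
for the real per-buffer handling:

  use gstreamer as gst;

  fn handle_buffer(_buffer: gst::Buffer) -> Result<gst::FlowSuccess, gst::FlowError> {
      Ok(gst::FlowSuccess::Ok) // stub
  }

  // Handle each buffer of the list separately instead of pushing the
  // whole list downstream unconditionally.
  fn handle_buffer_list(list: gst::BufferList) -> Result<gst::FlowSuccess, gst::FlowError> {
      for buffer in list.iter_owned() {
          handle_buffer(buffer)?;
      }
      Ok(gst::FlowSuccess::Ok)
  }
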
GstElementClass.release_pad() may be called after the element
has transitioned back to NULL, so we need to keep our sink_pads
map around until then.
They should also not be affected by state transitions at all, but only
be removed once the user releases them or the element is destroyed, so
they need to live independently of the state.
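A sketch of that layout with a hypothetical element struct: the pads
map lives next to, not inside, the state that gets reset on NULL:

  use std::collections::HashMap;
  use std::sync::Mutex;
  use gstreamer as gst;
  use gstreamer::prelude::*;

  struct State {} // reset when going back to NULL

  struct MyElement {
      // Lives for the whole element lifetime, independent of state.
      sink_pads: Mutex<HashMap<String, gst::Pad>>,
      state: Mutex<Option<State>>,
  }

  impl MyElement {
      // May run after the transition back to NULL; the map is still
      // around, so the pad can be removed cleanly.
      fn release_pad(&self, pad: &gst::Pad) {
          self.sink_pads.lock().unwrap().remove(pad.name().as_str());
      }
  }
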
Replaces the RTPJitterBufferItem.get_buffer() method with an
into_buffer() version, ensuring that when we make the buffer mutable
we don't create a copy (unless necessary).
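A sketch of the pattern (the real item holds more fields): consuming
self moves the buffer out, and make_mut() afterwards only copies if
the buffer is still shared:

  use gstreamer as gst;

  struct RTPJitterBufferItem {
      buffer: gst::Buffer,
  }

  impl RTPJitterBufferItem {
      // Consuming self moves the buffer out; no clone, no copy.
      fn into_buffer(self) -> gst::Buffer {
          self.buffer
      }
  }

  fn example(item: RTPJitterBufferItem) {
      let mut buffer = item.into_buffer();
      // make_mut() only copies if the buffer is shared elsewhere.
      buffer.make_mut().set_pts(gst::ClockTime::ZERO);
  }
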
The biggest changes are:
- Many functions are not asynchronous anymore as it would be difficult
to run them correctly with our mix of synchronous C code and Rust
code.
- The pad context and its corresponding custom event are gone and
instead thread local storage and task local storage are used. This
makes it easier to correctly pass it through the different layers
of Rust and C code and back.
- Sink events now have separate functions for serialized and OOB
events; src events are handled correctly by default now by simply
forwarding them.
- Task::prepare() has a separate variant that takes a preparation
function as this is a very common task.
- The task loop function can signal via its return value whether it
wants to be called again or not (see the sketch after this list).
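For the last point, a minimal sketch of the idea, with hypothetical
names rather than the actual Task API:

  // Hypothetical driver: the loop function returns whether it wants
  // to run again, and the task stops as soon as it returns false.
  async fn run_task_loop<F, Fut>(mut loop_fn: F)
  where
      F: FnMut() -> Fut,
      Fut: std::future::Future<Output = bool>,
  {
      while loop_fn().await {
          // Keep iterating for as long as the loop function asks.
      }
  }
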
When a new payload type is encountered, we first check whether it
matches the caps received as an event, and only emit the
request-pt-map signal if it does not. This means we shouldn't
consider errors from the first call to parse_caps as fatal.
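A sketch of the resulting control flow, with hypothetical stubs
standing in for the real parse_caps and signal emission:

  use gstreamer as gst;

  fn parse_caps(_caps: &gst::Caps, _pt: u8) -> Result<(), ()> {
      Ok(()) // stub: errors if the caps don't describe this pt
  }

  fn request_pt_map(_pt: u8) -> Option<gst::Caps> {
      None // stub: emits the request-pt-map signal in the real code
  }

  fn caps_for_pt(pt: u8, event_caps: Option<&gst::Caps>) -> Option<gst::Caps> {
      if let Some(caps) = event_caps {
          // A parse failure here is not fatal: the signal below can
          // still provide usable caps for this payload type.
          if parse_caps(caps, pt).is_ok() {
              return Some(caps.clone());
          }
      }
      request_pt_map(pt)
  }
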
When emptying the pending queue, we need to break as soon as we
fail to push an item to the data queue; otherwise only the last
failed item is pushed back to the front of the pending queue, and
all the previously failed items in the loop get discarded.
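A sketch of the corrected drain loop, assuming a VecDeque pending
queue and a hypothetical data queue whose push() hands the item back
on failure:

  use std::collections::VecDeque;

  struct Item;

  struct DataQueue;

  impl DataQueue {
      // Hypothetical: hands the item back on failure so it isn't lost.
      fn push(&mut self, item: Item) -> Result<(), Item> {
          Err(item) // stub: pretend the queue is full
      }
  }

  fn drain_pending(pending: &mut VecDeque<Item>, data_queue: &mut DataQueue) {
      while let Some(item) = pending.pop_front() {
          if let Err(item) = data_queue.push(item) {
              // Put the failed item back at the front and stop right
              // away; otherwise only the last failed item would end
              // up back in the pending queue.
              pending.push_front(item);
              break;
          }
      }
  }
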
If the proxy sink has already queued buffers in the shared
context pending queue and is waiting for space in the data
queue, we should signal that space is available when creating
the data queue!
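A sketch of that wakeup, using a Condvar as a stand-in for the
shared context's actual signalling:

  use std::sync::{Condvar, Mutex};

  struct Shared {
      data_queue_ready: Mutex<bool>,
      cond: Condvar,
  }

  impl Shared {
      fn create_data_queue(&self) {
          *self.data_queue_ready.lock().unwrap() = true;
          // The sink may already be blocked waiting for space in the
          // data queue: wake it up now that the queue exists.
          self.cond.notify_all();
      }
  }
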
ProxySink previously blocked on receiving the source pad
of ProxySrc in its ReadyToPaused transition, which meant
ProxySrc had to transition to Ready at the same time.
The usual use case is for the source and sink to reside in
two separate pipelines, and such an arrangement easily led
to deadlocks, as exemplified by the new test case.
Instead we now maintain two more global hash maps holding weak
references to the per-context sink pads and src pads, and
forward events to those when needed.
As ProxySink may not have a source pad context to run
a future on when receiving FlushStart, gst::Element::call_async
is used instead, with a simple oneshot channel used to synchronize
flush start and flush stop handling.
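A sketch of that synchronization, assuming gstreamer-rs's
call_async() and a futures oneshot channel:

  use futures::channel::oneshot;
  use gstreamer as gst;
  use gstreamer::prelude::*;

  fn flush_start(element: &gst::Element) -> oneshot::Receiver<()> {
      let (sender, receiver) = oneshot::channel();
      // No pad context to spawn a future on here, so hand the work
      // over to call_async() and let flush-stop wait on the receiver.
      element.call_async(move |_element| {
          // ... stop the task, flush the queues ...
          let _ = sender.send(());
      });
      receiver
  }

  fn flush_stop(receiver: oneshot::Receiver<()>) {
      // Make sure flush-start handling finished before restarting.
      let _ = futures::executor::block_on(receiver);
  }
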
Returning 0 as the max latency in those sources is incorrect,
and may lead to sinks incorrectly complaining that not enough
buffering is available.
Reproduce with:
gst-launch-1.0 ts-udpsrc port=50000 address=127.0.0.1 \
caps="application/x-rtp, media=(string)audio, clock-rate=(int)8000, encoding-name=(string)PCMA, payload=(int)8" ! \
rtppcmadepay ! alawdec ! autoaudiosink
gst-launch-1.0 audiotestsrc do-timestamp=true samplesperbuffer=400 ! \
alawenc ! rtppcmapay max-ptime=50000000 min-ptime=50000000 ! \
udpsink host=127.0.0.1 port=50000
Logs:
Not enough buffering available for the processing deadline of 0:00:00.020000000,
add enough queues to buffer 0:00:00.020000000 additional data.
Shortening processing latency to 0:00:00.000000000.
This then causes glitches; there are many other ways for the
problem to manifest.
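A sketch of a live source answering the latency query without
capping max, assuming current gstreamer-rs (None means unbounded):

  use gstreamer as gst;

  fn answer_latency_query(query: &mut gst::QueryRef) {
      if let gst::QueryViewMut::Latency(q) = query.view_mut() {
          // Live source without internal buffering: min is whatever
          // latency we introduce, and max must not be 0, otherwise
          // downstream believes no extra buffering can be added.
          q.set(true, gst::ClockTime::ZERO, gst::ClockTime::NONE);
      }
  }
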
Not returning NoPreroll in that transition causes downstream sinks
to wait for preroll forever.
Reproduce with:
gst-launch-1.0 audiotestsrc ! udpsink host=127.0.0.1 port=50000
gst-launch-1.0 ts-udpsrc address=127.0.0.1 port=50000 ! fakesink
Ctrl+C in the receiver pipeline -> hangs forever
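A sketch of the relevant part of a change_state() implementation,
assuming gstreamer-rs types:

  use gstreamer as gst;

  fn live_source_result(
      transition: gst::StateChange,
      success: gst::StateChangeSuccess,
  ) -> gst::StateChangeSuccess {
      match transition {
          // Live source: no data flows in PAUSED, so tell downstream
          // sinks not to wait for preroll.
          gst::StateChange::ReadyToPaused | gst::StateChange::PlayingToPaused => {
              gst::StateChangeSuccess::NoPreroll
          }
          _ => success,
      }
  }
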
Now that we can `Context::enter`, it is no longer necessary to spawn
a `Future` to prepare the `UdpSocket` and still benefit from the
`Context`'s IO driver.
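The same pattern expressed with plain tokio, as a hedged
illustration (the threadshare Context wraps its own reactor, but the
principle is the same):

  use tokio::runtime::Runtime;

  fn prepare_socket(runtime: &Runtime) -> std::io::Result<tokio::net::UdpSocket> {
      let std_socket = std::net::UdpSocket::bind("127.0.0.1:0")?;
      std_socket.set_nonblocking(true)?;
      // Entering the runtime context lets from_std() register the
      // socket with this context's IO driver directly; no Future
      // needs to be spawned just to perform the preparation.
      let _guard = runtime.enter();
      tokio::net::UdpSocket::from_std(std_socket)
  }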