This change addresses a cosmetic issue with livekit, where the
connection quality indicator seen by other users shows bad quality
unless the track is added with a high quality layer. The details of the
layer submitted aren't significant for this purpose.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1443>
In the signaller clients and servers, the following sequence is used to close
the websocket (in the [send task]):
```rust
ws_sink.send(WsMessage::Close(None)).await?;
ws_sink.close().await?;
```
tungstenite's [`WebSocket::close()` doc] states:
> Calling this function is the same as calling `write(Message::Close(..))`
So we might think they are redundant and either could be used for this purpose
(`send()` calls `write()`, then `flush()`).
The result is actually a bit different: `write()` starts by checking the
state of the connection and [returns `SendAfterClosing`] if the socket is no
longer active, which is the case when a closing request has been received from
the peer (via a [call to `do_close()`]). Note that `do_close()` also enqueues a
`Close` frame.
This behaviour is visible from the server's logs:
```
1. tungstenite::protocol: Received close frame: None
2. tungstenite::protocol: Replying to close with Frame { header: FrameHeader { .., opcode: Control(Close), .. }, payload: [] }
3. gst_plugin_webrtc_signalling::server: Received message Ok(Close(None))
4. gst_plugin_webrtc_signalling::server: connection closed: None this_id=cb13892f-b4d5-4d59-95e2-b3873a7bd319
5. remove_peer{peer_id="cb13892f-b4d5-4d59-95e2-b3873a7bd319"}: gst_plugin_webrtc_signalling::server: close time.busy=285µs time.idle=55.5µs
6. async_tungstenite: websocket start_send error: WebSocket protocol error: Sending after closing is not allowed
```
1: The server's websocket receives the peer's `Close(None)`.
2: `do_close()` enqueues a `Close` frame.
3: The incoming `Close(None)` is handled by the server.
4 & 5: perform session closing.
6: `ws_sink.send(WsMessage::Close(None))` attempts to `write()` while the ws
is no longer active. The error causes an early return, which means that
the enqueued `Close` frame is not flushed.
Depending on the peer's shutdown sequence, this can result in the following
error, which can bubble up as a `Message` on the application's bus:
```
ERROR: from element /GstPipeline:pipeline0/GstWebRTCSrc:webrtcsrc0: GStreamer encountered a general stream error.
Additional debug info:
net/webrtc/src/webrtcsrc/imp.rs(625): gstrswebrtc::webrtcsrc::imp::BaseWebRTCSrc::connect_signaller::{{closure}}::{{closure}} (): /GstPipeline:pipeline0/GstWebRTCSrc:webrtcsrc0:
Signalling error: Error receiving: WebSocket protocol error: Connection reset without closing handshake
```
On the other hand, [`close()` ensures the ws is active] before attempting to
write a `Close` frame. If it's not, it only flushes the stream.
Thus, when we want to close the websocket and/or honor the closing
handshake in response to the peer's `Close` message, the `ws_sink.close()`
variant is preferable.
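As a minimal sketch, the closing sequence then boils down to:
```rust
// `close()` checks whether the websocket is still active: if the peer already
// initiated the closing handshake, it only flushes the queued `Close` reply,
// otherwise it writes a `Close` frame itself, then flushes.
ws_sink.close().await?;
```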
This can be verified in the resulting server's logs:
```
tungstenite::protocol: Received close frame: None
tungstenite::protocol: Replying to close with Frame { header: FrameHeader { is_final: true, rsv1: false, rsv2: false, rsv3: false, opcode: Control(Close), mask: None}, payload: [] }
gst_plugin_webrtc_signalling::server: Received message Ok(Close(None))
gst_plugin_webrtc_signalling::server: connection closed: None this_id=192ed7ff-3b9d-45c5-be66-872cbe67d190
remove_peer{peer_id="192ed7ff-3b9d-45c5-be66-872cbe67d190"}: gst_plugin_webrtc_signalling::server: close time.busy=22.7µs time.idle=37.4µs
tungstenite::protocol: Sending pong/close
```
We now get the notification `Sending pong/close` (the closing handshake) instead
of the `websocket start_send error` seen at step 6 with the previous variant.
The `Connection reset without closing handshake` error was not observed after
this change.
[send task]: 63b568f4a0/net/webrtc/signalling/src/server/mod.rs (L165)
[`WebSocket::close()` doc]: https://docs.rs/tungstenite/0.21.0/tungstenite/protocol/struct.WebSocket.html#method.close
[returns `SendAfterClosing`]: 85463b264e/src/protocol/mod.rs (L437)
[call to `do_close()`]: 85463b264e/src/protocol/mod.rs (L601)
[`close()` ensures the ws is active]: 85463b264e/src/protocol/mod.rs (L531)
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1435>
We were setting audio and video caps by default even when the user
might have requested only video or audio. This would then result
in a `Could not reuse transceiver` error from the webrtcbin.
Fix this by allowing the user to specify the audio or video caps as
None. This maintains the earlier behaviour for backward
compatibility while allowing the user to not request audio or video
as needed.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1433>
When streaming small amounts of data, using awss3sink might not be a
good idea, as we need to accumulate at least 5 MB of data for a
multipart upload (or we flush on EOS).
The alternative, while inefficient, is to do a complete PutObject of
_all_ the data periodically so as to not lose data in case of a pipeline
failure. This element makes a start on this idea by doing a PutObject
for every buffer.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1337>
webrtcbin will refuse pad requests for all sorts of reasons and should
be logging an error when doing so. Simply post an error message and let
the application deal with it; the reason for the refusal should
hopefully be available to the user in the logs.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1399>
Implement new signaller WhipServerSignaller
- an HTTP server using 'warp'
- handlers for the POST, OPTIONS, PATCH and DELETE requests
- fixed path `/whip/endpoint` as the URI
- fixed value 'whip-client' as the producer peer id
- fixed resource URL `/whip/resource/whip-client`
Derive whipserversrc element from BaseWebRTCSrc
- implement the constructed method of ObjectImpl to set the
  non-default signaller, i.e., WhipServerSignaller
- bind the stun-server and turn-servers properties to those on
  the Signaller
Connect to the 'webrtcbin-ready' signal in the constructor of WhipServerSignaller
- it will be emitted by webrtcsrc when the webrtcbin element is ready
- the closure for this signal will in turn connect to webrtcbin's ice-gathering-state
  and send the answer SDP via the channel
- the WhipServer will hold its HTTP response in the POST handler until this signal
  is received or a timeout occurs, whichever happens first (see the sketch below)
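A rough illustration of how the POST handler can hold the response (channel type
and timeout value are assumptions, not the actual implementation):
```rust
use std::time::Duration;
use tokio::{sync::oneshot, time::timeout};

// Hold the HTTP response until the answer SDP is received from the
// webrtcbin-ready closure, or give up after a timeout.
async fn wait_for_answer(rx: oneshot::Receiver<String>) -> Option<String> {
    match timeout(Duration::from_secs(10), rx).await {
        Ok(Ok(answer_sdp)) => Some(answer_sdp),
        // Timed out, or the sender side was dropped: answer with an error instead.
        _ => None,
    }
}
```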
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1284>
Add a new signal webrtcbin-ready in its place doing the same
thing, but which can be used for both consumers and producers.
Please note this change only applies to the consumer-added
signal on the signaller interface.
The consumer-added signal on webrtcsink is unchanged.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1284>
Also move processing from the capture thread to the streaming thread.
The NDI SDK can cause frame drops if not reading fast enough from it.
All frame processing is now handled inside the ndisrcdemux.
Also use a buffer pool for video if copying is necessary.
Additionally, make sure to use different stream ids in the stream-start
event for the audio and video pads.
This plugin now requires GStreamer 1.16 or newer.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1365>
If a packet starts with a leading fragment but we do not expect to
receive one, skip over it to the next OBU.
Not doing so would cause parsing of the middle of an OBU, which would
most likely fail and cause unnecessary warning messages about a
corrupted stream.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1367>
The "signaller" property used to be defined as MUTABLE_READY which meant that
the property was always set after `constructed()` was called.
Since `connect_signaller()` was called from `constructed()`, only the default
signaller was used.
This commit sets the "signaller" property as CONSTRUCT_ONLY. Using a builder,
this property will now be set before the call to `constructed()`.
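A hedged usage sketch, assuming a custom Signallable object `my_signaller` is at hand:
```rust
// With "signaller" being CONSTRUCT_ONLY, the builder passes the property at
// construction time, so it is already set when constructed() runs.
let src = gst::ElementFactory::make("webrtcsrc")
    .property("signaller", &my_signaller)
    .build()
    .expect("webrtcsrc should be available");
```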
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1324>
During `on_remote_description_set()` processing, the current session is removed
from the sessions `HashMap`. If an ICE candidate is submitted to `handle_ice()`
in the meantime, the session can't be found and the candidate is ignored.
This commit wraps the Session in the sessions `HashMap` so an entry is kept
while `on_remote_description_set()` is running. Incoming candidates received by
`handle_ice()` will be processed immediately or enqueued and handled when the
session is restored by `on_remote_description_set()`.
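An illustrative sketch of the idea, with hypothetical type names rather than the
actual implementation:
```rust
// While on_remote_description_set() runs, the map entry switches to a state
// that queues candidates instead of dropping them; they are drained when the
// session is put back.
struct Session { /* ... */ }

enum SessionEntry {
    // Session available: handle_ice() forwards candidates immediately.
    Active(Session),
    // Session temporarily taken out: enqueue candidates (sdp_m_line_index,
    // candidate string) until the session is restored.
    SettingRemoteDescription(Vec<(u32, String)>),
}
```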
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1325>
The NDI closed captions specifications [1] define a variation where metadata is
attached to the video frame. This requires the AFD buffer to be v210 encoded.
This commit applies this strategy.
Another difference from the previous version is that when an error occurs while
encoding or decoding a meta, the next metas are still tried instead of failing
immediately.
Receiving closed captions as a standalone metadata is kept for interoperability
purposes. In this case, metadata is also expected to be v210 encoded.
[1]: http://www.sienna-tv.com/ndi/ndiclosedcaptions.html
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1356>
@to-owned increases the refcount of the element, which prevents the object from proper destruction, as the initial refcount with ElementFactory::make is larger than 1.
Instead, use @watch to create a weak reference and unbind the closure automatically if the object gets destroyed.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1355>
* Simplify state/playlist management
* Fix a bug where a segment is not deleted if location contains a directory
  and playlist-root is unset
* Split the playlist update routine into two steps: adding the segment
  to the playlist, and writing the playlist
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1306>
This is required to take care of clock skew between
system time and pipeline time.
`track-pipeline-clock-for-pdt: true` means the UTC time is
sampled for the first segment only; for subsequent segments,
the time keeps advancing based on the pipeline clock, so the
difference between segment durations and PDT times will match.
`track-pipeline-clock-for-pdt: false` means the UTC time is
sampled for each segment. The system time may jump forward
or backward based on adjustments. If the application needs
to synchronize with external events, `false` is recommended.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1145>
- connect to `format-location-full`, which provides the first
  sample of the fragment; preserve the running-time of that
  first sample.
- on the fragment-close message, find the mapping of running-time
  to UTC time.
- on each subsequent fragment, calculate the offset of its
  running-time from the first fragment and add that offset to the
  base UTC time (see the sketch below).
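A minimal sketch of that arithmetic, using hypothetical names and std types
instead of the actual GStreamer ones:
```rust
use std::time::{Duration, SystemTime};

// PDT for a fragment = UTC time sampled for the first fragment, plus the
// running-time offset between this fragment and the first one.
fn fragment_pdt(
    base_utc: SystemTime,
    first_running_time: Duration,
    running_time: Duration,
) -> SystemTime {
    base_utc + (running_time - first_running_time)
}
```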
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1145>
`quick-xml::reader::Reader::trim_text(true)` doesn't remove white spaces and
tabs from XML text. Besides, for interoperability robustness we also need to
remove carriage returns and line feeds.
Also improve the default capacities for the `SmallVec`s.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1321>
Since ab1ec12698 ("webrtcsink: Add support for pre encoded streams"),
discovery pipelines for remote offers were no longer fed any buffers.
While some encoders could already produce caps with no input buffers,
others, such as x264enc, simply hung forever. This resulted in no answer
getting produced if for instance video-caps were constrained to H264.
Fix this by tracking discovery pipelines at the State rather than the
InputStream level, removing the useless distinction of Initial vs.
CodecSelection discoveries, and always feeding all the current
discovery pipelines with incoming buffers.
For reference, the issue here was that codec selection discoveries were
assigned to local clones of InputStreams, not tracked anywhere, and thus
not iterated for discoveries when queuing incoming buffers from the
chain function, as it only looked at the original instance of
InputStream's in state.streams.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1319>
This provides support for GstNavigation event handling in webrtcsrc so that
a GStreamer client can be used to remotely control a GStreamer server,
similar to how the web client is capable of controlling a wpesrc.
This is part of a larger set of patches that require more work on the
sinks and sources.
server: d3d11screencapturesrc ! webrtcsink enable-data-channel-navigation=true
client: webrtcsrc enable-data-channel-navigation=true ! d3d11videosink
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1281>
When starting a webrtcsrc-signaller client in Listener mode, only the producers
started after the client connection were advertised. All currently
running producers were ignored, unlike with the gstwebrtc-api behavior. This
commit lists all running producers when the Listener client connects
and advertises them through the "producer-added" signal.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1296>
Commit 08b6251a added a check to ensure only one canceller at a time for net/webrtc.
In `whipsink`, and in `whipwebrtcsink` since it picked up the same implementation, there
exists a bug around the use of the canceller. `whipsink` calls `wait_async` while passing
the canceller as an argument. The path `send_offer -> do_post -> parse_endpoint_response`
results in the canceller being replaced in each subsequent call to `wait_async`. Since the
`wait_async` call does not ensure a single canceller, the use of the canceller/abort with
the async calls was subtly broken. The same applies to `whepsrc`.
We really don't need to use `wait_async` inside `do_post` for any `await` calls. If the
root future, viz. `do_post` with `wait_async`, is aborted, the child futures will be taken
care of.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1290>
The "encoder-setup" signal must also be emitted for the encoders
used in discovery pipelines in order for the default settings to
be applied.
This otherwise meant that for instance the x264 encoder would
use a 60-frame latency, greatly delaying startup.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1289>
Spawning one task per message to send out instead of sending them out
sequentially from the one task used to poll the handler sometimes
resulted in peers receiving ICE candidates before SDP offers, triggering
hard to understand errors in the browser.
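A minimal sketch of the sequential variant, with hypothetical channel and sink types:
```rust
use futures::{Sink, SinkExt};

// Draining the outgoing messages from a single task preserves ordering, so an
// SDP offer is always sent before the ICE candidates that follow it.
async fn forward_messages<M, S>(
    mut rx: tokio::sync::mpsc::Receiver<M>,
    mut ws_sink: S,
) -> Result<(), S::Error>
where
    S: Sink<M> + Unpin,
{
    while let Some(msg) = rx.recv().await {
        ws_sink.send(msg).await?;
    }
    Ok(())
}
```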
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1236>
This is a first step where we try to replicate encoding conditions from
the input stream into the discovery pipeline. A second patch will
implement using input buffers in the discovery pipelines.
This moves discovery to using input buffers directly. Instead of trying
to replicate the buffers `webrtcsink` is getting as input with a testsrc,
directly run discovery based on the real buffers. This way we are sure
we work with the exact right stream type and we don't need encoders to
support encoding test stream inputs.
We use the same logic for both encoded and raw input to avoid having
several code paths, which makes it all more correct in any case.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1194>
In `webrtcsink`, we terminate a session by setting the session's pipeline to
`Null` like this:
```rust
pipeline.call_async(|pipeline| {
[...]
pipeline.set_state(gst::State::Null);
[...]
// the following cvar is awaited in unprepare()
cvar.notify_one();
});
```
However, `pipeline.call_async` keeps a ref on the pipeline until it's done,
which means the `cvar` is notified before `pipeline` is actually 'disposed',
which happens in a different thread than `unprepare`'s. [`gst_rtp_bin_dispose`]
releases some resources when the pipeline is unrefed. In some cases, those
resources are actually released after the main thread has returned, leading to
various issues.
This commit uses tokio runtime's `spawn_blocking` instead, which allows owning
and disposing of the pipeline before the `cvar` is notified.
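A minimal sketch of the `spawn_blocking` variant (hypothetical helper; the
runtime handle and condvar wiring are assumptions):
```rust
use std::sync::{Arc, Condvar, Mutex};

use gst::prelude::*;

// The pipeline is moved into the blocking task, so it is set to Null and then
// dropped (disposed) there, before the condvar awaited in unprepare() is
// notified.
fn finalize_pipeline(
    runtime: &tokio::runtime::Handle,
    pipeline: gst::Pipeline,
    finished: Arc<(Mutex<bool>, Condvar)>,
) {
    runtime.spawn_blocking(move || {
        let _ = pipeline.set_state(gst::State::Null);
        drop(pipeline);

        let (done, cvar) = &*finished;
        *done.lock().unwrap() = true;
        cvar.notify_one();
    });
}
```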
[`gst_rtp_bin_dispose`]: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/blob/main/subprojects/gst-plugins-good/gst/rtpmanager/gstrtpbin.c#L3108
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1225>
This signal is emitted as soon as the pipeline for each consumer
is created, and can be used by applications that require a greater
level of control over webrtcsink's internals.
An example is also provided to demonstrate usage.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1220>
Adapt a commit [1] that was introduced as part of the forward port of the MR
'add signal "request-encoded-filter"' [2].
The deadlock that said commit was fixing doesn't happen on the main branch due to
changes in the element design: the Sessions are no longer aborted with the
element `State` lock held. However, we want to ensure the stats collection task
is terminated when the `webrtcbin` element returns from the Ready to Null
transition, meaning that the related resources are released.
[1]: gstreamer/gst-plugins-rs!1176 (0e6b9df9)
[2]: gstreamer/gst-plugins-rs!1176
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1222>
First off, we just created the session, so we know stats_sigid is None
at this point.
Second, don't first assign the result of connecting on-new-ssrc to the
field and then the result of connecting twcc-stats; that simply doesn't
make sense.
Finally, actually check that stats_sigid *is* None before connecting
twcc-stats; as I understand it, this must have been the original
intention / behavior.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1217>
`State::finalize_session()` asynchronously sets the Session pipeline to Null.
In some cases, a session's `webrtcbin` could terminate its transition to Null
after `webrtcsink` had reached Null.
This commit adds a set of `finalizing_sessions`. When the finalization process
starts, the session is added to the set. After `webrtcbin` has reached the Null
state, the session is removed from the set and a condvar is notified.
In `unprepare`, `webrtcsink` loops until the `finalizing_sessions` set is
empty, waiting for the condvar to be notified when it's not.
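A minimal sketch of that wait loop, with hypothetical names:
```rust
use std::collections::HashSet;
use std::sync::{Condvar, Mutex};

// Block until every finalizing session has completed its transition to Null.
fn wait_for_finalizing_sessions(finalizing: &(Mutex<HashSet<String>>, Condvar)) {
    let (set, cvar) = finalizing;
    let mut guard = set.lock().unwrap();
    while !guard.is_empty() {
        // Notified each time a session's webrtcbin reaches Null and the
        // session is removed from the set.
        guard = cvar.wait(guard).unwrap();
    }
}
```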
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1221>
In some rare cases, the webrtc-test entered a deadlock while executing
`WebRTCSink::unprepare`. Attaching gdb to a blocked instance showed:
* `gstrswebrtc::signaller::imp::Signaller::stop()` parked, waiting for a
  `Condvar` in `Signaller::stop()`. This was most likely waiting for the
  receive task to complete while it was locked in `element.end_session()`.
  This code path is triggered from `unprepare` with the `State` `Mutex` locked.
* `webrtcsink::imp::WebRtcSink::process_stats` waiting for a contended `Mutex`,
  which is also the `State` `Mutex`. This prevented completion of the signal
  `gst_webrtc_bin_get_stats`.
This commit aborts the task in charge of periodically collecting stats and
ensures any remaining iteration completes before requesting the Signaller to
stop.
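A minimal sketch of that ordering (the task handle name is hypothetical):
```rust
use tokio::task::JoinHandle;

// Abort the periodic stats-collection task and wait until it has actually
// terminated (any in-flight iteration included) before stopping the signaller,
// so the State mutex can no longer be contended by process_stats().
async fn stop_stats_task(stats_task: JoinHandle<()>) {
    stats_task.abort();
    let _ = stats_task.await;
}
```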
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1202>
Prior to this commit, we were sending over words concatenated together
with no separators, for instance "Idon'twanttobeanemperor".
The translation service seems clever enough to translate the contents
anyway, but there is no reason to make its task harder than necessary,
and it didn't re-add separators when the target language was the same as
the source language, which resulted in less than ideal output.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1171>
Session ending is bidirectional: the signaller can tell the sink that a
session was ended, and the sink can tell the signaller to end a session.
As such, two signals are needed. Before this patch, the second case was
not working: in essence the sink was telling itself that a session was
ended, and obviously failing to even find it when trying to end it again.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1167>
In order to support the use case of an external user providing their own
signalling mechanism, we want the signals to be used and, only if nothing
is connected, fall back to the default handling. Calling the interface
vtable directly will bypass the signal emission entirely.
Also ensure that the signals are defined properly for this case, i.e.:
1. Signals that the application/external code is expected to emit are
   marked as action signals.
2. Add accumulators to avoid calling the default class handler if
   another signal handler is connected.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1141>
This pattern is used for subclassing and calling parent class/interface functions.
However, that is not useful for the signaller object.
1. The signals are the API contract and should instead be used by
   webrtcsrc/sink to ask the outside for, or provide it with, information.
2. The default case (no signal attached) is instead handled by default class
   handlers that call the relevant Rust trait directly. No parent
   (GObject) vfuncs necessary.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1141>
This subproject adds a high-level web API compatible with GStreamer
webrtcsrc and webrtcsink elements and the corresponding signaling
server. It allows perfect bidirectional communication between the HTML5
WebRTC API and native GStreamer.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/946>
In this context, the bitrate variable is for all encoders, but the
max_bitrate field is per encoder. To calculate a proper FEC ratio, we
need to scale max_bitrate to the number of encoders.
Also clamp the fec-percentage that we set on the transceiver for extra
safety.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1151>
* A queue dedicated to transcript items not intended for translation.
* A queue dedicated to transcript items intended for translation. The items are
  enqueued after a separator is detected or when translate-lookahead is reached.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1137>
This commit adds an optional experimental translation tokenization feature.
It can be activated using the `translation_src_%u` pads property
`tokenization-method`. For the moment, the feature is deactivated by default.
The Translate ws accepts '<span></span>' tags in the input and adds matching
tags in the output. When an 'id' is also provided as an attribute of the
'span', the matching output tag also uses this 'id'.
In the context of closed captions, the 'id's are of little use. However, we can
take advantage of the spans in the output to identify translation chunks, which
more or less reflect the rhythm of the input transcript.
This commit adds simple spans (no 'id') to the input Transcript Items and
parses the resulting spans in the translated output, assigning the timestamps
and durations sequentially from the input Transcript Items. Edge cases, such as
the absence of spans or nested spans, were observed and are handled here.
Similarly, mismatches between the number of input and output items are taken
care of by some sort of reconciliation.
Note that this is still experimental and requires further testing.
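An illustrative sketch of the span tagging on the input side (hypothetical helper):
```rust
// Wrap each input Transcript Item in a plain <span> (no 'id') so the matching
// spans in the translated output can be split back into timed chunks.
fn tag_transcript_items(items: &[&str]) -> String {
    items
        .iter()
        .map(|text| format!("<span>{text}</span>"))
        .collect::<Vec<_>>()
        .join(" ")
}
```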
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1109>
This commit adds an optional transcript translation feature implemented as
request src Pads.
When requesting a src Pad, the user can specify the translation language code
using the Pad property 'language-code'.
The following properties are defined on the Element:
- 'transcribe-latency': formerly 'latency', defines the expected latency for
the Transcribe webservice.
- 'translate-latency': defines the expected latency for the Translate
webservice.
- 'transcript-lookahead': maximum transcript duration to send to translation
when a transcript is hitting its deadline and no punctuation was found.
When the input and output languages are the same, only the 'transcribe-latency'
is used for the Pad. Otherwise, the resulting latency is the sum of
'transcribe-latency' and 'translate-latency'.
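A sketch of the resulting per-pad latency as described above (hypothetical helper):
```rust
// Same input and output languages: only the transcribe latency applies.
// Otherwise the pad latency is the sum of both latencies.
fn pad_latency(
    transcribe_latency: gst::ClockTime,
    translate_latency: gst::ClockTime,
    needs_translation: bool,
) -> gst::ClockTime {
    if needs_translation {
        transcribe_latency + translate_latency
    } else {
        transcribe_latency
    }
}
```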
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1109>
This helps gather together the details related to the `TranscriberLoop`.
One difference from the previous implementation is that the ws `Client` is
built each time the loop is started instead of being reused. With the new
approach, we don't keep the connection open after EOS and we should be more
resilient to connection failures.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1104>
Instead of sending transcription events to the src pad loop, this commit
enqueues the transcribed buffers immediately in the ws loop, then notifies
the src pad loop. The src pad loop is only in charge of dequeuing the buffers.
This should help with upcoming evolutions.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1104>
If creating a playlist or fragment stream fails (disk is full, the
directory is removed, ...), we will currently crash because the signal
handler expects a non-None GIOStream. The actual callback is allowed to
return None values and we handle this in the caller, so let's not have
this restriction on the signal handler.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1093>
For the uploaded object, the default content-type is set to
binary/octet-stream, which is correct.
Metadata cannot be used to set content-type and content-disposition, as
setting metadata adds an x-amz-meta prefix to the key,
e.g. setting the metadata "content-type=video/mp4" actually sets the key as
x-amz-meta-content-type. So these have to be separate properties.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1085>
Commit ad3f1cf fixed the name of the hlssink child element to be the same
for hlssink2 and hlssink3. However, we rely on the element name to decide
whether to return a boolean (hlssink3) or None (hlssink2) as the return
value of the delete-fragment closure.
Fix this by using the factory name instead of the element name.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1076>
Simplifies state tracking and potentially reduces latency as it's not
necessary to wait until all fragments of an OBU are received.
The last OBU of a TU is marked with the marker flag to allow parsers to
detect this without first seeing the beginning of the next TU.
Also use a simple `Vec` for collecting complete OBUs instead of a
`gst_base::Adapter` as this reduces the number of allocations.
And also handle invalid packets a little bit more gracefully.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/issues/244
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1072>
We dispose of consumer pipelines asynchronously, potentially after the
session objects have been disposed of.
As session objects are the owners of the cc element, it is entirely
possible for the bwe-request signal to get emitted after cc has been
disposed of, as the closure only takes a weak reference to it.
Fix by simply checking whether cc is None.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1044>
Without this auto-pluggers such as decodebin or parsebin will be unable to
process AV1 RTP payloads.
Tested with: `videotestsrc num-buffers=50 ! videoconvert ! av1enc ! av1parse ! rtpav1pay ! queue ! decodebin3 ! videoconvert ! queue ! autovideosink`
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1034>
Right now the code manually pieces together the components
in a String for efficiency. When credentials contain special
characters this can result in invalid URLs, so do it the proper
way (with Url::parse + format) to make sure components are escaped
as needed.
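A hedged sketch of the idea (scheme, host and function name are hypothetical,
not the element's actual code):
```rust
use url::Url;

// Let the `url` crate percent-encode the credentials instead of concatenating
// them into a String by hand.
fn build_endpoint(host: &str, user: &str, password: &str) -> Result<Url, url::ParseError> {
    let mut url = Url::parse(&format!("https://{host}/"))?;
    let _ = url.set_username(user);
    let _ = url.set_password(Some(password));
    Ok(url)
}
```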
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/949>
WHIP endpoint providers like Cloudflare do not support Trickle ICE
and need candidates to be sent along with the initial offer. Instead
of sending the offer in the create-offer promise, send it once the ICE
candidates have been gathered.
While at it, add properties to set the STUN and TURN servers along with the
ICE transport policy, as at least when testing, the Cloudflare WHIP
endpoint seems unreachable without them. This has also been observed
with the Cloudflare-provided demos.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/949>