Instead of exposing all id properties as strings, we now have two
signaller implementations exposing those properties using their actual
type. This API is more natural and saves the element and the application
from doing conversions when using numerical ids (Janus's default).
I also removed the 'joined-id' property as it's actually the same id as
'feed-id'. I think it would be better to have a 'janus-state' property or
something like that for applications that want to know when the room has
been joined.
This id is also no longer generated by the element by default, as Janus
will take care of generating one if not provided.
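For illustration, a hedged sketch of what this looks like from the application
side (the element name and the `room-id` property are assumptions for the
example; only `feed-id` is mentioned above):
```rust
use gst::prelude::*;

// Sketch only: element and property names are assumptions, the point being
// that numerical ids are now set with their actual type instead of a string.
fn configure_janus_sink() -> Result<gst::Element, gst::glib::BoolError> {
    let sink = gst::ElementFactory::make("janusvrwebrtcsink").build()?;
    sink.set_property("room-id", 1234u64);
    // `feed-id` can be left unset: Janus generates one if it is not provided.
    Ok(sink)
}
```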
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1486>
Without sending a Leave request to the server before disconnecting, the
disconnected client will appear present and stuck in the room for a little
while until the server removes it due to inactivity.
After this change, the disconnecting client will immediately leave the room.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1482>
It may be necessary for some signalling clients, but the source element
doesn't need to depend on it.
Also, the first argument to the request-encoded-filter GObject signal will
fall back to the pad's MSID when the value isn't available from the
signalling client.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1477>
Don't use connect(), since that is incompatible with multicast.
Instead, drop received packets that are from senders we do not want.
Also set multicast loopback = false so we don't receive RTCP RRs from
ourselves and interpret them as RTCP SRs.
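A minimal sketch of the approach (addresses, ports and buffer handling are
illustrative, not the element's actual code):
```rust
use std::net::{Ipv4Addr, SocketAddr, UdpSocket};

// Join the group without connect(), disable multicast loopback, and drop
// packets coming from senders we are not interested in.
fn recv_filtered(expected_sender: SocketAddr) -> std::io::Result<()> {
    let socket = UdpSocket::bind(("0.0.0.0", 5004))?;
    socket.join_multicast_v4(&Ipv4Addr::new(239, 0, 0, 1), &Ipv4Addr::UNSPECIFIED)?;
    // Don't get our own RTCP packets delivered back to us.
    socket.set_multicast_loop_v4(false)?;

    let mut buf = [0u8; 1500];
    loop {
        let (len, from) = socket.recv_from(&mut buf)?;
        if from != expected_sender {
            // Not the sender we negotiated with: ignore the packet.
            continue;
        }
        // ... hand buf[..len] to RTP/RTCP handling ...
        let _ = &buf[..len];
    }
}
```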
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1425>
If we only send a single Transport in the Transports header, then the
server is allowed to omit it in the response. This has some strange
consequences for UDP transport: specifically, we have no idea what
addr/port we will get the packets from.
In those cases, we connect() on the socket when we receive the first
packet, so we can send RTCP RRs, and also so we can ensure that we
ignore data from other addresses.
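An illustrative sketch of the connect-on-first-packet idea (not the element's
actual code):
```rust
use std::net::UdpSocket;

// Once the first packet reveals the server's addr/port, connect() the socket
// so we can send RTCP RRs back to it and the kernel discards traffic coming
// from any other address.
fn lock_to_first_sender(socket: &UdpSocket) -> std::io::Result<()> {
    let mut buf = [0u8; 1500];
    let (_len, from) = socket.recv_from(&mut buf)?;
    socket.connect(from)?;
    // From now on recv() only yields packets from `from`, and send() goes
    // back to the same addr/port.
    Ok(())
}
```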
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1425>
GST_PLUGIN_FEATURE_RANK=rtspsrc2:1 gst-play-1.0 [URI]
Features:
* Live streaming N audio and N video
- With RTCP-based A/V sync
* Lower transports: TCP, UDP, UDP-Multicast
* RTP, RTCP SR, RTCP RR
* OPTIONS DESCRIBE SETUP PLAY TEARDOWN
* Custom UDP socket management, does not use udpsrc/udpsink
* Supports both rtpbin and the rtpbin2 Rust rewrite
- Set USE_RTPBIN2=1 to use rtpbin2 (needs other MRs)
* Properties:
- protocols selection and priority (NEW!)
- location supports rtsp[ut]://
- port-start instead of port-range
Co-Authored-by: Tim-Philipp Müller <tim@centricular.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1425>
This change addresses a cosmetic issue with livekit, where the
connection quality indicator seen by other users shows bad quality
unless the track is added with a high quality layer. The details of the
layer submitted aren't significant for this purpose.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1443>
In the signaller clients and servers, the following sequence is used to close
the websocket (in the [send task]):
```rust
ws_sink.send(WsMessage::Close(None)).await?;
ws_sink.close().await?;
```
tungstenite's [`WebSocket::close()` doc] states:
> Calling this function is the same as calling `write(Message::Close(..))`
So we might think they are redundant and either could be used for this purpose
(`send()` calls `write()`, then `flush()`).
The result is actually a bit different: `write()` starts by checking the
state of the connection and [returns `SendAfterClosing`] if the socket is no
longer active, which is the case when a closing request has been received from
the peer via a [call to `do_close()`]. Note that `do_close()` also enqueues a
`Close` frame.
This behaviour is visible from the server's logs:
```
1. tungstenite::protocol: Received close frame: None
2. tungstenite::protocol: Replying to close with Frame { header: FrameHeader { .., opcode: Control(Close), .. }, payload: [] }
3. gst_plugin_webrtc_signalling::server: Received message Ok(Close(None))
4. gst_plugin_webrtc_signalling::server: connection closed: None this_id=cb13892f-b4d5-4d59-95e2-b3873a7bd319
5. remove_peer{peer_id="cb13892f-b4d5-4d59-95e2-b3873a7bd319"}: gst_plugin_webrtc_signalling::server: close time.busy=285µs time.idle=55.5µs
6. async_tungstenite: websocket start_send error: WebSocket protocol error: Sending after closing is not allowed
```
1: The server's websocket receives the peer's `Close(None)`.
2: `do_close()` enqueues a `Close` frame.
3: The incoming `Close(None)` is handled by the server.
4 & 5: perform session closing.
6: `ws_sink.send(WsMessage::Close(None))` attempts to `write()` while the ws
is no longer active. The error causes an early return, which means that
the enqueued `Close` frame is not flushed.
Depending on the peer's shutdown sequence, this can result in the following
error, which can bubble up as a `Message` on the application's bus:
```
ERROR: from element /GstPipeline:pipeline0/GstWebRTCSrc:webrtcsrc0: GStreamer encountered a general stream error.
Additional debug info:
net/webrtc/src/webrtcsrc/imp.rs(625): gstrswebrtc::webrtcsrc::imp::BaseWebRTCSrc::connect_signaller::{{closure}}::{{closure}} (): /GstPipeline:pipeline0/GstWebRTCSrc:webrtcsrc0:
Signalling error: Error receiving: WebSocket protocol error: Connection reset without closing handshake
```
On the other hand, [`close()` ensures the ws is active] before attempting to
write a `Close` frame. If it's not, it only flushes the stream.
Thus, when we want to be able to close the websocket and/or to honor the closing
handshake in response to the peer's `Close` message, the `ws_sink.close()`
variant is preferable.
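For illustration, a minimal sketch of the preferred shutdown, assuming a
futures `Sink` of tungstenite messages as in the send task:
```rust
use async_tungstenite::tungstenite::Message as WsMessage;
use futures::{Sink, SinkExt};

// `close()` does the right thing in both cases: if the ws is still active it
// queues a `Close` frame, otherwise it only flushes the stream, which sends
// the `Close` frame already enqueued by `do_close()`.
async fn shutdown<S>(ws_sink: &mut S) -> Result<(), S::Error>
where
    S: Sink<WsMessage> + Unpin,
{
    ws_sink.close().await
}
```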
This can be verified in the resulting server's logs:
```
tungstenite::protocol: Received close frame: None
tungstenite::protocol: Replying to close with Frame { header: FrameHeader { is_final: true, rsv1: false, rsv2: false, rsv3: false, opcode: Control(Close), mask: None}, payload: [] }
gst_plugin_webrtc_signalling::server: Received message Ok(Close(None))
gst_plugin_webrtc_signalling::server: connection closed: None this_id=192ed7ff-3b9d-45c5-be66-872cbe67d190
remove_peer{peer_id="192ed7ff-3b9d-45c5-be66-872cbe67d190"}: gst_plugin_webrtc_signalling::server: close time.busy=22.7µs time.idle=37.4µs
tungstenite::protocol: Sending pong/close
```
We now get the notification `Sending pong/close` (the closing handshake) instead
of the `websocket start_send error` from step 6 with the previous variant.
The `Connection reset without closing handshake` error was not observed after
this change.
[send task]: 63b568f4a0/net/webrtc/signalling/src/server/mod.rs (L165)
[`WebSocket::close()` doc]: https://docs.rs/tungstenite/0.21.0/tungstenite/protocol/struct.WebSocket.html#method.close
[returns `SendAfterClosing`]: 85463b264e/src/protocol/mod.rs (L437)
[call to `do_close()`]: 85463b264e/src/protocol/mod.rs (L601)
[`close()` ensures the ws is active]: 85463b264e/src/protocol/mod.rs (L531)
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1435>
We were setting audio and video caps by default even when the user
might have requested only video or audio. This would then result
in a `Could not reuse transceiver` error from the webrtcbin.
Fix this by allowing the user to specify the audio or video caps as
None. This maintains the earlier behaviour for backward compatibility
while allowing the user to not request audio or video as needed.
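For illustration, a hedged sketch of requesting a video-only session (the
`audio-caps` property name is an assumption):
```rust
use gst::prelude::*;

// Clearing the audio caps means no audio is requested for this source;
// leaving both properties at their defaults keeps the earlier behaviour.
fn request_video_only(webrtcsrc: &gst::Element) {
    webrtcsrc.set_property("audio-caps", None::<gst::Caps>);
}
```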
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1433>
When streaming small amounts of data, using awss3sink might not be a
good idea, as we need to accumulate at least 5 MB of data for a
multipart upload (or we flush on EOS).
The alternative, while inefficient, is to do a complete PutObject of
_all_ the data periodically so as to not lose data in case of a pipeline
failure. This element makes a start on this idea by doing a PutObject
for every buffer.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1337>
webrtcbin will refuse pad requests for all sorts of reasons, and should
be logging an error when doing so. Simply post an error message and let
the application deal with it; the reason for the refusal should
hopefully be available to the user in the logs.
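A hedged sketch of the pattern (not the element's actual code):
```rust
use gst::prelude::*;

// If webrtcbin refuses the pad request, post an error message on the bus
// instead of failing hard, and let the application decide what to do; the
// reason for the refusal should be in the webrtcbin logs.
fn request_sink_pad(webrtcbin: &gst::Element) -> Option<gst::Pad> {
    let pad = webrtcbin.request_pad_simple("sink_%u");
    if pad.is_none() {
        gst::element_error!(
            webrtcbin,
            gst::StreamError::Failed,
            ("webrtcbin refused the requested pad, check the logs for the reason")
        );
    }
    pad
}
```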
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1399>
Implement new signaller WhipServerSignaller
- an HTTP server using 'warp'
- handlers for the POST, OPTIONS, PATCH and DELETE methods
- fixed path `/whip/endpoint` as the URI
- fixed value 'whip-client' as the producer peer id
- fixed resource URL `/whip/resource/whip-client`
Derive the whipserversrc element from BaseWebRTCSrc
- implement the constructed method of ObjectImpl to set a
non-default signaller, i.e., WhipServerSignaller
- bind the properties stun-server and turn-servers to those on
the Signaller
Connect to the 'webrtcbin-ready' signal in the constructor of WhipServerSignaller
- it will be emitted by the webrtcsrc when the webrtcbin element is ready
- the closure for this signal will in turn connect to webrtcbin's ice-gathering-state
and send the answer SDP via the channel
- the WHIP server will hold its HTTP response in the POST handler until this signal
is received or a timeout occurs, whichever happens first
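For reference, a hedged usage sketch of the new element (only the element name
and the stun-server property come from the description above; the STUN URI is
illustrative):
```rust
use gst::prelude::*;

// whipserversrc serves the WHIP endpoint at the fixed path /whip/endpoint.
fn make_whip_server_src() -> Result<gst::Element, gst::glib::BoolError> {
    let src = gst::ElementFactory::make("whipserversrc").build()?;
    // Bound to the corresponding property on the WhipServerSignaller.
    src.set_property("stun-server", "stun://stun.l.google.com:19302");
    Ok(src)
}
```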
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1284>
Add a new webrtcbin-ready signal in its place doing the same
thing, but which can be used for both consumers and producers.
Please note this change only affects the consumer-added
signal on the signaller interface.
The consumer-added signal on the webrtcsink is unchanged.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1284>
Also move processing from the capture thread to the streaming thread:
the NDI SDK can cause frame drops if we don't read from it fast enough.
All frame processing is now handled inside the ndisrcdemux.
Also use a buffer pool for video if copying is necessary.
Additionally, make sure to use different stream ids in the stream-start
events for the audio and video pads.
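For illustration, a hedged sketch of pushing distinct stream ids (the id
scheme shown is illustrative only):
```rust
use gst::prelude::*;

// Distinct stream ids in the stream-start events let downstream tell the
// audio and video streams apart.
fn push_stream_starts(video_pad: &gst::Pad, audio_pad: &gst::Pad, base_id: &str) {
    video_pad.push_event(gst::event::StreamStart::new(&format!("{base_id}/video")));
    audio_pad.push_event(gst::event::StreamStart::new(&format!("{base_id}/audio")));
}
```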
This plugin now requires GStreamer 1.16 or newer.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1365>