This avoids full traversals of the sessions HashMap.
It also fixes accidental session teardown when a producer is stopped
while it also takes part in a session as a consumer.
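A minimal sketch of the idea, with entirely hypothetical type and field
names (this is not the actual implementation): keep a secondary map from
peer id to session ids so that stopping a peer only touches its own
sessions, and check the peer's role so that stopping it as a producer
leaves the sessions where it only consumes untouched.

    use std::collections::{HashMap, HashSet};

    // Hypothetical data model, for illustration only.
    struct Session {
        producer_id: String,
        consumer_id: String,
    }

    struct State {
        sessions: HashMap<String, Session>,                 // session id -> session
        sessions_by_peer: HashMap<String, HashSet<String>>, // peer id -> session ids
    }

    impl State {
        // End only the sessions where `peer_id` acts as the producer,
        // without scanning the whole `sessions` map; sessions where it is
        // merely a consumer are left alone.
        fn stop_producer(&mut self, peer_id: &str) {
            let candidates: Vec<String> = self
                .sessions_by_peer
                .get(peer_id)
                .map(|ids| ids.iter().cloned().collect())
                .unwrap_or_default();
            for id in candidates {
                if self.sessions.get(&id).map_or(false, |s| s.producer_id == peer_id) {
                    self.sessions.remove(&id);
                    // updating sessions_by_peer is omitted for brevity
                }
            }
        }
    }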
The new sessions concept allows more use cases to be handled, like
having several sessions between two peers, and it simplifies the code a
bit and makes the protocol noticeably cleaner.
webrtcsink has been refactored a bit to take the new concept into
account.
This way we can use the same WebSocket connection to communicate as
several peer types, and those types can be unregistered and
re-registered without ever closing the connection.
This also brings more symmetry between the different message types.
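Purely as an illustration of the resulting shape of the protocol (every
name and field below is made up, not the actual message set), a
serde-style sketch of peer types registering and unregistering over one
connection, with explicit session identifiers:

    use serde::{Deserialize, Serialize};

    // Illustrative only; not the real protocol definition.
    #[derive(Serialize, Deserialize)]
    #[serde(rename_all = "lowercase")]
    enum PeerRole {
        Producer,
        Consumer,
        Listener,
    }

    #[derive(Serialize, Deserialize)]
    #[serde(tag = "type", rename_all = "camelCase")]
    enum Message {
        // Roles can be registered and unregistered at any time on the
        // same WebSocket connection.
        Register { roles: Vec<PeerRole> },
        Unregister { roles: Vec<PeerRole> },
        // Sessions carry their own identifier, so several sessions can
        // exist between the same two peers.
        StartSession { peer_id: String },
        SessionStarted { session_id: String },
        EndSession { session_id: String },
    }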
The computation of the actual max bitrate was broken, and in the end it
is simpler to keep the value set by the user and only take the FEC
overhead into account when required.
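A rough sketch of that approach (function and parameter names are
assumptions, as is the exact overhead formula): the user's value is kept
untouched, and the FEC percentage is only folded in when FEC is actually
in use.

    // Illustrative only: leave room for FEC packets on top of the media
    // packets only when FEC is enabled, otherwise use the user's value as-is.
    fn target_bitrate(user_max_bitrate: u32, fec_percentage: u32, fec_enabled: bool) -> u32 {
        if fec_enabled {
            user_max_bitrate * 100 / (100 + fec_percentage)
        } else {
            user_max_bitrate
        }
    }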
See https://datatracker.ietf.org/doc/html/draft-ietf-rmcat-gcc-02
This commit implements the bandwidth estimation as a GStreamer element
that is then used in webrtcbin through the new `request-bandwidth-estimator`
signal.
This keeps our homegrown congestion controller but removes the
possibility of switching the congestion control algorithm at runtime.
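For illustration, providing an estimator from application code could
look roughly like the sketch below; the signal signature (returning the
estimator element from the handler) and the `rtpgccbwe` element name are
assumptions here, not confirmed API details.

    use gst::prelude::*;

    // Sketch only: hand webrtcbin a bandwidth estimator element when it
    // asks for one through the new signal.
    fn setup_bwe(webrtcbin: &gst::Element) {
        webrtcbin.connect("request-bandwidth-estimator", false, |_args| {
            let bwe = gst::ElementFactory::make("rtpgccbwe").build().ok()?;
            Some(bwe.to_value())
        });
    }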
In some applications people might need to pass more metadata than just
a "display-name". This commit allows that by adding a free-form JSON
`meta` field to all signalling structures and removing "display-name",
as the free-form `meta` field lets applications pass this kind of
information without any problem.
On the "webrtcsink" side this commit adds a `meta` property as a
gst::Structure which is passed as a JSON string to the signalling
server, so that the data is enforced to be structured.
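For example, something along these lines on the application side (the
field names are just an example; only the `meta` property name and its
gst::Structure type come from this commit):

    use gst::prelude::*;

    // The structure is serialized to JSON and forwarded to the signalling
    // server; its fields are free form.
    fn set_peer_meta(webrtcsink: &gst::Element) {
        let meta = gst::Structure::builder("meta")
            .field("display-name", "alice")
            .field("avatar-url", "https://example.com/alice.png")
            .build();
        webrtcsink.set_property("meta", &meta);
    }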
The buttonMask variable was useless and using it as the button index
in the navigation events was nonsensical.
In addition, the button index must be increased by one to map it to
the X domain.
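Purely to illustrate the mapping (assuming the incoming event carries a
0-based button index; the names are made up):

    // X buttons are numbered from 1 (1 = left, 2 = middle, 3 = right), so
    // the 0-based index from the navigation event is shifted by one.
    fn to_x_button(event_button_index: u32) -> u32 {
        event_button_index + 1
    }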
As specified in Google Congestion Control, we should run the packet
loss estimation algorithm "every time feedback from the receiver is
received".
Also as defined by GCC, we now have two different estimated bitrates,
one from the delay-based controller and one from the loss-based
controller, and we use the minimum of the two as our current estimation.
[GCC]: https://datatracker.ietf.org/doc/html/draft-ietf-rmcat-gcc-02
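A rough sketch of what the draft describes, not the element's actual
code: the loss-based estimate is updated from the reported loss fraction
each time feedback arrives, and the value we use is the smaller of the
loss-based and delay-based outputs.

    // Loss-based controller from the GCC draft: decrease when more than
    // 10% of packets were lost, increase by 5% when less than 2% were
    // lost, otherwise keep the previous estimate.
    fn loss_based_estimate(previous_bps: f64, loss_fraction: f64) -> f64 {
        if loss_fraction > 0.10 {
            previous_bps * (1.0 - 0.5 * loss_fraction)
        } else if loss_fraction < 0.02 {
            previous_bps * 1.05
        } else {
            previous_bps
        }
    }

    fn current_estimate(delay_based_bps: f64, loss_based_bps: f64) -> f64 {
        // The final estimation is the minimum of the two controllers.
        delay_based_bps.min(loss_based_bps)
    }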
1. Working scenario:
T1 -> Caps event (all caps have been received)
T1 -> Start discovering
T2 -> Change state to Playing
T2 -> The signaller is not started as:
     - Sink current_state() == Paused, as it will only be set to
       Playing after the change_state vmethod returns
     - Discovery is not done yet anyway
T1 -> Discovery is done
=> The signaller is started, and **everything works well**.
2. Failing scenario:
T1 -> Caps event (all caps have been received)
T1 -> Start discovering
T1 -> Discovery is done
T1 -> The signaller is not started as:
     - Sink current_state() == Paused, as it will only be set to
       Playing after the change_state vmethod returns
       (discovery, on the other hand, is already done at this point)
T2 -> Change state to Playing
T2 -> The signaller is not started as:
     - Sink current_state() == Paused, as it will only be set to
       Playing after we return from the change_state vmethod
In that case the signaller never starts.
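One way to close the race, sketched below with made-up names rather
than the actual patch: track the target state with an explicit flag
instead of querying current_state(), and try to start the signaller from
both code paths so that whichever of the two conditions becomes true
last triggers the start.

    use std::sync::Mutex;

    // Illustrative sketch only.
    #[derive(Default)]
    struct StartupState {
        discovery_done: bool,
        target_is_playing: bool,
        signaller_started: bool,
    }

    struct Sink {
        startup: Mutex<StartupState>,
    }

    impl Sink {
        fn start_signaller(&self) { /* ... */ }

        fn maybe_start_signaller(&self) {
            let mut s = self.startup.lock().unwrap();
            if s.discovery_done && s.target_is_playing && !s.signaller_started {
                s.signaller_started = true;
                drop(s);
                self.start_signaller();
            }
        }

        // Called when discovery completes (T1 above).
        fn on_discovery_done(&self) {
            self.startup.lock().unwrap().discovery_done = true;
            self.maybe_start_signaller();
        }

        // Called from the change_state vmethod when going to Playing (T2
        // above), rather than reading current_state(), which still reports
        // Paused at that point.
        fn on_going_to_playing(&self) {
            self.startup.lock().unwrap().target_is_playing = true;
            self.maybe_start_signaller();
        }
    }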
Under certain circumstances one would like to use a self-signed
certificate without the obstacles many browsers put in place to
safeguard their users. One approach is to create a certificate
authority: one can then add it to the OS certificate store, or simply
add it to the browser. In the latter case one also needs to specify the
new authority on the webrtcsink side. This commit enables this.
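For instance, something along these lines; both the `signaller`
property on webrtcsink and the `cafile` property name below are
assumptions about the naming, not confirmed API:

    use gst::prelude::*;

    // Hypothetical property names, for illustration only: make the sink's
    // signalling client trust the self-created certificate authority.
    fn trust_custom_ca(webrtcsink: &gst::Element) {
        let signaller = webrtcsink.property::<gst::glib::Object>("signaller");
        signaller.set_property("cafile", "/path/to/ca.pem");
    }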