In `webrtcsink`, we terminate a session by setting the session's pipeline to
`Null` like this:
```rust
pipeline.call_async(|pipeline| {
[...]
pipeline.set_state(gst::State::Null);
[...]
// the following cvar is awaited in unprepare()
cvar.notify_one();
});
```
However, `pipeline.call_async` keeps a ref on the pipeline until it's done,
which means the `cvar` is notified before `pipeline` is actually 'disposed',
which happens in a different thread than `unprepare`'s. [`gst_rtp_bin_dispose`]
releases some resources when the pipeline is unrefed. In some cases, those
resources are actually released after the main thread has returned, leading
to various issues.
This commit uses tokio runtime's `spawn_blocking` instead, which allows owning
and disposing of the pipeline before the `cvar` is notified.
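A minimal sketch of the new approach (the runtime handle, condvar pair and function name below are illustrative, not the element's exact code):
```rust
use std::sync::{Arc, Condvar, Mutex};

use gst::prelude::*;

fn finalize_pipeline(
    runtime: &tokio::runtime::Handle,
    pipeline: gst::Pipeline,
    pair: Arc<(Mutex<bool>, Condvar)>,
) {
    runtime.spawn_blocking(move || {
        // We own `pipeline` here: once set_state(Null) has returned and the
        // binding is dropped, the pipeline is disposed of *before* the cvar
        // is notified.
        let _ = pipeline.set_state(gst::State::Null);
        drop(pipeline);

        let (lock, cvar) = &*pair;
        *lock.lock().unwrap() = true;
        // the following cvar is awaited in unprepare()
        cvar.notify_one();
    });
}
```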
[`gst_rtp_bin_dispose`]: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/blob/main/subprojects/gst-plugins-good/gst/rtpmanager/gstrtpbin.c#L3108
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1225>
This signal is emitted as soon as the pipeline for each consumer
is created, and can be used by applications that require a greater
level of control over webrtcsink's internals.
An example is also provided to demonstrate usage.
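A hedged sketch of how an application could hook into this; the signal name `consumer-pipeline-created` and its parameters are assumed from the description above, so check the element documentation for the exact signature:
```rust
use gst::prelude::*;

fn hook_consumer_pipelines(webrtcsink: &gst::Element) {
    webrtcsink.connect("consumer-pipeline-created", false, |values| {
        // values[0] is the webrtcsink itself; the consumer pipeline is
        // expected among the remaining arguments.
        if let Some(pipeline) = values.iter().find_map(|v| v.get::<gst::Pipeline>().ok()) {
            // e.g. add probes, tweak internal elements, dump a DOT graph, ...
            gst::debug_bin_to_dot_file_with_ts(
                &pipeline,
                gst::DebugGraphDetails::all(),
                "consumer-pipeline-created",
            );
        }
        None
    });
}
```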
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1220>
Adapt a commit [1] that was introduced as part of the forward port of the MR
'add signal "request-encoded-filter"' [2].
The deadlock that said commit was fixing doesn't happen on the main branch due to
changes in the element design: the Sessions are no longer aborted with the
element `State` held. However, we want to ensure the stats collection task
is terminated when the `webrtcbin` element returns from the Ready to Null
transition, meaning that the related resources are released.
[1]: gstreamer/gst-plugins-rs!1176 (0e6b9df9)
[2]: gstreamer/gst-plugins-rs!1176
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1222>
First off, we just created the session, so we know stats_sigid is None
at this point.
Second, don't first assign the result of connecting on-new-ssrc to the
field, then overwrite it with the result of connecting twcc-stats; that
simply doesn't make sense.
Finally, actually check that stats_sigid *is* None before connecting
twcc-stats; as I understand it, this must have been the original
intention / behavior.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1217>
`State::finalize_session()` asynchronously sets the Session pipeline to Null.
In some cases, a session's `webrtcbin` could complete its transition to Null
after `webrtcsink` had reached Null.
This commit adds a set of `finalizing_sessions`. When the finalization process
starts, the session is added to the set. After `webrtcbin` has reached the Null
state, the session is removed from the set and a condvar is notified.
In `unprepare`, `webrtcsink` loops until the `finalizing_sessions` set is
empty, waiting for the condvar to be notified when it's not.
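A minimal sketch of the pattern described above, with hypothetical names and the session id used as the set key:
```rust
use std::collections::HashSet;
use std::sync::{Condvar, Mutex};

#[derive(Default)]
struct Finalizers {
    set: Mutex<HashSet<String>>,
    cvar: Condvar,
}

impl Finalizers {
    // Called when the finalization of a session starts.
    fn start(&self, session_id: &str) {
        self.set.lock().unwrap().insert(session_id.to_string());
    }

    // Called once the session's webrtcbin has reached the Null state.
    fn done(&self, session_id: &str) {
        self.set.lock().unwrap().remove(session_id);
        self.cvar.notify_all();
    }

    // Called from unprepare(): block until every pending session is finalized.
    fn wait_all(&self) {
        let mut set = self.set.lock().unwrap();
        while !set.is_empty() {
            set = self.cvar.wait(set).unwrap();
        }
    }
}
```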
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1221>
In some rare cases, the webrtc-test entered a deadlock while executing
`WebRTCSink::unprepare`. Attaching gdb to a blocked instance showed:
* `gstrswebrtc::signaller::imp::Signaller::stop()` parked, waiting for a
`Condvar` in `Signaller::stop()`. This was most likely waiting for the
receive task to complete while it was locked in `element.end_session()`.
This code path is triggered from `unprepare` with the `State` `Mutex` locked.
* `webrtcsink::imp::WebRtcSink::process_stats` waiting for a contended `Mutex`,
which is also the `State` `Mutex`. This prevented completion of the signal
`gst_webrtc_bin_get_stats`.
This commit aborts the task in charge of periodically collecting stats and
ensures any remaining iteration completes before requesting the Signaller to
stop.
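A hedged sketch of the ordering this enforces (the `State`/`Signaller` types and field names are placeholders): take the stats task handle out of the state, release the mutex, then abort and await the task before stopping the signaller.
```rust
use tokio::task::JoinHandle;

struct State {
    stats_task: Option<JoinHandle<()>>,
    // ... rest of the element state
}

struct Signaller;
impl Signaller {
    fn stop(&self) { /* ask the signaller to stop */ }
}

async fn shutdown(signaller: &Signaller, state_mutex: &std::sync::Mutex<State>) {
    // Take the handle while holding the State mutex, then release the mutex
    // before awaiting so a pending process_stats() iteration can acquire it.
    let stats_task = state_mutex.lock().unwrap().stats_task.take();

    if let Some(handle) = stats_task {
        handle.abort();
        // Awaiting the aborted handle ensures no stats iteration is still running.
        let _ = handle.await;
    }

    signaller.stop();
}
```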
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1202>
Prior to this commit, we were sending over words concatenated together
with no separators, for instance "Idon'twanttobeanemperor".
The translation service seems clever enough to translate the contents
anyway, but there is no reason to make its task harder than necessary.
Moreover, the service didn't re-add separators when the target language was the same as
the source language, which resulted in less than ideal output.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1171>
Session ending is bidirectional: the signaller can tell the sink that a
session was ended, and the sink can tell the signaller to end a session.
As such, two signals are needed. Before this patch the second case was
not working: in essence, the sink was telling itself that a session had
ended, and obviously failed to find that session when trying to end it again.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1167>
In order to support the use case of an external user providing their own
signalling mechanism, we want the signals to be used and, only if nothing
is connected, fall back to the default handling. Calling the interface
vtable directly will bypass the signal emission entirely.
Also ensure that the signals are defined properly for this case, i.e.:
1. Signals that the application/external code is expected to emit are
marked as action signals.
2. Add accumulators to avoid calling the default class handler if
another signal handler is connected.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1141>
This pattern is used for subclassing and calling parent class/interface functions.
However, that is not useful for the signaller object.
1. The signals are the API contract and should instead be used by
webrtcsrc/webrtcsink to ask the outside world for information or to
provide it with information.
2. The default case (no signal attached) is instead handled by default class
handlers that directly call the relevant Rust trait. No parent
(GObject) vfuncs are necessary.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1141>
This subproject adds a high-level web API compatible with GStreamer
webrtcsrc and webrtcsink elements and the corresponding signaling
server. It allows perfect bidirectional communication between the HTML5
WebRTC API and native GStreamer.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/946>
In this context, the bitrate variable is for all encoders, but the
max_bitrate field is per encoder. To calculate a proper FEC ratio, we
need to scale max_bitrate by the number of encoders.
Also clamp the fec-percentage that we set on the transceiver, for extra
safety.
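A hedged sketch of the arithmetic; only the scaling by the number of encoders and the final clamp come from the description above, the ratio formula itself is illustrative:
```rust
use gst::prelude::*;

fn update_fec_percentage(
    transceiver: &gst::glib::Object,
    bitrate: u32,     // overall target bitrate, for all encoders
    max_bitrate: u32, // per-encoder maximum
    n_encoders: u32,
) {
    // Scale the per-encoder maximum to all encoders before computing the ratio.
    let total_max_bitrate = max_bitrate.saturating_mul(n_encoders);
    if total_max_bitrate == 0 {
        return;
    }

    // Illustrative mapping of the ratio to a percentage.
    let fec_percentage = (bitrate as u64 * 50 / total_max_bitrate as u64) as u32;

    // Clamp for extra safety before setting it on the transceiver.
    transceiver.set_property("fec-percentage", fec_percentage.min(100));
}
```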
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1151>
* A queue dedicated to transcript items not intended for translation.
* A queue dedicated to transcript items intended for translation. The items are
enqueued after a separator is detected or translate-lookahead was reached.
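A minimal sketch of this two-queue state (the `TranscriptItem` type and field names are illustrative only):
```rust
use std::collections::VecDeque;

struct TranscriptItem {
    content: String,
    // pts, duration, ... omitted
}

#[derive(Default)]
struct TranscriptQueues {
    // Items that are pushed out as-is (no translation requested).
    passthrough: VecDeque<TranscriptItem>,
    // Items awaiting translation; enqueued once a separator is detected
    // or translate-lookahead is reached.
    to_translate: VecDeque<TranscriptItem>,
}
```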
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1137>
This commit adds an optional experimental translation tokenization feature.
It can be activated using the `translation_src_%u` pads property
`tokenization-method`. For the moment, the feature is deactivated by default.
The Translate ws accepts '<span></span>' tags in the input and adds matching
tags in the output. When an 'id' is also provided as an attribute of the
'span', the matching output tag also uses this 'id'.
In the context of closed captions, the 'id's are of little use. However, we can
take advantage of the spans in the output to identify translation chunks, which
more or less reflect the rhythm of the input transcript.
This commit adds simple spans (no 'id') to the input Transcript Items and
parses the resulting spans in the translated output, assigning the timestamps
and durations sequentially from the input Transcript Items. Edge cases such as
the absence of spans or nested spans were observed and are handled here.
Similarly, mismatches between the number of input and output items are taken
care of by some sort of reconciliation.
Note that this is still experimental and requires further testing.
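A rough illustration of the span round-trip (purely a sketch, not the element's actual tokenizer):
```rust
// Wrap each input transcript item in a plain <span> so the Translate ws
// can echo the chunk boundaries back in its output.
fn to_translation_input(items: &[&str]) -> String {
    items
        .iter()
        .map(|content| format!("<span>{content}</span>"))
        .collect::<Vec<_>>()
        .join(" ")
}

// Naively split the translated output back into chunks; timestamps and
// durations are then assigned sequentially from the input items.
fn parse_translated_spans(output: &str) -> Vec<String> {
    output
        .split("</span>")
        .filter_map(|chunk| chunk.split("<span>").nth(1))
        .map(|s| s.trim().to_string())
        .filter(|s| !s.is_empty())
        .collect()
}
```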
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1109>
This commit adds an optional transcript translation feature implemented as
request src Pads.
When requesting a src Pad, the user can specify the translation language code
using the Pad property 'language-code'.
The following properties are defined on the Element:
- 'transcribe-latency': formerly 'latency', defines the expected latency for
the Transcribe webservice.
- 'translate-latency': defines the expected latency for the Translate
webservice.
- 'transcript-lookahead': maximum transcript duration to send to translation
when a transcript is hitting its deadline and no punctuation was found.
When the input and output languages are the same, only the 'transcribe-latency'
is used for the Pad. Otherwise, the resulting latency is the sum of
'transcribe-latency' and 'translate-latency'.
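A sketch of the resulting pad latency, assuming the settings are stored as `gst::ClockTime` values (names hypothetical):
```rust
fn pad_latency(
    transcribe_latency: gst::ClockTime,
    translate_latency: gst::ClockTime,
    input_lang: &str,
    output_lang: &str,
) -> gst::ClockTime {
    if input_lang == output_lang {
        // Same language: only the transcription path contributes.
        transcribe_latency
    } else {
        // Transcription then translation: the latencies add up.
        transcribe_latency + translate_latency
    }
}
```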
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1109>
This helps gather together the details related to the `TranscriberLoop`.
One difference with the previous implementation is that the ws `Client` is
built each time the loop is started instead of being reused. With the new
approach, we don't keep the connection open after EOS and we should be
more resilient to connection failures.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1104>
Instead of sending transcription events to the src pad loop, this commit
enqueues the transcribed buffers immediately in the ws loop, then notifies
the src pad loop. The src pad loop is only in charge of dequeuing the buffers.
This should help with upcoming evolutions.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1104>
If creating a playlist or fragment stream fails (disk is full, the
directory is removed, ...), we will currently crash because the signal
handler expects a non-None GIOStream. The actual callback is allowed to
return None values and we handle this in the caller, so let's not have
this restriction on the signal handler.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1093>
For uploaded objects, the default content-type is set to binary/octet-stream,
which is correct.
Metadata cannot be used to set content-type and content-disposition, because
setting metadata adds an x-amz-meta prefix to the key,
e.g. setting the metadata "content-type=video/mp4" actually sets the key
x-amz-meta-content-type. So these have to be separate properties.
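Assuming the new properties keep the names used above (`content-type`, `content-disposition`), usage would look roughly like:
```rust
use gst::prelude::*;

fn configure_s3_sink(s3sink: &gst::Element) {
    // Property names are taken from the commit description; check the
    // element documentation for the exact names.
    s3sink.set_property("content-type", "video/mp4");
    s3sink.set_property("content-disposition", "attachment");
}
```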
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1085>
Commit ad3f1cf fixed the name of the hlssink child element to be the same
for hlssink2 and hlssink3. However, we rely on the element name to return
a boolean (in the case of hlssink3) or None (in the case of hlssink2) as
the return value of the delete-fragment closure.
Fix this by using the factory name instead of the element name.
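A minimal sketch of the factory-name check:
```rust
use gst::prelude::*;

// Distinguish hlssink3 from hlssink2 by factory name rather than element name.
fn is_hlssink3(sink: &gst::Element) -> bool {
    sink.factory()
        .map_or(false, |factory| factory.name() == "hlssink3")
}
```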
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1076>
Simplifies state tracking and potentially reduces latency as it's not
necessary to wait until all fragments of an OBU are received.
The last OBU of a TU is marked with the marker flag to allow parsers to
detect this without first seeing the beginning of the next TU.
Also use a simple `Vec` for collecting complete OBUs instead of a
`gst_base::Adapter` as this reduces the number of allocations.
And also handle invalid packets a little bit more gracefully.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/issues/244
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1072>
We dispose of consumer pipelines asynchronously, potentially after the
session objects have been disposed of.
As session objects are the owner of the cc element, it is entirely
possible for the bwe-request signal to get emitted after cc has been
disposed of, as the closure only takes a weak reference to it.
Fix by simply checking if cc is None.
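A hedged sketch of the fix, keeping the weak reference the closure already uses and simply bailing out when the congestion-control element is gone (the emitter and signal parameters are left abstract here):
```rust
use gst::glib;
use gst::prelude::*;

fn connect_bwe_request(emitter: &impl IsA<glib::Object>, cc: &gst::Element) {
    let weak_cc = cc.downgrade();
    emitter.connect("bwe-request", false, move |_values| {
        // The consumer pipeline may already have been disposed of
        // asynchronously, in which case the weak reference upgrades to None
        // and we do nothing.
        let Some(cc) = weak_cc.upgrade() else {
            return None;
        };
        let _ = cc; // forward the bandwidth estimation request to `cc` here
        None
    });
}
```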
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1044>
Without this, auto-pluggers such as decodebin or parsebin will be unable to
process AV1 RTP payloads.
Tested with: `videotestsrc num-buffers=50 ! videoconvert ! av1enc ! av1parse ! rtpav1pay ! queue ! decodebin3 ! videoconvert ! queue ! autovideosink`
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1034>
Right now the code manually pieces together the components
in a String for efficiency. When credentials contain special
characters this can result in invalid URLs, so do it the proper
way (with Url::parse + format) to make sure components are escaped
as needed.
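A sketch of the safer construction using the `url` crate (the scheme, host and path are placeholders):
```rust
use url::Url;

fn build_uri(user: &str, password: &str) -> Result<String, url::ParseError> {
    // Parse the base URL first, then let `Url` percent-escape the credentials.
    let mut url = Url::parse("https://example.com/stream")?;
    let _ = url.set_username(user);
    let _ = url.set_password(Some(password));
    Ok(format!("{url}"))
}
```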
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/949>
WHIP endpoint providers like Cloudflare do not support Trickle ICE
and need candidates to be sent along with the initial offer. Instead
of sending the offer in the create-offer promise, send it once the ICE
candidates have been gathered.
While at it, add properties to set the STUN and TURN servers along with the
ICE transport policy, as, at least when testing, the Cloudflare WHIP
endpoint seems unreachable without them. This has also been observed
with the Cloudflare provided demos.
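A hedged sketch of waiting for ICE gathering to complete before posting the offer (the actual WHIP HTTP request is elided):
```rust
use gst::prelude::*;
use gst_webrtc::WebRTCICEGatheringState;

fn send_offer_when_gathered(webrtcbin: &gst::Element) {
    webrtcbin.connect_notify(Some("ice-gathering-state"), |webrtcbin, _pspec| {
        let state = webrtcbin.property::<WebRTCICEGatheringState>("ice-gathering-state");
        if state == WebRTCICEGatheringState::Complete {
            // All candidates are now part of the local description; send it
            // to the WHIP endpoint (HTTP POST elided).
            let _offer = webrtcbin
                .property::<Option<gst_webrtc::WebRTCSessionDescription>>("local-description");
        }
    });
}
```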
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/949>
This implements the WHEP specification based on
https://datatracker.ietf.org/doc/html/draft-murillo-whep-00
and has been tested with Cloudflare.
Server offers are likely to be removed from the WHEP specification
in upcoming revisions, to avoid compatibility issues. None of the
commercial services implementing WHEP support server initiated offers.
So we only support client side initiated offers.
Follows session setup and tear down as covered in Figure 1, Section 3
of the specification.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/949>
error[E0425]: cannot find value `LIBRARY_NAME` in this scope
--> net/ndi/src/ndisys.rs:336:23
|
336 | path.push(LIBRARY_NAME);
| ^^^^^^^^^^^^ not found in this scope
error[E0425]: cannot find value `LIBRARY_NAME` in this scope
--> net/ndi/src/ndisys.rs:339:33
|
339 | path::PathBuf::from(LIBRARY_NAME)
| ^^^^^^^^^^^^ not found in this scope
Commit 24b7cfc8 applied changes related to nullability as declared
by gir. One consequence was that some function signatures ended up
requiring users to pass `Some(val)` when they could use `val`
before.
This commit applies changes to `gstreamer-rs` which, while honoring
the nullability, still allow users to pass `val` for the few affected
functions.
This commit also fixes the signature for `Element::request_new_pad`
which was updated upstream.
And use a `Vec` plus offset for consuming partial byte buffers.
Removing the first element from a `Vec` repeatedly is not very cheap.
Also simplify calculation of the current packet by removing a mostly
unused type and keeping track of the calculations always locally instead
of sometimes storing it in the element state.
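A minimal sketch of the `Vec` + offset approach (hypothetical helper, not the element's exact code):
```rust
/// Pending input bytes, consumed from the front without shifting the data.
#[derive(Default)]
struct PendingBytes {
    data: Vec<u8>,
    offset: usize,
}

impl PendingBytes {
    fn push(&mut self, bytes: &[u8]) {
        self.data.extend_from_slice(bytes);
    }

    fn remaining(&self) -> &[u8] {
        &self.data[self.offset..]
    }

    // Advance the offset instead of removing from the front of the Vec,
    // which would move all remaining elements every time.
    fn consume(&mut self, n: usize) {
        self.offset += n;
        if self.offset >= self.data.len() {
            self.data.clear();
            self.offset = 0;
        }
    }
}
```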
- NDI HX Camera Android in the past used 1ns instead of 100ns as unit
for timecodes/timestamps.
- NDI HX Camera iOS uses 0 for all timecodes and the same non-zero
value for all audio timestamps.
Detect such situations and try to compensate for them. Also add a new
"auto" timestamping mode that prefers to use timecodes and otherwise
falls back to timestamps or receive times.
Fixes https://github.com/teltek/gst-plugin-ndi/issues/79
Audio/video are in practice not always from the same clock and can have
different behaviours with regards to clock rate and jitter. Handling
them separately generally gives better results for the timestamps output
by the source element.
This is no longer available as this could lead to building a defined
value in Rust which could be interpreted as undefined in C due to
the sentinel `u64::MAX` for `None`.
Use the constants (e.g. `ONE`, `K`, `M`, ...) and operations to build
a value and deref (`*`) to get the quantity as an integer.
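For illustration, a hedged sketch with `gst::format::Bytes`; whether the multiplication operators are available in this exact form depends on the gstreamer-rs version, so treat it as an assumption:
```rust
use gst::format::Bytes;

fn example() -> u64 {
    // Build a defined quantity from the constants and operations...
    let limit = 512 * Bytes::K;
    // ...and deref to get the raw integer quantity back.
    *limit
}
```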
Always first try draining queued data in the loop and only start waiting
if there's nothing to drain right now. Otherwise data might have to be
drained right now but we still wait and nothing is ever waking up the
source pad task again.
Also make sure to not wait multiple times on the same gst::ClockId but
instead unset it after waiting on it and no new one was scheduled in the
meantime. Future waits on the same ClockId will immediately return and
instead we should wait on the condvar if no new ClockId is available.
When call_timeout is triggered, the request will fail
irrespective of the retry setting: call_timeout defines the
maximum time a request can take, retries included.
This can be solved by either setting call_timeout to
retry * call_attempt_timeout or by not setting call_timeout at all.
As per the thread below, the call_attempt_timeout and retry settings are enough.
https://github.com/awslabs/aws-sdk-rust/issues/558
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/1410
Created a new plugin 'webrtchttp' to implement all the
WebRTC HTTP protocols under the /net/webrtc-http directory.
WhipSink wraps around 'webrtcbin' with HTTP capabilities
to exchange SDP offer/answer so an ICE/DTLS session can
be established between the encoder/media producer (WHIP client)
and the broadcasting ingestion endpoint (Media Server).
Once the ICE/DTLS session is set up, the media will
flow unidirectionally from the WHIP client to the
broadcasting ingestion endpoint (Media Server).
Spec:
https://www.ietf.org/archive/id/draft-ietf-wish-whip-04.html
The encoding of ONVIF metadata is always UTF-8. ONVIF metadata may
or may not be encoded with gzip, but we don't see a use case for
transporting compressed ONVIF metadata between elements for now.
The aggregator was consuming meta buffers too greedily, causing
potential interleaving starvation upstream. Refactor to consume
media and meta buffers synchronously.
Also expect parsed=true metadata caps (requiring an upstream
onvifmetadataparse element).
warning: you are deriving `PartialEq` and can implement `Eq`
--> net/raptorq/src/fecscheme.rs:13:24
|
13 | #[derive(Clone, Debug, PartialEq)]
| ^^^^^^^^^ help: consider deriving `Eq` as well: `PartialEq, Eq`
|
= note: `#[warn(clippy::derive_partial_eq_without_eq)]` on by default
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#derive_partial_eq_without_eq
warning: you are deriving `PartialEq` and can implement `Eq`
--> net/raptorq/src/fecscheme.rs:38:24
|
38 | #[derive(Clone, Debug, PartialEq)]
| ^^^^^^^^^ help: consider deriving `Eq` as well: `PartialEq, Eq`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#derive_partial_eq_without_eq
A regression was introduced during the migration to AWS SDK. One used
to be able to provide credentials in multiple ways with the earlier
Rusoto ChainProvider (config file / environment variables). Now one
has to explicitly set the properties.
Use the DefaultCredentialsChain from AWS SDK to restore the previous
functionality.
See
https://docs.rs/aws-config/0.46.0/aws_config/default_provider/credentials/struct.DefaultCredentialsChain.html.
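A sketch following the linked aws-config 0.46 documentation:
```rust
use aws_config::default_provider::credentials::DefaultCredentialsChain;
use aws_types::region::Region;

async fn credentials_provider(region: Option<Region>) -> DefaultCredentialsChain {
    // Falls back through environment variables, the shared config files,
    // IMDS, etc., like the earlier Rusoto ChainProvider did.
    let mut builder = DefaultCredentialsChain::builder();
    if let Some(region) = region {
        builder = builder.region(region);
    }
    builder.build().await
}
```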
Allow specifying an endpoint to be used for S3 requests. This makes
it possible to use integrations providing object storage based on S3
API like MinIO.
When the endpoint-uri property is specified, the endpoint resolver to
use will be overridden when making S3 requests.
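For example, pointing the element at a local MinIO instance (bucket/key settings elided):
```rust
use gst::prelude::*;

fn use_minio(s3sink: &gst::Element) {
    // With endpoint-uri set, the default AWS endpoint resolver is overridden
    // and requests go to the S3-compatible service instead.
    s3sink.set_property("endpoint-uri", "http://127.0.0.1:9000");
}
```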
Ideally, when a player encounters PLAYLIST-TYPE VOD, it should
not reload the playlist. For playlist-type=vod, we initially put
PLAYLIST-TYPE=EVENT and later change it to VOD, which confuses some
players, so we should also put ENDLIST here.
In any case, putting ENDLIST is the right thing to do to indicate that no new
segment will be added to the playlist.
In the current implementation, EXT-X-ENDLIST is never set for any playlist-type:
after playlist.stop() is called, write_final_playlist() runs. In the final
playlist write, no segment is added but update_playlist is called, and
update_playlist resets end_list again, so the plugin never puts ENDLIST.
STS provides temporary credentials to access AWS resources. Temporary
credentials include an AccessKeyId, a SecretAccessKey and a SessionToken.
With the session-token property, the element is able to use temporary
credentials. When session-token is not set, the element uses long-term
credentials.
Keeping the upload-part-retry-duration & complete-upload-retry-duration
properties changes the semantics in comparison to their usage in
rusotos3sink. Deprecate these two properties and add a warning while
making them no-ops.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/759>
`PadTemplate::caps()` returns a reference to the caps now instead of a
new strong reference, so keep the template in scope for as long as the
caps reference is required.
warning: this expression borrows a value the compiler would automatically borrow
--> net/reqwest/tests/reqwesthttpsrc.rs:126:56
|
126 | async move { Ok::<_, hyper::Error>((&mut *http_func.lock().unwrap())(req)) }
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: change this to: `(*http_func.lock().unwrap())`
|
= note: `#[warn(clippy::needless_borrow)]` on by default
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_borrow
This implements a default timeout and retry duration for the remaining
S3 requests that could still block indefinitely. There are 3
classes of operations: multipart upload creation/abort (should not take
too long), uploads (duration depends on part size), multipart upload
completion (can take several minutes according to documentation).
We currently only expose the part upload times as configurable, and hard
code the rest. If it seems sensible, we can expose the other two sets of
parameters as well.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/690>
Previously, the actual reading from the streaming body of a GetObject
request was not within the same timeout/retry path as the dispatch of
the HTTP request itself. We consolidate these two into a single async
block and create a sum type to encapsulate the rusoto and std library
error paths within that future.
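The sum type mentioned above could look roughly like this (rusoto type names as used at the time):
```rust
use rusoto_core::RusotoError;
use rusoto_s3::GetObjectError;

// Wraps both failure paths of a GetObject read: dispatching the request
// and reading from the streaming body.
enum ReadError {
    Request(RusotoError<GetObjectError>),
    Body(std::io::Error),
}
```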
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/690>
We have seen a few cases where, on machines in perfect time sync with a good source, requests fail with `RequestExpired` errors.
https://docs.aws.amazon.com/transcribe/latest/dg/CommonErrors.html
While not perfect, bumping the expiry to five minutes gives a better chance that the signed requests used to start streaming won't have expired.
Rusoto does not implement timeouts or retries for any of its HTTP
requests. This is particularly problematic for the part upload stage of
multipart uploads, as a blip in the network could cause part uploads to
freeze for a long duration and eventually bail.
To avoid this, for part uploads, we add (a) (configurable) timeouts for
each request, and (b) retries with exponential backoff, up to a
configurable duration.
It is not clear if/how we want to do this for other types of requests.
The creation of a multipart upload should be relatively quick, but the
completion of an upload might take several minutes, so there is no
one-size-fits-all configuration, necessarily.
It would likely make more sense to implement some sensible hard-coded
defaults for these other sorts of requests.
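A generic sketch of a per-attempt timeout combined with exponential backoff using tokio (not the element's actual code):
```rust
use std::future::Future;
use std::time::Duration;

use tokio::time::{sleep, timeout, Instant};

/// Retry `op` with a per-attempt timeout and exponential backoff until the
/// total retry budget is exhausted. `Err(None)` means the last attempt timed out.
async fn with_retries<T, E, Fut>(
    mut op: impl FnMut() -> Fut,
    attempt_timeout: Duration,
    mut backoff: Duration,
    max_total: Duration,
) -> Result<T, Option<E>>
where
    Fut: Future<Output = Result<T, E>>,
{
    let start = Instant::now();
    loop {
        match timeout(attempt_timeout, op()).await {
            Ok(Ok(res)) => return Ok(res),
            // Attempt failed: give up if the budget doesn't allow another try.
            Ok(Err(err)) if start.elapsed() + backoff > max_total => return Err(Some(err)),
            // Attempt timed out and the budget is exhausted.
            Err(_) if start.elapsed() + backoff > max_total => return Err(None),
            _ => {
                sleep(backoff).await;
                backoff *= 2;
            }
        }
    }
}
```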
A multipart upload should either be completed or aborted on error. In
the current state of things, a multipart upload would neither be
completed nor aborted, putting the onus on an external entity to take
care of finishing incomplete uploads or relying on a sane bucket
life cycle policy configured to abort incomplete multipart uploads.
An incomplete multipart upload still contributes to the storage costs as
long as it exists.
We introduce a property here to allow the user to select either aborting
or completing multipart uploads on error. Aborting the upload causes
the whole of the data to be discarded, and the same upload ID is not usable
for uploading more parts to the same upload.
Completing an incomplete multipart upload can be useful in situations
like having a streamable MP4 where one might want to complete the upload
and have part of the data which was uploaded be preserved.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/618>
The design of the element is based on the assumption that when
receiving a partial result, the following result will contain
at least as many items as there were stable items in the previous
result.
This patch adds a sanity check to make sure our "partial index"
isn't larger than the new received result, and errors out otherwise.
partial_index will eventually be reset to 0 once we receive a
new non-partial result.
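A minimal sketch of the sanity check (names hypothetical):
```rust
struct TranscriptResult {
    is_partial: bool,
    items: Vec<String>,
}

// Returns Err if the new result contradicts the stability assumption.
fn update_partial_index(
    partial_index: &mut usize,
    result: &TranscriptResult,
) -> Result<(), String> {
    if *partial_index > result.items.len() {
        return Err(format!(
            "got {} items, but the partial index is already {}",
            result.items.len(),
            *partial_index
        ));
    }

    if !result.is_partial {
        // A non-partial result closes the current utterance.
        *partial_index = 0;
    }

    Ok(())
}
```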