When streaming small amounts of data, using awss3sink might not be a
good idea, as we need to accumulate at least 5 MB of data for a
multipart upload (or we flush on EOS).
The alternative, while inefficient, is to do a complete PutObject of
_all_ the data periodically so as to not lose data in case of a pipeline
failure. This element makes a start on this idea by doing a PutObject
for every buffer.
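A minimal sketch of the per-buffer PutObject idea with the aws-sdk-s3 crate (recent SDK layout assumed; this is not the element's actual code):
```rust
use aws_sdk_s3::{primitives::ByteStream, Client};

// Upload the data accumulated so far as one complete object, losing at most
// the data received since the last successful call if the pipeline fails.
async fn put_all(
    client: &Client,
    bucket: &str,
    key: &str,
    data: Vec<u8>,
) -> Result<(), aws_sdk_s3::Error> {
    client
        .put_object()
        .bucket(bucket)
        .key(key)
        .body(ByteStream::from(data))
        .send()
        .await?;
    Ok(())
}
```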
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1337>
webrtcbin will refuse pad requests for all sorts of reasons and should be
logging an error when doing so. Simply post an error message and let
the application deal with it; the reason for the refusal should
hopefully be available to the user in the logs.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1399>
Implement new signaller WhipServerSignaller
- an http server using 'warp'
- handlers for the POST, OPTIONS, PATCH and DELETE requests
- fixed path `/whip/endpoint` as the URI
- fixed value 'whip-client' as the producer peer id
- fixed resource url `/whip/resource/whip-client`
Derive whipserversrc element from BaseWebRTCSrc
- implement constructed method for ObjectImpl to set
non-default signaller, i.e., WhipServerSignaller
- bind the properties stun-server and turn-servers to those on
the Signaller
Connect to the 'webrtcbin-ready' signal in the constructor of WhipServerSignaller
- it is emitted by the webrtcsrc when the webrtcbin element is ready
- the closure for this signal in turn connects to webrtcbin's ice-gathering-state
and sends the answer SDP via the channel
- the WHIP server holds its HTTP response in the POST handler until this signal
is received or a timeout occurs, whichever happens first
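A minimal sketch of the fixed-path endpoint with warp (the real handlers, SDP processing and the OPTIONS/PATCH/DELETE routes are elided; names are illustrative):
```rust
use warp::Filter;

#[tokio::main]
async fn main() {
    // Fixed URI `/whip/endpoint`; a real handler would parse the offer SDP,
    // wait for `webrtcbin-ready`/ICE gathering and reply with the answer SDP
    // plus the `/whip/resource/whip-client` Location header.
    let endpoint = warp::post()
        .and(warp::path!("whip" / "endpoint"))
        .and(warp::body::bytes())
        .map(|_offer_sdp: warp::hyper::body::Bytes| {
            warp::reply::with_status("answer sdp goes here", warp::http::StatusCode::CREATED)
        });

    warp::serve(endpoint).run(([127, 0, 0, 1], 8080)).await;
}
```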
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1284>
Add a new signal, webrtcbin-ready, in this place doing the same
thing, but which can be used for both consumers and producers.
Please note this change only applies to the consumer-added
signal on the signaller interface.
The consumer-added signal on the webrtcsink is unchanged.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1284>
Also move processing from the capture thread to the streaming thread.
The NDI SDK can cause frame drops if not reading fast enough from it.
All frame processing is now handled inside the ndisrcdemux.
Also use a buffer pool for video if copying is necessary.
Additionally, make sure to use different stream ids in the stream-start
events for the audio and video pads.
This plugin now requires GStreamer 1.16 or newer.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1365>
If a packet starts with a leading fragment but we do not expect to
receive one, then skip over it to the next OBU.
Not doing so would cause parsing to start in the middle of an OBU, which
would most likely fail and cause unnecessary warning messages about a
corrupted stream.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1367>
The "signaller" property used to be defined as MUTABLE_READY which meant that
the property was always set after `constructed()` was called.
Since `connect_signaller()` was called from `constructed()`, only the default
signaller was used.
This commit sets the "signaller" property as CONSTRUCT_ONLY. Using a builder,
this property will now be set before the call to `constructed()`.
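A minimal sketch of such a declaration (the property type is simplified to glib::Object here; not the element's exact code):
```rust
use glib::prelude::*;

// A CONSTRUCT_ONLY property can only be provided at construction time
// (e.g. via an object builder), so it is already set when `constructed()` runs.
fn signaller_pspec() -> glib::ParamSpec {
    glib::ParamSpecObject::builder::<glib::Object>("signaller")
        .nick("Signaller")
        .blurb("The signaller object to use")
        .construct_only()
        .build()
}
```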
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1324>
During `on_remote_description_set()` processing, the current session is removed
from the sessions `HashMap`. If an ICE candidate is submitted to `handle_ice()`
in the meantime, the session can't be found and the candidate is ignored.
This commit wraps the Session in the sessions `HashMap` so an entry is kept
while `on_remote_description_set()` is running. Incoming candidates received by
`handle_ice()` will be processed immediately or enqueued and handled when the
session is restored by `on_remote_description_set()`.
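One possible shape for such a wrapper (names and fields are hypothetical):
```rust
// Placeholder for the element's real session type.
struct Session;

enum SessionEntry {
    // Normal case: the session is available to handle_ice() and friends.
    Active(Session),
    // The session is temporarily taken out while on_remote_description_set()
    // runs; candidates arriving in the meantime are queued here and replayed
    // once the session is put back.
    BeingProcessed {
        pending_candidates: Vec<(Option<u32>, String)>, // (sdp_m_line_index, candidate)
    },
}
```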
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1325>
The NDI closed captions specifications [1] define a variation where metadata is
attached to the video frame. This requires the AFD buffer to be v210 encoded.
This commit applies this strategy.
Another difference from the previous version is that when an error occurs while
encoding or decoding a meta, the next metas are still tried instead of failing
immediately.
Receiving closed captions as standalone metadata is kept for interoperability
purposes. In this case, the metadata is also expected to be v210 encoded.
[1]: http://www.sienna-tv.com/ndi/ndiclosedcaptions.html
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1356>
@to-owned increases the refcount of the element, which prevents the object from
being properly destroyed, as the initial refcount with ElementFactory::make is larger than 1.
Instead, use @watch to create a weak reference and unbind the closure automatically if the object gets destroyed.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1355>
* Simplify state/playlist management
* Fix a bug where a segment is not deleted if the location contains a directory
  and playlist-root is unset
* Split the playlist update routine into two steps: adding the segment
  to the playlist and writing the playlist
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1306>
This is required to take care of clock skew between
the system time and the pipeline time.
`track-pipeline-clock-for-pdt: true` means the UTC time is
sampled for the first segment only; for subsequent segments
the time keeps advancing based on the pipeline clock, so the
difference between segment duration and PDT time will match.
`track-pipeline-clock-for-pdt: false` means the UTC time is
sampled for each segment, so the system time may jump forward
or backward based on adjustments. If the application needs
to synchronize with external events, `false` is
recommended.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1145>
- connect to `format-location-full`, which provides the first
sample of the fragment, and preserve the running time of the
first sample in the fragment.
- on the fragment-close message, find the mapping of running time
to UTC time.
- on each subsequent fragment, calculate the offset of its
running time from the first fragment's and add that offset to the
base UTC time (see the sketch below).
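A hypothetical helper illustrating that arithmetic (not the element's actual code):
```rust
use chrono::{DateTime, Duration, Utc};
use gst::ClockTime;

// Anchor the first fragment's running time to a sampled UTC time, then only
// add the pipeline-clock offset for every subsequent fragment.
fn fragment_pdt(
    base_utc: DateTime<Utc>,
    base_running_time: ClockTime,
    fragment_running_time: ClockTime,
) -> DateTime<Utc> {
    let offset = fragment_running_time.saturating_sub(base_running_time);
    base_utc + Duration::nanoseconds(offset.nseconds() as i64)
}
```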
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1145>
`quick_xml::reader::Reader::trim_text(true)` doesn't remove whitespace and
tabs from XML text. Besides, for interoperability robustness we also need to
remove carriage returns and line feeds.
Also improve the default capacities for the `SmallVec`s.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1321>
Since ab1ec12698 ("webrtcsink: Add support for pre encoded streams"),
discovery pipelines for remote offers were no longer fed any buffers.
While some encoders could already produce caps with no input buffers,
others, such as x264enc, simply hung forever. This resulted in no answer
getting produced if for instance video-caps were constrained to H264.
Fix this by tracking discovery pipelines at the State rather than the
InputStream level, removing the useless distinction of Initial vs.
CodecSelection discoveries, and always feeding all the current
discovery pipelines with incoming buffers.
For reference, the issue here was that codec selection discoveries were
assigned to local clones of InputStreams, not tracked anywhere, and thus
not iterated over for discoveries when queuing incoming buffers from the
chain function, as it only looked at the original instances of
InputStream in state.streams.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1319>
This adds support for handling GstNavigation events in webrtcsrc so that
a GStreamer client can be used to remotely control a GStreamer server,
similar to how the web client is capable of controlling a wpesrc.
This is part of a larger set of patches that require more work on the
sinks and sources.
server: d3d11screencapturesrc ! webrtcsink enable-data-channel-navigation=true
client: webrtcsrc enable-data-channel-navigation=true ! d3d11videosink
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1281>
When starting a webrtcsrc-signaller client in Listener mode, only the producers
started after the client connection were advertised. All already-running
producers were ignored, unlike with the gstwebrtc-api behavior. This
commit now lists all running producers when the client Listener connects
and advertises them through the "producer-added" signal.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1296>
Commit 08b6251a added the check to ensure only one canceller at a time for net/webrtc.
In `whipsink`, and since `whipwebrtcsink` picked up the same implementation, there exists a
bug around the use of the canceller. `whipsink` calls `wait_async` while passing the canceller
as an argument. The path `send_offer -> do_post -> parse_endpoint_response` results in the
canceller being replaced in each subsequent call to `wait_async`. Since the `wait_async` call
does not ensure a single canceller, the use of the canceller/abort with the async calls was
subtly broken. Similarly for `whepsrc`.
We really don't need to use `wait_async` inside `do_post` for any `await` calls. If the
root future, viz. `do_post` wrapped with `wait_async`, is aborted, the child futures will be
taken care of.
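A generic illustration of that reasoning with futures' Abortable (not the element's helper):
```rust
use futures::future::{AbortHandle, Abortable, Aborted};

async fn do_post_like() {
    // ... child .await calls (send offer, parse endpoint response, ...) ...
}

async fn run() -> Result<(), Aborted> {
    let (handle, registration) = AbortHandle::new_pair();
    // Only the root future is registered for cancellation; aborting it also
    // drops every child future it was awaiting.
    let fut = Abortable::new(do_post_like(), registration);
    handle.abort();
    fut.await
}
```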
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1290>
The "encoder-setup" signal must also be emitted for the encoders
used in discovery pipelines in order for the default settings to
be applied.
This otherwise meant that, for instance, the x264 encoder would
use a 60-frame latency, greatly delaying startup.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1289>
Spawning one task per message to send out instead of sending them out
sequentially from the one task used to poll the handler sometimes
resulted in peers receiving ICE candidates before SDP offers, triggering
hard to understand errors in the browser.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1236>
This is a first step where we try to replicate encoding conditions from
the input stream into the discovery pipeline. A second patch will
implement using input buffers in the discovery pipelines.
This moves discovery to using input buffers directly. Instead of trying
to replicate buffers that `webrtcsink` is getting as input with testsrc,
directly run discovery based on the real buffers. This way we are sure
we work with the exact right stream type and we don't need encoders to
support encoding the synthetic test streams.
We use the same logic for both encoded and raw input to avoid having
several code paths, which also makes it all more correct in any case.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1194>
In `webrtcsink`, we terminate a session by setting the session's pipeline to
`Null` like this:
```rust
pipeline.call_async(|pipeline| {
    [...]
    pipeline.set_state(gst::State::Null);
    [...]
    // the following cvar is awaited in unprepare()
    cvar.notify_one();
});
```
However, `pipeline.call_async` keeps a ref on the pipeline until it's done,
which means the `cvar` is notified before `pipeline` is actually 'disposed',
which happens in a different thread than `unprepare`'s. [`gst_rtp_bin_dispose`]
releases some resources when the pipeline is unrefed. In some cases, those
resources are actually released after the main thread has returned, leading
to various issues.
This commit uses tokio runtime's `spawn_blocking` instead, which allows owning
and disposing of the pipeline before the `cvar` is notified.
[`gst_rtp_bin_dispose`]: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/blob/main/subprojects/gst-plugins-good/gst/rtpmanager/gstrtpbin.c#L3108
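A sketch of the replacement, mirroring the snippet above (RUNTIME stands for the element's shared tokio runtime):
```rust
RUNTIME.spawn_blocking(move || {
    // The pipeline is moved into this closure, so it is set to Null *and*
    // dropped here, before the cvar wakes up unprepare().
    let _ = pipeline.set_state(gst::State::Null);
    drop(pipeline);
    // the following cvar is awaited in unprepare()
    cvar.notify_one();
});
```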
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1225>
This signal is emitted as soon as the pipeline for each consumer
is created, and can be used by applications that require a greater
level of control over webrtcsink's internals.
An example is also provided to demonstrate usage
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1220>
Adapt a commit [1] that was introduced as part of the forward port of the MR
'add signal "request-encoded-filter"' [2].
The deadlock that said commit was fixing doesn't happen on the main branch due to
changes in the element design: the Sessions are no longer aborted with the
element `State` held. However, we want to ensure the stats collection task
is terminated when the `webrtcbin` element returns from the Ready-to-Null
transition, meaning that the related resources are released.
[1]: gstreamer/gst-plugins-rs!1176 (0e6b9df9)
[2]: gstreamer/gst-plugins-rs!1176
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1222>
First off, we just created the session, so we know stats_sigid is None
at this point.
Second, don't first assign the result of connecting on-new-ssrc to the
field, then the result of connecting twcc-stats; that simply doesn't
make sense.
Finally, actually check that stats_sigid *is* None before connecting
twcc-stats; as I understand it, this must have been the original
intention / behavior.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1217>
`State::finalize_session()` asynchronously sets the Session pipeline to Null.
In some cases, a session's `webrtcbin` could complete its transition to Null
after `webrtcsink` had already reached Null.
This commit adds a set of `finalizing_sessions`. When the finalization process
starts, the session is added to the set. After `webrtcbin` has reached the Null
state, the session is removed from the set and a condvar is notified.
In `unprepare`, `webrtcsink` loops until the `finalizing_sessions` set is
empty, waiting for the condvar to be notified whenever it is not.
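A minimal sketch of that synchronisation using std primitives (names are hypothetical):
```rust
use std::collections::HashSet;
use std::sync::{Condvar, Mutex};

struct Finalizing {
    sessions: Mutex<HashSet<String>>,
    cvar: Condvar,
}

impl Finalizing {
    // Called from unprepare(): block until every session has reached Null.
    fn wait_all_done(&self) {
        let mut set = self.sessions.lock().unwrap();
        while !set.is_empty() {
            set = self.cvar.wait(set).unwrap();
        }
    }

    // Called once a session's webrtcbin has reached the Null state.
    fn session_done(&self, id: &str) {
        self.sessions.lock().unwrap().remove(id);
        self.cvar.notify_all();
    }
}
```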
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1221>
In some rare cases, the webrtc-test entered a deadlock while executing
`WebRTCSink::unprepare`. Attaching gdb to a blocked instance showed:
* `gstrswebrtc::signaller::imp::Signaller::stop()` parked, waiting for a
  `Condvar` in `Signaller::stop()`. This was most likely waiting for the
  receive task to complete while it was blocked in `element.end_session()`.
  This code path is triggered from `unprepare` with the `State` `Mutex` locked.
* `webrtcsink::imp::WebRtcSink::process_stats` waiting for a contended `Mutex`,
  which is also the `State` `Mutex`. This prevented completion of the signal
  `gst_webrtc_bin_get_stats`.
This commit aborts the task in charge of periodically collecting stats and
ensures any remaining iteration completes before requesting the Signaller to
stop.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1202>
Prior to this commit, we were sending over words concatenated together
with no separators, for instance "Idon'twanttobeanemperor".
The translation service seems clever enough to translate the contents
anyway, but there is no reason to make its task harder than necessary,
and it didn't re-add separators when the target language was the same as
the source language, which resulted in less than ideal output.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1171>
Session ending is bidirectional: the signaller can tell the sink that a
session was ended, and the sink can tell the signaller to end a session.
As such, two signals are needed, before this patch the second case was
not working as in essence the sink was telling itself that a session was
ended, and obviously failing to even find it when trying to end it again.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1167>
In order to support the use case of an external user providing their own
signalling mechanism, we want the signals to be used and, only if nothing
is connected, fall back to the default handling. Calling the interface
vtable directly would bypass the signal emission entirely.
Also ensure that the signals are defined properly for this case, i.e.:
1. Signals that the application/external code is expected to emit are
marked as action signals.
2. Add accumulators to avoid calling the default class handler if
another signal handler is connected.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1141>
This pattern is used for subclassing and calling parent class/interface functions.
However, that is not useful for the signaller object.
1. The signals are the API contract and should instead be used by
webrtcsrc/sink to ask the outside for information or provide it with information.
2. The default case (no signal attached) is instead handled by default class
handlers that call directly into the relevant Rust trait. No parent
(GObject) vfuncs necessary.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1141>
This subproject adds a high-level web API compatible with GStreamer
webrtcsrc and webrtcsink elements and the corresponding signaling
server. It allows perfect bidirectional communication between the HTML5
WebRTC API and native GStreamer.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/946>
In this context, the bitrate variable is for all encoders, but the
max_bitrate field is per encoder. To calculate a proper FEC ratio, we
need to scale max_bitrate to the number of encoders.
+ Also clamp the fec-percentage that we set on the transceiver for extra
safety
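Purely illustrative arithmetic for the scaling and clamping described above (not the element's exact formula; names are hypothetical):
```rust
fn fec_percentage(bitrate: u64, max_bitrate_per_encoder: u64, n_encoders: u64) -> u64 {
    // `bitrate` covers all encoders while `max_bitrate` is per encoder, so
    // scale the per-encoder maximum before deriving a ratio...
    let total_max = max_bitrate_per_encoder.saturating_mul(n_encoders.max(1)).max(1);
    let ratio = bitrate as f64 / total_max as f64;
    // ...and clamp what ends up on the transceiver for extra safety.
    ((ratio * 50.0).round() as u64).min(50)
}
```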
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1151>
* A queue dedicated to transcript items not intended for translation.
* A queue dedicated to transcript items intended for translation. The items are
enqueued after a separator is detected or translate-lookahead is reached (see the sketch below).
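A hypothetical shape for these two queues (the item type is a placeholder):
```rust
use std::collections::VecDeque;

struct TranscriptItem;

struct TranscriptQueues {
    // Items pushed through as-is, without translation.
    passthrough: VecDeque<TranscriptItem>,
    // Items waiting for a separator or for translate-lookahead to be reached
    // before being sent to the translation service.
    to_translate: VecDeque<TranscriptItem>,
}
```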
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1137>
This commit adds an optional experimental translation tokenization feature.
It can be activated using the `translation_src_%u` pads property
`tokenization-method`. For the moment, the feature is deactivated by default.
The Translate ws accepts '<span></span>' tags in the input and adds matching
tags in the output. When an 'id' is also provided as an attribute of the
'span', the matching output tag also uses this 'id'.
In the context of closed captions, the 'id's are of little use. However, we can
take advantage of the spans in the output to identify translation chunks, which
more or less reflect the rhythm of the input transcript.
This commit adds simple spans (no 'id') to the input Transcript Items and
parses the resulting spans in the translated output, assigning the timestamps
and durations sequentially from the input Transcript Items. Edge cases such as
the absence of spans or nested spans were observed and are handled here. Similarly,
mismatches between the number of input and output items are taken care of by
some sort of reconciliation.
Note that this is still experimental and requires further testing.
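A minimal illustration of the 'simple spans' idea (not the element's actual code):
```rust
// Wrap each Transcript Item in a plain <span> so the translation service
// echoes matching spans in its output, which can then be mapped back to the
// input items' timestamps and durations.
fn tokenize(items: &[&str]) -> String {
    items
        .iter()
        .map(|item| format!("<span>{item}</span>"))
        .collect::<Vec<_>>()
        .join(" ")
}
```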
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1109>
This commit adds an optional transcript translation feature implemented as
request src Pads.
When requesting a src Pad, the user can specify the translation language code
using the Pad property 'language-code'.
The following properties are defined on the Element:
- 'transcribe-latency': formerly 'latency', defines the expected latency for
the Transcribe webservice.
- 'translate-latency': defines the expected latency for the Translate
webservice.
- 'transcript-lookahead': maximum transcript duration to send to translation
when a transcript is hitting its deadline and no punctuation was found.
When the input and output languages are the same, only the 'transcribe-latency'
is used for the Pad. Otherwise, the resulting latency is the sum of
'transcribe-latency' and 'translate-latency'.
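A sketch of that latency rule (gst types; not the element's actual code):
```rust
use gst::ClockTime;

fn pad_latency(
    transcribe_latency: ClockTime,
    translate_latency: ClockTime,
    needs_translation: bool,
) -> ClockTime {
    if needs_translation {
        // Different input/output languages: both services are in the path.
        transcribe_latency + translate_latency
    } else {
        transcribe_latency
    }
}
```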
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1109>
This helps gather together the details related to the `TranscriberLoop`.
One difference from the previous implementation is that the ws `Client` is
built each time the loop is started instead of being reused. With the new
approach, we don't keep the connection open after EOS and we should be
more resilient in case of a connection failure.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1104>
Instead of sending transcription events to the src pad loop, this commit
enqueues the transcribed buffers immediately in the ws loop, then notifies
the src pad loop. The src pad loop is only in charge of dequeuing the buffers.
This should help with upcoming evolutions.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1104>
If creating a playlist or fragment stream fails (disk is full, the
directory is removed, ...), we will currently crash because the signal
handler expects a non-None GIOStream. The actual callback is allowed to
return None values and we handle this in the caller, so let's not have
this restriction on the signal handler.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1093>
For the uploaded object, the default content-type is set to binary/octet-stream,
which is correct.
Metadata cannot be used to set content-type and content-disposition, as
setting metadata adds an x-amz-meta prefix to the key,
e.g. setting the metadata "content-type=video/mp4" actually sets the key as
x-amz-meta-content-type. So these have to be separate properties.
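Illustrative use of the dedicated request fields with the aws-sdk-s3 crate (recent SDK layout assumed):
```rust
use aws_sdk_s3::{primitives::ByteStream, Client};

// content-type and content-disposition are first-class PutObject fields;
// anything passed as metadata would be stored under an x-amz-meta- key.
async fn upload(client: &Client, body: ByteStream) -> Result<(), aws_sdk_s3::Error> {
    client
        .put_object()
        .bucket("my-bucket")
        .key("video.mp4")
        .content_type("video/mp4")
        .content_disposition("inline")
        .body(body)
        .send()
        .await?;
    Ok(())
}
```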
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1085>
Commit ad3f1cf fixed the name of the hlssink child element to be the same
for hlssink2 and hlssink3. However, we rely on the element name to return
a boolean (for hlssink3) or None (for hlssink2) as the return
value of the delete-fragment closure.
Fix this by using the factory name instead of the element name.
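Illustrative check (the factory name stays distinct even when the instance names match):
```rust
use gst::prelude::*;

fn is_hlssink3(sink: &gst::Element) -> bool {
    sink.factory()
        .map_or(false, |factory| factory.name() == "hlssink3")
}
```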
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1076>
Simplifies state tracking and potentially reduces latency as it's not
necessary to wait until all fragments of an OBU are received.
The last OBU of a TU is marked with the marker flag to allow parsers to
detect this without first seeing the beginning of the next TU.
Also use a simple `Vec` for collecting complete OBUs instead of a
`gst_base::Adapter` as this reduces the number of allocations.
And also handle invalid packets a little bit more gracefully.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/issues/244
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1072>
We dispose of consumer pipelines asynchronously, potentially after the
session objects have been disposed of.
As session objects are the owner of the cc element, it is entirely
possible for the bwe-request signal to get emitted after cc has been
disposed of, as the closure only takes a weak reference to it.
Fix by simply checking if cc is None
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1044>
Without this, auto-pluggers such as decodebin or parsebin will be unable to
process AV1 RTP payloads.
Tested with: `videotestsrc num-buffers=50 ! videoconvert ! av1enc ! av1parse ! rtpav1pay ! queue ! decodebin3 ! videoconvert ! queue ! autovideosink`
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1034>
Right now the code manually pieces together the components
in a String for efficiency. When credentials contain special
characters this can result in invalid URLs, so do it the proper
way (with Url::parse + format) to make sure components are escaped
as needed.
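One way to do this with the url crate (illustrative; scheme and field names are examples only):
```rust
use url::Url;

// Credentials set through the Url API are percent-encoded automatically,
// so special characters no longer produce invalid URLs.
fn build_url(host: &str, user: &str, password: &str) -> Result<Url, url::ParseError> {
    let mut url = Url::parse(&format!("https://{host}/"))?;
    let _ = url.set_username(user);
    let _ = url.set_password(Some(password));
    Ok(url)
}
```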
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/949>
WHIP endpoint providers like Cloudflare do not support Trickle ICE
and need candidates to be sent along with the initial offer. Instead
of sending the offer in the create-offer promise, send it once the ICE
candidates have been gathered.
While at it, add properties to set the STUN and TURN servers along with the
ICE transport policy, as at least when testing, the Cloudflare WHIP
endpoint seems unreachable without them. This has also been observed
with the Cloudflare-provided demos.
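A sketch of waiting for gathering to complete before posting the offer (signal and property names are webrtcbin's; the surrounding logic is simplified):
```rust
use gst::prelude::*;

fn send_offer_when_gathered(webrtcbin: &gst::Element) {
    webrtcbin.connect_notify(Some("ice-gathering-state"), |webrtcbin, _pspec| {
        let state = webrtcbin
            .property::<gst_webrtc::WebRTCICEGatheringState>("ice-gathering-state");
        if state == gst_webrtc::WebRTCICEGatheringState::Complete {
            // The local description now contains all gathered candidates and
            // can be POSTed to the WHIP endpoint.
        }
    });
}
```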
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/949>
This implements the WHEP specification based on
https://datatracker.ietf.org/doc/html/draft-murillo-whep-00
and has been tested with Cloudflare.
Server offers are likely to be removed from the WHEP specification
in upcoming revisions to avoid compatibility issues. None of the
commercial services implementing WHEP support server-initiated offers,
so we only support client-initiated offers.
Follows session setup and tear down as covered in Figure 1, Section 3
of the specification.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/949>
error[E0425]: cannot find value `LIBRARY_NAME` in this scope
--> net/ndi/src/ndisys.rs:336:23
|
336 | path.push(LIBRARY_NAME);
| ^^^^^^^^^^^^ not found in this scope
error[E0425]: cannot find value `LIBRARY_NAME` in this scope
--> net/ndi/src/ndisys.rs:339:33
|
339 | path::PathBuf::from(LIBRARY_NAME)
| ^^^^^^^^^^^^ not found in this scope
Commit 24b7cfc8 applied changes related to nullability as declared
by gir. One consequence was that some function signatures ended up
requiring users to pass `Some(val)` when they could use `val`
before.
This commit applies changes from `gstreamer-rs` which, while honoring
the nullability, still allow users to pass `val` for the few affected
functions.
This commit also fixes the signature for `Element::request_new_pad`
which was updated upstream.
And use a `Vec` plus offset for consuming partial byte buffers.
Removing the first element from a `Vec` repeatedly is not very cheap.
Also simplify calculation of the current packet by removing a mostly
unused type and keeping track of the calculations locally instead
of sometimes storing them in the element state.
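A minimal illustration of the Vec-plus-offset pattern (not the element's actual types):
```rust
struct PendingBytes {
    buf: Vec<u8>,
    pos: usize,
}

impl PendingBytes {
    // Consume up to `n` bytes by advancing the offset instead of repeatedly
    // removing from the front of the Vec, which is O(n) per removal.
    fn take(&mut self, n: usize) -> &[u8] {
        let end = (self.pos + n).min(self.buf.len());
        let slice = &self.buf[self.pos..end];
        self.pos = end;
        slice
    }

    // Occasionally drop the already-consumed prefix in one go.
    fn compact(&mut self) {
        self.buf.drain(..self.pos);
        self.pos = 0;
    }
}
```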
- NDI HX Camera Android in the past used 1ns instead of 100ns as unit
for timecodes/timestamps.
- NDI HX Camera iOS uses 0 for all timecodes and the same non-zero
value for all audio timestamps
Detect such situations and try to compensate for them. Also add a new
"auto" timestamping mode that prefers to use timecodes and otherwise
falls back to timestamps or receive times.
Fixes https://github.com/teltek/gst-plugin-ndi/issues/79
Audio/video are in practice not always from the same clock and can have
different behaviours with regards to clock rate and jitter. Handling
them separately generally gives better results for the timestamps output
by the source element.
This is no longer available as this could lead to building a defined
value in Rust which could be interpreted as undefined in C due to
the sentinel `u64::MAX` for `None`.
Use the constants (e.g. `ONE`, `K`, `M`, ...) and operations to build
a value and deref (`*`) to get the quantity as an integer.
Always first try draining queued data in the loop and only start waiting
if there's nothing to drain right now. Otherwise, data might need to be
drained right now but we would still wait, and nothing would ever wake up
the source pad task again.
Also make sure not to wait multiple times on the same gst::ClockId, but
instead unset it after waiting on it if no new one was scheduled in the
meantime. Future waits on the same ClockId would return immediately;
instead, we should wait on the condvar if no new ClockId is available.
When call_timeout is triggered, the request will fail
irrespective of the retry setting: call_timeout defines the
maximum time the request can take, including retries.
This can be solved by either setting call_timeout to
retry * call_attempt_timeout, or by not setting call_timeout at all.
As per the thread below, the call_attempt_timeout and retry settings are enough.
https://github.com/awslabs/aws-sdk-rust/issues/558
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/1410
Created a new plugin 'webrtchttp' to implement all the
WebRTC HTTP protocols under the /net/webrtc-http directory.
WhipSink wraps around 'webrtcbin' with HTTP capabilities
to exchange the SDP offer/answer so an ICE/DTLS session can
be established between the encoder/media producer (WHIP client)
and the broadcasting ingestion endpoint (Media Server).
Once the ICE/DTLS session is set up, the media will
flow unidirectionally from the WHIP client to the
broadcasting ingestion endpoint (Media Server).
Spec:
https://www.ietf.org/archive/id/draft-ietf-wish-whip-04.html
The encoding of ONVIF metadata is always UTF-8. ONVIF metadata may
or may not be encoded with gzip, but we don't see a use case for
transporting compressed ONVIF metadata between elements for now.
The aggregator was consuming meta buffers too greedily, causing
potential interleaving starvation upstream. Refactor to consume
media and meta buffers synchronously
Also expect parsed=true metadata caps (requiring an upstream
onvifmetadataparse element).
warning: you are deriving `PartialEq` and can implement `Eq`
--> net/raptorq/src/fecscheme.rs:13:24
|
13 | #[derive(Clone, Debug, PartialEq)]
| ^^^^^^^^^ help: consider deriving `Eq` as well: `PartialEq, Eq`
|
= note: `#[warn(clippy::derive_partial_eq_without_eq)]` on by default
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#derive_partial_eq_without_eq
warning: you are deriving `PartialEq` and can implement `Eq`
--> net/raptorq/src/fecscheme.rs:38:24
|
38 | #[derive(Clone, Debug, PartialEq)]
| ^^^^^^^^^ help: consider deriving `Eq` as well: `PartialEq, Eq`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#derive_partial_eq_without_eq
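The fix is to derive Eq alongside PartialEq on the types clippy points at, e.g.:
```rust
#[derive(Clone, Debug, PartialEq, Eq)]
struct FecSchemeId(u8); // placeholder; the real types live in fecscheme.rs
```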
A regression was introduced during the migration to AWS SDK. One used
to be able to provide credentials in multiple ways with the earlier
Rusoto ChainProvider (config file / environment variables). Now one
has to explicitly set the properties.
Use the DefaultCredentialsChain from AWS SDK to restore the previous
functionality.
See
https://docs.rs/aws-config/0.46.0/aws_config/default_provider/credentials/struct.DefaultCredentialsChain.html.
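A sketch following the linked documentation (not the element's exact code):
```rust
use aws_config::default_provider::credentials::DefaultCredentialsChain;
use aws_types::region::Region;

// The default chain checks explicit credentials, environment variables,
// shared config/credentials files, etc., much like Rusoto's ChainProvider.
async fn credentials_provider(region: Option<Region>) -> DefaultCredentialsChain {
    let mut builder = DefaultCredentialsChain::builder();
    if let Some(region) = region {
        builder = builder.region(region);
    }
    builder.build().await
}
```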
Allow specifying an endpoint to be used for S3 requests. This makes
it possible to use integrations providing object storage based on S3
API like MinIO.
When the endpoint-uri property is specified, the endpoint resolver to
use will be overridden when making S3 requests.
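Illustrative only (recent aws-sdk-s3 releases also expose endpoint_url; the element overrides the endpoint resolver, with the same effect):
```rust
async fn client_for_endpoint(endpoint_uri: &str) -> aws_sdk_s3::Client {
    let shared_config = aws_config::load_from_env().await;
    let s3_config = aws_sdk_s3::config::Builder::from(&shared_config)
        .endpoint_url(endpoint_uri) // e.g. "http://127.0.0.1:9000" for MinIO
        .build();
    aws_sdk_s3::Client::from_conf(s3_config)
}
```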
Ideally, when a player encounters PLAYLIST-TYPE VOD, it should
not reload the playlist. For playlist-type=vod, we initially put
PLAYLIST-TYPE=EVENT and later change it to VOD, which confuses some
players, so we should put ENDLIST here.
In any case, putting ENDLIST is the right thing to do to indicate that no new
segment will be added to the playlist.
In the current implementation, EXT-X-ENDLIST is never set for any playlist-type.
After calling playlist.stop(), write_final_playlist() is called.
In the final playlist write, no segment is added but update_playlist is called,
and update_playlist resets end_list again, so the plugin never puts ENDLIST.
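Illustrative, with m3u8-rs types: the final write should carry both the VOD type and the end-list flag.
```rust
use m3u8_rs::{MediaPlaylist, MediaPlaylistType};

fn vod_playlist() -> MediaPlaylist {
    MediaPlaylist {
        playlist_type: Some(MediaPlaylistType::Vod),
        end_list: true, // written out as #EXT-X-ENDLIST
        ..Default::default()
    }
}
```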
STS provides temporary credentials to access AWS resources. Temporary
credentials include an AccessKeyId, a SecretAccessKey and a SessionToken.
With the session-token property, the element will be able to use temporary
credentials. When session-token is not set, the element can use long-term
credentials.
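A sketch of what temporary credentials look like on the SDK side (crate layout of recent SDKs assumed; property plumbing elided):
```rust
use aws_credential_types::Credentials;

// STS-issued credentials carry a session token next to the key pair;
// long-term credentials simply pass `None`.
fn static_credentials(key_id: &str, secret: &str, session_token: Option<String>) -> Credentials {
    Credentials::new(key_id, secret, session_token, None, "gst-aws-static")
}
```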
Keeping the upload-part-retry-duration & complete-upload-retry-duration
properties changes the semantics in comparison to their usage in
rusotos3sink. Deprecate these two properties and add a warning while
making them noop.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/759>