It attempts to produce a (nearly) gapless live stream by synchronizing
its output to the running time and forwarding the next input buffer if
its start is (nearly) flush with the end of the last output buffer.
If the input buffer is missing or too far in the future, it duplicates
the last output buffer with adjusted timestamps. If it is operating on a
raw audio stream, it will fill duplicate buffers with silence.
If an input buffer arrives too late, it is thrown away. If the last
input buffer was accepted too long ago (according to `late-threshold`),
a late input buffer is accepted anyway, but immediately considered a
duplicate. Due to the silence-filling, this has no effect on audio, but
video gets a "slideshow" effect instead of freezing completely.
The "many-repeats" property will be notified when this element has
recently duplicated a lot of buffers or recovered from such a state.
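A minimal test pipeline sketch; the element's factory name is not stated above, so "livesync" is an assumption here, and only late-threshold and many-repeats are documented properties:
```
# Assumption: the element factory is registered as "livesync".
gst-launch-1.0 audiotestsrc is-live=true ! audio/x-raw,rate=48000 ! livesync ! autoaudiosink
```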
Co-authored-by: Vivia Nikolaidou <vivia@ahiru.eu>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1017>
hlssink can be built by default because it has no dependencies.
tutorial and rsfile should not be built by default because they are
not very useful, and flavors should not be built by default because
it's very incomplete.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/1410
Created a new plugin 'webrtchttp' to implement all the
WebRTC HTTP protocols under the /net/webrtc-http directory.
WhipSink wraps around 'webrtcbin' with HTTP capabilities
to exchange SDP offer/answer so an ICE/DTLS session can
be established between the encoder/media producer (WHIP client)
and the broadcasting ingestion endpoint (Media Server).
Once the ICE/DTLS session is set up, the media will
flow unidirectionally from the WHIP client to the
broadcasting ingestion endpoint (Media Server).
Spec:
https://www.ietf.org/archive/id/draft-ietf-wish-whip-04.html
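A hypothetical usage sketch, assuming the element factory is named whipsink and that the WHIP endpoint URL is set through a whip-endpoint property (neither name is stated above):
```
gst-launch-1.0 videotestsrc is-live=true ! videoconvert ! vp8enc deadline=1 ! rtpvp8pay ! \
  whipsink whip-endpoint=https://example.com/whip/room1
```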
This plugin takes I420/YUV and appends an alpha plane to give YUVA/A420,
rounding the corners analogously to border-radius in CSS. Other video
formats like NV12 are not supported yet. Support for other planar formats
will follow.
Not all ways of specifying border-radius as in CSS are implemented at
the moment. Currently, we only support specifying it in pixels and it
gets applied uniformly to all corners.
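A rough pipeline sketch, assuming the element factory is named roundedcorners and the uniform pixel radius is exposed as a border-radius-px property (both names are assumptions):
```
gst-launch-1.0 videotestsrc ! video/x-raw,format=I420 ! roundedcorners border-radius-px=32 ! \
  videoconvert ! autovideosink
```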
The element expects an array of "commands", as GstStructures,
in the form:
operation, pattern=<pattern>, ...
The only operation implemented for now is replace-all, eg:
replace-all, pattern=foo, replacement=bar
Other operations can be implemented if useful in the future,
eg. "match" could post a message to the bus when the pattern
is encountered.
The main use case for this is automatic speech recognition,
as implemented by e.g. awstranscribe, where users may want to
replace swear words with tamer language.
Commands are applied in order.
The interface is usable through the CLI with the usual escaping
strategies, though trying to pass in actual regular expressions
through it is a bit tricky, as this introduces yet another
level of escaping.
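A sketch of passing a command from the CLI, assuming the element factory is named regex and the commands are set through a commands property holding an array of structures (both assumptions); the exact quoting may need adjusting for your shell:
```
gst-launch-1.0 filesrc location=subtitles.srt ! subparse ! \
  regex commands='< "replace-all, pattern=foo, replacement=bar" >' ! \
  fakesink dump=true
```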
This new crate consists of two elements, jsongstenc and jsongstparse.
Both these elements can deal with an ndjson-based format, consisting
for now of two item types: "Buffer" and "Header"
eg:
{"Header":{"format":"foobar"}}
{"Buffer":{"pts":0,"duration":43,"data":{"foo":"bar"}}}
jsongstparse will interpret this by first sending caps
application/x-json, format=foobar, then a buffer containing
{"foo":"bar"}, timestamped as required.
Elements further downstream can then interpret the data further.
jsongstenc will simply perform the reverse operation.
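For example, parsing an ndjson file such as the one above (assuming jsongstparse can be fed directly from filesrc):
```
gst-launch-1.0 filesrc location=input.ndjson ! jsongstparse ! fakesink dump=true
```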
This should make navigating the tree a little easier to start
with, and we can then move to allowing building specific groups of
plugins as well.
The plugins are moved into the following hierarchy:
audio
  / gst-plugin-audiofx
  / gst-plugin-claxon
  / gst-plugin-csound
  / gst-plugin-lewton
generic
  / gst-plugin-file
  / gst-plugin-sodium
  / gst-plugin-threadshare
net
  / gst-plugin-reqwest
  / gst-plugin-rusoto
utils
  / gst-plugin-fallbackswitch
  / gst-plugin-togglerecord
video
  / gst-plugin-cdg
  / gst-plugin-closedcaption
  / gst-plugin-dav1d
  / gst-plugin-flv
  / gst-plugin-gif
  / gst-plugin-rav1e
gst-plugin-tutorial
gst-plugin-version-helper
- Implemented a simple gif encoder based on the rust crate "gif".
- Currently supported input pixel formats are RGB and RGBA
- The encoder dynamically changes frame delays to approximate the actual
input framerate
- For the moment, each frame uses its own local color palette, leading to
good image quality, but big files
- Every frame is currently a full frame. No incremental frames for now
- The produced GIF is currently compressed (LZW)
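A possible test pipeline, assuming the element factory is named gifenc (the factory name is not stated above):
```
gst-launch-1.0 videotestsrc num-buffers=50 ! video/x-raw,format=RGB,framerate=10/1 ! \
  gifenc ! filesink location=test.gif
```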
Audio decoder element structure is based on `gst-plugin-lewton` (a lewton-based Vorbis decoder created by @slomo).
The element assumes correctly parsed input from `flacparse`.
Implementation details:
* Frames returned by Claxon do not contain interleaved audio channels; the reorganization of the channels is done by the element.
* Claxon output buffers are always Vec<i32>; mapping to the correct type (depending on the audio format) is also done by the element.
Mono S16 and stereo S32 tests are provided.
Complete pipelines to test are:
```
gst-launch-1.0 -v souphttpsrc location=https://archive.org/download/MLKDream/MLKDream.flac ! queue2 ! flacparse ! flacdec ! audioconvert ! autoaudiosink
gst-launch-1.0 -v audiotestsrc ! audio/x-raw, format=S16LE, layout=interleaved, rate=44100, channels=1 ! audioconvert ! flacenc ! flacparse ! claxondec ! autoaudiosink
```
Fixes #71
Allows having a live input stream and falling back to another input
stream after a configurable timeout without any buffers received on the
main input.
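A rough sketch of such a setup, assuming the element factory is named fallbackswitch with a main sink pad, a fallback_sink pad and a timeout property in nanoseconds (pad and property names are assumptions):
```
gst-launch-1.0 fallbackswitch name=s timeout=3000000000 ! videoconvert ! autovideosink \
  udpsrc port=5004 caps="application/x-rtp,media=video,encoding-name=VP8,clock-rate=90000,payload=96" ! \
    rtpvp8depay ! vp8dec ! s.sink \
  videotestsrc is-live=true pattern=snow ! s.fallback_sink
```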