The goal is to be able to get back the original buffer after
performing analysis on a transformed version, and then put the
various GstMeta back on the original buffer.
An example pipeline would be:
.. ! originalbuffersave ! videoscale ! analysis ! originalbufferrestore ! draw_overlay ! sink
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1428>
GST_PLUGIN_FEATURE_RANK=rtspsrc2:1 gst-play-1.0 [URI]
Features:
* Live streaming of N audio and N video streams
- With RTCP-based A/V sync
* Lower transports: TCP, UDP, UDP-Multicast
* RTP, RTCP SR, RTCP RR
* OPTIONS, DESCRIBE, SETUP, PLAY, TEARDOWN
* Custom UDP socket management, does not use udpsrc/udpsink
* Supports both rtpbin and the rtpbin2 Rust rewrite
- Set USE_RTPBIN2=1 to use rtpbin2 (needs other MRs)
* Properties:
- protocols selection and priority (NEW!)
- location supports rtsp[ut]://
- port-start instead of port-range
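A direct launch line might look like the following sketch; the URI is a
placeholder, and linking through decodebin is just one of several ways
to consume the element's pads:
gst-launch-1.0 rtspsrc2 location=rtsp://127.0.0.1:8554/test ! decodebin ! autovideosink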
Co-Authored-by: Tim-Philipp Müller <tim@centricular.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1425>
This new plugin exposes two elements, intersink and intersrc. These act
as wormholes for data in the same process and can be used to forward
data from one pipeline to another.
The implementation makes use of gstreamer-utils' StreamProducer, and
supports dynamically adding and removing consumers, before and after
producers, and changing producer names while PLAYING, both on the sink
and the src.
This initial implementation comes with a small demo and a few tests.
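As a rough sketch, assuming both elements are tied together by a
matching producer-name property (property name assumed here), two
pipelines in the same process could be connected like this:
gst-launch-1.0 videotestsrc is-live=true ! intersink producer-name=demo
gst-launch-1.0 intersrc producer-name=demo ! queue ! autovideosink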
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1257>
This element attempts to produce a (nearly) gapless live stream by
synchronizing its output to the running time and forwarding the next
input buffer if its start is (nearly) flush with the end of the last
output buffer.
If the input buffer is missing or too far in the future, it duplicates
the last output buffer with adjusted timestamps. If it is operating on a
raw audio stream, it will fill duplicate buffers with silence.
If an input buffer arrives too late, it is thrown away. If the last
input buffer was accepted too long ago (according to `late-threshold`),
a late input buffer is accepted anyway, but immediately considered a
duplicate. Due to the silence-filling, this has no effect on audio, but
video gets a "slideshow" effect instead of freezing completely.
The "many-repeats" property will be notified when this element has
recently duplicated a lot of buffers or recovered from such a state.
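For illustration only, assuming the element described here is exposed
as livesync (the name is not spelled out above), a live pipeline could
insert it right before the sink:
gst-launch-1.0 videotestsrc is-live=true ! livesync ! autovideosink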
Co-authored-by: Vivia Nikolaidou <vivia@ahiru.eu>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/708>
hlssink can be built by default because it has no dependencies.
tutorial and rsfile should not be built by default because they are
not very useful, and flavors should not be built by default because
it's very incomplete.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/1410
Created a new plugin 'webrtchttp' to implement all the
WebRTC HTTP protocols under the /net/webrtc-http directory.
WhipSink wraps around 'webrtcbin' with HTTP capabilities
to exchange the SDP offer/answer so an ICE/DTLS session can
be established between the encoder/media producer (WHIP client)
and the broadcasting ingestion endpoint (Media Server).
Once the ICE/DTLS session is set up, the media will
flow unidirectionally from the WHIP client to the
broadcasting ingestion endpoint (Media Server).
Spec:
https://www.ietf.org/archive/id/draft-ietf-wish-whip-04.html
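A hedged usage sketch, assuming the element is exposed as whipsink with
a whip-endpoint property and consumes RTP-payloaded streams (the
endpoint URL is a placeholder and explicit RTP caps may be required):
gst-launch-1.0 videotestsrc ! videoconvert ! vp8enc ! rtpvp8pay ! whipsink whip-endpoint=https://example.com/whip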
This plugin takes I420 YUV input and appends an alpha plane to produce
YUVA/A420 output, rounding the corners analogously to border-radius in
CSS. Other video formats like NV12 are not supported yet. Support for
other planar formats will follow.
Not all ways of specifying border-radius as in CSS are implemented at
the moment. Currently, we only support specifying it in pixels and it
gets applied uniformly to all corners.
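As an illustrative sketch, assuming the element is registered as
roundedcorners and the radius is set through a border-radius-px
property (both names assumed):
gst-launch-1.0 videotestsrc ! video/x-raw,format=I420 ! roundedcorners border-radius-px=20 ! videoconvert ! autovideosink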
The element expects an array of "commands", as GstStructures,
in the form:
operation, pattern=<pattern>, ...
The only operation implemented for now is replace-all, eg:
replace-all, pattern=foo, replacement=bar
Other operations can be implemented if useful in the future,
eg. "match" could post a message to the bus when the pattern
is encountered.
The main use case for this is automatic speech recognition, as
implemented by e.g. awstranscribe, where users may want to replace
swear words with tamer language.
Commands are applied in order.
The interface is usable through the CLI with the usual escaping
strategies, though trying to pass in actual regular expressions
through it is a bit tricky, as this introduces yet another
level of escaping.
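For example, assuming the element is exposed as regex and the array is
passed through a commands property (the element name is assumed, and
the exact shell quoting may need adjusting):
gst-launch-1.0 .. ! regex commands='<"replace-all, pattern=foo, replacement=bar">' ! ..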
This new crate consists of two elements, jsongstenc and jsongstparse.
Both elements can deal with an ndjson-based format, consisting for now
of two item types, "Buffer" and "Header", e.g.:
{"Header":{"format":"foobar"}}
{"Buffer":{"pts":0,"duration":43,"data":{"foo":"bar"}}}
jsongstparse will interpret this by first sending caps
application/x-json, format=foobar, then a buffer containing
{"foo":"bar"}, timestamped as required.
Elements further downstream can then interpret the data further.
jsongstenc will simply perform the reverse operation.
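A quick way to inspect such a file (the filename is a placeholder):
gst-launch-1.0 filesrc location=data.ndjson ! jsongstparse ! fakesink dump=true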
This should make navigating the tree a little easier to start with,
and we can then move on to allowing specific groups of plugins to be
built as well.
The plugins are moved into the following hierarchy:
audio
  / gst-plugin-audiofx
  / gst-plugin-claxon
  / gst-plugin-csound
  / gst-plugin-lewton
generic
  / gst-plugin-file
  / gst-plugin-sodium
  / gst-plugin-threadshare
net
  / gst-plugin-reqwest
  / gst-plugin-rusoto
utils
  / gst-plugin-fallbackswitch
  / gst-plugin-togglerecord
video
  / gst-plugin-cdg
  / gst-plugin-closedcaption
  / gst-plugin-dav1d
  / gst-plugin-flv
  / gst-plugin-gif
  / gst-plugin-rav1e
gst-plugin-tutorial
gst-plugin-version-helper
- Implemented a simple GIF encoder based on the Rust crate "gif".
- Currently supported input pixel formats are RGB and RGBA
- The encoder dynamically changes frame delays to approximate the actual
input framerate
- For the moment, each frame uses its own local color palette, leading
  to good image quality, but big files
- Every frame is currently a full frame. No incremental frames for now
- The produced GIF is currently compressed (LZW)
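As a quick sketch, assuming the encoder is exposed as gifenc (element
name assumed) and fed RGB video:
gst-launch-1.0 videotestsrc num-buffers=50 ! video/x-raw,format=RGB,framerate=10/1 ! gifenc ! filesink location=test.gif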