Compare commits


201 commits
0.12.2 ... main

Author SHA1 Message Date
Sebastian Dröge 0215339c5a Update Cargo.lock
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1576>
2024-05-17 07:50:51 +00:00
Sebastian Dröge 539000574b aws: Update to base32 0.5
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1576>
2024-05-17 07:50:51 +00:00
Robert Ayrapetyan bac5845be1 webrtc: add support for insecure tls connections
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1553>
2024-05-16 19:34:57 +00:00
Mathieu Duponchelle c282bc1bca examples/dash_vod: compare durations to the millisecond
Otherwise when the segment durations aren't as clean cut as in the
example, multiple segments with the exact same duration in milliseconds
will get output, even though they could have been repeated.

Fix this so that people copying this code don't encounter the bug.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1574>
2024-05-15 06:28:14 +00:00
Martin Nordholts 9a7f37e2b7 rtpgccbwe: Support linear regression based delay estimation
In our tests, the slope (found with linear regression) on a
history of the (smoothed) accumulated inter-group delays
gives more stable congestion control. In particular,
low-end devices become less sensitive to spikes in
inter-group delay measurements.

This flavour of delay-based bandwidth estimation with Google
Congestion Control is also what Chromium uses.

To make it easy to experiment with the new estimator, as
well as add support for new ones in the future, also add
infrastructure for making delay estimator flavour selectable
at runtime.
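
As a rough illustration of the technique (a sketch, not the element's actual code), the delay trend can be computed as the least-squares slope over a history of (arrival time, smoothed accumulated inter-group delay) samples; all names here are illustrative:

```rust
/// Least-squares slope of the delay samples over time.
fn regression_slope(samples: &[(f64, f64)]) -> f64 {
    let n = samples.len() as f64;
    let mean_x = samples.iter().map(|(x, _)| x).sum::<f64>() / n;
    let mean_y = samples.iter().map(|(_, y)| y).sum::<f64>() / n;
    let num: f64 = samples
        .iter()
        .map(|(x, y)| (x - mean_x) * (y - mean_y))
        .sum();
    let den: f64 = samples.iter().map(|(x, _)| (x - mean_x).powi(2)).sum();
    // A persistently positive slope means inter-group delays keep
    // growing, i.e. the path is likely congested.
    if den == 0.0 {
        0.0
    } else {
        num / den
    }
}
```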

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1566>
2024-05-14 16:25:48 +00:00
Martin Nordholts 71e9c2bb04 rtpgccbwe: Also log self.measure in overuse_filter()
Also log `self.measure` in overuse_filter() since tracking
`self.measure` over time helps a lot in making sense of
`self.estimate` (and `amplified_estimate`).

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1566>
2024-05-14 16:25:48 +00:00
Martin Nordholts d9aa0731f4 rtpgccbwe: Rename variable t to amplified_estimate
We normally multiply `self.estimate` with `MAX_DELTAS` (60).
Rename the variable that holds the result of this
calculation to `amplified_estimate` to make the distinction
clearer.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1566>
2024-05-14 16:25:48 +00:00
Sebastian Dröge 49d3dd17a2 gtk4: Clean up Python example
It's now more or less equivalent to the Rust example.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1573>
2024-05-13 10:06:32 +03:00
Tamas Levai 71cd80f204 net/quinn: Enable client to keep QUIC conn alive
Co-authored-by: Felician Nemeth <nemethf@tmit.bme.hu>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1568>
2024-05-11 08:51:00 +02:00
Rafael Caricio 5549dc7a15 fmp4mux: Support AV1 packaging in the fragmented mp4 plugin
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1544>
2024-05-10 20:59:49 +00:00
Sebastian Dröge 613ed56675 webrtcsink: Add a custom signaller example in Python
This re-implements the default webrtcsink/src signalling protocol in
Python for demonstration purposes.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1569>
2024-05-10 15:59:12 +00:00
Martin Nordholts a719cbfcc6 rtp: Change RtpBasePay2::ssrc_collision from AtomicU64 to Option<u32>
Rust targets without support for `AtomicU64` are still
somewhat common. Running

    git grep -i 'max_atomic_width: Some(32)' | wc -l

in the Rust compiler repo currently counts to 34 targets.

Change the `RtpBasePay2::ssrc_collision` from `AtomicU64` to
`Mutex<Option<u32>>`. This way we keep support for these
targets.
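
A minimal sketch of the portable pattern (type and method names are illustrative, not the element's actual API):

```rust
use std::sync::Mutex;

#[derive(Default)]
struct PayloaderState {
    // Instead of packing "no value" plus a 32-bit SSRC into an
    // AtomicU64, keep the Option behind a Mutex so that 32-bit
    // targets without AtomicU64 remain supported.
    ssrc_collision: Mutex<Option<u32>>,
}

impl PayloaderState {
    fn record_collision(&self, ssrc: u32) {
        *self.ssrc_collision.lock().unwrap() = Some(ssrc);
    }

    fn take_collision(&self) -> Option<u32> {
        self.ssrc_collision.lock().unwrap().take()
    }
}
```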

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1562>
2024-05-10 14:23:41 +00:00
Rafael Caricio 7d75e263f8 fmp4mux: Add language from tags
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1557>
2024-05-10 13:01:58 +00:00
Martin Nordholts aabb011f5a rtpgccbwe: Log effective bitrate in more places
While monitoring and debugging rtpgccbwe, it is very helpful
to get continuous values of what it considers the effective
bitrate. Right now such prints will stop coming once the
algorithm stabilizes. Print it in more places so it keeps
coming. Use the same format to make it simpler to extract
the values by parsing the logs.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1567>
2024-05-10 11:56:51 +00:00
Martin Nordholts e845e3575c rtpgccbwe: Add missing 'ps' suffix to 'kbps' of 'effective bitrate'
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1567>
2024-05-10 11:56:51 +00:00
Sebastian Dröge f265c3197b Update plugins cache JSON for new CI GStreamer version
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1570>
2024-05-10 14:14:51 +03:00
Sebastian Dröge e8e173d0d0 webrtc: Update Signallable interface to new interface definition API
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1570>
2024-05-10 14:13:55 +03:00
Sebastian Dröge f842aff6df Update Cargo.lock
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1570>
2024-05-10 14:09:27 +03:00
Sebastian Dröge 7e09481adc rtp: Add JPEG RTP payloader/depayloader
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1543>
2024-05-10 11:12:49 +03:00
Sebastian Dröge 1b48fb7ae7 deny: Re-add rustls override because the AWS SDK still uses an old version 2024-05-10 09:41:19 +03:00
Sanchayan Maity fe55acb4c9 net/hlssink3: Refactor out HlsBaseSink & hlscmafsink from hlssink3
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1564>
2024-05-09 21:50:32 +05:30
Tamas Levai fe3607bd14 net/quinn: Remove dependency locks
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1565>
2024-05-09 16:45:38 +02:00
Tamas Levai 5884c00bd0 net/quinn: Improve stream shutdown process
Co-authored-by: Sanchayan Maity <sanchayan@asymptotic.io>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1565>
2024-05-09 16:43:26 +02:00
Tamas Levai 13c3db7857 net/quinn: Port to quinn 0.11 and rustls 0.23
Co-authored-by: Felician Nemeth <nemethf@tmit.bme.hu>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1565>
2024-05-09 13:49:33 +02:00
Martin Nordholts 2b7488a4c8 rtpgccbwe: Log delay and loss target bitrates separately
When debugging rtpgccbwe it is helpful to know whether it is
the delay-based or the loss-based bandwidth estimation that puts a
bound on the current target bitrate, so add logs for that.

To minimize the time we need to hold the state lock, perform
the logging after we have released the state lock.
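
The pattern, as a minimal sketch with illustrative field names:

```rust
use std::sync::Mutex;

struct State {
    delay_target_bitrate: u64,
    loss_target_bitrate: u64,
}

fn log_targets(state: &Mutex<State>) {
    // Copy the values out first so the guard is dropped before logging.
    let (delay_target, loss_target) = {
        let state = state.lock().unwrap();
        (state.delay_target_bitrate, state.loss_target_bitrate)
    };
    println!("delay target: {delay_target} bps, loss target: {loss_target} bps");
}
```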

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1561>
2024-05-08 19:12:44 +00:00
Sebastian Dröge b4576a0074 gtk4: Fix description of the plugin
A paintable is not a widget and that aspect does not belong in the short
description anyway.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1563>
2024-05-07 20:21:03 +03:00
Mathieu Duponchelle 8861fc493b webrtcsink: improve error when no discovery pipeline runs
If for instance no encoder was found or the RTP plugin was missing,
it is possible that no discovery pipeline will run for a given stream.

Provide a more helpful error message for that case.

Fixes: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/issues/534
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1560>
2024-05-06 11:39:48 +00:00
Sanchayan Maity 2bfb6ee016 Add quinn to default-members
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1558>
2024-05-02 16:39:29 +00:00
Sanchayan Maity edd7c258c8 Add quinn plugin to README
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1558>
2024-05-02 16:39:29 +00:00
Sanchayan Maity 3a3cec96ff net/quinn: Add pipeline example
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1558>
2024-05-02 16:39:29 +00:00
Sanchayan Maity 80f8664564 net/quinn: Use camel case acronym
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1558>
2024-05-02 16:39:29 +00:00
Sebastian Dröge be3ae583bc Fix new Rust 1.78 clippy warnings
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1559>
2024-05-02 18:36:23 +03:00
Sebastian Dröge 58106a42a9 quinn: Fix up dependencies 2024-05-02 09:59:55 +03:00
Sanchayan Maity 096538989b docs: Add documentation for gst-plugin-quinn
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1036>
2024-05-01 22:30:23 +05:30
Sanchayan Maity 150ad7a545 net/quinn: Use separate property for certificate & private key file
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1036>
2024-05-01 22:30:23 +05:30
Sanchayan Maity 0d2f054c15 Move net/quic to net/quinn
While at it, add this to meson.build.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1036>
2024-05-01 22:30:23 +05:30
Sanchayan Maity 18cf5292b7 net/quic: Fix inconsistencies around secure connection handling
This set of changes implements the below fixes:

- Allow certificates to be specified for client/quicsink
- Secure connection being true on server/quicsrc and false on
  client/quicsink still resulted in a successful connection
  instead of the server rejecting the connection
- Using secure connection with ALPN was not working

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1036>
2024-05-01 18:09:16 +05:30
Sanchayan Maity 97d8a79d36 net/quic: Drop private key type property
Use the read_all helper from rustls_pemfile and drop the requirement
for the user to specify the private key type.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1036>
2024-05-01 18:09:16 +05:30
Sanchayan Maity a306b1ce94 net/quic: Use a custom ALPN string
`h3` does not make sense as the default ALPN, as there likely isn't
going to be an HTTP/3 application layer, especially as our transport
is unidirectional for now. Use a custom string `gst-quinn` for now.
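
For reference, this is roughly how an ALPN string such as `gst-quinn` is configured at the rustls level (a sketch against a recent rustls; certificate verification setup omitted):

```rust
let roots = rustls::RootCertStore::empty();
let mut config = rustls::ClientConfig::builder()
    .with_root_certificates(roots)
    .with_no_client_auth();
// ALPN protocols are raw byte strings; "h3" would sit here for HTTP/3.
config.alpn_protocols = vec![b"gst-quinn".to_vec()];
```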

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1036>
2024-05-01 18:09:16 +05:30
Sanchayan Maity 22c6a98914 net/quic: Rename to quinnquicsink/src
There might be other QUIC elements in the future based on other
libraries. To prevent namespace collisions, namespace the elements
with a `quinn` prefix.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1036>
2024-05-01 18:09:16 +05:30
Sanchayan Maity 8b64c734e7 net/quic: Use separate property for address and port
While at it, do not duplicate the settings lock call in the property
getter and setter for every property.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1036>
2024-05-01 18:01:49 +05:30
Tamas Levai befd8d4bd2 net/quic: Allow SSL keylog file for debugging
rustls has a KeyLog implementation that opens a file whose name is
given by the `SSLKEYLOGFILE` environment variable, and writes keys
into it. If SSLKEYLOGFILE is not set, this does nothing.

See
https://docs.rs/rustls/latest/rustls/struct.KeyLogFile.html
https://docs.rs/rustls/latest/rustls/trait.KeyLog.html
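
Given a `rustls::ClientConfig` like the one sketched earlier, enabling it is a one-liner (sketch):

```rust
use std::sync::Arc;

// Write TLS session keys to $SSLKEYLOGFILE (if set) so that e.g.
// Wireshark can decrypt captured QUIC traffic.
config.key_log = Arc::new(rustls::KeyLogFile::new());
```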

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1036>
2024-05-01 18:01:49 +05:30
Sanchayan Maity ce930eab5f net/quic: Allow setting multiple ALPN transport parameters
For reference, see
https://datatracker.ietf.org/doc/html/rfc9000#section-7.4
https://datatracker.ietf.org/doc/html/rfc7301

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1036>
2024-05-01 18:01:49 +05:30
Tamas Levai 75b25d011f net/quic: Allow specifying an ALPN transport parameter
See https://datatracker.ietf.org/doc/html/rfc9000#section-7.4.

This controls the Transport Layer Security (TLS) extension for
application-layer protocol negotiation within the TLS handshake.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1036>
2024-05-01 18:01:49 +05:30
Sanchayan Maity 953f6a3fd7 net: Add QUIC source and sink
To test, run receiver as

```bash
gst-launch-1.0 -v -e quicsrc caps=audio/x-opus use-datagram=true ! opusparse ! opusdec ! audio/x-raw,format=S16LE,rate=48000,channels=2,layout=interleaved ! audioconvert ! autoaudiosink
```

run sender as

```bash
gst-launch-1.0 -v -e audiotestsrc num-buffers=512 ! audio/x-raw,format=S16LE,rate=48000,channels=2,layout=interleaved ! opusenc ! quicsink use-datagram=true
```

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1036>
2024-05-01 18:01:49 +05:30
Robert Mader 8e675de690 gtk4paintablesink: Add some documentation
And sync with `README.md` in order to make the environment variables
`GST_GTK4_WINDOW` and `GST_GTK4_WINDOW_FULLSCREEN` discoverable - and
because it's generally useful.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1555>
2024-04-30 09:59:49 +03:00
Robert Mader 4326c3bfce gtk4paintablesink: Also create window for gst-play
So it can be easily tested with
```
gst-play-1.0 --videosink=gtk4paintablesink ...
```

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1555>
2024-04-29 22:44:56 +02:00
Robert Mader 47b788d44b gtk4paintablesink: Add env var to fullscreen window
For testing purposes with e.g. gst-launch.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1555>
2024-04-29 20:44:05 +00:00
François Laignel 16b0a4d762 rtp: add mp4gpay
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1551>
2024-04-29 13:33:42 +00:00
François Laignel b588ee59bc rtp: add mp4gdepay
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1551>
2024-04-29 13:33:42 +00:00
François Laignel 5466cafc24 rtp: add mp4apay
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1551>
2024-04-29 13:33:42 +00:00
François Laignel 812fe0a9bd rtp: add mp4adepay
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1551>
2024-04-29 13:33:42 +00:00
Sebastian Dröge c55c4ca42a Update CHANGELOG.md for 0.12.5 2024-04-29 13:49:17 +03:00
Sebastian Dröge 83bd7be92a deny: Remove syn override 2024-04-29 11:54:38 +03:00
Sebastian Dröge 70397a9f05 Update CHANGELOG.md for 0.12.4 2024-04-29 11:46:19 +03:00
Maksym Khomenko a87eaa4b79 hrtfrender: use bitmask, not int, to prevent a capsnego failure
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1549>
2024-04-26 20:24:19 +00:00
Philippe Normand 88cbc93338 dav1ddec: Negotiate bt709 colorimetry when values from seq header are unspecified
With unknown range, colorimetry validation would fail in video-info. As our
decoder outputs only YUV formats, BT.709 should be a reasonable default.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1548>
2024-04-26 19:35:41 +00:00
Sebastian Dröge 927c3fcdb6 gtk4paintablesink: Update README.md with all the new features
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1547>
2024-04-26 12:29:10 +03:00
Sebastian Dröge 5803904deb gtk4paintablesink: meson: Add auto-detection of GTK4 versions and dmabuf feature
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1547>
2024-04-26 12:29:10 +03:00
Sebastian Dröge c95e07a897 gtk4paintablesink: Improve scaling logic
If force-aspect-ratio=false then make sure to fully fill the given
width/height with the video frame and avoid rounding errors. This makes
sure that the video is rendered in the exact position selected by the
caller and that graphics offloading is more likely to work.

In other cases and for all overlays, make sure that the calculated
positions are staying inside (0, 0, width, height) as rendering outside
is not allowed by GTK.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1547>
2024-04-26 12:29:10 +03:00
Sebastian Dröge b42bd3d026 gtk4paintablesink: Add force-aspect-ratio property like in other video sinks
Unlike in other sinks this defaults to false as generally every user of
GDK paintables already ensures that the aspect ratio is kept and the
paintable is laid out in the most optimal way based on the context.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1547>
2024-04-26 12:29:10 +03:00
Sebastian Dröge 3dd800ac77 gtk4paintablesink: Implement child proxy interface
This allows setting properties on the paintable from gst-launch-1.0.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1547>
2024-04-26 12:29:10 +03:00
Sebastian Dröge c92462b240 gtk4: Implement support for directly importing dmabufs
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/issues/441

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1547>
2024-04-26 12:29:10 +03:00
Sebastian Dröge 7573caa8e9 rtpgccbwe: Move away from deprecated time::Instant to std::time::Instant
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1554>
2024-04-25 15:37:28 +03:00
Sebastian Dröge c12585377c Update Cargo.lock
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1554>
2024-04-25 14:46:45 +03:00
Mathieu Duponchelle 17d7997137 transcriberbin: add support for consuming secondary audio streams
In some situations, a translated alternate audio stream might be
available for a given content.

Instead of going through transcription and translation of the original
audio stream, it may be preferable for accuracy purposes to simply
transcribe the secondary audio stream.

This MR adds support for doing just that:

* Secondary audio sink pads can be requested as "sink_audio_%u"

* Sometimes audio source pads are added at that point to pass through
  the audio, as "src_audio_%u"

* The main transcription bin now contains per-input stream transcription
  bins. Those can be individually controlled through properties on the
  sink pads, for instance translation-languages can be dynamically set
  per audio stream

* Some properties that originally existed on the main element still
  remain, but are now simply mapped to the always audio sink pad

* Releasing of secondary sink pads is nominally implemented, but not
  tested in states other than NULL

An example launch line for this would be:

```
$ gst-launch-1.0 transcriberbin name=transcriberbin latency=8000 accumulate-time=0 \
      cc-caps="closedcaption/x-cea-708, format=cc_data" sink_audio_0::language-code="es-US" \
      sink_audio_0::translation-languages="languages, transcript=cc3" \
    uridecodebin uri=file:///home/meh/Music/chaplin.mkv name=d \
      d. ! videoconvert ! transcriberbin.sink_video \
      d. ! clocksync ! audioconvert ! transcriberbin.sink_audio \
      transcriberbin.src_video ! cea608overlay field=1 ! videoconvert ! autovideosink \
      transcriberbin.src_audio ! audioconvert ! fakesink \
    uridecodebin uri=file:///home/meh/Music/chaplin-spanish.webm name=d2 \
      d2. ! audioconvert ! transcriberbin.sink_audio_0 \
      transcriberbin.src_audio_0 ! fakesink
```

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1546>
2024-04-25 11:56:01 +02:00
Sebastian Dröge 66030f36ad tracers: Add a pad push durations tracer
This tracer measures the time it takes for a buffer/buffer list push to return.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1506>
2024-04-17 16:20:43 +03:00
Seungha Yang b3d3895ae7 cea608overlay: Fix black-background setting
Apply the property to the newly created renderer

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1542>
2024-04-15 15:38:31 +00:00
Sebastian Dröge d6a855ff1b rtp: Add VP8/9 RTP payloader/depayloader
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1487>
2024-04-15 14:03:56 +00:00
François Laignel 542030fd82 webrtcsink: don't panic if input CAPS are not supported
If a user constrained the supported CAPS, for instance using `video-caps`:

```shell
gst-launch-1.0 videotestsrc ! video/x-raw,format=I420 ! x264enc \
    ! webrtcsink video-caps=video/x-vp8
```

... a panic would occur which was internally caught without the user being
informed except for the following message which was written to stderr:

> thread 'tokio-runtime-worker' panicked at net/webrtc/src/webrtcsink/imp.rs:3533:22:
>   expected audio or video raw caps: video/x-h264, [...]
> note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

The pipeline kept running.

This commit converts the panic into an `Error` which bubbles up as an element
`StreamError::CodecNotFound` which can be handled by the application.
With the above `gst-launch`, this terminates the pipeline with:

> [...] ERROR  webrtcsink net/webrtc/src/webrtcsink/imp.rs:3771:gstrswebrtc::
>   webrtcsink::imp::BaseWebRTCSink::start_stream_discovery_if_needed::{{closure}}:<webrtcsink0>
> Error running discovery: Unsupported caps: video/x-h264, [...]
> ERROR: from element /GstPipeline:pipeline0/GstWebRTCSink:webrtcsink0:
>   There is no codec present that can handle the stream's type.
> Additional debug info:
> net/webrtc/src/webrtcsink/imp.rs(3772): gstrswebrtc::webrtcsink::imp::BaseWebRTCSink::
> start_stream_discovery_if_needed::{{closure}} (): /GstPipeline:pipeline0/GstWebRTCSink:webrtcsink0:
> Failed to look up output caps: Unsupported caps: video/x-h264, [...]
> Execution ended after 0:00:00.055716661
> Setting pipeline to NULL ...

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1540>
2024-04-14 23:09:09 +02:00
François Laignel 3fc38be5c4 webrtc: add missing tokio feature for precise sync examples
Clippy caught the missing feature `signal`, which is used by the WebRTC precise
synchronization examples. When running `cargo` `check`, `build` or `clippy`
without `--no-default-features`, this feature was already present due to
dependent crates.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1541>
2024-04-14 16:50:33 +02:00
François Laignel 168af88eda webrtc: add features for specific signallers
When swapping between several development branches, compilation times can be
frustrating. This commit proposes adding features to control which signaller
to include when building the webrtc plugin. By default, all signallers are
included, just like before.

Compiling the `webrtc-precise-sync` examples with `--no-default-features`
reduces compilation to 267 crates instead of 429 when all signallers are
compiled in.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1539>
2024-04-12 19:10:42 +02:00
François Laignel 83d70d3471 webrtc: add RFC 7273 support
This commit implements [RFC 7273] (NTP & PTP clock signalling & synchronization)
for `webrtcsink` by adding the "ts-refclk" & "mediaclk" SDP media attributes to
identify the clock. These attributes are handled by `rtpjitterbuffer` on the
consumer side. They MUST be part of the SDP offer.

When used with an NTP or PTP clock, "mediaclk" indicates the RTP offset at the
clock's origin. Because the payloaders are not instantiated when the offer is
sent to the consumer, the RTP offset is set to 0 and the payloader
`timestamp-offset`s are set accordingly when they are created.

The `webrtc-precise-sync` examples were updated to be able to start with an NTP
(default), a PTP or the system clock (on the receiver only). The rtp jitter
buffer will synchronize with the clock signalled in the SDP offer provided the
sender is started with `--do-clock-signalling` & the receiver with
`--expect-clock-signalling`.

[RFC 7273]: https://datatracker.ietf.org/doc/html/rfc7273
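
Concretely, per RFC 7273 the resulting SDP media section carries attributes of this shape (server address illustrative; the `direct` offset is 0 as described above):

    a=ts-refclk:ntp=203.0.113.10
    a=mediaclk:direct=0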

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1500>
2024-04-12 14:18:09 +02:00
Guillaume Desmottes 596a9177ce uriplaylistbin: disable racy test
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1537>
2024-04-12 10:17:40 +00:00
Philippe Normand 2341ee6935 dav1d: Set colorimetry parameters on src pad caps
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1514>
2024-04-12 09:14:34 +00:00
Guillaume Desmottes 61c9cbdc8f uriplaylistbin: allow changing the 'iterations' property while playing
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1492>
2024-04-11 11:13:20 +02:00
Guillaume Desmottes 00b56ca845 uriplaylistbin: stop using an iterator to manage the playlist
Will make it easier to update the playlist while playing.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1492>
2024-04-11 10:48:50 +02:00
François Laignel 42158cbcb0 gccbwe: don't log an error when handling a buffer list while stopping
When `webrtcsink` was stopped, `gccbwe` could log an error if it was handling a
buffer list. This commit logs an error only if `push_list()` returned an error
other than `Flushing`.
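
The gist of the fix, as a sketch with simplified logging:

```rust
use gstreamer as gst;

fn forward_list(pad: &gst::Pad, list: gst::BufferList) {
    match pad.push_list(list) {
        // Flushing is expected while shutting down; not an error.
        Ok(_) | Err(gst::FlowError::Flushing) => (),
        Err(err) => eprintln!("failed to push buffer list: {err:?}"),
    }
}
```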

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1535>
2024-04-11 01:29:53 +00:00
Matthew Waters 4dcc44687a cea608overlay: move Send impl lower in the stack
Try to avoid hiding another non-Send object in the State struct.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1519>
2024-04-10 06:55:34 +00:00
Matthew Waters fbce73f6fc closedcaption: implement cea708overlay element
Can overlay any single CEA-708 service or any single CEA-608 channel.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1519>
2024-04-10 06:55:34 +00:00
Matthew Waters f0c38621c1 cea608overlay: also print bytes that failed to decode
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1519>
2024-04-10 06:55:34 +00:00
Sanchayan Maity a3e30b499f aws: Introduce a property to use path-style addressing
The AWS SDK switched to virtual-hosted-style addressing as the default
instead of path-style addressing earlier. While MinIO supports
virtual-host-style requests, path-style requests are its default.

Introduce a property to allow the use of path-style addressing if
required.

For more information, see
https://github.com/minio/minio/blob/master/docs/config/README.md#domain
https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html
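
At the Rust SDK level, path-style addressing boils down to this switch (a sketch; the element's new property name is not shown here):

```rust
// Assumes an async context and the aws-config / aws-sdk-s3 crates.
let shared_config = aws_config::load_from_env().await;
let s3_config = aws_sdk_s3::config::Builder::from(&shared_config)
    // http://minio:9000/bucket/key instead of http://bucket.minio:9000/key
    .force_path_style(true)
    .build();
let client = aws_sdk_s3::Client::from_conf(s3_config);
```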

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1527>
2024-04-10 00:23:22 +00:00
François Laignel 2ad452ee89 webrtcsink: don't panic with bitrate handling unsupported encoders
When an encoder was not supported by the `VideoEncoder` `bitrate` accessors, an
`unimplemented` panic would occur, which would poison the `state` & `settings`
`Mutex`s, resulting in other threads panicking, notably when entering
`end_session()`, which led to many failures in
`BinImplExt::parent_remove_element()` until a segmentation fault ended the
process. This was observed using `vaapivp9enc`.

This commit logs a warning if an encoder isn't supported by the `bitrate`
accessors and silently bypasses `bitrate`-related operations when unsupported.
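
A sketch of the defensive shape of the fix (the factory/property pairs are real encoder examples, but this is not the element's actual table):

```rust
// Map an encoder factory name to its bitrate property, if known.
fn bitrate_property_name(factory_name: &str) -> Option<&'static str> {
    match factory_name {
        "x264enc" => Some("bitrate"),                  // kbit/s
        "vp8enc" | "vp9enc" => Some("target-bitrate"), // bit/s
        // Previously an unimplemented!() here panicked and poisoned the
        // element's mutexes; now unknown encoders just skip bitrate
        // handling after a warning.
        _ => None,
    }
}
```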

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1534>
2024-04-09 15:48:59 +00:00
Simonas Kazlauskas 5d939498f1 mp4/fmp4: support flac inside the iso (f)mp4 container
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1401>
2024-04-09 14:37:05 +03:00
Taruntej Kanakamalla f4b086738b webrtcsrc: change the producer-id type for request-encoded-filter
With https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1477
the producer id used while emitting the request-encoded-filter signal
can be None if the msid of the webrtcbin's pad is None.
This might not affect signal handlers written in C, but it
can cause a panic in an existing Rust application whose signal
handler only accepts a valid String type as its param
for the producer id.

So change the param type to Option<String> in the signal builder
for the request-encoded-filter signal

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1528>
2024-04-09 06:01:15 +00:00
Tim-Philipp Müller 6b30266145 ci: tag linter and sanity check jobs as "placeholder" jobs
They hardly use any resources and finish almost immediately.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1533>
2024-04-08 23:42:23 +00:00
Tim-Philipp Müller c8180e714e ci: make sure version Cargo.toml matches version in meson.build
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1532>
2024-04-08 14:46:00 +01:00
Sebastian Dröge 0b356ee203 deny: Update
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1530>
2024-04-06 11:12:16 +03:00
Sebastian Dröge c2ebb3083a Update Cargo.lock
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1530>
2024-04-06 11:12:16 +03:00
Sebastian Dröge 921938fd20 fmp4mux: Require gstreamer-pbutils 1.20 for the examples
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1530>
2024-04-06 11:10:58 +03:00
Sebastian Dröge fab246f82e webrtchttp: Update to reqwest 0.12
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1530>
2024-04-06 11:07:16 +03:00
Sebastian Dröge 7757e06e36 onvifmetadataparse: Reset state in PAUSED->READY after pad deactivation
Otherwise the clock id will simply be overridden instead of being
unscheduled, and if the streaming thread of the source pad is currently
waiting on it then it could wait for a very long time, and deactivating
the pad would have to wait for that to happen.

Also unschedule the clock id on `Drop` of the state to be on the safe
side and not simply forget about it.
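
A minimal sketch of that safety net, assuming the state holds a pending `gst::ClockId` (field name illustrative):

```rust
use gstreamer as gst;

struct State {
    pending_clock_id: Option<gst::ClockId>,
}

impl Drop for State {
    fn drop(&mut self) {
        if let Some(clock_id) = self.pending_clock_id.take() {
            // Wakes up a streaming thread blocked in a wait() on this id.
            clock_id.unschedule();
        }
    }
}
```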

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1526>
2024-04-05 15:19:37 +00:00
Taruntej Kanakamalla 70adedb142 net/webrtc: fix inconsistencies in documentation of object names
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1529>
2024-04-05 14:10:35 +00:00
Matthew Waters 7f6929b98d closedcaption: remove libcaption code entirely
It is now unused.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1517>
2024-04-05 19:29:24 +11:00
Matthew Waters 2575013faa cea608tott: use our own CEA-608 frame handling instead of libcaption
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1517>
2024-04-05 19:29:24 +11:00
Matthew Waters d8fe1c64f1 cea608overlay: use our own CEA-608 caption frame handling instead of libcaption
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1517>
2024-04-05 19:29:24 +11:00
Matthew Waters fea85ff9c8 closedcaption: use cea608-types for parsing 608 captions instead of libcaption
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1517>
2024-04-05 19:29:24 +11:00
François Laignel cc43935036 webrtc: add precise synchronization example
This example demonstrates a sender / receiver setup which ensures precise
synchronization of multiple streams in a single session.

[RFC 6051]-style rapid synchronization of RTP streams is available as an option.
See the [Instantaneous RTP synchronization...] blog post for details about this
mode and an example based on RTSP instead of WebRTC.

[RFC 6051]: https://datatracker.ietf.org/doc/html/rfc6051
[Instantaneous RTP synchronization...]: https://coaxion.net/blog/2022/05/instantaneous-rtp-synchronization-retrieval-of-absolute-sender-clock-times-with-gstreamer/

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1463>
2024-04-03 19:10:40 +02:00
Guillaume Desmottes b5cbc47cf7 web: webrtcsink: improve panic message on unexpected caps during discovery
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1524>
2024-04-02 14:25:58 +02:00
Guillaume Desmottes 35b84d219f webrtc: webrtcsink: set perfect-timestamp=true on audio encoders
Chrome's audio decoder doesn't cope well with imperfect timestamps,
generating noise in the audio.
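
`perfect-timestamp` is a standard `GstAudioEncoder` property, so the change amounts to something like this (sketch, after `gst::init()`):

```rust
use gstreamer as gst;

let enc = gst::ElementFactory::make("opusenc")
    .property("perfect-timestamp", true)
    .build()
    .expect("opusenc not available");
```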

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1524>
2024-04-02 14:25:51 +02:00
Sebastian Dröge 0aabbb3186 fmp4: Update to dash-mpd 0.16
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1523>
2024-03-31 09:36:53 +03:00
Sebastian Dröge 4dd6b102c4 Update Cargo.lock
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1523>
2024-03-31 09:35:46 +03:00
Sebastian Dröge 0dd03da91f ci: Ignore env_logger for cargo-outdated
It requires Rust >= 1.71.
2024-03-29 11:03:04 +02:00
Matthew Waters e1cd52178e transcriberbin: also support 608 inside 708
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1406>
2024-03-28 13:46:28 +11:00
Matthew Waters 55b4de779c tttocea708: add support for writing 608 compatibility bytes
608 compatibility bytes are generated using the same functionality as
tttocea608.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1406>
2024-03-28 13:46:28 +11:00
Matthew Waters 9db4290d2d tttocea608: move functionality to a separate object
Will be used by tttocea708 later.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1406>
2024-03-28 13:46:28 +11:00
Matthew Waters df30d2fbd3 transcriberbin: add support for generating cea708 captions
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1406>
2024-03-28 13:46:28 +11:00
Matthew Waters b0cf7e5c81 cea708mux: add element muxing multiple 708 caption services together
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1406>
2024-03-28 13:46:28 +11:00
Matthew Waters 756abbf807 tttocea708: add element converting from text to cea708 captions
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1406>
2024-03-28 13:46:28 +11:00
Martin Nordholts 5d7e068a8b rtpgccbwe: Add increasing_duration and counter to existing gst::log!()
Add `self.increasing_duration` and `self.increasing_counter`
to the logs to provide more details on why `overuse_filter()`
determines that the network is overused.

To get access to the latest values of those fields we need
to move the log call down. But that is fine, since no other
logged data is modified between the old and new location of
`gst::log!()`.

We do not bother logging `self.last_overuse_estimate` since
that is simply the previously logged value of `estimate`. We
must put the log call before we write the latest value to it
though, in case we want to log it in the future.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1522>
2024-03-27 15:08:23 +00:00
François Laignel a870d60621 aws: improve error message logs
The `Display` and `Debug` traits for the AWS error messages are not very useful.

- `Display` only prints the high level error, e.g.: "service error".
- `Debug` prints all the fields in the error stack, resulting in hard-to-read
  messages with redundant or unnecessary information. E.g.:

> ServiceError(ServiceError { source: BadRequestException(BadRequestException {
> message: Some("1 validation error detected: Value 'test' at 'languageCode'
> failed to satisfy constraint: Member must satisfy enum value set: [ar-AE,
> zh-HK, en-US, ar-SA, zh-CN, fi-FI, pl-PL, no-NO, nl-NL, pt-PT, es-ES, th-TH,
> de-DE, it-IT, fr-FR, ko-KR, hi-IN, en-AU, pt-BR, sv-SE, ja-JP, ca-ES, es-US,
> fr-CA, en-GB]"), meta: ErrorMetadata { code: Some("BadRequestException"),
> message: Some("1 validation error detected: Value 'test' at 'languageCode'
> failed to satisfy constraint: Member must satisfy enum value set: [ar-AE,
> zh-HK, en-US, ar-SA, zh-CN, fi-FI, pl-PL, no-NO, nl-NL, pt-PT, es-ES, th-TH,
> de-DE, it-IT, fr-FR, ko-KR, hi-IN, en-AU, pt-BR, sv-SE, ja-JP, ca-ES, es-US,
> fr-CA, en-GB]"), extras: Some({"aws_request_id": "1b8bbafd-5b71-4ba5-8676-28432381e6a9"}) } }),
> raw: Response { status: StatusCode(400), headers: Headers { headers:
> {"x-amzn-requestid": HeaderValue { _private: H0("1b8bbafd-5b71-4ba5-8676-28432381e6a9") },
> "x-amzn-errortype": HeaderValue { _private:
> H0("BadRequestException:http://internal.amazon.com/coral/com.amazonaws.transcribe.streaming/") },
> "date": HeaderValue { _private: H0("Tue, 26 Mar 2024 17:41:31 GMT") },
> "content-type": HeaderValue { _private: H0("application/x-amz-json-1.1") },
> "content-length": HeaderValue { _private: H0("315") }} }, body: SdkBody {
> inner: Once(Some(b"{\"Message\":\"1 validation error detected: Value 'test'
> at 'languageCode' failed to satisfy constraint: Member must satisfy enum value
> set: [ar-AE, zh-HK, en-US, ar-SA, zh-CN, fi-FI, pl-PL, no-NO, nl-NL, pt-PT,
> es-ES, th-TH, de-DE, it-IT, fr-FR, ko-KR, hi-IN, en-AU, pt-BR, sv-SE, ja-JP,
> ca-ES, es-US, fr-CA, en-GB]\"}")), retryable: true }, extensions: Extensions {
> extensions_02x: Extensions, extensions_1x: Extensions } } })

This commit adopts the most informative and concise solution I could come up
with to log AWS errors. With the above error case, this results in:

> service error: Error { code: "BadRequestException", message: "1 validation
> error detected: Value 'test' at 'languageCode' failed to satisfy constraint:
> Member must satisfy enum value set: [ar-AE, zh-HK, en-US, ar-SA, zh-CN, fi-FI,
> pl-PL, no-NO, nl-NL, pt-PT, es-ES, th-TH, de-DE, it-IT, fr-FR, ko-KR, hi-IN,
> en-AU, pt-BR, sv-SE, ja-JP, ca-ES, es-US, fr-CA, en-GB]",
> aws_request_id: "a40a32a8-7b0b-4228-a348-f8502087a9f0" }

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1521>
2024-03-26 20:05:32 +01:00
François Laignel 9f27bde36a aws: use fixed BehaviorVersion
Quoting [`BehaviorVersion` documentation]:

> Over time, new best-practice behaviors are introduced. However, these
> behaviors might not be backwards compatible. For example, a change which
> introduces new default timeouts or a new retry-mode for all operations might
> be the ideal behavior but could break existing applications.

This commit uses `BehaviorVersion::v2023_11_09()`, which is the latest
major version at the moment. When a new major version is released, the method
will be deprecated, which will warn us of the new version and let us decide
when to upgrade, after any changes if required. This is safer than using
`latest()`, which would silently use a different major version, possibly
breaking existing code.

[`BehaviorVersion` documentation]: https://docs.rs/aws-config/1.1.8/aws_config/struct.BehaviorVersion.html
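
In code, the pinning described above looks like this (sketch, async context assumed):

```rust
// Pin the SDK behavior to a specific major version instead of latest().
let config = aws_config::defaults(aws_config::BehaviorVersion::v2023_11_09())
    .load()
    .await;
```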

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1520>
2024-03-26 17:44:16 +01:00
Matthew Waters e868f81189 gopbuffer: implement element buffering of an entire GOP
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1349>
2024-03-26 15:29:48 +11:00
Sebastian Dröge bac2e02160 deny: Add overrides for duplicates hyper / reqwest dependencies 2024-03-24 11:30:30 +02:00
Nirbheek Chauhan ae7c68dbf8 ci: Add a job to trigger a cerbero build, similar to the monorepo
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1513>
2024-03-23 23:02:27 +00:00
Sebastian Dröge 0b11209674 Update Cargo.lock
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1510>
2024-03-23 14:33:07 +02:00
Sebastian Dröge f97150aa58 reqwest: Update to reqwest 0.12
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1510>
2024-03-23 14:30:31 +02:00
Philippe Normand 7e1ab086de dav1d: Require dav1d-rs 0.10
This version depends on libdav1d >= 1.3.0. Older versions are no longer
supported, due to an ABI/API break introduced in 1.3.0.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1512>
2024-03-21 17:33:32 +00:00
Philippe Normand be12c0a5f7 Fix clippy warnings after upgrade to Rust 1.77
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1512>
2024-03-21 17:33:32 +00:00
Sebastian Dröge 317f46ad97 Update CHANGELOG.md for 0.12.3 2024-03-21 18:55:29 +02:00
François Laignel c5e7e76e4d webrtcsrc: add do-retransmission property
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1509>
2024-03-21 07:25:30 +00:00
Sebastian Dröge 6556d31ab8 livesync: Ignore another racy test
Same problem as https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/issues/328
2024-03-21 09:27:09 +02:00
François Laignel 5476e3d759 webrtcsink: prevent video-info error log for audio streams
The following error is logged when `webrtcsink` is fed an audio stream:

> ERROR video-info video-info.c:540:gst_video_info_from_caps:
>       wrong name 'audio/x-raw', expected video/ or image/

This commit bypasses `VideoInfo::from_caps` for audio streams.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1511>
2024-03-20 19:57:45 +01:00
François Laignel cc7b7d508d rtp: gccbwe: don't break downstream assumptions pushing buffer lists
Some elements in the RTP stack assume all buffers in a `gst::BufferList`
correspond to the same timestamp. See in [`rtpsession`] for instance.
This also had the effect that `rtpsession` did not create correct RTCP as it
only saw some of the SSRCs in the stream.

`rtpgccbwe` formed a packet group by gathering buffers in a `gst::BufferList`,
regardless of whether they corresponded to the same timestamp, which broke
synchronization under certain circumstances.

This commit makes `rtpgccbwe` push the buffers as they were received: one by one.

[`rtpsession`]: bc858976db/subprojects/gst-plugins-good/gst/rtpmanager/gstrtpsession.c (L2462)
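
The fix essentially flattens the list, as in this sketch:

```rust
use gstreamer as gst;

fn push_individually(
    pad: &gst::Pad,
    list: gst::BufferList,
) -> Result<gst::FlowSuccess, gst::FlowError> {
    let mut ret = gst::FlowSuccess::Ok;
    // Push each buffer on its own so downstream elements that assume
    // "one gst::BufferList == one timestamp" keep working.
    for buffer in list.iter_owned() {
        ret = pad.push(buffer)?;
    }
    Ok(ret)
}
```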

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1502>
2024-03-20 18:19:14 +00:00
Sebastian Dröge 2b9272c7eb fmp4mux: Move away from deprecated chrono function
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1503>
2024-03-20 15:37:18 +02:00
Sebastian Dröge cca3ebf520 rtp: Switch from chrono to time
This allows simplifying quite a bit of code and avoids us having to
handle some API deprecations.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1503>
2024-03-20 15:05:39 +02:00
Sebastian Dröge 428f670753 version-helper: Use non-deprecated type alias from toml_edit
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1503>
2024-03-19 18:16:42 +02:00
Sebastian Dröge fadb7d0a26 deny: Add override for heck 0.4
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1503>
2024-03-19 17:52:32 +02:00
Sebastian Dröge 2a88e29454 originalbufferstore: Update for VideoMetaTransform -> VideoMetaTransformScale rename
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1503>
2024-03-19 17:51:41 +02:00
Sebastian Dröge bfff0f7d87 Update Cargo.lock
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1503>
2024-03-19 17:50:32 +02:00
Guillaume Desmottes 96337d5234 webrtc: allow resolution and framerate input changes
Some changes do not require a WebRTC renegotiation so we can allow
those.

Fix #515

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1498>
2024-03-18 14:52:01 +01:00
Tim-Philipp Müller eb49459937 rtp: m2pt: add some unit tests
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1493>
2024-03-16 10:07:37 +00:00
Tim-Philipp Müller ce3960f37f rtp: Add MPEG-TS RTP payloader
Pushes out pending TS packets on EOS.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1493>
2024-03-16 10:07:37 +00:00
Tim-Philipp Müller 9f07ec35e6 rtp: Add MPEG-TS RTP depayloader
Can handle different packet sizes, also see:
https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1310

Has clock-rate=90000 as spec prescribes, see:
https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/issues/691

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1493>
2024-03-16 10:07:37 +00:00
Mathieu Duponchelle f4366f8b2e gstregex: add support for switches exposed by RegexBuilder
The builder allows, for instance, switching off case sensitivity for
the entire pattern, instead of having to do so inline with `(?i)`.

All the options exposed by the builder at
<https://docs.rs/regex/latest/regex/struct.RegexBuilder.html> can now be
passed as fields of individual commands, snake-cased.
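
The underlying knob in the regex crate, for reference (sketch):

```rust
use regex::RegexBuilder;

let re = RegexBuilder::new(r"^hello\b")
    .case_insensitive(true) // same effect as an inline `(?i)`
    .build()
    .unwrap();
assert!(re.is_match("HELLO world"));
```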

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1497>
2024-03-15 17:41:39 +00:00
Guillaume Desmottes 523a46b4f5 gtk4: scale texture position
Fix regression in 0.12 introduced by 3423d05f77

Code from Ivan Molodetskikh suggested on Matrix.

Fix #519

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1499>
2024-03-15 13:43:32 +01:00
Nirbheek Chauhan 6f8fc5f178 meson: Disable docs completely when the option is disabled
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1496>
2024-03-14 15:30:17 +05:30
Guillaume Desmottes 8f997ea4e3 webrtc: janus: handle 'hangup' messages from Janus
Fix error about this message not being handled:

{
   "janus": "hangup",
   "session_id": 4758817463851315,
   "sender": 4126342934227009,
   "reason": "Close PC"
}

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1481>
2024-03-13 10:14:38 +00:00
Guillaume Desmottes 992f8d9a5d webrtc: janus: handle 'destroyed' messages from Janus
Fix this error when the room is destroyed:

ERROR   webrtc-janusvr-signaller imp.rs:413:gstrswebrtc::janusvr_signaller::imp::Signaller::handle_msg:<GstJanusVRWebRTCSignallerU64@0x55b166a3fe40> Unknown message from server: {
   "janus": "event",
   "session_id": 6667171862739941,
   "sender": 1964690595468240,
   "plugindata": {
      "plugin": "janus.plugin.videoroom",
      "data": {
         "videoroom": "destroyed",
         "room": 8320333573294267
      }
   }
}

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1481>
2024-03-13 10:14:38 +00:00
Guillaume Desmottes 9c6a39d692 webrtc: janus: handle (stopped-)talking events
Expose those events using a signal.

Fix those errors when joining a Janus room configured with
'audiolevel_event: true'.

ERROR   webrtc-janusvr-signaller imp.rs:408:gstrswebrtc::janusvr_signaller::imp::Signaller::handle_msg:<GstJanusVRWebRTCSignaller@0x560cf2a55100> Unknown message from server: {
   "janus": "event",
   "session_id": 2384862538500481,
   "sender": 1867822625190966,
   "plugindata": {
      "plugin": "janus.plugin.videoroom",
      "data": {
         "videoroom": "talking",
         "room": 7564250471742314,
         "id": 6815475717947398,
         "mindex": 0,
         "mid": "0",
         "audio-level-dBov-avg": 37.939998626708984
      }
   }
}
ERROR   webrtc-janusvr-signaller imp.rs:408:gstrswebrtc::janusvr_signaller::imp::Signaller::handle_msg:<GstJanusVRWebRTCSignaller@0x560cf2a55100> Unknown message from server: {
   "janus": "event",
   "session_id": 2384862538500481,
   "sender": 1867822625190966,
   "plugindata": {
      "plugin": "janus.plugin.videoroom",
      "data": {
         "videoroom": "stopped-talking",
         "room": 7564250471742314,
         "id": 6815475717947398,
         "mindex": 0,
         "mid": "0",
         "audio-level-dBov-avg": 40.400001525878906
      }
   }
}

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1481>
2024-03-13 10:14:38 +00:00
Guillaume Desmottes b29a739fb2 uriplaylistbin: disable racy test
https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/issues/514

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1494>
2024-03-12 16:57:22 +01:00
Guillaume Desmottes 1dea8f60a8 threadshare: disable racy tests
https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/issues/250

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1494>
2024-03-12 16:54:21 +01:00
Guillaume Desmottes 2629719b4e livesync: disable racy tests
https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/issues/328
https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/issues/357

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1494>
2024-03-12 16:32:47 +01:00
Guillaume Desmottes 9e6e8c618e togglerecord: disable racy test_two_stream_close_open_nonlivein_liveout test
See https://gitlab.freedesktop.org/gdesmott/gst-plugins-rs/-/jobs/56183085

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1494>
2024-03-12 16:21:52 +01:00
François Laignel 995f64513d Update Cargo.lock to use latest gstreamer-rs
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1491>
2024-03-11 14:42:36 +01:00
François Laignel 5b01e43a12 webrtc: update further to WebRTCSessionDescription sdp accessor changes
See: https://gitlab.freedesktop.org/gstreamer/gstreamer-rs/-/merge_requests/1406
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1491>
2024-03-11 13:39:19 +01:00
Guillaume Desmottes 03abb5c681 spotify: document how to use with non Facebook accounts
See discussion on #203.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1490>
2024-03-11 09:46:40 +01:00
Zhao, Gang 7a46377627 rtp: tests: Simplify loop
All buffers can be added in 100 iterations of the outer loop; add the buffers with index less than 200 in the last (i = 99) iteration.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1489>
2024-03-10 16:47:30 +08:00
Olivier Crête 15e7a63e7b originalbuffer: Pair of elements to keep and restore original buffer
The goal is to be able to get back the original buffer
after performing analysis on a transformed version, then put the
various GstMeta back on the original buffer.

An example pipeline would be
.. ! originalbuffersave ! videoscale ! analysis ! originalbufferrestore ! draw_overlay ! sink

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1428>
2024-03-08 15:15:13 -05:00
Guillaume Desmottes 612f863ee9 webrtc: janusvrwebrtcsink: add 'use-string-ids' property
Instead of exposing all id properties as strings, we now have two
signaller implementations exposing those properties using their actual
type. This API is more natural and saves the element and application
from converting when using numerical ids (Janus's default).

I also removed the 'joined-id' property as it's actually the same id as
'feed-id'. I think it would be better to have a 'janus-state' property or
something like that for applications wanting to know when the room has
been joined.
This id is also no longer generated by the element by default, as Janus
will take care of generating one if not provided.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1486>
2024-03-07 09:34:58 +01:00
Seungha Yang 237f22d131 sccparse: Ignore invalid timecode during seek as well
sccparse holds the last timecode in order to ignore invalid timecodes
and fall back to the previous timecode. That should happen
when sccparse is handling a seek event too. Otherwise a single invalid
timecode before the target seek position will cause a flow error.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1485>
2024-03-06 11:12:04 +00:00
Sebastian Dröge 2839e0078b rtp: Port RTP AV1 payloader/depayloader to new base classes
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1472>
2024-03-06 09:40:35 +00:00
Jordan Yelloz 0414f468c6 livekit_signaller: Added missing getter for excluded-producer-peer-ids
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1484>
2024-03-04 10:08:11 -07:00
Jordan Yelloz 8b0731b5a2 webrtcsrc: Removed incorrect URIHandler from LiveKit source
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1484>
2024-03-04 09:44:01 -07:00
Guillaume Desmottes 7d0397e1ad uriplaylistbin: re-enable all tests
They now seem to work reliably. \o/

Fix #194

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1471>
2024-03-04 12:00:13 +01:00
Guillaume Desmottes f6476f1e8f uriplaylistbin: use vp9 in test media
The Windows CI runner does not have a Theora decoder so those tests were
failing there.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1471>
2024-03-04 12:00:13 +01:00
Guillaume Desmottes cfebc32b82 uriplaylistbin: tests: use fakesink sync=true
Tests are more reliable when using a sync sink.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1471>
2024-03-04 11:17:11 +01:00
Guillaume Desmottes 721b7e9c8c uriplaylistbin: rely on new uridecodebin3 gapless logic
uridecodebin3 can now properly handle gapless switches so use that
instead of our own very complicated logic.

Fix #268
Fix #193

Depends on gst 1.23.90 as the plugin requires recent fixes to work properly.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1471>
2024-03-04 11:17:11 +01:00
Guillaume Desmottes 1e88971ec8 uriplaylistbin: pass valid URI in tests
Fix critical raised by libsoup,
see https://gitlab.gnome.org/GNOME/libsoup/-/merge_requests/346

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1471>
2024-03-04 11:06:19 +01:00
Sebastian Dröge 8a6bcb712f Remove empty line from the CHANGELOG.md that confuses the GitLab renderer 2024-03-01 16:46:21 +02:00
Jordan Yelloz 002dc36ab9 livekit_signaller: Improved shutdown behavior
Without sending a Leave request to the server before disconnecting, the
disconnected client will appear present and stuck in the room for a little
while until the server removes it due to inactivity.

After this change, the disconnecting client will immediately leave the room.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1482>
2024-02-29 08:21:13 -07:00
Sebastian Dröge 9c590f4223 Update Cargo.lock
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1483>
2024-02-29 10:09:09 +00:00
Jordan Yelloz f0b408d823 webrtcsrc: Removed flag setup from WhipServerSrc
It's already done in the base class

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1461>
2024-02-28 11:25:58 -07:00
Jordan Yelloz 17b2640237 webrtcsrc: Updated readme for LiveKit source
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1461>
2024-02-28 11:25:58 -07:00
Jordan Yelloz fa006b9fc9 webrtcsrc: Added LiveKit source element
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1461>
2024-02-28 11:25:58 -07:00
Jordan Yelloz 96037fbcc5 webrtcsink: Updated livekitwebrtcsink for new signaller constructor
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1461>
2024-02-28 11:25:58 -07:00
Jordan Yelloz 730b3459f1 livekit_signaller: Added dual-role support
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1461>
2024-02-28 11:25:49 -07:00
Guillaume Desmottes 60bb72ddc3 webrtc: janus: add joined-id property to the signaller
Fix #504

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1480>
2024-02-28 15:05:11 +01:00
Guillaume Desmottes eabf31e6d0 webrtc: janus: rename RoomId to JanusId
Those weird ids are used in multiple places, not only for the room id,
so it's best to have a more generic name.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1480>
2024-02-28 15:05:11 +01:00
Guillaume Desmottes 550018c917 webrtc: janus: room id not optional in 'joined' message
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1480>
2024-02-28 14:16:46 +01:00
Guillaume Desmottes 0829898d73 webrtc: janus: remove 'audio' and 'video' from publish messages
Those are deprecated and no longer used.

See https://janus.conf.meetecho.com/docs/videoroom and
https://github.com/meetecho/janus-gateway/blob/master/src/plugins/janus_videoroom.c#L9894

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1480>
2024-02-28 13:39:04 +01:00
Guillaume Desmottes ec17c58dee webrtc: janus: numerical room ids are u64
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1478>
2024-02-28 11:56:44 +01:00
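
For illustration, a serde sketch (the exact definition in the plugin may differ) of an id type that accepts both strings and u64 numbers on the wire:

    use serde::{Deserialize, Serialize};

    // Untagged: deserializes from either a JSON string or a JSON number.
    #[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
    #[serde(untagged)]
    enum JanusId {
        Str(String),
        Num(u64),
    }
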
Yorick Smilda 563eff1193 Implement GstWebRTCAPI as class instead of global instance
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1373>
2024-02-27 12:30:13 +00:00
Jordan Yelloz 594400a7f5 webrtcsrc: Made producer-peer-id optional
It may be necessary for some signalling clients, but the source element
doesn't need to depend on it.

Also, when the producer peer id isn't available from the signalling
client, the first argument of the request-encoded-filter GObject signal
falls back to the pad's MSID.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1477>
2024-02-26 13:41:40 -07:00
Sebastian Dröge 47ba068966 Update CHANGELOG.md for 0.12.2 2024-02-26 14:58:58 +02:00
Sebastian Dröge 5df7c01cb5 closedcaption: Port from nom to winnow
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1475>
2024-02-26 14:00:08 +02:00
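
As a flavour of the target API, a toy winnow 0.5 parser (unrelated to the actual closed-caption grammar): parsers become plain functions over a mutable input reference.

    use winnow::ascii::digit1;
    use winnow::{PResult, Parser};

    // Parse a decimal number from the front of the input, advancing it.
    fn number(input: &mut &str) -> PResult<u32> {
        digit1.parse_to().parse_next(input)
    }

    fn main() {
        let mut input = "42 columns";
        assert_eq!(number(&mut input).unwrap(), 42);
        assert_eq!(input, " columns"); // input was advanced past the digits
    }
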
Xavier Claessens f7ffa13543 janusvr: Add string-ids property
It forces the use of strings even if the id could be parsed as an integer.
This allows joining room `"133"` on a server configured with string
room ids.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1466>
2024-02-26 11:10:00 +00:00
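
A usage sketch, under the assumption that the id-related properties are exposed on the signaller object:

    use gst::prelude::*;

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        gst::init()?;
        let sink = gst::ElementFactory::make("janusvrwebrtcsink").build()?;
        // Assumed property names; set before the signaller connects.
        let signaller = sink.property::<gst::glib::Object>("signaller");
        signaller.set_property("string-ids", true);
        signaller.set_property("room-id", "133"); // joins room "133", not room 133
        Ok(())
    }
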
Xavier Claessens 23955d2dbb janusvr: Room IDs can be strings
Sponsored-by: Netflix Inc.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1466>
2024-02-26 11:10:00 +00:00
Sebastian Dröge 340d65d7a4 Update Cargo.lock
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1474>
2024-02-26 11:14:01 +02:00
Sebastian Dröge b9195ed309 fmp4mux: Update to dash-mpd 0.15
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1474>
2024-02-26 11:14:01 +02:00
Sebastian Dröge fc1c017fc6 Update CHANGELOG.md for 0.12.1 2024-02-23 19:19:17 +02:00
Sebastian Dröge f563f8334b rtp: Add PCMU/PCMA RTP payloader / depayloader elements
These come with new generic RTP payloader, RTP raw-ish audio payloader
and RTP depayloader base classes.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1424>
2024-02-23 14:43:45 +02:00
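
A loopback sketch of the new elements; the element names (rtppcmupay2 / rtppcmudepay2) and the gst::parse module path are assumptions based on the naming used elsewhere in the tree:

    use gst::prelude::*;

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        gst::init()?;
        // Payload PCMU, then immediately depayload and decode it again.
        let pipeline = gst::parse::launch(
            "audiotestsrc ! mulawenc ! rtppcmupay2 ! rtppcmudepay2 ! mulawdec ! fakesink",
        )?;
        pipeline.set_state(gst::State::Playing)?;
        Ok(())
    }
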
Xavier Claessens e09f9e9540 meson: Fix error when default_library=both
Skip duplicated plugin_name when we have both the static and shared
plugin in the plugins list.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1470>
2024-02-22 12:31:23 -05:00
Maksym Khomenko da21dc853d webrtcsink: extensions: separate API and signal checks
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1469>
2024-02-20 19:29:46 +02:00
Maksym Khomenko 2228f882d8 webrtcsink: apply rustfmt
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1469>
2024-02-20 19:29:28 +02:00
Mathieu Duponchelle 8f3a6171ac textwrap: don't split on all whitespaces ..
but only on ASCII whitespaces, as we want to honor non-breaking
whitespaces (\u{a0})

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1468>
2024-02-16 19:38:38 +01:00
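
A std-only sketch of the splitting rule, independent of the element's actual code: splitting on ASCII whitespace leaves U+00A0 intact, so those words stay glued together when wrapping.

    fn main() {
        let line = "10\u{a0}000 EUR total";
        let words: Vec<&str> = line
            .split(|c: char| c.is_ascii_whitespace()) // U+00A0 is not a split point
            .filter(|w| !w.is_empty())
            .collect();
        assert_eq!(words, ["10\u{a0}000", "EUR", "total"]);
    }
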
Xavier Claessens 2572afbf15 janusvr: Add secret-key property
Every API call has an optional "apisecret" argument.

Sponsored-by: Netflix Inc.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1465>
2024-02-16 14:04:59 +00:00
Sebastian Dröge 0faac3b875 deny: Add winnow 0.5 override
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1467>
2024-02-16 14:27:29 +02:00
Sebastian Dröge cb0cc764ba Update Cargo.lock
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1467>
2024-02-16 14:26:44 +02:00
Sebastian Dröge 45f55423fb Remove Cargo.lock from .gitignore
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1467>
2024-02-16 14:25:54 +02:00
Sebastian Dröge 8ef12a72e8 rtpgccbwe: Don't reset PTS/DTS to None
The element is usually placed before `rtpsession`, and `rtpsession`
needs the PTS/DTS for correctly determining the running time. The
running time is then used to produce correct RTCP SR, and to potentially
update an NTP-64 RTP header extension if existing on the packets.

Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/issues/496

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1462>
2024-02-14 08:05:54 +00:00
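
A sketch of the rule, not the element's actual code: when rewriting or repacking buffers, carry the timestamps over instead of clearing them.

    // Copy PTS/DTS so a downstream rtpsession can still derive running times
    // (and thus produce valid RTCP SRs).
    fn copy_timestamps(src: &gst::BufferRef, dest: &mut gst::BufferRef) {
        dest.set_pts(src.pts()); // keep the PTS rather than resetting it to None
        dest.set_dts(src.dts()); // likewise for the DTS
    }
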
Sebastian Dröge 05884de50c textwrap: Remove unnecessary to_string() in debug output of a string
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1458>
2024-02-12 19:09:06 +02:00
Jordan Yelloz 67b7cf9764 webrtcsink: Added sinkpad with "msid" property
This is forwarded to the webrtcbin sinkpad's msid when specified.

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1442>
2024-02-12 15:04:44 +00:00
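
A request-pad sketch; the pad template name and msid value are assumptions:

    use gst::prelude::*;

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        gst::init()?;
        let sink = gst::ElementFactory::make("webrtcsink").build()?;
        // The msid property lives on the requested sink pad itself.
        let pad = sink.request_pad_simple("video_%u").ok_or("no sink pad")?;
        pad.set_property("msid", "webcam-0");
        Ok(())
    }
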
Sebastian Dröge 9827106961 Update Cargo.lock
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1455>
2024-02-11 11:55:37 +02:00
Sebastian Dröge b2d5ee48cd Update to async-tungstenite 0.25
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1455>
2024-02-11 11:31:24 +02:00
Sebastian Dröge 7274c725a6 gtk4: Create a window if running from gst-launch-1.0 or GST_GTK4_WINDOW=1 is set
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/issues/482

Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1454>
2024-02-09 15:05:45 +02:00
Sebastian Dröge 66ad059a47 deny: Add zerocopy 0.6 duplicate override for librespot 2024-02-09 10:05:45 +02:00
Sebastian Dröge c0d111e2c1 utils: Update for renamed clippy lint in 1.76 2024-02-08 21:37:17 +02:00
Sebastian Dröge 9116853e6d Update Cargo.lock
Downgrade clap_derive to 4.4.7 to not require Rust 1.74 or newer.
2024-02-08 20:50:44 +02:00
Sebastian Dröge 21aa61b69c Update Cargo.lock 2024-02-08 19:41:00 +02:00
Sebastian Dröge 119d905805 Update version to 0.13.0-alpha.1 2024-02-08 19:41:00 +02:00
277 changed files with 47419 additions and 12922 deletions

.gitignore vendored
View file

@ -1,4 +1,3 @@
Cargo.lock
target
*~
*.bk

View file

@ -20,6 +20,8 @@ variables:
# to ensure that we are testing against the same thing as GStreamer itself.
# The tag name is included above from the main repo.
GSTREAMER_DOC_IMAGE: "registry.freedesktop.org/gstreamer/gstreamer/amd64/fedora:$FEDORA_TAG-main"
# Use the gstreamer image to trigger the cerbero job, same as the monorepo
CERBERO_TRIGGER_IMAGE: "registry.freedesktop.org/gstreamer/gstreamer/amd64/fedora:$FEDORA_TAG-main"
WINDOWS_BASE: "registry.freedesktop.org/gstreamer/gstreamer-rs/windows"
WINDOWS_RUST_MINIMUM_IMAGE: "$WINDOWS_BASE:$GST_RS_IMG_TAG-main-$GST_RS_MSRV"
WINDOWS_RUST_STABLE_IMAGE: "$WINDOWS_BASE:$GST_RS_IMG_TAG-main-$GST_RS_STABLE"
@ -50,6 +52,7 @@ trigger:
stage: 'trigger'
variables:
GIT_STRATEGY: none
tags: [ 'placeholder-job' ]
script:
- echo "Trigger job done, now running the pipeline."
rules:
@ -287,6 +290,7 @@ test windows stable:
rustfmt:
extends: '.debian:12-stable'
stage: "lint"
tags: [ 'placeholder-job' ]
needs: []
script:
- cargo fmt --version
@ -295,6 +299,7 @@ rustfmt:
typos:
extends: '.debian:12-stable'
stage: "lint"
tags: [ 'placeholder-job' ]
needs: []
script:
- typos
@ -302,6 +307,7 @@ typos:
gstwebrtc-api lint:
image: node:lts
stage: "lint"
tags: [ 'placeholder-job' ]
needs: []
script:
- cd net/webrtc/gstwebrtc-api
@ -311,10 +317,12 @@ gstwebrtc-api lint:
check commits:
extends: '.debian:12-stable'
stage: "lint"
tags: [ 'placeholder-job' ]
needs: []
script:
- ci-fairy check-commits --textwidth 0 --no-signed-off-by
- ci/check-for-symlinks.sh
- ci/check-meson-version.sh
clippy:
extends: '.debian:12-stable'
@ -353,7 +361,8 @@ outdated:
- if: '$CI_PIPELINE_SOURCE == "schedule"'
script:
- cargo update --color=always
- cargo outdated --color=always --root-deps-only --exit-code 1 -v
# env_logger is ignored because it requires Rust >= 1.71
- cargo outdated --color=always --root-deps-only --ignore env_logger --exit-code 1 -v
coverage:
allow_failure: true
@ -384,3 +393,34 @@ coverage:
coverage_report:
coverage_format: cobertura
path: coverage.xml
cerbero trigger:
image: $CERBERO_TRIGGER_IMAGE
needs: [ "trigger" ]
variables:
# We will build this cerbero branch in the cerbero trigger CI
CERBERO_UPSTREAM_BRANCH: 'main'
script:
- ci/cerbero/trigger_cerbero_pipeline.py
rules:
# Never run post merge
- if: '$CI_PROJECT_NAMESPACE == "gstreamer"'
when: never
# Don't run if the only changes are files that cargo-c does not read
- if:
changes:
- "CHANGELOG.md"
- "README.md"
- "deny.toml"
- "rustfmt.toml"
- "typos.toml"
- "*.py"
- "*.sh"
- "Makefile"
- "meson.build"
- "meson_options.txt"
- "**/meson.build"
- "ci/*.sh"
- "ci/*.py"
when: never
- when: always

View file

@ -5,6 +5,74 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/)
and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html),
specifically the [variant used by Rust](http://doc.crates.io/manifest.html#the-version-field).
## [0.12.5] - 2024-04-29
### Fixed
- hrtfrender: Use a bitmask instead of an int in the caps for the channel-mask.
- rtpgccbwe: Don't log an error when pushing a buffer list fails while stopping.
- webrtcsink: Don't panic in bitrate handling with unsupported encoders.
- webrtcsink: Don't panic if unsupported input caps are used.
- webrtcsrc: Allow a `None` producer-id in `request-encoded-filter` signal.
### Added
- aws: New property to support path-style addressing.
- fmp4mux / mp4mux: Support FLAC inside (f)MP4.
- gtk4: Support directly importing dmabufs with GTK 4.14.
- gtk4: Add force-aspect-ratio property similar to other video sinks.
## [0.12.4] - 2024-04-08
### Fixed
- aws: Use fixed behaviour version to ensure that updates to the AWS SDK don't
change any default configurations in unexpected ways.
- onvifmetadataparse: Fix possible deadlock on shutdown.
- webrtcsink: Set `perfect-timestamp=true` on audio encoders to work around
bugs in Chrome's audio decoders.
- Various clippy warnings.
### Changed
- reqwest: Update to reqwest 0.12.
- webrtchttp: Update to reqwest 0.12.
## [0.12.3] - 2024-03-21
### Fixed
- gtk4paintablesink: Fix scaling of texture position.
- janusvrwebrtcsink: Handle 64-bit numerical room ids.
- janusvrwebrtcsink: Don't include deprecated audio/video fields in publish
messages.
- janusvrwebrtcsink: Handle various other messages to avoid printing errors.
- livekitwebrtc: Fix shutdown behaviour.
- rtpgccbwe: Don't forward buffer lists with buffers from different SSRCs to
avoid breaking assumptions in rtpsession.
- sccparse: Ignore invalid timecodes during seeking.
- webrtcsink: Don't try parsing audio caps as video caps.
### Changed
- webrtc: Allow resolution and framerate changes.
- webrtcsrc: Make producer-peer-id optional.
### Added
- livekitwebrtcsrc: Add new LiveKit source element.
- regex: Add support for configuring regex behaviour.
- spotifyaudiosrc: Document how to use with non-Facebook accounts.
- webrtcsrc: Add `do-retransmission` property.
## [0.12.2] - 2024-02-26
### Fixed
- rtpgccbwe: Don't reset PTS/DTS to `None` as otherwise `rtpsession` won't be
able to generate valid RTCP.
- webrtcsink: Fix usage with 1.22.
### Added
- janusvrwebrtcsink: Add `secret-key` property.
- janusvrwebrtcsink: Allow for string room ids and add `string-ids` property.
- textwrap: Don't split on all whitespaces, especially not on non-breaking
whitespace.
## [0.12.1] - 2024-02-13
### Added
- gtk4: Create a window for testing purposes when running in `gst-launch-1.0`
or if `GST_GTK4_WINDOW=1` is set.
- webrtcsink: Add `msid` property.
## [0.12.0] - 2024-02-08
### Changed
- ndi: `ndisrc` passes received data downstream without an additional copy, if
@ -36,7 +104,6 @@ specifically the [variant used by Rust](http://doc.crates.io/manifest.html#the-v
- New `janusvrwebrtcsink` element for the Janus VideoRoom API.
- New `rtspsrc2` element.
- New `whipserversrc` element.
- gtk4: New `background-color` property for setting the color of the
background of the frame and the borders, if any.
- gtk4: New `scale-filter` property for defining how to scale the frames.
@ -344,7 +411,12 @@ specifically the [variant used by Rust](http://doc.crates.io/manifest.html#the-v
- webrtcsink: Make the `turn-server` property a `turn-servers` list
- webrtcsink: Move from async-std to tokio
[Unreleased]: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/compare/0.12.0...HEAD
[Unreleased]: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/compare/0.12.5...HEAD
[0.12.5]: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/compare/0.12.4...0.12.5
[0.12.4]: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/compare/0.12.3...0.12.4
[0.12.3]: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/compare/0.12.2...0.12.3
[0.12.2]: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/compare/0.12.1...0.12.2
[0.12.1]: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/compare/0.12.0...0.12.1
[0.12.0]: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/compare/0.11.3...0.12.0
[0.11.3]: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/compare/0.11.2...0.11.3
[0.11.2]: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/compare/0.11.1...0.11.2

Cargo.lock generated

File diff suppressed because it is too large

View file

@ -12,9 +12,11 @@ members = [
"audio/spotify",
"generic/file",
"generic/originalbuffer",
"generic/sodium",
"generic/threadshare",
"generic/inter",
"generic/gopbuffer",
"mux/flavors",
"mux/fmp4",
@ -32,6 +34,7 @@ members = [
"net/webrtc",
"net/webrtc/protocol",
"net/webrtc/signalling",
"net/quinn",
"text/ahead",
"text/json",
@ -65,8 +68,10 @@ default-members = [
"audio/claxon",
"audio/lewton",
"generic/originalbuffer",
"generic/threadshare",
"generic/inter",
"generic/gopbuffer",
"mux/fmp4",
"mux/mp4",
@ -83,6 +88,7 @@ default-members = [
"net/webrtc/protocol",
"net/webrtc/signalling",
"net/ndi",
"net/quinn",
"text/ahead",
"text/json",
@ -113,7 +119,7 @@ panic = 'unwind'
opt-level = 1
[workspace.package]
version = "0.12.0-alpha.1"
version = "0.13.0-alpha.1"
repository = "https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs"
edition = "2021"
rust-version = "1.70"
@ -130,6 +136,7 @@ gdk-wayland = { package = "gdk4-wayland", git = "https://github.com/gtk-rs/gtk4-
gdk-x11 = { package = "gdk4-x11", git = "https://github.com/gtk-rs/gtk4-rs", branch = "master"}
gdk-win32 = { package = "gdk4-win32", git = "https://github.com/gtk-rs/gtk4-rs", branch = "master"}
gst = { package = "gstreamer", git = "https://gitlab.freedesktop.org/gstreamer/gstreamer-rs", branch = "main" }
gst-allocators = { package = "gstreamer-allocators", git = "https://gitlab.freedesktop.org/gstreamer/gstreamer-rs", branch = "main" }
gst-app = { package = "gstreamer-app", git = "https://gitlab.freedesktop.org/gstreamer/gstreamer-rs", branch = "main" }
gst-audio = { package = "gstreamer-audio", git = "https://gitlab.freedesktop.org/gstreamer/gstreamer-rs", branch = "main" }
gst-base = { package = "gstreamer-base", git = "https://gitlab.freedesktop.org/gstreamer/gstreamer-rs", branch = "main" }

View file

@ -33,6 +33,9 @@ You will find the following plugins in this repository:
- `onvif`: Various elements for parsing, RTP (de)payloading, overlaying of ONVIF timed metadata.
- `quinn`: Transfer data over the network using QUIC
- `quinnquicsink`/`quinnquicsrc`: Send and receive data using QUIC
- `raptorq`: Encoder/decoder element for RaptorQ RTP FEC mechanism.
- `reqwest`: An HTTP source element based on the [reqwest](https://github.com/seanmonstar/reqwest) library.

View file

@ -649,7 +649,7 @@ impl BaseTransformImpl for HrtfRender {
if direction == gst::PadDirection::Sink {
s.set("channels", 2);
s.set("channel-mask", 0x3);
s.set("channel-mask", gst::Bitmask(0x3));
} else {
let settings = self.settings.lock().unwrap();
if let Some(objs) = &settings.spatial_objects {

View file

@ -8,10 +8,11 @@ to respect their legal/licensing restrictions.
## Spotify Credentials
This plugin requires a [Spotify Premium](https://www.spotify.com/premium/) account configured
with a [device password](https://www.spotify.com/us/account/set-device-password/).
This plugin requires a [Spotify Premium](https://www.spotify.com/premium/) account.
If your account is linked with Facebook, you'll need to set up
a [device username and password](https://www.spotify.com/us/account/set-device-password/).
You can then set the device username and password using the `username` and `password` properties.
The username and password are then set using the `username` and `password` properties.
You may also want to cache credentials and downloaded files, see the `cache-` properties on the element.

View file

@ -30,13 +30,13 @@ impl Settings {
pub fn properties() -> Vec<glib::ParamSpec> {
vec![glib::ParamSpecString::builder("username")
.nick("Username")
.blurb("Spotify device username from https://www.spotify.com/us/account/set-device-password/")
.blurb("Spotify username, Facebook accounts need a device username from https://www.spotify.com/us/account/set-device-password/")
.default_value(Some(""))
.mutable_ready()
.build(),
glib::ParamSpecString::builder("password")
.nick("Password")
.blurb("Spotify device password from https://www.spotify.com/us/account/set-device-password/")
.blurb("Spotify password, Facebook accounts need a device password from https://www.spotify.com/us/account/set-device-password/")
.default_value(Some(""))
.mutable_ready()
.build(),

View file

@ -0,0 +1,103 @@
#!/usr/bin/python3
#
# Copied from gstreamer.git/ci/gitlab/trigger_cerbero_pipeline.py
import time
import os
import sys
import gitlab
CERBERO_PROJECT = 'gstreamer/cerbero'
class Status:
FAILED = 'failed'
MANUAL = 'manual'
CANCELED = 'canceled'
SUCCESS = 'success'
SKIPPED = 'skipped'
CREATED = 'created'
@classmethod
def is_finished(cls, state):
return state in [
cls.FAILED,
cls.MANUAL,
cls.CANCELED,
cls.SUCCESS,
cls.SKIPPED,
]
def fprint(msg):
print(msg, end="")
sys.stdout.flush()
if __name__ == "__main__":
server = os.environ['CI_SERVER_URL']
gl = gitlab.Gitlab(server,
private_token=os.environ.get('GITLAB_API_TOKEN'),
job_token=os.environ.get('CI_JOB_TOKEN'))
def get_matching_user_project(project, branch):
cerbero = gl.projects.get(project)
# Search for matching branches, return only if the branch name matches
# exactly
for b in cerbero.branches.list(search=branch, iterator=True):
if branch == b.name:
return cerbero
return None
cerbero = None
# We do not want to run on (often out of date) user upstream branch
if os.environ["CI_COMMIT_REF_NAME"] != os.environ['CERBERO_UPSTREAM_BRANCH']:
try:
cerbero_name = f'{os.environ["CI_PROJECT_NAMESPACE"]}/cerbero'
cerbero_branch = os.environ["CI_COMMIT_REF_NAME"]
cerbero = get_matching_user_project(cerbero_name, cerbero_branch)
except gitlab.exceptions.GitlabGetError:
pass
if cerbero is None:
cerbero_name = CERBERO_PROJECT
cerbero_branch = os.environ["CERBERO_UPSTREAM_BRANCH"]
cerbero = gl.projects.get(cerbero_name)
fprint(f"-> Triggering on branch {cerbero_branch} in {cerbero_name}\n")
# CI_PROJECT_URL is not necessarily the project where the branch we need to
# build resides, for instance merge request pipelines can be run on
# 'gstreamer' namespace. Fetch the branch name in the same way, just in
# case it breaks in the future.
if 'CI_MERGE_REQUEST_SOURCE_PROJECT_URL' in os.environ:
project_url = os.environ['CI_MERGE_REQUEST_SOURCE_PROJECT_URL']
project_branch = os.environ['CI_MERGE_REQUEST_SOURCE_BRANCH_NAME']
else:
project_url = os.environ['CI_PROJECT_URL']
project_branch = os.environ['CI_COMMIT_REF_NAME']
variables = {
"CI_GST_PLUGINS_RS_URL": project_url,
"CI_GST_PLUGINS_RS_REF_NAME": project_branch,
# This tells cerbero CI that this is a pipeline started via the
# trigger API, which means it can use a deps cache instead of
# building from scratch.
"CI_GSTREAMER_TRIGGERED": "true",
}
pipe = cerbero.trigger_pipeline(
token=os.environ['CI_JOB_TOKEN'],
ref=cerbero_branch,
variables=variables,
)
fprint(f'Cerbero pipeline running at {pipe.web_url} ')
while True:
time.sleep(15)
pipe.refresh()
if Status.is_finished(pipe.status):
fprint(f": {pipe.status}\n")
sys.exit(0 if pipe.status == Status.SUCCESS else 1)
else:
fprint(".")

ci/check-meson-version.sh Executable file
View file

@ -0,0 +1,14 @@
#!/bin/bash
MESON_VERSION=`head -n5 meson.build | grep ' version\s*:' | sed -e "s/.*version\s*:\s*'//" -e "s/',.*//"`
CARGO_VERSION=`cat Cargo.toml | grep -A1 workspace.package | grep ^version | sed -e 's/^version = "\(.*\)"/\1/'`
echo "gst-plugins-rs version (meson.build) : $MESON_VERSION"
echo "gst-plugins-rs version (Cargo.toml) : $CARGO_VERSION"
if test "x$MESON_VERSION" != "x$CARGO_VERSION"; then
echo
echo "===> Version mismatch between meson.build and Cargo.toml! <==="
echo
exit 1;
fi

View file

@ -70,6 +70,12 @@ version = "0.9"
[[bans.skip]]
name = "hmac"
version = "0.11"
[[bans.skip]]
name = "zerocopy"
version = "0.6"
[[bans.skip]]
name = "multimap"
version = "0.8"
# field-offset and nix depend on an older memoffset
# https://github.com/Diggsey/rust-field-offset/pull/23
@ -87,17 +93,15 @@ version = "0.1"
[[bans.skip]]
name = "base64"
version = "0.13"
[[bans.skip]]
name = "base64"
version = "0.21"
# Various crates depend on an older version of socket2
[[bans.skip]]
name = "socket2"
version = "0.4"
# Various crates depend on an older version of syn
[[bans.skip]]
name = "syn"
version = "1.0"
# Various crates depend on an older version of bitflags
[[bans.skip]]
name = "bitflags"
@ -184,6 +188,54 @@ version = "0.2"
[[bans.skip]]
name = "toml_edit"
version = "0.21"
[[bans.skip]]
name = "winnow"
version = "0.5"
# Various crates depend on an older version of heck
[[bans.skip]]
name = "heck"
version = "0.4"
# Various crates depend on an older version of hyper / reqwest / headers / etc
[[bans.skip]]
name = "hyper"
version = "0.14"
[[bans.skip]]
name = "hyper-tls"
version = "0.5"
[[bans.skip]]
name = "http-body"
version = "0.4"
[[bans.skip]]
name = "headers-core"
version = "0.2"
[[bans.skip]]
name = "headers"
version = "0.3"
[[bans.skip]]
name = "h2"
version = "0.3"
[[bans.skip]]
name = "reqwest"
version = "0.11"
[[bans.skip]]
name = "rustls-pemfile"
version = "1.0"
[[bans.skip]]
name = "winreg"
version = "0.50"
# The AWS SDK uses old versions of rustls and related crates
[[bans.skip]]
name = "rustls"
version = "0.21"
[[bans.skip]]
name = "rustls-native-certs"
version = "0.6"
[[bans.skip]]
name = "rustls-webpki"
version = "0.101"
[sources]
unknown-registry = "deny"

View file

@ -1,5 +1,9 @@
build_hotdoc = false
if get_option('doc').disabled()
subdir_done()
endif
if meson.is_cross_build()
if get_option('doc').enabled()
error('Documentation enabled but building the doc while cross building is not supported yet.')

File diff suppressed because it is too large

View file

@ -0,0 +1,44 @@
[package]
name = "gst-plugin-gopbuffer"
version.workspace = true
authors = ["Matthew Waters <matthew@centricular.com>"]
license = "MPL-2.0"
description = "Store complete groups of pictures at a time"
repository.workspace = true
edition.workspace = true
rust-version.workspace = true
[dependencies]
anyhow = "1"
gst = { workspace = true, features = ["v1_18"] }
gst-video = { workspace = true, features = ["v1_18"] }
once_cell.workspace = true
[lib]
name = "gstgopbuffer"
crate-type = ["cdylib", "rlib"]
path = "src/lib.rs"
[dev-dependencies]
gst-app = { workspace = true, features = ["v1_18"] }
gst-check = { workspace = true, features = ["v1_18"] }
[build-dependencies]
gst-plugin-version-helper = { path="../../version-helper" }
[features]
static = []
capi = []
[package.metadata.capi]
min_version = "0.8.0"
[package.metadata.capi.header]
enabled = false
[package.metadata.capi.library]
install_subdir = "gstreamer-1.0"
versioning = false
[package.metadata.capi.pkg_config]
requires_private = "gstreamer-1.0, gstreamer-base-1.0, gstreamer-audio-1.0, gstreamer-video-1.0, gobject-2.0, glib-2.0, gmodule-2.0"

View file

@ -0,0 +1,373 @@
Mozilla Public License Version 2.0
==================================
1. Definitions
--------------
1.1. "Contributor"
means each individual or legal entity that creates, contributes to
the creation of, or owns Covered Software.
1.2. "Contributor Version"
means the combination of the Contributions of others (if any) used
by a Contributor and that particular Contributor's Contribution.
1.3. "Contribution"
means Covered Software of a particular Contributor.
1.4. "Covered Software"
means Source Code Form to which the initial Contributor has attached
the notice in Exhibit A, the Executable Form of such Source Code
Form, and Modifications of such Source Code Form, in each case
including portions thereof.
1.5. "Incompatible With Secondary Licenses"
means
(a) that the initial Contributor has attached the notice described
in Exhibit B to the Covered Software; or
(b) that the Covered Software was made available under the terms of
version 1.1 or earlier of the License, but not also under the
terms of a Secondary License.
1.6. "Executable Form"
means any form of the work other than Source Code Form.
1.7. "Larger Work"
means a work that combines Covered Software with other material, in
a separate file or files, that is not Covered Software.
1.8. "License"
means this document.
1.9. "Licensable"
means having the right to grant, to the maximum extent possible,
whether at the time of the initial grant or subsequently, any and
all of the rights conveyed by this License.
1.10. "Modifications"
means any of the following:
(a) any file in Source Code Form that results from an addition to,
deletion from, or modification of the contents of Covered
Software; or
(b) any new file in Source Code Form that contains any Covered
Software.
1.11. "Patent Claims" of a Contributor
means any patent claim(s), including without limitation, method,
process, and apparatus claims, in any patent Licensable by such
Contributor that would be infringed, but for the grant of the
License, by the making, using, selling, offering for sale, having
made, import, or transfer of either its Contributions or its
Contributor Version.
1.12. "Secondary License"
means either the GNU General Public License, Version 2.0, the GNU
Lesser General Public License, Version 2.1, the GNU Affero General
Public License, Version 3.0, or any later versions of those
licenses.
1.13. "Source Code Form"
means the form of the work preferred for making modifications.
1.14. "You" (or "Your")
means an individual or a legal entity exercising rights under this
License. For legal entities, "You" includes any entity that
controls, is controlled by, or is under common control with You. For
purposes of this definition, "control" means (a) the power, direct
or indirect, to cause the direction or management of such entity,
whether by contract or otherwise, or (b) ownership of more than
fifty percent (50%) of the outstanding shares or beneficial
ownership of such entity.
2. License Grants and Conditions
--------------------------------
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
(a) under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or
as part of a Larger Work; and
(b) under Patent Claims of such Contributor to make, use, sell, offer
for sale, have made, import, and otherwise transfer either its
Contributions or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution
become effective for each Contribution on the date the Contributor first
distributes such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under
this License. No additional rights or licenses will be implied from the
distribution or licensing of Covered Software under this License.
Notwithstanding Section 2.1(b) above, no patent license is granted by a
Contributor:
(a) for any code that a Contributor has removed from Covered Software;
or
(b) for infringements caused by: (i) Your and any other third party's
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
(c) under Patent Claims infringed by Covered Software in the absence of
its Contributions.
This License does not grant any rights in the trademarks, service marks,
or logos of any Contributor (except as may be necessary to comply with
the notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this
License (see Section 10.2) or under the terms of a Secondary License (if
permitted under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its
Contributions are its original creation(s) or it has sufficient rights
to grant the rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under
applicable copyright doctrines of fair use, fair dealing, or other
equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted
in Section 2.1.
3. Responsibilities
-------------------
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under
the terms of this License. You must inform recipients that the Source
Code Form of the Covered Software is governed by the terms of this
License, and how they can obtain a copy of this License. You may not
attempt to alter or restrict the recipients' rights in the Source Code
Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
(a) such Covered Software must also be made available in Source Code
Form, as described in Section 3.1, and You must inform recipients of
the Executable Form how they can obtain a copy of such Source Code
Form by reasonable means in a timely manner, at a charge no more
than the cost of distribution to the recipient; and
(b) You may distribute such Executable Form under the terms of this
License, or sublicense it under different terms, provided that the
license for the Executable Form does not attempt to limit or alter
the recipients' rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for
the Covered Software. If the Larger Work is a combination of Covered
Software with a work governed by one or more Secondary Licenses, and the
Covered Software is not Incompatible With Secondary Licenses, this
License permits You to additionally distribute such Covered Software
under the terms of such Secondary License(s), so that the recipient of
the Larger Work may, at their option, further distribute the Covered
Software under the terms of either this License or such Secondary
License(s).
3.4. Notices
You may not remove or alter the substance of any license notices
(including copyright notices, patent notices, disclaimers of warranty,
or limitations of liability) contained within the Source Code Form of
the Covered Software, except that You may alter any license notices to
the extent required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on
behalf of any Contributor. You must make it absolutely clear that any
such warranty, support, indemnity, or liability obligation is offered by
You alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
4. Inability to Comply Due to Statute or Regulation
---------------------------------------------------
If it is impossible for You to comply with any of the terms of this
License with respect to some or all of the Covered Software due to
statute, judicial order, or regulation then You must: (a) comply with
the terms of this License to the maximum extent possible; and (b)
describe the limitations and the code they affect. Such description must
be placed in a text file included with all distributions of the Covered
Software under this License. Except to the extent prohibited by statute
or regulation, such description must be sufficiently detailed for a
recipient of ordinary skill to be able to understand it.
5. Termination
--------------
5.1. The rights granted under this License will terminate automatically
if You fail to comply with any of its terms. However, if You become
compliant, then the rights granted under this License from a particular
Contributor are reinstated (a) provisionally, unless and until such
Contributor explicitly and finally terminates Your grants, and (b) on an
ongoing basis, if such Contributor fails to notify You of the
non-compliance by some reasonable means prior to 60 days after You have
come back into compliance. Moreover, Your grants from a particular
Contributor are reinstated on an ongoing basis if such Contributor
notifies You of the non-compliance by some reasonable means, this is the
first time You have received notice of non-compliance with this License
from such Contributor, and You become compliant prior to 30 days after
Your receipt of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions,
counter-claims, and cross-claims) alleging that a Contributor Version
directly or indirectly infringes any patent, then the rights granted to
You by any and all Contributors for the Covered Software under Section
2.1 of this License shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all
end user license agreements (excluding distributors and resellers) which
have been validly granted by You or Your distributors under this License
prior to termination shall survive termination.
************************************************************************
* *
* 6. Disclaimer of Warranty *
* ------------------------- *
* *
* Covered Software is provided under this License on an "as is" *
* basis, without warranty of any kind, either expressed, implied, or *
* statutory, including, without limitation, warranties that the *
* Covered Software is free of defects, merchantable, fit for a *
* particular purpose or non-infringing. The entire risk as to the *
* quality and performance of the Covered Software is with You. *
* Should any Covered Software prove defective in any respect, You *
* (not any Contributor) assume the cost of any necessary servicing, *
* repair, or correction. This disclaimer of warranty constitutes an *
* essential part of this License. No use of any Covered Software is *
* authorized under this License except under this disclaimer. *
* *
************************************************************************
************************************************************************
* *
* 7. Limitation of Liability *
* -------------------------- *
* *
* Under no circumstances and under no legal theory, whether tort *
* (including negligence), contract, or otherwise, shall any *
* Contributor, or anyone who distributes Covered Software as *
* permitted above, be liable to You for any direct, indirect, *
* special, incidental, or consequential damages of any character *
* including, without limitation, damages for lost profits, loss of *
* goodwill, work stoppage, computer failure or malfunction, or any *
* and all other commercial damages or losses, even if such party *
* shall have been informed of the possibility of such damages. This *
* limitation of liability shall not apply to liability for death or *
* personal injury resulting from such party's negligence to the *
* extent applicable law prohibits such limitation. Some *
* jurisdictions do not allow the exclusion or limitation of *
* incidental or consequential damages, so this exclusion and *
* limitation may not apply to You. *
* *
************************************************************************
8. Litigation
-------------
Any litigation relating to this License may be brought only in the
courts of a jurisdiction where the defendant maintains its principal
place of business and such litigation shall be governed by laws of that
jurisdiction, without reference to its conflict-of-law provisions.
Nothing in this Section shall prevent a party's ability to bring
cross-claims or counter-claims.
9. Miscellaneous
----------------
This License represents the complete agreement concerning the subject
matter hereof. If any provision of this License is held to be
unenforceable, such provision shall be reformed only to the extent
necessary to make it enforceable. Any law or regulation which provides
that the language of a contract shall be construed against the drafter
shall not be used to construe this License against a Contributor.
10. Versions of the License
---------------------------
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version
of the License under which You originally received the Covered Software,
or under the terms of any subsequent version published by the license
steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a
modified version of this License if you rename the license and remove
any references to the name of the license steward (except to note that
such modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary
Licenses
If You choose to distribute Source Code Form that is Incompatible With
Secondary Licenses under the terms of this version of the License, the
notice described in Exhibit B of this License must be attached.
Exhibit A - Source Code Form License Notice
-------------------------------------------
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular
file, then You may include the notice in a location (such as a LICENSE
file in a relevant directory) where a recipient would be likely to look
for such a notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - "Incompatible With Secondary Licenses" Notice
---------------------------------------------------------
This Source Code Form is "Incompatible With Secondary Licenses", as
defined by the Mozilla Public License, v. 2.0.

View file

@ -0,0 +1,3 @@
fn main() {
gst_plugin_version_helper::info()
}

View file

@ -0,0 +1,880 @@
// Copyright (C) 2023 Matthew Waters <matthew@centricular.com>
//
// This Source Code Form is subject to the terms of the Mozilla Public License, v2.0.
// If a copy of the MPL was not distributed with this file, You can obtain one at
// <https://mozilla.org/MPL/2.0/>.
//
// SPDX-License-Identifier: MPL-2.0
/**
* SECTION:element-gopbuffer
*
* #gopbuffer is an element that can be used to store a minimum duration of data delimited by
* discrete GOPs (Groups of Pictures). It does this by differentiating on the DELTA_UNIT
* flag on each input buffer.
*
* One example of the usefulness of #gopbuffer is its ability to store a backlog of data starting
* on a key frame boundary if, say, the previous 10 seconds of a stream should be recorded to
* disk.
*
* ## Example pipeline
*
* |[
* gst-launch-1.0 videotestsrc ! vp8enc ! gopbuffer minimum-duration=10000000000 ! fakesink
* ]|
*
* Since: plugins-rs-0.13.0
*/
use gst::glib;
use gst::prelude::*;
use gst::subclass::prelude::*;
use std::collections::VecDeque;
use std::sync::Mutex;
use once_cell::sync::Lazy;
static CAT: Lazy<gst::DebugCategory> = Lazy::new(|| {
gst::DebugCategory::new(
"gopbuffer",
gst::DebugColorFlags::empty(),
Some("GopBuffer Element"),
)
});
const DEFAULT_MIN_TIME: gst::ClockTime = gst::ClockTime::from_seconds(1);
const DEFAULT_MAX_TIME: Option<gst::ClockTime> = None;
#[derive(Debug, Clone)]
struct Settings {
min_time: gst::ClockTime,
max_time: Option<gst::ClockTime>,
}
impl Default for Settings {
fn default() -> Self {
Settings {
min_time: DEFAULT_MIN_TIME,
max_time: DEFAULT_MAX_TIME,
}
}
}
#[derive(Debug, Copy, Clone)]
pub(crate) enum DeltaFrames {
/// Only single completely decodable frames
IntraOnly,
/// Frames may depend on past frames
PredictiveOnly,
/// Frames may depend on past or future frames
Bidirectional,
}
impl DeltaFrames {
/// Whether dts is required to order buffers differently from presentation order
pub(crate) fn requires_dts(&self) -> bool {
matches!(self, Self::Bidirectional)
}
/// Whether this coding structure does not allow delta flags on buffers
pub(crate) fn intra_only(&self) -> bool {
matches!(self, Self::IntraOnly)
}
pub(crate) fn from_caps(caps: &gst::CapsRef) -> Option<Self> {
let s = caps.structure(0)?;
Some(match s.name().as_str() {
"video/x-h264" | "video/x-h265" => DeltaFrames::Bidirectional,
"video/x-vp8" | "video/x-vp9" | "video/x-av1" => DeltaFrames::PredictiveOnly,
"image/jpeg" | "image/png" | "video/x-raw" => DeltaFrames::IntraOnly,
_ => return None,
})
}
}
// TODO: add buffer list support
#[derive(Debug)]
enum GopItem {
Buffer(gst::Buffer),
Event(gst::Event),
}
struct Gop {
// all times are in running time
start_pts: gst::ClockTime,
start_dts: Option<gst::Signed<gst::ClockTime>>,
earliest_pts: gst::ClockTime,
final_earliest_pts: bool,
end_pts: gst::ClockTime,
end_dts: Option<gst::Signed<gst::ClockTime>>,
final_end_pts: bool,
// Buffer or event
data: VecDeque<GopItem>,
}
impl Gop {
fn push_on_pad(mut self, pad: &gst::Pad) -> Result<gst::FlowSuccess, gst::FlowError> {
let mut iter = self.data.iter().filter_map(|item| match item {
GopItem::Buffer(buffer) => buffer.pts(),
_ => None,
});
let first_pts = iter.next();
let last_pts = iter.last();
gst::debug!(
CAT,
"pushing gop with start pts {} end pts {}",
first_pts.display(),
last_pts.display(),
);
for item in self.data.drain(..) {
match item {
GopItem::Buffer(buffer) => {
pad.push(buffer)?;
}
GopItem::Event(event) => {
pad.push_event(event);
}
}
}
Ok(gst::FlowSuccess::Ok)
}
}
struct Stream {
sinkpad: gst::Pad,
srcpad: gst::Pad,
sink_segment: Option<gst::FormattedSegment<gst::ClockTime>>,
delta_frames: DeltaFrames,
queued_gops: VecDeque<Gop>,
}
impl Stream {
fn queue_buffer(
&mut self,
buffer: gst::Buffer,
segment: &gst::FormattedSegment<gst::ClockTime>,
) -> Result<gst::FlowSuccess, gst::FlowError> {
let pts_position = buffer.pts().unwrap();
let end_pts_position = pts_position
.opt_add(buffer.duration())
.unwrap_or(pts_position);
let pts = segment
.to_running_time_full(pts_position)
.ok_or_else(|| {
gst::error!(CAT, obj: self.sinkpad, "Couldn't convert PTS to running time");
gst::FlowError::Error
})?
.positive()
.unwrap_or_else(|| {
gst::warning!(CAT, obj: self.sinkpad, "Negative PTSs are not supported");
gst::ClockTime::ZERO
});
let end_pts = segment
.to_running_time_full(end_pts_position)
.ok_or_else(|| {
gst::error!(
CAT,
obj: self.sinkpad,
"Couldn't convert end PTS to running time"
);
gst::FlowError::Error
})?
.positive()
.unwrap_or_else(|| {
gst::warning!(CAT, obj: self.sinkpad, "Negative PTSs are not supported");
gst::ClockTime::ZERO
});
let (dts, end_dts) = if !self.delta_frames.requires_dts() {
(None, None)
} else {
let dts_position = buffer.dts().expect("No dts");
let end_dts_position = buffer
.duration()
.opt_add(dts_position)
.unwrap_or(dts_position);
let dts = segment.to_running_time_full(dts_position).ok_or_else(|| {
gst::error!(CAT, obj: self.sinkpad, "Couldn't convert DTS to running time");
gst::FlowError::Error
})?;
let end_dts = segment
.to_running_time_full(end_dts_position)
.ok_or_else(|| {
gst::error!(
CAT,
obj: self.sinkpad,
"Couldn't convert end DTS to running time"
);
gst::FlowError::Error
})?;
let end_dts = std::cmp::max(end_dts, dts);
(Some(dts), Some(end_dts))
};
if !buffer.flags().contains(gst::BufferFlags::DELTA_UNIT) {
gst::debug!(
CAT,
"New GOP detected with buffer pts {} dts {}",
buffer.pts().display(),
buffer.dts().display()
);
let gop = Gop {
start_pts: pts,
start_dts: dts,
earliest_pts: pts,
final_earliest_pts: false,
end_pts: pts,
end_dts,
final_end_pts: false,
data: VecDeque::from([GopItem::Buffer(buffer)]),
};
self.queued_gops.push_front(gop);
if let Some(prev_gop) = self.queued_gops.get_mut(1) {
gst::debug!(
CAT,
obj: self.sinkpad,
"Updating previous GOP starting at PTS {} to end PTS {}",
prev_gop.earliest_pts,
pts,
);
prev_gop.end_pts = std::cmp::max(prev_gop.end_pts, pts);
prev_gop.end_dts = std::cmp::max(prev_gop.end_dts, dts);
if !self.delta_frames.requires_dts() {
prev_gop.final_end_pts = true;
}
if !prev_gop.final_earliest_pts {
// Don't bother logging this for intra-only streams as it would be for every
// single buffer.
if self.delta_frames.requires_dts() {
gst::debug!(
CAT,
obj: self.sinkpad,
"Previous GOP has final earliest PTS at {}",
prev_gop.earliest_pts
);
}
prev_gop.final_earliest_pts = true;
if let Some(prev_prev_gop) = self.queued_gops.get_mut(2) {
prev_prev_gop.final_end_pts = true;
}
}
}
} else if let Some(gop) = self.queued_gops.front_mut() {
gop.end_pts = std::cmp::max(gop.end_pts, end_pts);
gop.end_dts = gop.end_dts.opt_max(end_dts);
gop.data.push_back(GopItem::Buffer(buffer));
if self.delta_frames.requires_dts() {
let dts = dts.unwrap();
if gop.earliest_pts > pts && !gop.final_earliest_pts {
gst::debug!(
CAT,
obj: self.sinkpad,
"Updating current GOP earliest PTS from {} to {}",
gop.earliest_pts,
pts
);
gop.earliest_pts = pts;
if let Some(prev_gop) = self.queued_gops.get_mut(1) {
if prev_gop.end_pts < pts {
gst::debug!(
CAT,
obj: self.sinkpad,
"Updating previous GOP starting PTS {} end time from {} to {}",
pts,
prev_gop.end_pts,
pts
);
prev_gop.end_pts = pts;
}
}
}
let gop = self.queued_gops.front_mut().unwrap();
// The earliest PTS is known when the current DTS is bigger or equal to the first
// PTS that was observed in this GOP. If there was another frame later that had a
// lower PTS then it wouldn't be possible to display it in time anymore, i.e. the
// stream would be invalid.
if gop.start_pts <= dts && !gop.final_earliest_pts {
gst::debug!(
CAT,
obj: self.sinkpad,
"GOP has final earliest PTS at {}",
gop.earliest_pts
);
gop.final_earliest_pts = true;
if let Some(prev_gop) = self.queued_gops.get_mut(1) {
prev_gop.final_end_pts = true;
}
}
}
} else {
gst::debug!(
CAT,
"dropping buffer before first GOP with pts {} dts {}",
buffer.pts().display(),
buffer.dts().display()
);
}
if let Some((prev_gop, first_gop)) = Option::zip(
self.queued_gops.iter().find(|gop| gop.final_end_pts),
self.queued_gops.back(),
) {
gst::debug!(
CAT,
obj: self.sinkpad,
"Queued full GOPs duration updated to {}",
prev_gop.end_pts.saturating_sub(first_gop.earliest_pts),
);
}
gst::debug!(
CAT,
obj: self.sinkpad,
"Queued duration updated to {}",
Option::zip(self.queued_gops.front(), self.queued_gops.back())
.map(|(end, start)| end.end_pts.saturating_sub(start.start_pts))
.unwrap_or(gst::ClockTime::ZERO)
);
Ok(gst::FlowSuccess::Ok)
}
fn oldest_gop(&mut self) -> Option<Gop> {
self.queued_gops.pop_back()
}
fn peek_oldest_gop(&self) -> Option<&Gop> {
self.queued_gops.back()
}
fn peek_second_oldest_gop(&self) -> Option<&Gop> {
if self.queued_gops.len() <= 1 {
return None;
}
self.queued_gops.get(self.queued_gops.len() - 2)
}
fn drain_all(&mut self) -> impl Iterator<Item = Gop> + '_ {
self.queued_gops.drain(..)
}
fn flush(&mut self) {
self.queued_gops.clear();
}
}
#[derive(Default)]
struct State {
streams: Vec<Stream>,
}
impl State {
fn stream_from_sink_pad(&self, pad: &gst::Pad) -> Option<&Stream> {
self.streams.iter().find(|stream| &stream.sinkpad == pad)
}
fn stream_from_sink_pad_mut(&mut self, pad: &gst::Pad) -> Option<&mut Stream> {
self.streams
.iter_mut()
.find(|stream| &stream.sinkpad == pad)
}
fn stream_from_src_pad(&self, pad: &gst::Pad) -> Option<&Stream> {
self.streams.iter().find(|stream| &stream.srcpad == pad)
}
}
#[derive(Default)]
pub(crate) struct GopBuffer {
state: Mutex<State>,
settings: Mutex<Settings>,
}
impl GopBuffer {
fn sink_chain(
&self,
pad: &gst::Pad,
buffer: gst::Buffer,
) -> Result<gst::FlowSuccess, gst::FlowError> {
let obj = self.obj();
if buffer.pts().is_none() {
gst::error!(CAT, obj: obj, "Require timestamped buffers!");
return Err(gst::FlowError::Error);
}
let settings = self.settings.lock().unwrap().clone();
let mut state = self.state.lock().unwrap();
let stream = state
.stream_from_sink_pad_mut(pad)
.expect("pad without an internal Stream");
let Some(segment) = stream.sink_segment.clone() else {
gst::element_imp_error!(self, gst::CoreError::Clock, ["Got buffer before segment"]);
return Err(gst::FlowError::Error);
};
if stream.delta_frames.intra_only() && buffer.flags().contains(gst::BufferFlags::DELTA_UNIT)
{
gst::error!(CAT, obj: pad, "Intra-only stream with delta units");
return Err(gst::FlowError::Error);
}
if stream.delta_frames.requires_dts() && buffer.dts().is_none() {
gst::error!(CAT, obj: pad, "Require DTS for video streams");
return Err(gst::FlowError::Error);
}
let srcpad = stream.srcpad.clone();
stream.queue_buffer(buffer, &segment)?;
let mut gops_to_push = vec![];
let Some(newest_gop) = stream.queued_gops.front() else {
return Ok(gst::FlowSuccess::Ok);
};
// we are looking for the latest pts value here (which should be the largest)
let newest_ts = if stream.delta_frames.requires_dts() {
newest_gop.end_dts.unwrap()
} else {
gst::Signed::Positive(newest_gop.end_pts)
};
loop {
// check stored times as though the oldest GOP doesn't exist.
let Some(second_oldest_gop) = stream.peek_second_oldest_gop() else {
break;
};
// we are looking for the oldest pts here (with the largest value). This is our potentially
// new end time.
let oldest_ts = if stream.delta_frames.requires_dts() {
second_oldest_gop.start_dts.unwrap()
} else {
gst::Signed::Positive(second_oldest_gop.start_pts)
};
let stored_duration_without_oldest = newest_ts.saturating_sub(oldest_ts);
gst::trace!(
CAT,
obj: obj,
"newest_pts {}, second oldest_pts {}, stored_duration_without_oldest_gop {}, min-time {}",
newest_ts.display(),
oldest_ts.display(),
stored_duration_without_oldest.display(),
settings.min_time.display()
);
if stored_duration_without_oldest < settings.min_time {
break;
}
gops_to_push.push(stream.oldest_gop().unwrap());
}
if let Some(max_time) = settings.max_time {
while let Some(oldest_gop) = stream.peek_oldest_gop() {
let oldest_ts = oldest_gop.data.iter().rev().find_map(|item| match item {
GopItem::Buffer(buffer) => {
if stream.delta_frames.requires_dts() {
Some(gst::Signed::Positive(buffer.dts().unwrap()))
} else {
Some(gst::Signed::Positive(buffer.pts().unwrap()))
}
}
_ => None,
});
if newest_ts
.opt_saturating_sub(oldest_ts)
.map_or(false, |diff| diff > gst::Signed::Positive(max_time))
{
gst::warning!(CAT, obj: obj, "Stored data has overflowed the maximum allowed stored time {}, pushing oldest GOP", max_time.display());
gops_to_push.push(stream.oldest_gop().unwrap());
} else {
break;
}
}
}
drop(state);
for gop in gops_to_push.into_iter() {
gop.push_on_pad(&srcpad)?;
}
Ok(gst::FlowSuccess::Ok)
}
fn sink_event(&self, pad: &gst::Pad, event: gst::Event) -> bool {
let obj = self.obj();
let mut state = self.state.lock().unwrap();
let stream = state
.stream_from_sink_pad_mut(pad)
.expect("pad without an internal Stream!");
match event.view() {
gst::EventView::Caps(caps) => {
let Some(delta_frames) = DeltaFrames::from_caps(caps.caps()) else {
return false;
};
stream.delta_frames = delta_frames;
}
gst::EventView::FlushStop(_flush) => {
gst::debug!(CAT, obj: obj, "flushing stored data");
stream.flush();
}
gst::EventView::Eos(_eos) => {
gst::debug!(CAT, obj: obj, "draining data at EOS");
let gops = stream.drain_all().collect::<Vec<_>>();
let srcpad = stream.srcpad.clone();
drop(state);
for gop in gops.into_iter() {
let _ = gop.push_on_pad(&srcpad);
}
// once we've pushed all the data, we can push the corresponding eos
gst::Pad::event_default(pad, Some(&*obj), event);
return true;
}
gst::EventView::Segment(segment) => {
let Ok(segment) = segment.segment().clone().downcast::<gst::ClockTime>() else {
gst::error!(CAT, "Non TIME segments are not supported");
return false;
};
stream.sink_segment = Some(segment);
}
_ => (),
};
if event.is_serialized() {
if stream.peek_oldest_gop().is_none() {
// if there is nothing queued, the event can go straight through
gst::trace!(CAT, obj: obj, "nothing queued, event {:?} passthrough", event.structure().map(|s| s.name().as_str()));
drop(state);
return gst::Pad::event_default(pad, Some(&*obj), event);
}
let gop = stream.queued_gops.front_mut().unwrap();
gop.data.push_back(GopItem::Event(event));
true
} else {
// non-serialized events can be pushed directly
drop(state);
gst::Pad::event_default(pad, Some(&*obj), event)
}
}
fn sink_query(&self, pad: &gst::Pad, query: &mut gst::QueryRef) -> bool {
let obj = self.obj();
if query.is_serialized() {
// TODO: serialized queries somehow?
gst::warning!(CAT, obj: pad, "Serialized queries are currently not supported");
return false;
}
gst::Pad::query_default(pad, Some(&*obj), query)
}
fn src_query(&self, pad: &gst::Pad, query: &mut gst::QueryRef) -> bool {
let obj = self.obj();
match query.view_mut() {
gst::QueryViewMut::Latency(latency) => {
let mut upstream_query = gst::query::Latency::new();
let otherpad = {
let state = self.state.lock().unwrap();
let Some(stream) = state.stream_from_src_pad(pad) else {
return false;
};
stream.sinkpad.clone()
};
let ret = otherpad.peer_query(&mut upstream_query);
if ret {
let (live, mut min, mut max) = upstream_query.result();
let settings = self.settings.lock().unwrap();
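// Holding back up to max-time (or at least min-time) of data delays the
// stream by that much, so report it as additional minimum latency.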
min += settings.max_time.unwrap_or(settings.min_time);
max = max.opt_max(settings.max_time);
latency.set(live, min, max);
gst::debug!(
CAT,
obj: pad,
"Latency query response: live {} min {} max {}",
live,
min,
max.display()
);
}
ret
}
_ => gst::Pad::query_default(pad, Some(&*obj), query),
}
}
fn iterate_internal_links(&self, pad: &gst::Pad) -> gst::Iterator<gst::Pad> {
let state = self.state.lock().unwrap();
let otherpad = match pad.direction() {
gst::PadDirection::Src => state
.stream_from_src_pad(pad)
.map(|stream| stream.sinkpad.clone()),
gst::PadDirection::Sink => state
.stream_from_sink_pad(pad)
.map(|stream| stream.srcpad.clone()),
_ => unreachable!(),
};
if let Some(otherpad) = otherpad {
gst::Iterator::from_vec(vec![otherpad])
} else {
gst::Iterator::from_vec(vec![])
}
}
}
#[glib::object_subclass]
impl ObjectSubclass for GopBuffer {
const NAME: &'static str = "GstGopBuffer";
type Type = super::GopBuffer;
type ParentType = gst::Element;
}
impl ObjectImpl for GopBuffer {
fn properties() -> &'static [glib::ParamSpec] {
static PROPERTIES: Lazy<Vec<glib::ParamSpec>> = Lazy::new(|| {
vec![
glib::ParamSpecUInt64::builder("minimum-duration")
.nick("Minimum Duration")
.blurb("The minimum duration to store")
.default_value(DEFAULT_MIN_TIME.nseconds())
.mutable_ready()
.build(),
glib::ParamSpecUInt64::builder("max-size-time")
.nick("Maximum Duration")
.blurb("The maximum duration to store (0=disable)")
.default_value(0)
.mutable_ready()
.build(),
]
});
&PROPERTIES
}
fn set_property(&self, _id: usize, value: &glib::Value, pspec: &glib::ParamSpec) {
match pspec.name() {
"minimum-duration" => {
let mut settings = self.settings.lock().unwrap();
let min_time = value.get().expect("type checked upstream");
if settings.min_time != min_time {
settings.min_time = min_time;
drop(settings);
self.post_message(gst::message::Latency::builder().src(&*self.obj()).build());
}
}
"max-size-time" => {
let mut settings = self.settings.lock().unwrap();
let max_time = value
.get::<Option<gst::ClockTime>>()
.expect("type checked upstream");
let max_time = if matches!(max_time, Some(gst::ClockTime::ZERO) | None) {
None
} else {
max_time
};
if settings.max_time != max_time {
settings.max_time = max_time;
drop(settings);
self.post_message(gst::message::Latency::builder().src(&*self.obj()).build());
}
}
_ => unimplemented!(),
}
}
fn property(&self, _id: usize, pspec: &glib::ParamSpec) -> glib::Value {
match pspec.name() {
"minimum-duration" => {
let settings = self.settings.lock().unwrap();
settings.min_time.to_value()
}
"max-size-time" => {
let settings = self.settings.lock().unwrap();
settings.max_time.unwrap_or(gst::ClockTime::ZERO).to_value()
}
_ => unimplemented!(),
}
}
fn constructed(&self) {
self.parent_constructed();
let obj = self.obj();
let class = obj.class();
let templ = class.pad_template("video_sink").unwrap();
let sinkpad = gst::Pad::builder_from_template(&templ)
.name("video_sink")
.chain_function(|pad, parent, buffer| {
GopBuffer::catch_panic_pad_function(
parent,
|| Err(gst::FlowError::Error),
|gopbuffer| gopbuffer.sink_chain(pad, buffer),
)
})
.event_function(|pad, parent, event| {
GopBuffer::catch_panic_pad_function(
parent,
|| false,
|gopbuffer| gopbuffer.sink_event(pad, event),
)
})
.query_function(|pad, parent, query| {
GopBuffer::catch_panic_pad_function(
parent,
|| false,
|gopbuffer| gopbuffer.sink_query(pad, query),
)
})
.iterate_internal_links_function(|pad, parent| {
GopBuffer::catch_panic_pad_function(
parent,
|| gst::Pad::iterate_internal_links_default(pad, parent),
|gopbuffer| gopbuffer.iterate_internal_links(pad),
)
})
.flags(gst::PadFlags::PROXY_CAPS)
.build();
obj.add_pad(&sinkpad).unwrap();
let templ = class.pad_template("video_src").unwrap();
let srcpad = gst::Pad::builder_from_template(&templ)
.name("video_src")
.query_function(|pad, parent, query| {
GopBuffer::catch_panic_pad_function(
parent,
|| false,
|gopbuffer| gopbuffer.src_query(pad, query),
)
})
.iterate_internal_links_function(|pad, parent| {
GopBuffer::catch_panic_pad_function(
parent,
|| gst::Pad::iterate_internal_links_default(pad, parent),
|gopbuffer| gopbuffer.iterate_internal_links(pad),
)
})
.build();
obj.add_pad(&srcpad).unwrap();
let mut state = self.state.lock().unwrap();
state.streams.push(Stream {
sinkpad,
srcpad,
sink_segment: None,
delta_frames: DeltaFrames::IntraOnly,
queued_gops: VecDeque::new(),
});
}
}
impl GstObjectImpl for GopBuffer {}
impl ElementImpl for GopBuffer {
fn metadata() -> Option<&'static gst::subclass::ElementMetadata> {
static ELEMENT_METADATA: Lazy<gst::subclass::ElementMetadata> = Lazy::new(|| {
gst::subclass::ElementMetadata::new(
"GopBuffer",
"Video",
"GOP Buffer",
"Matthew Waters <matthew@centricular.com>",
)
});
Some(&*ELEMENT_METADATA)
}
fn pad_templates() -> &'static [gst::PadTemplate] {
static PAD_TEMPLATES: Lazy<Vec<gst::PadTemplate>> = Lazy::new(|| {
// This element is designed to support multiple streams, but that support has
// not been implemented yet.
//
// The things missing for multiple (audio or video) streams are:
// 1. More pad templates
// 2. Choosing a main stream to drive the timestamp logic between all input streams
// 3. Allowing either the main stream to cause other streams to push data
// regardless of its GOP state, or allowing each stream to be individually
// delimited by GOP while still staying within the minimum duration.
let video_caps = [
gst::Structure::builder("video/x-h264")
.field("stream-format", gst::List::new(["avc", "avc3"]))
.field("alignment", "au")
.build(),
gst::Structure::builder("video/x-h265")
.field("stream-format", gst::List::new(["hvc1", "hev1"]))
.field("alignment", "au")
.build(),
gst::Structure::builder("video/x-vp8").build(),
gst::Structure::builder("video/x-vp9").build(),
gst::Structure::builder("video/x-av1")
.field("stream-format", "obu-stream")
.field("alignment", "tu")
.build(),
]
.into_iter()
.collect::<gst::Caps>();
let src_pad_template = gst::PadTemplate::new(
"video_src",
gst::PadDirection::Src,
gst::PadPresence::Always,
&video_caps,
)
.unwrap();
let sink_pad_template = gst::PadTemplate::new(
"video_sink",
gst::PadDirection::Sink,
gst::PadPresence::Always,
&video_caps,
)
.unwrap();
vec![src_pad_template, sink_pad_template]
});
PAD_TEMPLATES.as_ref()
}
fn change_state(
&self,
transition: gst::StateChange,
) -> Result<gst::StateChangeSuccess, gst::StateChangeError> {
#[allow(clippy::single_match)]
match transition {
gst::StateChange::NullToReady => {
let settings = self.settings.lock().unwrap();
if let Some(max_time) = settings.max_time {
if max_time < settings.min_time {
gst::element_imp_error!(
self,
gst::CoreError::StateChange,
["Configured maximum time is less than the minimum time"]
);
return Err(gst::StateChangeError);
}
}
}
_ => (),
}
self.parent_change_state(transition)?;
Ok(gst::StateChangeSuccess::Success)
}
}


@ -0,0 +1,27 @@
// Copyright (C) 2022 Matthew Waters <matthew@centricular.com>
//
// This Source Code Form is subject to the terms of the Mozilla Public License, v2.0.
// If a copy of the MPL was not distributed with this file, You can obtain one at
// <https://mozilla.org/MPL/2.0/>.
//
// SPDX-License-Identifier: MPL-2.0
use gst::glib;
use gst::prelude::*;
mod imp;
glib::wrapper! {
pub(crate) struct GopBuffer(ObjectSubclass<imp::GopBuffer>) @extends gst::Element, gst::Object;
}
pub fn register(plugin: &gst::Plugin) -> Result<(), glib::BoolError> {
gst::Element::register(
Some(plugin),
"gopbuffer",
gst::Rank::PRIMARY,
GopBuffer::static_type(),
)?;
Ok(())
}
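A minimal usage sketch (illustrative, not part of the commit; assumes a recent gstreamer-rs with gst::parse::launch): gopbuffer sits after a parser producing the avc/au caps its pad templates require, and retains at least minimum-duration (in nanoseconds) worth of closed GOPs:

fn gopbuffer_example() -> Result<gst::Element, gst::glib::Error> {
gst::init()?;
// Hypothetical pipeline: keep at least 5 seconds of GOPs queued before
// releasing data downstream.
gst::parse::launch(
"videotestsrc ! x264enc ! h264parse ! gopbuffer minimum-duration=5000000000 ! fakesink",
)
}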


@ -0,0 +1,34 @@
// Copyright (C) 2022 Matthew Waters <matthew@centricular.com>
//
// This Source Code Form is subject to the terms of the Mozilla Public License, v2.0.
// If a copy of the MPL was not distributed with this file, You can obtain one at
// <https://mozilla.org/MPL/2.0/>.
//
// SPDX-License-Identifier: MPL-2.0
#![allow(clippy::non_send_fields_in_send_ty, unused_doc_comments)]
/**
* plugin-gopbuffer:
*
* Since: plugins-rs-0.13.0
*/
use gst::glib;
mod gopbuffer;
fn plugin_init(plugin: &gst::Plugin) -> Result<(), glib::BoolError> {
gopbuffer::register(plugin)
}
gst::plugin_define!(
gopbuffer,
env!("CARGO_PKG_DESCRIPTION"),
plugin_init,
concat!(env!("CARGO_PKG_VERSION"), "-", env!("COMMIT_ID")),
// FIXME: MPL-2.0 is only allowed since 1.18.3 (as unknown) and 1.20 (as known)
"MPL",
env!("CARGO_PKG_NAME"),
env!("CARGO_PKG_NAME"),
env!("CARGO_PKG_REPOSITORY"),
env!("BUILD_REL_DATE")
);


@ -0,0 +1,128 @@
//
// This Source Code Form is subject to the terms of the Mozilla Public License, v2.0.
// If a copy of the MPL was not distributed with this file, You can obtain one at
// <https://mozilla.org/MPL/2.0/>.
//
// SPDX-License-Identifier: MPL-2.0
//
use gst::prelude::*;
fn init() {
use std::sync::Once;
static INIT: Once = Once::new();
INIT.call_once(|| {
gst::init().unwrap();
gstgopbuffer::plugin_register_static().unwrap();
});
}
macro_rules! check_buffer {
($buf1:expr, $buf2:expr) => {
assert_eq!($buf1.pts(), $buf2.pts());
assert_eq!($buf1.dts(), $buf2.dts());
assert_eq!($buf1.flags(), $buf2.flags());
};
}
#[test]
fn test_min_one_gop_held() {
const OFFSET: gst::ClockTime = gst::ClockTime::from_seconds(10);
init();
let mut h =
gst_check::Harness::with_padnames("gopbuffer", Some("video_sink"), Some("video_src"));
// 200ms min buffer time
let element = h.element().unwrap();
element.set_property("minimum-duration", gst::ClockTime::from_mseconds(200));
h.set_src_caps(
gst::Caps::builder("video/x-h264")
.field("width", 320i32)
.field("height", 240i32)
.field("framerate", gst::Fraction::new(10, 1))
.field("stream-format", "avc")
.field("alignment", "au")
.field("codec_data", gst::Buffer::with_size(1).unwrap())
.build(),
);
let mut in_segment = gst::Segment::new();
in_segment.set_format(gst::Format::Time);
in_segment.set_base(10.seconds());
assert!(h.push_event(gst::event::Segment::builder(&in_segment).build()));
h.play();
// Push 6 buffers of 100ms each, with the 2nd and 5th buffer (no DELTA_UNIT flag) being keyframes
let in_buffers: Vec<_> = (0..6)
.map(|i| {
let mut buffer = gst::Buffer::with_size(1).unwrap();
{
let buffer = buffer.get_mut().unwrap();
buffer.set_pts(OFFSET + gst::ClockTime::from_mseconds(i * 100));
buffer.set_dts(OFFSET + gst::ClockTime::from_mseconds(i * 100));
buffer.set_duration(gst::ClockTime::from_mseconds(100));
if i != 1 && i != 4 {
buffer.set_flags(gst::BufferFlags::DELTA_UNIT);
}
}
assert_eq!(h.push(buffer.clone()), Ok(gst::FlowSuccess::Ok));
buffer
})
.collect();
// pull mandatory events
let ev = h.pull_event().unwrap();
assert_eq!(ev.type_(), gst::EventType::StreamStart);
let ev = h.pull_event().unwrap();
assert_eq!(ev.type_(), gst::EventType::Caps);
// GstHarness pushes its own segment event that we need to eat
let ev = h.pull_event().unwrap();
assert_eq!(ev.type_(), gst::EventType::Segment);
let ev = h.pull_event().unwrap();
let gst::event::EventView::Segment(recv_segment) = ev.view() else {
unreachable!()
};
let recv_segment = recv_segment.segment();
assert_eq!(recv_segment, &in_segment);
// check that at least the first GOP has been output already as it exceeds the
// minimum-duration value
let mut in_iter = in_buffers.iter();
// the first buffer is dropped: it is a delta unit with no preceding keyframe
let _buffer = in_iter.next().unwrap();
// a keyframe
let out = h.pull().unwrap();
let buffer = in_iter.next().unwrap();
check_buffer!(buffer, out);
// not a keyframe
let out = h.pull().unwrap();
let buffer = in_iter.next().unwrap();
check_buffer!(buffer, out);
// not a keyframe
let out = h.pull().unwrap();
let buffer = in_iter.next().unwrap();
check_buffer!(buffer, out);
// no more buffers
assert_eq!(h.buffers_in_queue(), 0);
// push eos to drain out the rest of the data
assert!(h.push_event(gst::event::Eos::new()));
for buffer in in_iter {
let out = h.pull().unwrap();
check_buffer!(buffer, out);
}
// no more buffers
assert_eq!(h.buffers_in_queue(), 0);
let ev = h.pull_event().unwrap();
assert_eq!(ev.type_(), gst::EventType::Eos);
}


@ -0,0 +1,43 @@
[package]
name = "gst-plugin-originalbuffer"
version.workspace = true
authors = ["Olivier Crête <olivier.crete@collabora.com>"]
repository.workspace = true
license = "MPL-2.0"
description = "GStreamer Origin buffer meta Plugin"
edition.workspace = true
rust-version.workspace = true
[dependencies]
glib.workspace = true
gst.workspace = true
gst-video.workspace = true
atomic_refcell = "0.1"
once_cell.workspace = true
[lib]
name = "gstoriginalbuffer"
crate-type = ["cdylib", "rlib"]
path = "src/lib.rs"
[build-dependencies]
gst-plugin-version-helper.workspace = true
[features]
static = []
capi = []
doc = ["gst/v1_16"]
[package.metadata.capi]
min_version = "0.9.21"
[package.metadata.capi.header]
enabled = false
[package.metadata.capi.library]
install_subdir = "gstreamer-1.0"
versioning = false
import_library = false
[package.metadata.capi.pkg_config]
requires_private = "gstreamer-1.0, gstreamer-base-1.0, gobject-2.0, glib-2.0, gmodule-2.0"


@ -0,0 +1,3 @@
fn main() {
gst_plugin_version_helper::info()
}


@ -0,0 +1,38 @@
// Copyright (C) 2024 Collabora Ltd
// @author: Olivier Crête <olivier.crete@collabora.com>
//
// This Source Code Form is subject to the terms of the Mozilla Public License, v2.0.
// If a copy of the MPL was not distributed with this file, You can obtain one at
// <https://mozilla.org/MPL/2.0/>.
//
// SPDX-License-Identifier: MPL-2.0
#![allow(clippy::non_send_fields_in_send_ty, unused_doc_comments)]
/**
* plugin-originalbuffer:
*
* Since: plugins-rs-0.12
*/
use gst::glib;
mod originalbuffermeta;
mod originalbufferrestore;
mod originalbuffersave;
fn plugin_init(plugin: &gst::Plugin) -> Result<(), glib::BoolError> {
originalbuffersave::register(plugin)?;
originalbufferrestore::register(plugin)?;
Ok(())
}
gst::plugin_define!(
originalbuffer,
env!("CARGO_PKG_DESCRIPTION"),
plugin_init,
concat!(env!("CARGO_PKG_VERSION"), "-", env!("COMMIT_ID")),
"MPL",
env!("CARGO_PKG_NAME"),
env!("CARGO_PKG_NAME"),
env!("CARGO_PKG_REPOSITORY"),
env!("BUILD_REL_DATE")
);


@ -0,0 +1,199 @@
// Copyright (C) 2024 Collabora Ltd
// @author: Olivier Crête <olivier.crete@collabora.com>
//
// This Source Code Form is subject to the terms of the Mozilla Public License, v2.0.
// If a copy of the MPL was not distributed with this file, You can obtain one at
// <https://mozilla.org/MPL/2.0/>.
//
// SPDX-License-Identifier: MPL-2.0
use gst::prelude::*;
use std::fmt;
use std::mem;
#[repr(transparent)]
pub struct OriginalBufferMeta(imp::OriginalBufferMeta);
unsafe impl Send for OriginalBufferMeta {}
unsafe impl Sync for OriginalBufferMeta {}
impl OriginalBufferMeta {
pub fn add(
buffer: &mut gst::BufferRef,
original: gst::Buffer,
caps: Option<gst::Caps>,
) -> gst::MetaRefMut<'_, Self, gst::meta::Standalone> {
unsafe {
// Manually dropping because gst_buffer_add_meta() takes ownership of the
// content of the struct
let mut params =
mem::ManuallyDrop::new(imp::OriginalBufferMetaParams { original, caps });
let meta = gst::ffi::gst_buffer_add_meta(
buffer.as_mut_ptr(),
imp::original_buffer_meta_get_info(),
&mut *params as *mut imp::OriginalBufferMetaParams as gst::glib::ffi::gpointer,
) as *mut imp::OriginalBufferMeta;
Self::from_mut_ptr(buffer, meta)
}
}
pub fn replace(&mut self, original: gst::Buffer, caps: Option<gst::Caps>) {
self.0.original = Some(original);
self.0.caps = caps;
}
pub fn original(&self) -> &gst::Buffer {
self.0.original.as_ref().unwrap()
}
pub fn caps(&self) -> &gst::Caps {
self.0.caps.as_ref().unwrap()
}
}
unsafe impl MetaAPI for OriginalBufferMeta {
type GstType = imp::OriginalBufferMeta;
fn meta_api() -> gst::glib::Type {
imp::original_buffer_meta_api_get_type()
}
}
impl fmt::Debug for OriginalBufferMeta {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
f.debug_struct("OriginalBufferMeta")
.field("buffer", &self.original())
.finish()
}
}
mod imp {
use gst::glib::translate::*;
use once_cell::sync::Lazy;
use std::mem;
use std::ptr;
pub(super) struct OriginalBufferMetaParams {
pub original: gst::Buffer,
pub caps: Option<gst::Caps>,
}
#[repr(C)]
pub struct OriginalBufferMeta {
parent: gst::ffi::GstMeta,
pub(super) original: Option<gst::Buffer>,
pub(super) caps: Option<gst::Caps>,
}
pub(super) fn original_buffer_meta_api_get_type() -> glib::Type {
static TYPE: Lazy<glib::Type> = Lazy::new(|| unsafe {
let t = from_glib(gst::ffi::gst_meta_api_type_register(
b"GstOriginalBufferMetaAPI\0".as_ptr() as *const _,
[ptr::null::<std::os::raw::c_char>()].as_ptr() as *mut *const _,
));
assert_ne!(t, glib::Type::INVALID);
t
});
*TYPE
}
unsafe extern "C" fn original_buffer_meta_init(
meta: *mut gst::ffi::GstMeta,
params: glib::ffi::gpointer,
_buffer: *mut gst::ffi::GstBuffer,
) -> glib::ffi::gboolean {
assert!(!params.is_null());
let meta = &mut *(meta as *mut OriginalBufferMeta);
let params = ptr::read(params as *const OriginalBufferMetaParams);
let OriginalBufferMetaParams { original, caps } = params;
ptr::write(&mut meta.original, Some(original));
ptr::write(&mut meta.caps, caps);
true.into_glib()
}
unsafe extern "C" fn original_buffer_meta_free(
meta: *mut gst::ffi::GstMeta,
_buffer: *mut gst::ffi::GstBuffer,
) {
let meta = &mut *(meta as *mut OriginalBufferMeta);
meta.original = None;
meta.caps = None;
}
unsafe extern "C" fn original_buffer_meta_transform(
dest: *mut gst::ffi::GstBuffer,
meta: *mut gst::ffi::GstMeta,
_buffer: *mut gst::ffi::GstBuffer,
_type_: glib::ffi::GQuark,
_data: glib::ffi::gpointer,
) -> glib::ffi::gboolean {
let dest = gst::BufferRef::from_mut_ptr(dest);
let meta = &*(meta as *const OriginalBufferMeta);
if dest.meta::<super::OriginalBufferMeta>().is_some() {
return true.into_glib();
}
// We don't store a ref in the meta if it's self-referencing, but we add it
// when copying the meta to another buffer.
super::OriginalBufferMeta::add(
dest,
meta.original.as_ref().unwrap().clone(),
meta.caps.clone(),
);
true.into_glib()
}
pub(super) fn original_buffer_meta_get_info() -> *const gst::ffi::GstMetaInfo {
struct MetaInfo(ptr::NonNull<gst::ffi::GstMetaInfo>);
unsafe impl Send for MetaInfo {}
unsafe impl Sync for MetaInfo {}
static META_INFO: Lazy<MetaInfo> = Lazy::new(|| unsafe {
MetaInfo(
ptr::NonNull::new(gst::ffi::gst_meta_register(
original_buffer_meta_api_get_type().into_glib(),
b"OriginalBufferMeta\0".as_ptr() as *const _,
mem::size_of::<OriginalBufferMeta>(),
Some(original_buffer_meta_init),
Some(original_buffer_meta_free),
Some(original_buffer_meta_transform),
) as *mut gst::ffi::GstMetaInfo)
.expect("Failed to register meta API"),
)
});
META_INFO.0.as_ptr()
}
}
#[test]
fn test() {
gst::init().unwrap();
let mut b = gst::Buffer::with_size(10).unwrap();
let caps = gst::Caps::new_empty_simple("video/x-raw");
let copy = b.copy();
let m = OriginalBufferMeta::add(b.make_mut(), copy, Some(caps.clone()));
assert_eq!(m.caps(), caps.as_ref());
assert_eq!(m.original().clone(), b);
let b2: gst::Buffer = b.copy_deep().unwrap();
let m = b.meta::<OriginalBufferMeta>().unwrap();
assert_eq!(m.caps(), caps.as_ref());
assert_eq!(m.original(), &b);
let m = b2.meta::<OriginalBufferMeta>().unwrap();
assert_eq!(m.caps(), caps.as_ref());
assert_eq!(m.original(), &b);
let b3: gst::Buffer = b2.copy_deep().unwrap();
drop(b2);
let m = b3.meta::<OriginalBufferMeta>().unwrap();
assert_eq!(m.caps(), caps.as_ref());
assert_eq!(m.original(), &b);
}


@ -0,0 +1,315 @@
// Copyright (C) 2024 Collabora Ltd
// @author: Olivier Crête <olivier.crete@collabora.com>
//
// This Source Code Form is subject to the terms of the Mozilla Public License, v2.0.
// If a copy of the MPL was not distributed with this file, You can obtain one at
// <https://mozilla.org/MPL/2.0/>.
//
// SPDX-License-Identifier: MPL-2.0
use gst::glib;
use gst::subclass::prelude::*;
use gst_video::prelude::*;
use atomic_refcell::AtomicRefCell;
use crate::originalbuffermeta;
use crate::originalbuffermeta::OriginalBufferMeta;
struct CapsState {
caps: gst::Caps,
vinfo: Option<gst_video::VideoInfo>,
}
impl Default for CapsState {
fn default() -> Self {
CapsState {
caps: gst::Caps::new_empty(),
vinfo: None,
}
}
}
#[derive(Default)]
struct State {
sinkpad_caps: CapsState,
meta_caps: CapsState,
sinkpad_segment: Option<gst::Event>,
}
pub struct OriginalBufferRestore {
state: AtomicRefCell<State>,
src_pad: gst::Pad,
sink_pad: gst::Pad,
}
use once_cell::sync::Lazy;
#[cfg(unused_code)]
static CAT: Lazy<gst::DebugCategory> = Lazy::new(|| {
gst::DebugCategory::new(
"originalbufferrestore",
gst::DebugColorFlags::empty(),
Some("Restore Original buffer as meta"),
)
});
#[glib::object_subclass]
impl ObjectSubclass for OriginalBufferRestore {
const NAME: &'static str = "GstOriginalBufferRestore";
type Type = super::OriginalBufferRestore;
type ParentType = gst::Element;
fn with_class(klass: &Self::Class) -> Self {
let sink_templ = klass.pad_template("sink").unwrap();
let src_templ = klass.pad_template("src").unwrap();
let sink_pad = gst::Pad::builder_from_template(&sink_templ)
.chain_function(|pad, parent, buffer| {
OriginalBufferRestore::catch_panic_pad_function(
parent,
|| Err(gst::FlowError::Error),
|obj| obj.sink_chain(pad, buffer),
)
})
.event_function(|pad, parent, event| {
OriginalBufferRestore::catch_panic_pad_function(
parent,
|| false,
|obj| obj.sink_event(pad, parent, event),
)
})
.query_function(|pad, parent, query| {
OriginalBufferRestore::catch_panic_pad_function(
parent,
|| false,
|obj| obj.sink_query(pad, parent, query),
)
})
.build();
let src_pad = gst::Pad::builder_from_template(&src_templ)
.event_function(|pad, parent, event| {
OriginalBufferRestore::catch_panic_pad_function(
parent,
|| false,
|obj| obj.src_event(pad, parent, event),
)
})
.build();
Self {
src_pad,
sink_pad,
state: Default::default(),
}
}
}
impl ObjectImpl for OriginalBufferRestore {
fn constructed(&self) {
self.parent_constructed();
let obj = self.obj();
obj.add_pad(&self.sink_pad).unwrap();
obj.add_pad(&self.src_pad).unwrap();
}
}
impl GstObjectImpl for OriginalBufferRestore {}
impl ElementImpl for OriginalBufferRestore {
fn metadata() -> Option<&'static gst::subclass::ElementMetadata> {
static ELEMENT_METADATA: Lazy<gst::subclass::ElementMetadata> = Lazy::new(|| {
gst::subclass::ElementMetadata::new(
"Original Buffer Restore",
"Generic",
"Restores a reference to the buffer in a meta",
"Olivier Crête <olivier.crete@collabora.com>",
)
});
Some(&*ELEMENT_METADATA)
}
fn pad_templates() -> &'static [gst::PadTemplate] {
static PAD_TEMPLATES: Lazy<Vec<gst::PadTemplate>> = Lazy::new(|| {
let caps = gst::Caps::new_any();
let src_pad_template = gst::PadTemplate::new(
"src",
gst::PadDirection::Src,
gst::PadPresence::Always,
&caps,
)
.unwrap();
let sink_pad_template = gst::PadTemplate::new(
"sink",
gst::PadDirection::Sink,
gst::PadPresence::Always,
&caps,
)
.unwrap();
vec![src_pad_template, sink_pad_template]
});
PAD_TEMPLATES.as_ref()
}
fn change_state(
&self,
transition: gst::StateChange,
) -> Result<gst::StateChangeSuccess, gst::StateChangeError> {
let ret = self.parent_change_state(transition)?;
if transition == gst::StateChange::PausedToReady {
let mut state = self.state.borrow_mut();
*state = State::default();
}
Ok(ret)
}
}
impl OriginalBufferRestore {
fn sink_event(
&self,
pad: &gst::Pad,
parent: Option<&impl IsA<gst::Object>>,
event: gst::Event,
) -> bool {
match event.view() {
gst::EventView::Caps(e) => {
let mut state = self.state.borrow_mut();
let caps = e.caps_owned();
let vinfo = gst_video::VideoInfo::from_caps(&caps).ok();
state.sinkpad_caps = CapsState { caps, vinfo };
true
}
gst::EventView::Segment(_) => {
let mut state = self.state.borrow_mut();
state.sinkpad_segment = Some(event);
true
}
_ => gst::Pad::event_default(pad, parent, event),
}
}
fn src_event(
&self,
pad: &gst::Pad,
parent: Option<&impl IsA<gst::Object>>,
event: gst::Event,
) -> bool {
if event.type_() == gst::EventType::Reconfigure
|| event.has_name("gst-original-buffer-forward-upstream-event")
{
let s = gst::Structure::builder("gst-original-buffer-forward-upstream-event")
.field("event", event)
.build();
let event = gst::event::CustomUpstream::new(s);
self.sink_pad.push_event(event)
} else {
gst::Pad::event_default(pad, parent, event)
}
}
fn sink_query(
&self,
pad: &gst::Pad,
parent: Option<&impl IsA<gst::Object>>,
query: &mut gst::QueryRef,
) -> bool {
if let gst::QueryViewMut::Custom(_) = query.view_mut() {
let s = query.structure_mut();
if s.has_name("gst-original-buffer-forward-query") {
if let Ok(mut q) = s.get::<gst::Query>("query") {
s.remove_field("query");
assert!(q.is_writable());
let res = self.src_pad.peer_query(q.get_mut().unwrap());
s.set("query", q);
s.set("result", res);
return true;
}
}
}
gst::Pad::query_default(pad, parent, query)
}
fn sink_chain(
&self,
_pad: &gst::Pad,
inbuf: gst::Buffer,
) -> Result<gst::FlowSuccess, gst::FlowError> {
let Some(ometa) = inbuf.meta::<OriginalBufferMeta>() else {
//gst::element_warning!(self, gst::StreamError::Failed, ["Buffer {} is missing the GstOriginalBufferMeta, put originalbuffersave upstream in your pipeline", buffer]);
return Ok(gst::FlowSuccess::Ok);
};
let mut state = self.state.borrow_mut();
let meta_caps = &mut state.meta_caps;
if &meta_caps.caps != ometa.caps() {
if !self.src_pad.push_event(gst::event::Caps::new(ometa.caps())) {
return Err(gst::FlowError::NotNegotiated);
}
meta_caps.caps = ometa.caps().clone();
meta_caps.vinfo = gst_video::VideoInfo::from_caps(&meta_caps.caps).ok();
}
let mut outbuf = ometa.original().copy();
inbuf
.copy_into(
outbuf.make_mut(),
gst::BufferCopyFlags::TIMESTAMPS | gst::BufferCopyFlags::FLAGS,
..,
)
.unwrap();
for meta in inbuf.iter_meta::<gst::Meta>() {
if meta.api() == originalbuffermeta::OriginalBufferMeta::meta_api() {
continue;
}
if meta.has_tag::<gst::meta::tags::Memory>()
|| meta.has_tag::<gst::meta::tags::MemoryReference>()
{
continue;
}
if meta.has_tag::<gst_video::video_meta::tags::Size>() {
if let (Some(ref meta_vinfo), Some(ref sink_vinfo)) =
(&state.meta_caps.vinfo, &state.sinkpad_caps.vinfo)
{
if (meta_vinfo.width() != sink_vinfo.width()
|| meta_vinfo.height() != sink_vinfo.height())
&& meta
.transform(
outbuf.make_mut(),
&gst_video::video_meta::VideoMetaTransformScale::new(
sink_vinfo, meta_vinfo,
),
)
.is_ok()
{
continue;
}
}
}
let _ = meta.transform(
outbuf.make_mut(),
&gst::meta::MetaTransformCopy::new(false, ..),
);
}
if let Some(event) = state.sinkpad_segment.take() {
if !self.src_pad.push_event(event) {
return Err(gst::FlowError::Error);
}
}
self.src_pad.push(outbuf)
}
}


@ -0,0 +1,31 @@
// Copyright (C) 2024 Collabora Ltd
// @author: Olivier Crête <olivier.crete@collabora.com>
//
// This Source Code Form is subject to the terms of the Mozilla Public License, v2.0.
// If a copy of the MPL was not distributed with this file, You can obtain one at
// <https://mozilla.org/MPL/2.0/>.
//
// SPDX-License-Identifier: MPL-2.0
/**
* SECTION:element-originalbufferrestore
*
* See originalbuffersave for details
*/
use gst::glib;
use gst::prelude::*;
mod imp;
glib::wrapper! {
pub struct OriginalBufferRestore(ObjectSubclass<imp::OriginalBufferRestore>) @extends gst::Element, gst::Object;
}
pub fn register(plugin: &gst::Plugin) -> Result<(), glib::BoolError> {
gst::Element::register(
Some(plugin),
"originalbufferrestore",
gst::Rank::NONE,
OriginalBufferRestore::static_type(),
)
}


@ -0,0 +1,205 @@
// Copyright (C) 2024 Collabora Ltd
// @author: Olivier Crête <olivier.crete@collabora.com>
//
// This Source Code Form is subject to the terms of the Mozilla Public License, v2.0.
// If a copy of the MPL was not distributed with this file, You can obtain one at
// <https://mozilla.org/MPL/2.0/>.
//
// SPDX-License-Identifier: MPL-2.0
use gst::glib;
use gst::prelude::*;
use gst::subclass::prelude::*;
use crate::originalbuffermeta::OriginalBufferMeta;
pub struct OriginalBufferSave {
src_pad: gst::Pad,
sink_pad: gst::Pad,
}
use once_cell::sync::Lazy;
#[cfg(unused_code)]
static CAT: Lazy<gst::DebugCategory> = Lazy::new(|| {
gst::DebugCategory::new(
"originalbuffersave",
gst::DebugColorFlags::empty(),
Some("Save Original buffer as meta"),
)
});
#[glib::object_subclass]
impl ObjectSubclass for OriginalBufferSave {
const NAME: &'static str = "GstOriginalBufferSave";
type Type = super::OriginalBufferSave;
type ParentType = gst::Element;
fn with_class(klass: &Self::Class) -> Self {
let sink_templ = klass.pad_template("sink").unwrap();
let src_templ = klass.pad_template("src").unwrap();
let sink_pad = gst::Pad::builder_from_template(&sink_templ)
.chain_function(|pad, parent, buffer| {
OriginalBufferSave::catch_panic_pad_function(
parent,
|| Err(gst::FlowError::Error),
|obj| obj.sink_chain(pad, buffer),
)
})
.query_function(|pad, parent, query| {
OriginalBufferSave::catch_panic_pad_function(
parent,
|| false,
|obj| obj.sink_query(pad, parent, query),
)
})
.flags(gst::PadFlags::PROXY_CAPS | gst::PadFlags::PROXY_ALLOCATION)
.build();
let src_pad = gst::Pad::builder_from_template(&src_templ)
.event_function(|pad, parent, event| {
OriginalBufferSave::catch_panic_pad_function(
parent,
|| false,
|obj| obj.src_event(pad, parent, event),
)
})
.build();
Self { src_pad, sink_pad }
}
}
impl ObjectImpl for OriginalBufferSave {
fn constructed(&self) {
self.parent_constructed();
let obj = self.obj();
obj.add_pad(&self.sink_pad).unwrap();
obj.add_pad(&self.src_pad).unwrap();
}
}
impl GstObjectImpl for OriginalBufferSave {}
impl ElementImpl for OriginalBufferSave {
fn metadata() -> Option<&'static gst::subclass::ElementMetadata> {
static ELEMENT_METADATA: Lazy<gst::subclass::ElementMetadata> = Lazy::new(|| {
gst::subclass::ElementMetadata::new(
"Original Buffer Save",
"Generic",
"Saves a reference to the buffer in a meta",
"Olivier Crête <olivier.crete@collabora.com>",
)
});
Some(&*ELEMENT_METADATA)
}
fn pad_templates() -> &'static [gst::PadTemplate] {
static PAD_TEMPLATES: Lazy<Vec<gst::PadTemplate>> = Lazy::new(|| {
let caps = gst::Caps::new_any();
let src_pad_template = gst::PadTemplate::new(
"src",
gst::PadDirection::Src,
gst::PadPresence::Always,
&caps,
)
.unwrap();
let sink_pad_template = gst::PadTemplate::new(
"sink",
gst::PadDirection::Sink,
gst::PadPresence::Always,
&caps,
)
.unwrap();
vec![src_pad_template, sink_pad_template]
});
PAD_TEMPLATES.as_ref()
}
}
impl OriginalBufferSave {
fn forward_query(&self, query: gst::Query) -> Option<gst::Query> {
let mut s = gst::Structure::new_empty("gst-original-buffer-forward-query");
s.set("query", query);
let mut query = gst::query::Custom::new(s);
if self.src_pad.peer_query(&mut query) {
let s = query.structure_mut();
if let (Ok(true), Ok(q)) = (s.get("result"), s.get::<gst::Query>("query")) {
Some(q)
} else {
None
}
} else {
None
}
}
fn sink_chain(
&self,
pad: &gst::Pad,
inbuf: gst::Buffer,
) -> Result<gst::FlowSuccess, gst::FlowError> {
let mut buf = inbuf.copy();
let caps = pad.current_caps();
if let Some(mut meta) = buf.make_mut().meta_mut::<OriginalBufferMeta>() {
meta.replace(inbuf, caps);
} else {
OriginalBufferMeta::add(buf.make_mut(), inbuf, caps);
}
self.src_pad.push(buf)
}
fn sink_query(
&self,
pad: &gst::Pad,
parent: Option<&impl IsA<gst::Object>>,
query: &mut gst::QueryRef,
) -> bool {
let ret = gst::Pad::query_default(pad, parent, query);
if !ret {
return ret;
}
if let gst::QueryViewMut::Caps(q) = query.view_mut() {
if let Some(caps) = q.result_owned() {
let forwarding_q = gst::query::Caps::new(Some(&caps)).into();
if let Some(forwarding_q) = self.forward_query(forwarding_q) {
if let gst::QueryView::Caps(c) = forwarding_q.view() {
let res = c
.result_owned()
.map(|c| c.intersect_with_mode(&caps, gst::CapsIntersectMode::First));
q.set_result(&res);
}
}
}
}
// We should also do allocation queries, but that requires supporting the same
// intersection semantics as gsttee, which should be in a helper function.
true
}
fn src_event(
&self,
pad: &gst::Pad,
parent: Option<&impl IsA<gst::Object>>,
event: gst::Event,
) -> bool {
let event = if event.has_name("gst-original-buffer-forward-upstream-event") {
event.structure().unwrap().get("event").unwrap()
} else {
event
};
gst::Pad::event_default(pad, parent, event)
}
}


@ -0,0 +1,41 @@
// Copyright (C) 2024 Collabora Ltd
// @author: Olivier Crête <olivier.crete@collabora.com>
//
// This Source Code Form is subject to the terms of the Mozilla Public License, v2.0.
// If a copy of the MPL was not distributed with this file, You can obtain one at
// <https://mozilla.org/MPL/2.0/>.
//
// SPDX-License-Identifier: MPL-2.0
/**
* SECTION:element-originalbuffersave
*
* GStreamer elements to store the original buffer and restore it later
*
* In many analysis scenarios (for example machine learning), it is desirable to
* run the analysis on a pre-processed buffer, for example one with a lowered
* resolution, but we may want to take the output of this analysis and apply it
* to the original buffer.
*
* These elements do just this; the typical usage would be a pipeline like:
*
* `... ! originalbuffersave ! videoconvertscale ! video/x-raw, width=100, height=100 ! analysiselement ! originalbufferrestore ! ...`
*
* The originalbufferrestore element will "restore" the buffer that entered the "save" element, but will keep any metadata that was added to it in between.
*/
use gst::glib;
use gst::prelude::*;
mod imp;
glib::wrapper! {
pub struct OriginalBufferSave(ObjectSubclass<imp::OriginalBufferSave>) @extends gst::Element, gst::Object;
}
pub fn register(plugin: &gst::Plugin) -> Result<(), glib::BoolError> {
gst::Element::register(
Some(plugin),
"originalbuffersave",
gst::Rank::NONE,
OriginalBufferSave::static_type(),
)
}
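A runnable sketch of the SECTION pipeline above (illustrative, not part of the commit; `identity` is a hypothetical stand-in for the analysis element, and a recent gstreamer-rs with gst::parse::launch is assumed):

fn save_restore_example() -> Result<gst::Element, gst::glib::Error> {
gst::init()?;
// originalbufferrestore re-emits the saved full-resolution buffers while
// keeping timestamps, flags and any meta added after originalbuffersave.
gst::parse::launch(
"videotestsrc ! originalbuffersave ! videoconvertscale ! video/x-raw,width=100,height=100 ! identity ! originalbufferrestore ! fakesink",
)
}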


@ -417,8 +417,8 @@ impl ObjectImpl for InputSelector {
let pads = self.pads.lock().unwrap();
let mut old_pad = None;
if let Some(ref pad) = pad {
if pads.sink_pads.get(pad).is_some() {
old_pad = state.active_sinkpad.clone();
if pads.sink_pads.contains_key(pad) {
old_pad.clone_from(&state.active_sinkpad);
state.active_sinkpad = Some(pad.clone());
state.switched_pad = true;
}


@ -57,7 +57,7 @@ const READ: usize = 0;
const WRITE: usize = 1;
thread_local! {
static CURRENT_REACTOR: RefCell<Option<Reactor>> = RefCell::new(None);
static CURRENT_REACTOR: RefCell<Option<Reactor>> = const { RefCell::new(None) };
}
#[derive(Debug)]


@ -27,7 +27,7 @@ use super::{CallOnDrop, JoinHandle, Reactor};
use crate::runtime::RUNTIME_CAT;
thread_local! {
static CURRENT_SCHEDULER: RefCell<Option<HandleWeak>> = RefCell::new(None);
static CURRENT_SCHEDULER: RefCell<Option<HandleWeak>> = const { RefCell::new(None) };
}
#[derive(Debug)]


@ -24,7 +24,7 @@ use super::CallOnDrop;
use crate::runtime::RUNTIME_CAT;
thread_local! {
static CURRENT_TASK_ID: Cell<Option<TaskId>> = Cell::new(None);
static CURRENT_TASK_ID: Cell<Option<TaskId>> = const { Cell::new(None) };
}
#[derive(Clone, Copy, Eq, PartialEq, Hash, Debug)]


@ -609,6 +609,8 @@ fn premature_shutdown() {
}
#[test]
// FIXME: racy: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/issues/250
#[ignore]
fn socket_play_null_play() {
use gio::{
prelude::SocketExt, InetAddress, InetSocketAddress, SocketFamily, SocketProtocol,


@ -76,6 +76,8 @@ fn test_client_management() {
}
#[test]
// FIXME: racy: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/issues/250
#[ignore]
fn test_chain() {
init();


@ -1,7 +1,7 @@
project('gst-plugins-rs',
'rust',
'c',
version: '0.12.0-alpha.1',
version: '0.13.0-alpha.1',
meson_version : '>= 1.1')
# dependencies.py needs a toml parsing module
@ -118,6 +118,7 @@ plugins = {
'spotify': {'library': 'libgstspotify'},
'file': {'library': 'libgstrsfile'},
'originalbuffer': {'library': 'libgstoriginalbuffer'},
# sodium can have an external dependency, see below
'threadshare': {
'library': 'libgstthreadshare',
@ -170,6 +171,7 @@ plugins = {
'library': 'libgsturiplaylistbin',
'examples': ['playlist'],
'features': ['clap'],
'gst-version': '>=1.23.90',
},
'cdg': {'library': 'libgstcdg'},
@ -183,7 +185,7 @@ plugins = {
},
'dav1d': {
'library': 'libgstdav1d',
'extra-deps': {'dav1d': ['>=1.0', '<1.3']},
'extra-deps': {'dav1d': ['>=1.3']},
},
'ffv1': {'library': 'libgstffv1'},
'flavors': {'library': 'libgstrsflv'},
@ -202,6 +204,8 @@ plugins = {
'library': 'libgstrsvideofx',
'extra-deps': {'cairo-gobject': []},
},
'gopbuffer': {'library': 'libgstgopbuffer'},
'quinn': {'library': 'libgstquinn'},
}
if get_option('examples').allowed()
@ -301,6 +305,23 @@ if get_option('gtk4').allowed()
gtk4_features += 'winegl'
endif
endif
gst_allocators_dep = dependency('gstreamer-allocators-1.0', version: '>=1.24', required: false)
gtk_dep = dependency('gtk4', version: '>=4.6', required: get_option('gtk4'))
if gtk_dep.found()
if host_system == 'linux' and gtk_dep.version().version_compare('>=4.14') and gst_allocators_dep.found()
gtk4_features += 'dmabuf'
endif
if gtk_dep.version().version_compare('>=4.14')
gtk4_features += 'gtk_v4_14'
elif gtk_dep.version().version_compare('>=4.12')
gtk4_features += 'gtk_v4_12'
elif gtk_dep.version().version_compare('>=4.10')
gtk4_features += 'gtk_v4_10'
endif
endif
plugins += {
'gtk4': {
'library': 'libgstgtk4',
@ -397,6 +418,18 @@ foreach plugin_name, details: plugins
endif
endforeach
endif
if details.has_key('gst-version')
# Check if we have the required GStreamer version
gst_version = details.get('gst-version', '')
dep = dependency('gstreamer-1.0', required: false, version: gst_version)
if not dep.found()
opt = get_option(plugin_name)
if opt.enabled()
error('Required GStreamer version not found to build ' + plugin_name)
endif
plugin_deps_found = false
endif
endif
if plugin_deps_found
packages += f'gst-plugin-@plugin_name@'
features += plugin_features
@ -489,6 +522,16 @@ foreach plugin : plugins
plugin_name = plugin_name.substring(3)
endif
plugin_display_name = plugin_name
if plugin_name.startswith('gst')
plugin_display_name = plugin_name.substring(3)
endif
if plugin_display_name in plugin_names
# When default_library=both plugins are duplicated.
continue
endif
plugin_names += plugin_display_name
option_name = plugin_name.substring(3)
if option_name.startswith('rs')
option_name = option_name.substring(2)
@ -533,13 +576,7 @@ foreach plugin : plugins
warning('Static plugin @0@ is known to fail. It will not be included in libgstreamer-full.'.format(plugin_name))
else
gst_plugins += dep
pc_files += [plugin_name + '.pc']
if plugin_name.startswith('gst')
plugin_names += [plugin_name.substring(3)]
else
plugin_names += [plugin_name]
endif
endif
endforeach


@ -9,6 +9,8 @@ option('spotify', type: 'feature', value: 'auto', description: 'Build spotify pl
# generic
option('file', type: 'feature', value: 'auto', description: 'Build file plugin')
option('originalbuffer', type: 'feature', value: 'auto', description: 'Build originalbuffer plugin')
option('gopbuffer', type: 'feature', value: 'auto', description: 'Build gopbuffer plugin')
option('sodium', type: 'feature', value: 'auto', description: 'Build sodium plugin')
option('sodium-source', type: 'combo',
choices: ['system', 'built-in'], value: 'built-in',
@ -32,6 +34,7 @@ option('rtsp', type: 'feature', value: 'auto', description: 'Build rtsp plugin')
option('rtp', type: 'feature', value: 'auto', description: 'Build rtp plugin')
option('webrtc', type: 'feature', value: 'auto', yield: true, description: 'Build webrtc plugin')
option('webrtchttp', type: 'feature', value: 'auto', description: 'Build webrtchttp plugin')
option('quinn', type: 'feature', value: 'auto', description: 'Build quinn plugin')
# text
option('textahead', type: 'feature', value: 'auto', description: 'Build textahead plugin')


@ -11,6 +11,7 @@
pub use byteorder::{BigEndian, LittleEndian, ReadBytesExt, WriteBytesExt};
use std::io;
#[allow(unused)]
pub trait ReadBytesExtShort: io::Read {
fn read_u16le(&mut self) -> io::Result<u16> {
self.read_u16::<LittleEndian>()
@ -76,6 +77,7 @@ pub trait ReadBytesExtShort: io::Read {
impl<T> ReadBytesExtShort for T where T: ReadBytesExt {}
#[allow(unused)]
pub trait WriteBytesExtShort: WriteBytesExt {
fn write_u16le(&mut self, n: u16) -> io::Result<()> {
self.write_u16::<LittleEndian>(n)


@ -14,8 +14,9 @@ gst = { workspace = true, features = ["v1_18"] }
gst-base = { workspace = true, features = ["v1_18"] }
gst-audio = { workspace = true, features = ["v1_18"] }
gst-video = { workspace = true, features = ["v1_18"] }
gst-pbutils = { workspace = true, features = ["v1_18"] }
gst-pbutils = { workspace = true, features = ["v1_20"] }
once_cell.workspace = true
bitstream-io = "2.3"
[lib]
name = "gstfmp4"
@ -25,9 +26,10 @@ path = "src/lib.rs"
[dev-dependencies]
gst-app = { workspace = true, features = ["v1_18"] }
gst-check = { workspace = true, features = ["v1_18"] }
gst-pbutils = { workspace = true, features = ["v1_20"] }
m3u8-rs = "5.0"
chrono = "0.4"
dash-mpd = { version = "0.14", default-features = false }
chrono = "0.4.35"
dash-mpd = { version = "0.16", default-features = false }
quick-xml = { version = "0.31", features = ["serialize"] }
serde = "1"


@ -179,19 +179,18 @@ fn main() -> Result<(), Error> {
// Write the whole segment timeline out here, compressing multiple segments with
// the same duration to a repeated segment.
let mut segments = vec![];
let mut write_segment =
|start: gst::ClockTime, duration: gst::ClockTime, repeat: usize| {
let mut s = dash_mpd::S {
t: Some(start.mseconds() as i64),
d: duration.mseconds() as i64,
..Default::default()
};
if repeat > 0 {
s.r = Some(repeat as i64);
}
segments.push(s);
let mut write_segment = |start: gst::ClockTime, duration: u64, repeat: usize| {
let mut s = dash_mpd::S {
t: Some(start.mseconds()),
d: duration,
..Default::default()
};
if repeat > 0 {
s.r = Some(repeat as i64);
}
segments.push(s);
};
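// For example, segment durations [2000, 2000, 2000, 1800] (in ms) collapse to
// S(t=0, d=2000, r=2) followed by S(d=1800): `r` counts *additional*
// repetitions, which is why the call sites below pass `num_segments - 1`.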
let mut start = None;
let mut num_segments = 0;
@ -201,15 +200,15 @@ fn main() -> Result<(), Error> {
start = Some(segment.start_time);
}
if last_duration.is_none() {
last_duration = Some(segment.duration);
last_duration = Some(segment.duration.mseconds());
}
// If the duration of this segment is different from the previous one then we
// have to write out the segment now.
if last_duration != Some(segment.duration) {
if last_duration != Some(segment.duration.mseconds()) {
write_segment(start.unwrap(), last_duration.unwrap(), num_segments - 1);
start = Some(segment.start_time);
last_duration = Some(segment.duration);
last_duration = Some(segment.duration.mseconds());
num_segments = 1;
} else {
num_segments += 1;


@ -153,7 +153,7 @@ fn trim_segments(state: &mut StreamState) {
// safe side
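// chrono 0.4.35 deprecates the panicking Duration::seconds() in favour of the
// fallible try_seconds(), hence the switch here (matching the chrono = "0.4.35"
// bump in Cargo.toml).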
removal_time: segment
.date_time
.checked_add_signed(Duration::seconds(20))
.checked_add_signed(Duration::try_seconds(20).unwrap())
.unwrap(),
path: segment.path.clone(),
});


@ -360,6 +360,10 @@ impl AudioStream {
.property("samplesperbuffer", 4410)
.property_from_str("wave", &self.wave)
.build()?;
let taginject = gst::ElementFactory::make("taginject")
.property_from_str("tags", &format!("language-code={}", self.lang))
.property_from_str("scope", "stream")
.build()?;
let raw_capsfilter = gst::ElementFactory::make("capsfilter")
.property(
"caps",
@ -374,9 +378,23 @@ impl AudioStream {
.build()?;
let appsink = gst_app::AppSink::builder().buffer_list(true).build();
pipeline.add_many([&src, &raw_capsfilter, &enc, &mux, appsink.upcast_ref()])?;
pipeline.add_many([
&src,
&taginject,
&raw_capsfilter,
&enc,
&mux,
appsink.upcast_ref(),
])?;
gst::Element::link_many([&src, &raw_capsfilter, &enc, &mux, appsink.upcast_ref()])?;
gst::Element::link_many([
&src,
&taginject,
&raw_capsfilter,
&enc,
&mux,
appsink.upcast_ref(),
])?;
probe_encoder(state, enc);
@ -416,7 +434,7 @@ fn main() -> Result<(), Error> {
},
AudioStream {
name: "audio_1".to_string(),
lang: "fre".to_string(),
lang: "fra".to_string(),
default: false,
wave: "white-noise".to_string(),
},


@ -9,6 +9,7 @@
use gst::prelude::*;
use anyhow::{anyhow, bail, Context, Error};
use std::convert::TryFrom;
use super::Buffer;
@ -160,6 +161,10 @@ fn cmaf_brands_from_caps(caps: &gst::CapsRef, compatible_brands: &mut Vec<&'stat
"audio/mpeg" => {
compatible_brands.push(b"caac");
}
"video/x-av1" => {
compatible_brands.push(b"av01");
compatible_brands.push(b"cmf2");
}
"video/x-h265" => {
let width = s.get::<i32>("width").ok();
let height = s.get::<i32>("height").ok();
@ -604,9 +609,8 @@ fn write_tkhd(
// Volume
let s = stream.caps.structure(0).unwrap();
match s.name().as_str() {
"audio/mpeg" | "audio/x-opus" | "audio/x-alaw" | "audio/x-mulaw" | "audio/x-adpcm" => {
v.extend((1u16 << 8).to_be_bytes())
}
"audio/mpeg" | "audio/x-opus" | "audio/x-flac" | "audio/x-alaw" | "audio/x-mulaw"
| "audio/x-adpcm" => v.extend((1u16 << 8).to_be_bytes()),
_ => v.extend(0u16.to_be_bytes()),
}
@ -700,7 +704,6 @@ fn write_tref(
fn language_code(lang: impl std::borrow::Borrow<[u8; 3]>) -> u16 {
let lang = lang.borrow();
// TODO: Need to relax this once we get the language code from tags
assert!(lang.iter().all(u8::is_ascii_lowercase));
(((lang[0] as u16 - 0x60) & 0x1F) << 10)
@ -710,7 +713,7 @@ fn language_code(lang: impl std::borrow::Borrow<[u8; 3]>) -> u16 {
fn write_mdhd(
v: &mut Vec<u8>,
_cfg: &super::HeaderConfiguration,
cfg: &super::HeaderConfiguration,
stream: &super::HeaderStream,
creation_time: u64,
) -> Result<(), Error> {
@ -724,8 +727,11 @@ fn write_mdhd(
v.extend(0u64.to_be_bytes());
// Language as ISO-639-2/T
// TODO: get actual language from the tags
v.extend(language_code(b"und").to_be_bytes());
if let Some(lang) = cfg.language_code {
v.extend(language_code(lang).to_be_bytes());
} else {
v.extend(language_code(b"und").to_be_bytes());
}
// Pre-defined
v.extend([0u8; 2]);
@ -745,9 +751,8 @@ fn write_hdlr(
let (handler_type, name) = match s.name().as_str() {
"video/x-h264" | "video/x-h265" | "video/x-vp8" | "video/x-vp9" | "video/x-av1"
| "image/jpeg" => (b"vide", b"VideoHandler\0".as_slice()),
"audio/mpeg" | "audio/x-opus" | "audio/x-alaw" | "audio/x-mulaw" | "audio/x-adpcm" => {
(b"soun", b"SoundHandler\0".as_slice())
}
"audio/mpeg" | "audio/x-opus" | "audio/x-flac" | "audio/x-alaw" | "audio/x-mulaw"
| "audio/x-adpcm" => (b"soun", b"SoundHandler\0".as_slice()),
"application/x-onvif-metadata" => (b"meta", b"MetadataHandler\0".as_slice()),
_ => unreachable!(),
};
@ -777,7 +782,8 @@ fn write_minf(
// Flags are always 1 for unspecified reasons
write_full_box(v, b"vmhd", FULL_BOX_VERSION_0, 1, |v| write_vmhd(v, cfg))?
}
"audio/mpeg" | "audio/x-opus" | "audio/x-alaw" | "audio/x-mulaw" | "audio/x-adpcm" => {
"audio/mpeg" | "audio/x-opus" | "audio/x-flac" | "audio/x-alaw" | "audio/x-mulaw"
| "audio/x-adpcm" => {
write_full_box(v, b"smhd", FULL_BOX_VERSION_0, FULL_BOX_FLAGS_NONE, |v| {
write_smhd(v, cfg)
})?
@ -886,9 +892,8 @@ fn write_stsd(
match s.name().as_str() {
"video/x-h264" | "video/x-h265" | "video/x-vp8" | "video/x-vp9" | "video/x-av1"
| "image/jpeg" => write_visual_sample_entry(v, cfg, stream)?,
"audio/mpeg" | "audio/x-opus" | "audio/x-alaw" | "audio/x-mulaw" | "audio/x-adpcm" => {
write_audio_sample_entry(v, cfg, stream)?
}
"audio/mpeg" | "audio/x-opus" | "audio/x-flac" | "audio/x-alaw" | "audio/x-mulaw"
| "audio/x-adpcm" => write_audio_sample_entry(v, cfg, stream)?,
"application/x-onvif-metadata" => write_xml_meta_data_sample_entry(v, cfg, stream)?,
_ => unreachable!(),
}
@ -1098,9 +1103,9 @@ fn write_visual_sample_entry(
"professional" => 2,
_ => unreachable!(),
};
let level = 1; // FIXME
let tier = 0; // FIXME
// TODO: Use `gst_codec_utils_av1_get_seq_level_idx` when exposed in bindings
let level = av1_seq_level_idx(s.get::<&str>("level").ok());
let tier = av1_tier(s.get::<&str>("tier").ok());
let (high_bitdepth, twelve_bit) =
match s.get::<u32>("bit-depth-luma").unwrap() {
8 => (false, false),
@ -1145,6 +1150,10 @@ fn write_visual_sample_entry(
v.extend_from_slice(&codec_data);
}
if let Some(extra_data) = &stream.extra_header_data {
// configOBUs
v.extend_from_slice(extra_data.as_slice());
}
Ok(())
})?;
}
@ -1253,6 +1262,44 @@ fn write_visual_sample_entry(
Ok(())
}
fn av1_seq_level_idx(level: Option<&str>) -> u8 {
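// Per the AV1 specification, seq_level_idx = (major - 2) * 4 + minor for a
// level of the form "major.minor"; the table below spells that mapping out.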
match level {
Some("2.0") => 0,
Some("2.1") => 1,
Some("2.2") => 2,
Some("2.3") => 3,
Some("3.0") => 4,
Some("3.1") => 5,
Some("3.2") => 6,
Some("3.3") => 7,
Some("4.0") => 8,
Some("4.1") => 9,
Some("4.2") => 10,
Some("4.3") => 11,
Some("5.0") => 12,
Some("5.1") => 13,
Some("5.2") => 14,
Some("5.3") => 15,
Some("6.0") => 16,
Some("6.1") => 17,
Some("6.2") => 18,
Some("6.3") => 19,
Some("7.0") => 20,
Some("7.1") => 21,
Some("7.2") => 22,
Some("7.3") => 23,
_ => 1,
}
}
fn av1_tier(tier: Option<&str>) -> u8 {
match tier {
Some("main") => 0,
Some("high") => 1,
_ => 0,
}
}
fn write_audio_sample_entry(
v: &mut Vec<u8>,
_cfg: &super::HeaderConfiguration,
@ -1262,6 +1309,7 @@ fn write_audio_sample_entry(
let fourcc = match s.name().as_str() {
"audio/mpeg" => b"mp4a",
"audio/x-opus" => b"Opus",
"audio/x-flac" => b"fLaC",
"audio/x-alaw" => b"alaw",
"audio/x-mulaw" => b"ulaw",
"audio/x-adpcm" => {
@ -1280,6 +1328,10 @@ fn write_audio_sample_entry(
let bitrate = s.get::<i32>("bitrate").context("no ADPCM bitrate field")?;
(bitrate / 8000) as u16
}
"audio/x-flac" => with_flac_metadata(&stream.caps, |streaminfo, _| {
1 + (u16::from_be_bytes([streaminfo[16], streaminfo[17]]) >> 4 & 0b11111)
})
.context("FLAC metadata error")?,
_ => 16u16,
};
@ -1322,6 +1374,9 @@ fn write_audio_sample_entry(
"audio/x-opus" => {
write_dops(v, &stream.caps)?;
}
"audio/x-flac" => {
write_dfla(v, &stream.caps)?;
}
"audio/x-alaw" | "audio/x-mulaw" | "audio/x-adpcm" => {
// Nothing to do here
}
@ -1516,6 +1571,35 @@ fn write_dops(v: &mut Vec<u8>, caps: &gst::Caps) -> Result<(), Error> {
})
}
fn with_flac_metadata<R>(
caps: &gst::Caps,
cb: impl FnOnce(&[u8], &[gst::glib::SendValue]) -> R,
) -> Result<R, Error> {
let caps = caps.structure(0).unwrap();
let header = caps.get::<gst::ArrayRef>("streamheader").unwrap();
let (streaminfo, remainder) = header.as_ref().split_first().unwrap();
let streaminfo = streaminfo.get::<&gst::BufferRef>().unwrap();
let streaminfo = streaminfo.map_readable().unwrap();
// 13 bytes for the Ogg/FLAC prefix and 38 for the streaminfo itself.
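// (Prefix: 0x7F, "FLAC", 2 version bytes, a 2-byte header count and the
// 4-byte "fLaC" marker; the 38 bytes are the 4-byte metadata block header
// plus the 34-byte STREAMINFO body.)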
match <&[_; 13 + 38]>::try_from(streaminfo.as_slice()) {
Ok(i) if i.starts_with(b"\x7FFLAC\x01\x00") => Ok(cb(&i[13..], remainder)),
Ok(_) | Err(_) => bail!("Unknown streamheader format"),
}
}
fn write_dfla(v: &mut Vec<u8>, caps: &gst::Caps) -> Result<(), Error> {
write_full_box(v, b"dfLa", 0, 0, move |v| {
with_flac_metadata(caps, |streaminfo, remainder| {
v.extend(streaminfo);
for metadata in remainder {
let metadata = metadata.get::<&gst::BufferRef>().unwrap();
let metadata = metadata.map_readable().unwrap();
v.extend(&metadata[..]);
}
})
})
}
fn write_xml_meta_data_sample_entry(
v: &mut Vec<u8>,
_cfg: &super::HeaderConfiguration,

View file

@ -16,6 +16,7 @@ use std::collections::VecDeque;
use std::mem;
use std::sync::Mutex;
use crate::fmp4mux::obu::read_seq_header_obu_bytes;
use once_cell::sync::Lazy;
use super::boxes;
@ -205,6 +206,8 @@ struct Stream {
caps: gst::Caps,
/// Whether this stream is intra-only and has frame reordering.
delta_frames: DeltaFrames,
/// Whether this stream might have header frames without timestamps that should be ignored.
discard_header_buffers: bool,
/// Currently queued GOPs, including incomplete ones.
queued_gops: VecDeque<Gop>,
@ -222,6 +225,8 @@ struct Stream {
/// Mapping between running time and UTC time in ONVIF mode.
running_time_utc_time_mapping: Option<(gst::Signed<gst::ClockTime>, gst::ClockTime)>,
extra_header_data: Option<Vec<u8>>,
}
#[derive(Default)]
@ -248,6 +253,8 @@ struct State {
end_pts: Option<gst::ClockTime>,
/// Start DTS of the whole stream
start_dts: Option<gst::ClockTime>,
/// Language code from tags
language_code: Option<[u8; 3]>,
/// Start PTS of the current fragment
fragment_start_pts: Option<gst::ClockTime>,
@ -271,11 +278,17 @@ pub(crate) struct FMP4Mux {
impl FMP4Mux {
/// Checks if a buffer is valid according to the stream configuration.
fn check_buffer(
buffer: &gst::BufferRef,
sinkpad: &super::FMP4MuxPad,
delta_frames: super::DeltaFrames,
) -> Result<(), gst::FlowError> {
fn check_buffer(buffer: &gst::BufferRef, stream: &Stream) -> Result<(), gst::FlowError> {
let Stream {
sinkpad,
delta_frames,
discard_header_buffers,
..
} = stream;
if *discard_header_buffers && buffer.flags().contains(gst::BufferFlags::HEADER) {
return Err(gst_base::AGGREGATOR_FLOW_NEED_DATA);
}
if delta_frames.requires_dts() && buffer.dts().is_none() {
gst::error!(CAT, obj: sinkpad, "Require DTS for video streams");
return Err(gst::FlowError::Error);
@ -314,12 +327,10 @@ impl FMP4Mux {
}
// Pop buffer here, it will be stored in the pre-queue after calculating its timestamps
let mut buffer = match stream.sinkpad.pop_buffer() {
None => return Ok(None),
Some(buffer) => buffer,
let Some(mut buffer) = stream.sinkpad.pop_buffer() else {
return Ok(None);
};
Self::check_buffer(&buffer, &stream.sinkpad, stream.delta_frames)?;
Self::check_buffer(&buffer, stream)?;
let segment = match stream.sinkpad.segment().downcast::<gst::ClockTime>().ok() {
Some(segment) => segment,
@ -792,6 +803,22 @@ impl FMP4Mux {
stream.dts_offset.display(),
);
// If the stream is AV1, we need to parse the SequenceHeader OBU to include in the
// extra data of the 'av1C' box. It makes the stream playable in some browsers.
let s = stream.caps.structure(0).unwrap();
if !buffer.flags().contains(gst::BufferFlags::DELTA_UNIT)
&& s.name().as_str() == "video/x-av1"
{
let buf_map = buffer.map_readable().map_err(|_| {
gst::error!(CAT, obj: stream.sinkpad, "Failed to map buffer");
gst::FlowError::Error
})?;
stream.extra_header_data = read_seq_header_obu_bytes(buf_map.as_slice()).map_err(|_| {
gst::error!(CAT, obj: stream.sinkpad, "Failed to parse AV1 SequenceHeader OBU");
gst::FlowError::Error
})?;
}
let gop = Gop {
start_pts: pts,
start_dts: dts,
@ -2555,6 +2582,7 @@ impl FMP4Mux {
let s = caps.structure(0).unwrap();
let mut delta_frames = DeltaFrames::IntraOnly;
let mut discard_header_buffers = false;
match s.name().as_str() {
"video/x-h264" | "video/x-h265" => {
if !s.has_field_with_type("codec_data", gst::Buffer::static_type()) {
@ -2598,6 +2626,13 @@ impl FMP4Mux {
return Err(gst::FlowError::NotNegotiated);
}
}
"audio/x-flac" => {
discard_header_buffers = true;
if let Err(e) = s.get::<gst::ArrayRef>("streamheader") {
gst::error!(CAT, obj: pad, "Muxing FLAC into MP4 needs streamheader: {}", e);
return Err(gst::FlowError::NotNegotiated);
};
}
"audio/x-alaw" | "audio/x-mulaw" => (),
"audio/x-adpcm" => (),
"application/x-onvif-metadata" => (),
@ -2608,6 +2643,7 @@ impl FMP4Mux {
sinkpad: pad,
caps,
delta_frames,
discard_header_buffers,
pre_queue: VecDeque::new(),
queued_gops: VecDeque::new(),
fragment_filled: false,
@ -2615,6 +2651,7 @@ impl FMP4Mux {
dts_offset: None,
current_position: gst::ClockTime::ZERO,
running_time_utc_time_mapping: None,
extra_header_data: None,
});
}
@ -2682,6 +2719,7 @@ impl FMP4Mux {
trak_timescale: s.sinkpad.imp().settings.lock().unwrap().trak_timescale,
delta_frames: s.delta_frames,
caps: s.caps.clone(),
extra_header_data: s.extra_header_data.clone(),
})
.collect::<Vec<_>>();
@ -2692,6 +2730,7 @@ impl FMP4Mux {
streams,
write_mehd: settings.write_mehd,
duration: if at_eos { duration } else { None },
language_code: state.language_code,
start_utc_time: if variant == super::Variant::ONVIF {
state
.earliest_pts
@ -3125,8 +3164,22 @@ impl AggregatorImpl for FMP4Mux {
self.parent_sink_event(aggregator_pad, event)
}
EventView::Tag(_ev) => {
// TODO: Maybe store for putting into the headers of the next fragment?
EventView::Tag(ev) => {
if let Some(tag_value) = ev.tag().get::<gst::tags::LanguageCode>() {
let lang = tag_value.get();
gst::trace!(CAT, imp: self, "Received language code from tags: {:?}", lang);
// Language as ISO-639-2/T
if lang.len() == 3 && lang.chars().all(|c| c.is_ascii_lowercase()) {
let mut state = self.state.lock().unwrap();
let mut language_code: [u8; 3] = [0; 3];
for (out, c) in Iterator::zip(language_code.iter_mut(), lang.chars()) {
*out = c as u8;
}
state.language_code = Some(language_code);
}
}
self.parent_sink_event(aggregator_pad, event)
}
@ -3465,6 +3518,11 @@ impl ElementImpl for ISOFMP4Mux {
.field("channels", gst::IntRange::new(1i32, 8))
.field("rate", gst::IntRange::new(1, i32::MAX))
.build(),
gst::Structure::builder("audio/x-flac")
.field("framed", true)
.field("channels", gst::IntRange::<i32>::new(1, 8))
.field("rate", gst::IntRange::<i32>::new(1, 10 * u16::MAX as i32))
.build(),
]
.into_iter()
.collect::<gst::Caps>(),
@ -3536,6 +3594,19 @@ impl ElementImpl for CMAFMux {
.field("width", gst::IntRange::new(1, u16::MAX as i32))
.field("height", gst::IntRange::new(1, u16::MAX as i32))
.build(),
gst::Structure::builder("video/x-av1")
.field("stream-format", "obu-stream")
.field("alignment", "tu")
.field("profile", gst::List::new(["main", "high", "professional"]))
.field(
"chroma-format",
gst::List::new(["4:0:0", "4:2:0", "4:2:2", "4:4:4"]),
)
.field("bit-depth-luma", gst::List::new([8u32, 10u32, 12u32]))
.field("bit-depth-chroma", gst::List::new([8u32, 10u32, 12u32]))
.field("width", gst::IntRange::new(1, u16::MAX as i32))
.field("height", gst::IntRange::new(1, u16::MAX as i32))
.build(),
gst::Structure::builder("video/x-h265")
.field("stream-format", gst::List::new(["hvc1", "hev1"]))
.field("alignment", "au")

View file

@ -12,6 +12,8 @@ use gst::prelude::*;
mod boxes;
mod imp;
mod obu;
glib::wrapper! {
pub(crate) struct FMP4MuxPad(ObjectSubclass<imp::FMP4MuxPad>) @extends gst_base::AggregatorPad, gst::Pad, gst::Object;
}
@ -85,6 +87,7 @@ pub(crate) struct HeaderConfiguration {
write_mehd: bool,
duration: Option<gst::ClockTime>,
language_code: Option<[u8; 3]>,
/// Start UTC time in ONVIF mode.
/// Since Jan 1 1601 in 100ns units.
@ -101,6 +104,9 @@ pub(crate) struct HeaderStream {
/// Pre-defined trak timescale if not 0.
trak_timescale: u32,
// More data to be included in the fragmented stream header
extra_header_data: Option<Vec<u8>>,
}
#[derive(Debug)]

mux/fmp4/src/fmp4mux/obu.rs (new file, 303 lines)

@ -0,0 +1,303 @@
//
// Copyright (C) 2022 Vivienne Watermeier <vwatermeier@igalia.com>
//
// This Source Code Form is subject to the terms of the Mozilla Public License, v2.0.
// If a copy of the MPL was not distributed with this file, You can obtain one at
// <https://mozilla.org/MPL/2.0/>.
//
// SPDX-License-Identifier: MPL-2.0
#![allow(non_camel_case_types)]
use bitstream_io::{BigEndian, BitRead, BitReader, Endianness};
use std::io::{self, Cursor, Read, Seek, SeekFrom};
pub fn parse_leb128<R, E>(reader: &mut BitReader<R, E>) -> io::Result<(u32, u32)>
where
R: Read + Seek,
E: Endianness,
{
let mut value = 0;
let mut num_bytes = 0;
for i in 0..8 {
let byte = reader.read::<u32>(8)?;
value |= (byte & 0x7f) << (i * 7);
num_bytes += 1;
if byte & 0x80 == 0 {
break;
}
}
reader.byte_align();
Ok((value, num_bytes))
}
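// Quick illustrative check (not from the original commit): LEB128 stores 7
// payload bits per byte, least-significant group first, and a clear top bit
// terminates the value.
#[test]
fn parse_leb128_example() {
// 0x96 contributes 0x16 and sets the continuation bit; 0x01 contributes
// 1 << 7 and terminates, so the decoded value is 150, consuming 2 bytes.
let mut r = BitReader::endian(Cursor::new(&[0x96u8, 0x01][..]), BigEndian);
assert_eq!(parse_leb128(&mut r).unwrap(), (150, 2));
}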
#[derive(Default, Debug, Clone, Copy, PartialEq, Eq)]
pub struct SizedObu {
pub obu_type: ObuType,
pub has_extension: bool,
/// If the OBU header is followed by a leb128 size field.
pub has_size_field: bool,
pub temporal_id: u8,
pub spatial_id: u8,
/// size of the OBU payload in bytes.
/// This may refer to different sizes in different contexts, not always
/// to the entire OBU payload as it is in the AV1 bitstream.
pub size: u32,
/// The number of bytes the leb128 size field will take up
/// when written with write_leb128().
/// This does not imply `has_size_field`, and does not necessarily match with
/// the length of the internal size field if present.
pub leb_size: u32,
pub header_len: u32,
/// Indicates that only part of this OBU has been processed so far.
pub is_fragment: bool,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ObuType {
Reserved,
SequenceHeader,
TemporalDelimiter,
FrameHeader,
TileGroup,
Metadata,
Frame,
RedundantFrameHeader,
TileList,
Padding,
}
impl Default for ObuType {
fn default() -> Self {
Self::Reserved
}
}
impl SizedObu {
/// Parse an OBU header and size field. If the OBU is not expected to contain
/// a size field, but the size is known from external information,
/// parse as an `UnsizedObu` and use `to_sized`.
pub fn parse<R, E>(reader: &mut BitReader<R, E>) -> io::Result<Self>
where
R: Read + Seek,
E: Endianness,
{
// check the forbidden bit
if reader.read_bit()? {
return Err(io::Error::new(
io::ErrorKind::InvalidData,
"forbidden bit in OBU header is set",
));
}
let obu_type = reader.read::<u8>(4)?.into();
let has_extension = reader.read_bit()?;
// require a size field
if !reader.read_bit()? {
return Err(io::Error::new(
io::ErrorKind::InvalidData,
"expected a size field",
));
}
// ignore the reserved bit
let _ = reader.read_bit()?;
let (temporal_id, spatial_id) = if has_extension {
(reader.read::<u8>(3)?, reader.read::<u8>(2)?)
} else {
(0, 0)
};
reader.byte_align();
let (size, leb_size) = parse_leb128(reader)?;
Ok(Self {
obu_type,
has_extension,
has_size_field: true,
temporal_id,
spatial_id,
size,
leb_size,
header_len: has_extension as u32 + 1,
is_fragment: false,
})
}
/// The number of bytes this OBU will take up, including the space needed for
/// its leb128 size field.
pub fn full_size(&self) -> u32 {
self.size + self.leb_size + self.header_len
}
}
pub fn read_seq_header_obu_bytes(data: &[u8]) -> io::Result<Option<Vec<u8>>> {
let mut cursor = Cursor::new(data);
while cursor.position() < data.len() as u64 {
let obu_start = cursor.position();
let Ok(obu) = SizedObu::parse(&mut BitReader::endian(&mut cursor, BigEndian)) else {
break;
};
// set reader to the beginning of the OBU
cursor.seek(SeekFrom::Start(obu_start))?;
if obu.obu_type != ObuType::SequenceHeader {
// Skip the full OBU
cursor.seek(SeekFrom::Current(obu.full_size() as i64))?;
continue;
};
// read the full OBU
let mut bytes = vec![0; obu.full_size() as usize];
cursor.read_exact(&mut bytes)?;
return Ok(Some(bytes));
}
Ok(None)
}
impl From<u8> for ObuType {
fn from(n: u8) -> Self {
assert!(n < 16);
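// Type values 0 and 9..=14 are reserved by the AV1 spec; they all map to
// `Reserved` here.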
match n {
1 => Self::SequenceHeader,
2 => Self::TemporalDelimiter,
3 => Self::FrameHeader,
4 => Self::TileGroup,
5 => Self::Metadata,
6 => Self::Frame,
7 => Self::RedundantFrameHeader,
8 => Self::TileList,
15 => Self::Padding,
_ => Self::Reserved,
}
}
}
impl From<ObuType> for u8 {
fn from(ty: ObuType) -> Self {
match ty {
ObuType::Reserved => 0,
ObuType::SequenceHeader => 1,
ObuType::TemporalDelimiter => 2,
ObuType::FrameHeader => 3,
ObuType::TileGroup => 4,
ObuType::Metadata => 5,
ObuType::Frame => 6,
ObuType::RedundantFrameHeader => 7,
ObuType::TileList => 8,
ObuType::Padding => 15,
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use bitstream_io::{BigEndian, BitReader};
use once_cell::sync::Lazy;
use std::io::Cursor;
#[allow(clippy::type_complexity)]
static OBUS: Lazy<Vec<(SizedObu, Vec<u8>)>> = Lazy::new(|| {
vec![
(
SizedObu {
obu_type: ObuType::TemporalDelimiter,
has_extension: false,
has_size_field: true,
temporal_id: 0,
spatial_id: 0,
size: 0,
leb_size: 1,
header_len: 1,
is_fragment: false,
},
vec![0b0001_0010, 0b0000_0000],
),
(
SizedObu {
obu_type: ObuType::Padding,
has_extension: false,
has_size_field: true,
temporal_id: 0,
spatial_id: 0,
size: 10,
leb_size: 1,
header_len: 1,
is_fragment: false,
},
vec![0b0111_1010, 0b0000_1010, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
),
(
SizedObu {
obu_type: ObuType::SequenceHeader,
has_extension: true,
has_size_field: true,
temporal_id: 4,
spatial_id: 3,
size: 5,
leb_size: 1,
header_len: 2,
is_fragment: false,
},
vec![0b0000_1110, 0b1001_1000, 0b0000_0101, 1, 2, 3, 4, 5],
),
(
SizedObu {
obu_type: ObuType::Frame,
has_extension: true,
has_size_field: true,
temporal_id: 4,
spatial_id: 3,
size: 5,
leb_size: 1,
header_len: 2,
is_fragment: false,
},
vec![0b0011_0110, 0b1001_1000, 0b0000_0101, 1, 2, 3, 4, 5],
),
]
});
#[test]
fn test_parse_rtp_obu() {
for (idx, (sized_obu, raw_bytes)) in (*OBUS).iter().enumerate() {
println!("running test {idx}...");
let mut reader = BitReader::endian(Cursor::new(&raw_bytes), BigEndian);
let obu_parsed = SizedObu::parse(&mut reader).unwrap();
assert_eq!(&obu_parsed, sized_obu);
if let Some(seq_header_obu_bytes) = read_seq_header_obu_bytes(raw_bytes).unwrap() {
println!("validation of sequence header obu read/write...");
assert_eq!(&seq_header_obu_bytes, raw_bytes);
}
}
}
#[test]
fn test_read_seq_header_from_bitstream() {
let mut bitstream = Vec::new();
let mut seq_header_bytes_raw = None;
for (obu, raw_bytes) in (*OBUS).iter() {
bitstream.extend(raw_bytes);
if obu.obu_type == ObuType::SequenceHeader {
seq_header_bytes_raw = Some(raw_bytes.clone());
}
}
let seq_header_obu_bytes = read_seq_header_obu_bytes(&bitstream).unwrap().unwrap();
assert_eq!(seq_header_obu_bytes, seq_header_bytes_raw.unwrap());
}
}
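To illustrate how read_seq_header_obu_bytes is meant to be used, a minimal sketch (assuming the module is in scope as `obu` and each buffer holds a complete temporal unit):

// Scan a temporal unit and return the raw SequenceHeader OBU bytes, which the
// muxer appends as configOBUs at the end of the av1C box.
fn av1_config_obus(temporal_unit: &[u8]) -> Option<Vec<u8>> {
    obu::read_seq_header_obu_bytes(temporal_unit).ok().flatten()
}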


@ -19,6 +19,33 @@ fn init() {
});
}
fn to_completion(pipeline: &gst::Pipeline) {
pipeline
.set_state(gst::State::Playing)
.expect("Unable to set the pipeline to the `Playing` state");
for msg in pipeline.bus().unwrap().iter_timed(gst::ClockTime::NONE) {
use gst::MessageView;
match msg.view() {
MessageView::Eos(..) => break,
MessageView::Error(err) => {
panic!(
"Error from {:?}: {} ({:?})",
err.src().map(|s| s.path_string()),
err.error(),
err.debug()
);
}
_ => (),
}
}
pipeline
.set_state(gst::State::Null)
.expect("Unable to set the pipeline to the `Null` state");
}
fn test_buffer_flags_single_stream(cmaf: bool, set_dts: bool, caps: gst::Caps) {
let mut h = if cmaf {
gst_check::Harness::new("cmafmux")
@ -209,6 +236,26 @@ fn test_buffer_flags_single_vp9_stream_iso() {
test_buffer_flags_single_stream(false, false, caps);
}
#[test]
fn test_buffer_flags_single_av1_stream_cmaf() {
init();
let caps = gst::Caps::builder("video/x-av1")
.field("width", 1920i32)
.field("height", 1080i32)
.field("framerate", gst::Fraction::new(30, 1))
.field("profile", "main")
.field("tier", "main")
.field("level", "4.1")
.field("chroma-format", "4:2:0")
.field("bit-depth-luma", 8u32)
.field("bit-depth-chroma", 8u32)
.field("colorimetry", "bt709")
.build();
test_buffer_flags_single_stream(true, false, caps);
}
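// A comparable manual check outside the harness (hypothetical launch line;
// requires av1enc/av1parse in the local install):
//   gst-launch-1.0 videotestsrc num-buffers=99 ! av1enc ! av1parse \
//       ! cmafmux ! filesink location=av1.cmfv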
#[test]
fn test_buffer_flags_multi_stream() {
init();
@ -1993,3 +2040,21 @@ fn test_chunking_single_stream_gops_after_fragment_end_after_next_chunk_end() {
let ev = h.pull_event().unwrap();
assert_eq!(ev.type_(), gst::EventType::Eos);
}
#[test]
fn test_roundtrip_vp9_flac() {
init();
let pipeline = gst::parse::launch(
r#"
videotestsrc num-buffers=99 ! vp9enc ! vp9parse ! mux.
audiotestsrc num-buffers=149 ! flacenc ! flacparse ! mux.
isofmp4mux name=mux ! qtdemux name=demux
demux.audio_0 ! queue ! flacdec ! fakesink
demux.video_0 ! queue ! vp9dec ! fakesink
"#,
)
.unwrap();
let pipeline = pipeline.downcast().unwrap();
to_completion(&pipeline);
}


@ -16,6 +16,7 @@ gst-audio = { workspace = true, features = ["v1_18"] }
gst-video = { workspace = true, features = ["v1_18"] }
gst-pbutils = { workspace = true, features = ["v1_18"] }
once_cell.workspace = true
bitstream-io = "2.3"
[lib]
name = "gstmp4"


@ -9,7 +9,7 @@
use gst::prelude::*;
use anyhow::{anyhow, bail, Context, Error};
use std::convert::TryFrom;
use std::str::FromStr;
fn write_box<T, F: FnOnce(&mut Vec<u8>) -> Result<T, Error>>(
@ -56,18 +56,31 @@ fn write_full_box<T, F: FnOnce(&mut Vec<u8>) -> Result<T, Error>>(
}
/// Creates `ftyp` box
pub(super) fn create_ftyp(variant: super::Variant) -> Result<gst::Buffer, Error> {
pub(super) fn create_ftyp(
variant: super::Variant,
content_caps: &[&gst::CapsRef],
) -> Result<gst::Buffer, Error> {
let mut v = vec![];
let mut minor_version = 0u32;
let (brand, compatible_brands) = match variant {
let (brand, mut compatible_brands) = match variant {
super::Variant::ISO | super::Variant::ONVIF => (b"iso4", vec![b"mp41", b"mp42", b"isom"]),
};
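// When any input stream is AV1 (ISO variant only), switch the compatible
// brands to the "av01" brand defined by the AV1 ISOBMFF binding and bump the
// ftyp minor version.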
for caps in content_caps {
let s = caps.structure(0).unwrap();
if let (super::Variant::ISO, "video/x-av1") = (variant, s.name().as_str()) {
minor_version = 1;
compatible_brands = vec![b"iso4", b"av01"];
break;
}
}
write_box(&mut v, b"ftyp", |v| {
// major brand
v.extend(brand);
// minor version
v.extend(0u32.to_be_bytes());
v.extend(minor_version.to_be_bytes());
// compatible brands
v.extend(compatible_brands.into_iter().flatten());
@ -382,9 +395,8 @@ fn write_tkhd(
// Volume
let s = stream.caps.structure(0).unwrap();
match s.name().as_str() {
"audio/mpeg" | "audio/x-opus" | "audio/x-alaw" | "audio/x-mulaw" | "audio/x-adpcm" => {
v.extend((1u16 << 8).to_be_bytes())
}
"audio/mpeg" | "audio/x-opus" | "audio/x-flac" | "audio/x-alaw" | "audio/x-mulaw"
| "audio/x-adpcm" => v.extend((1u16 << 8).to_be_bytes()),
_ => v.extend(0u16.to_be_bytes()),
}
@ -460,7 +472,6 @@ fn write_mdia(
fn language_code(lang: impl std::borrow::Borrow<[u8; 3]>) -> u16 {
let lang = lang.borrow();
// TODO: Need to relax this once we get the language code from tags
assert!(lang.iter().all(u8::is_ascii_lowercase));
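// mdhd packs the ISO-639-2/T code as three 5-bit letters, each stored as
// (byte - 0x60), so 'a' => 1 ... 'z' => 26; e.g. b"und" packs to 0x55C4.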
(((lang[0] as u16 - 0x60) & 0x1F) << 10)
@ -470,7 +481,7 @@ fn language_code(lang: impl std::borrow::Borrow<[u8; 3]>) -> u16 {
fn write_mdhd(
v: &mut Vec<u8>,
_header: &super::Header,
header: &super::Header,
stream: &super::Stream,
creation_time: u64,
) -> Result<(), Error> {
@ -493,8 +504,11 @@ fn write_mdhd(
v.extend(duration.to_be_bytes());
// Language as ISO-639-2/T
// TODO: get actual language from the tags
v.extend(language_code(b"und").to_be_bytes());
if let Some(lang) = header.language_code {
v.extend(language_code(lang).to_be_bytes());
} else {
v.extend(language_code(b"und").to_be_bytes());
}
// Pre-defined
v.extend([0u8; 2]);
@ -514,9 +528,8 @@ fn write_hdlr(
let (handler_type, name) = match s.name().as_str() {
"video/x-h264" | "video/x-h265" | "video/x-vp8" | "video/x-vp9" | "video/x-av1"
| "image/jpeg" => (b"vide", b"VideoHandler\0".as_slice()),
"audio/mpeg" | "audio/x-opus" | "audio/x-alaw" | "audio/x-mulaw" | "audio/x-adpcm" => {
(b"soun", b"SoundHandler\0".as_slice())
}
"audio/mpeg" | "audio/x-opus" | "audio/x-flac" | "audio/x-alaw" | "audio/x-mulaw"
| "audio/x-adpcm" => (b"soun", b"SoundHandler\0".as_slice()),
"application/x-onvif-metadata" => (b"meta", b"MetadataHandler\0".as_slice()),
_ => unreachable!(),
};
@ -546,7 +559,8 @@ fn write_minf(
// Flags are always 1 for unspecified reasons
write_full_box(v, b"vmhd", FULL_BOX_VERSION_0, 1, |v| write_vmhd(v, header))?
}
"audio/mpeg" | "audio/x-opus" | "audio/x-alaw" | "audio/x-mulaw" | "audio/x-adpcm" => {
"audio/mpeg" | "audio/x-opus" | "audio/x-flac" | "audio/x-alaw" | "audio/x-mulaw"
| "audio/x-adpcm" => {
write_full_box(v, b"smhd", FULL_BOX_VERSION_0, FULL_BOX_FLAGS_NONE, |v| {
write_smhd(v, header)
})?
@ -703,9 +717,8 @@ fn write_stsd(
match s.name().as_str() {
"video/x-h264" | "video/x-h265" | "video/x-vp8" | "video/x-vp9" | "video/x-av1"
| "image/jpeg" => write_visual_sample_entry(v, header, stream)?,
"audio/mpeg" | "audio/x-opus" | "audio/x-alaw" | "audio/x-mulaw" | "audio/x-adpcm" => {
write_audio_sample_entry(v, header, stream)?
}
"audio/mpeg" | "audio/x-opus" | "audio/x-flac" | "audio/x-alaw" | "audio/x-mulaw"
| "audio/x-adpcm" => write_audio_sample_entry(v, header, stream)?,
"application/x-onvif-metadata" => write_xml_meta_data_sample_entry(v, header, stream)?,
_ => unreachable!(),
}
@ -916,8 +929,9 @@ fn write_visual_sample_entry(
_ => unreachable!(),
};
let level = 1; // FIXME
let tier = 0; // FIXME
// TODO: Use `gst_codec_utils_av1_get_seq_level_idx` when exposed in bindings
let level = av1_seq_level_idx(s.get::<&str>("level").ok());
let tier = av1_tier(s.get::<&str>("tier").ok());
let (high_bitdepth, twelve_bit) =
match s.get::<u32>("bit-depth-luma").unwrap() {
8 => (false, false),
@ -962,6 +976,10 @@ fn write_visual_sample_entry(
v.extend_from_slice(&codec_data);
}
if let Some(extra_data) = &stream.extra_header_data {
// unsigned int(8) configOBUs[];
v.extend_from_slice(extra_data.as_slice());
}
Ok(())
})?;
}
@ -1070,6 +1088,44 @@ fn write_visual_sample_entry(
Ok(())
}
fn av1_seq_level_idx(level: Option<&str>) -> u8 {
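// Encodes the AV1 spec's seq_level_idx, i.e. (major - 2) * 4 + minor;
// unknown or absent levels fall back to 1 (level 2.1).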
match level {
Some("2.0") => 0,
Some("2.1") => 1,
Some("2.2") => 2,
Some("2.3") => 3,
Some("3.0") => 4,
Some("3.1") => 5,
Some("3.2") => 6,
Some("3.3") => 7,
Some("4.0") => 8,
Some("4.1") => 9,
Some("4.2") => 10,
Some("4.3") => 11,
Some("5.0") => 12,
Some("5.1") => 13,
Some("5.2") => 14,
Some("5.3") => 15,
Some("6.0") => 16,
Some("6.1") => 17,
Some("6.2") => 18,
Some("6.3") => 19,
Some("7.0") => 20,
Some("7.1") => 21,
Some("7.2") => 22,
Some("7.3") => 23,
_ => 1,
}
}
fn av1_tier(tier: Option<&str>) -> u8 {
match tier {
Some("main") => 0,
Some("high") => 1,
_ => 0,
}
}
fn write_audio_sample_entry(
v: &mut Vec<u8>,
_header: &super::Header,
@ -1079,6 +1135,7 @@ fn write_audio_sample_entry(
let fourcc = match s.name().as_str() {
"audio/mpeg" => b"mp4a",
"audio/x-opus" => b"Opus",
"audio/x-flac" => b"fLaC",
"audio/x-alaw" => b"alaw",
"audio/x-mulaw" => b"ulaw",
"audio/x-adpcm" => {
@ -1097,6 +1154,10 @@ fn write_audio_sample_entry(
let bitrate = s.get::<i32>("bitrate").context("no ADPCM bitrate field")?;
(bitrate / 8000) as u16
}
"audio/x-flac" => with_flac_metadata(&stream.caps, |streaminfo, _| {
1 + (u16::from_be_bytes([streaminfo[16], streaminfo[17]]) >> 4 & 0b11111)
})
.context("FLAC metadata error")?,
_ => 16u16,
};
@ -1139,6 +1200,9 @@ fn write_audio_sample_entry(
"audio/x-opus" => {
write_dops(v, &stream.caps)?;
}
"audio/x-flac" => {
write_dfla(v, &stream.caps)?;
}
"audio/x-alaw" | "audio/x-mulaw" | "audio/x-adpcm" => {
// Nothing to do here
}
@ -1333,6 +1397,35 @@ fn write_dops(v: &mut Vec<u8>, caps: &gst::Caps) -> Result<(), Error> {
})
}
fn with_flac_metadata<R>(
caps: &gst::Caps,
cb: impl FnOnce(&[u8], &[gst::glib::SendValue]) -> R,
) -> Result<R, Error> {
let caps = caps.structure(0).unwrap();
let header = caps.get::<gst::ArrayRef>("streamheader").unwrap();
let (streaminfo, remainder) = header.as_ref().split_first().unwrap();
let streaminfo = streaminfo.get::<&gst::BufferRef>().unwrap();
let streaminfo = streaminfo.map_readable().unwrap();
// 13 bytes for the Ogg/FLAC prefix and 38 for the STREAMINFO metadata block (4-byte block header + 34-byte body).
match <&[_; 13 + 38]>::try_from(streaminfo.as_slice()) {
Ok(i) if i.starts_with(b"\x7FFLAC\x01\x00") => Ok(cb(&i[13..], remainder)),
Ok(_) | Err(_) => bail!("Unknown streamheader format"),
}
}
fn write_dfla(v: &mut Vec<u8>, caps: &gst::Caps) -> Result<(), Error> {
write_full_box(v, b"dfLa", 0, 0, move |v| {
with_flac_metadata(caps, |streaminfo, remainder| {
v.extend(streaminfo);
for metadata in remainder {
let metadata = metadata.get::<&gst::BufferRef>().unwrap();
let metadata = metadata.map_readable().unwrap();
v.extend(&metadata[..]);
}
})
})
}
fn write_xml_meta_data_sample_entry(
v: &mut Vec<u8>,
_header: &super::Header,


@ -15,6 +15,7 @@ use gst_base::subclass::prelude::*;
use std::collections::VecDeque;
use std::sync::Mutex;
use crate::mp4mux::obu::read_seq_header_obu_bytes;
use once_cell::sync::Lazy;
use super::boxes;
@ -108,6 +109,8 @@ struct Stream {
caps: gst::Caps,
/// Whether this stream is intra-only and has frame reordering.
delta_frames: super::DeltaFrames,
/// Whether this stream might have header frames without timestamps that should be ignored.
discard_header_buffers: bool,
/// Already written out chunks with their samples for this stream
chunks: Vec<super::Chunk>,
@ -133,6 +136,8 @@ struct Stream {
/// In ONVIF mode, the mapping between running time and UTC time (UNIX)
running_time_utc_time_mapping: Option<(gst::Signed<gst::ClockTime>, gst::ClockTime)>,
extra_header_data: Option<Vec<u8>>,
}
#[derive(Default)]
@ -151,6 +156,9 @@ struct State {
/// Size of the `mdat` as written so far.
mdat_size: u64,
/// Language code from tags
language_code: Option<[u8; 3]>,
}
#[derive(Default)]
@ -165,7 +173,12 @@ impl MP4Mux {
buffer: &gst::BufferRef,
sinkpad: &super::MP4MuxPad,
delta_frames: super::DeltaFrames,
discard_headers: bool,
) -> Result<(), gst::FlowError> {
if discard_headers && buffer.flags().contains(gst::BufferFlags::HEADER) {
return Err(gst_base::AGGREGATOR_FLOW_NEED_DATA);
}
if delta_frames.requires_dts() && buffer.dts().is_none() {
gst::error!(CAT, obj: sinkpad, "Require DTS for video streams");
return Err(gst::FlowError::Error);
@ -188,6 +201,7 @@ impl MP4Mux {
&self,
sinkpad: &super::MP4MuxPad,
delta_frames: super::DeltaFrames,
discard_headers: bool,
pre_queue: &mut VecDeque<(gst::FormattedSegment<gst::ClockTime>, gst::Buffer)>,
running_time_utc_time_mapping: &Option<(gst::Signed<gst::ClockTime>, gst::ClockTime)>,
) -> Result<Option<(gst::FormattedSegment<gst::ClockTime>, gst::Buffer)>, gst::FlowError> {
@ -195,13 +209,10 @@ impl MP4Mux {
return Ok(Some((segment.clone(), buffer.clone())));
}
let mut buffer = match sinkpad.peek_buffer() {
None => return Ok(None),
Some(buffer) => buffer,
let Some(mut buffer) = sinkpad.peek_buffer() else {
return Ok(None);
};
Self::check_buffer(&buffer, sinkpad, delta_frames)?;
Self::check_buffer(&buffer, sinkpad, delta_frames, discard_headers)?;
let mut segment = match sinkpad.segment().downcast::<gst::ClockTime>().ok() {
Some(segment) => segment,
None => {
@ -276,19 +287,20 @@ impl MP4Mux {
fn pop_buffer(
&self,
sinkpad: &super::MP4MuxPad,
delta_frames: super::DeltaFrames,
pre_queue: &mut VecDeque<(gst::FormattedSegment<gst::ClockTime>, gst::Buffer)>,
running_time_utc_time_mapping: &mut Option<(gst::Signed<gst::ClockTime>, gst::ClockTime)>,
stream: &mut Stream,
) -> Result<Option<(gst::FormattedSegment<gst::ClockTime>, gst::Buffer)>, gst::FlowError> {
let Stream {
sinkpad, pre_queue, ..
} = stream;
// In ONVIF mode we need to get UTC times for each buffer and synchronize based on that.
// Queue up to 6s of data to get the first UTC time and then backdate.
if self.obj().class().as_ref().variant == super::Variant::ONVIF
&& running_time_utc_time_mapping.is_none()
&& stream.running_time_utc_time_mapping.is_none()
{
if let Some((last, first)) = Option::zip(pre_queue.back(), pre_queue.front()) {
// Existence of PTS/DTS checked below
let (last, first) = if delta_frames.requires_dts() {
let (last, first) = if stream.delta_frames.requires_dts() {
(
last.0.to_running_time_full(last.1.dts()).unwrap(),
first.0.to_running_time_full(first.1.dts()).unwrap(),
@ -312,19 +324,20 @@ impl MP4Mux {
}
}
let buffer = match sinkpad.pop_buffer() {
None => {
if sinkpad.is_eos() {
gst::error!(CAT, obj: sinkpad, "Got no UTC time before EOS");
return Err(gst::FlowError::Error);
} else {
return Err(gst_base::AGGREGATOR_FLOW_NEED_DATA);
}
let Some(buffer) = sinkpad.pop_buffer() else {
if sinkpad.is_eos() {
gst::error!(CAT, obj: sinkpad, "Got no UTC time before EOS");
return Err(gst::FlowError::Error);
} else {
return Err(gst_base::AGGREGATOR_FLOW_NEED_DATA);
}
Some(buffer) => buffer,
};
Self::check_buffer(&buffer, sinkpad, delta_frames)?;
Self::check_buffer(
&buffer,
sinkpad,
stream.delta_frames,
stream.discard_header_buffers,
)?;
let segment = match sinkpad.segment().downcast::<gst::ClockTime>().ok() {
Some(segment) => segment,
@ -350,7 +363,7 @@ impl MP4Mux {
);
let mapping = (running_time, utc_time);
*running_time_utc_time_mapping = Some(mapping);
stream.running_time_utc_time_mapping = Some(mapping);
// Push the buffer onto the pre-queue and re-timestamp it and all other buffers
// based on the mapping above.
@ -391,7 +404,7 @@ impl MP4Mux {
// Fall through below and pop the first buffer finally
}
if let Some((segment, buffer)) = pre_queue.pop_front() {
if let Some((segment, buffer)) = stream.pre_queue.pop_front() {
return Ok(Some((segment, buffer)));
}
@ -400,23 +413,26 @@ impl MP4Mux {
// for calculating the duration to the previous buffer, and then put into the pre-queue
// - or this is the very first buffer and we just put it into the queue ourselves above
if self.obj().class().as_ref().variant == super::Variant::ONVIF {
if sinkpad.is_eos() {
if stream.sinkpad.is_eos() {
return Ok(None);
}
unreachable!();
}
let buffer = match sinkpad.pop_buffer() {
None => return Ok(None),
Some(buffer) => buffer,
let Some(buffer) = stream.sinkpad.pop_buffer() else {
return Ok(None);
};
Self::check_buffer(
&buffer,
&stream.sinkpad,
stream.delta_frames,
stream.discard_header_buffers,
)?;
Self::check_buffer(&buffer, sinkpad, delta_frames)?;
let segment = match sinkpad.segment().downcast::<gst::ClockTime>().ok() {
let segment = match stream.sinkpad.segment().downcast::<gst::ClockTime>().ok() {
Some(segment) => segment,
None => {
gst::error!(CAT, obj: sinkpad, "Got buffer before segment");
gst::error!(CAT, obj: stream.sinkpad, "Got buffer before segment");
return Err(gst::FlowError::Error);
}
};
@ -442,6 +458,12 @@ impl MP4Mux {
Some(PendingBuffer {
duration: Some(_), ..
}) => return Ok(()),
Some(PendingBuffer { ref buffer, .. })
if stream.discard_header_buffers
&& buffer.flags().contains(gst::BufferFlags::HEADER) =>
{
return Err(gst_base::AGGREGATOR_FLOW_NEED_DATA);
}
Some(PendingBuffer {
timestamp,
pts,
@ -449,13 +471,15 @@ impl MP4Mux {
ref mut duration,
..
}) => {
// Already have a pending buffer but no duration, so try to get that now
let (segment, buffer) = match self.peek_buffer(
let peek_outcome = self.peek_buffer(
&stream.sinkpad,
stream.delta_frames,
stream.discard_header_buffers,
&mut stream.pre_queue,
&stream.running_time_utc_time_mapping,
)? {
)?;
// Already have a pending buffer but no duration, so try to get that now
let (segment, buffer) = match peek_outcome {
Some(res) => res,
None => {
if stream.sinkpad.is_eos() {
@ -527,17 +551,28 @@ impl MP4Mux {
*duration = Some(dur);
// If the stream is AV1, we need to parse the SequenceHeader OBU to include in the
// extra data of the 'av1C' box. It makes the stream playable in some browsers.
let s = stream.caps.structure(0).unwrap();
if !buffer.flags().contains(gst::BufferFlags::DELTA_UNIT)
&& s.name().as_str() == "video/x-av1"
{
let buf_map = buffer.map_readable().map_err(|_| {
gst::error!(CAT, obj: stream.sinkpad, "Failed to map buffer");
gst::FlowError::Error
})?;
stream.extra_header_data = read_seq_header_obu_bytes(buf_map.as_slice()).map_err(|_| {
gst::error!(CAT, obj: stream.sinkpad, "Failed to parse AV1 SequenceHeader OBU");
gst::FlowError::Error
})?;
}
return Ok(());
}
None => {
// Have no buffer queued at all yet
let (segment, buffer) = match self.pop_buffer(
&stream.sinkpad,
stream.delta_frames,
&mut stream.pre_queue,
&mut stream.running_time_utc_time_mapping,
)? {
let (segment, buffer) = match self.pop_buffer(stream)? {
Some(res) => res,
None => {
if stream.sinkpad.is_eos() {
@ -870,6 +905,7 @@ impl MP4Mux {
let s = caps.structure(0).unwrap();
let mut delta_frames = super::DeltaFrames::IntraOnly;
let mut discard_header_buffers = false;
match s.name().as_str() {
"video/x-h264" | "video/x-h265" => {
if !s.has_field_with_type("codec_data", gst::Buffer::static_type()) {
@ -913,6 +949,13 @@ impl MP4Mux {
return Err(gst::FlowError::NotNegotiated);
}
}
"audio/x-flac" => {
discard_header_buffers = true;
if let Err(e) = s.get::<gst::ArrayRef>("streamheader") {
gst::error!(CAT, obj: pad, "Muxing FLAC into MP4 needs streamheader: {}", e);
return Err(gst::FlowError::NotNegotiated);
};
}
"audio/x-alaw" | "audio/x-mulaw" => (),
"audio/x-adpcm" => (),
"application/x-onvif-metadata" => (),
@ -924,6 +967,7 @@ impl MP4Mux {
pre_queue: VecDeque::new(),
caps,
delta_frames,
discard_header_buffers,
chunks: Vec::new(),
pending_buffer: None,
queued_chunk_time: gst::ClockTime::ZERO,
@ -932,6 +976,7 @@ impl MP4Mux {
earliest_pts: None,
end_pts: None,
running_time_utc_time_mapping: None,
extra_header_data: None,
});
}
@ -1144,6 +1189,25 @@ impl AggregatorImpl for MP4Mux {
}
self.parent_sink_event_pre_queue(aggregator_pad, event)
}
EventView::Tag(ev) => {
if let Some(tag_value) = ev.tag().get::<gst::tags::LanguageCode>() {
let lang = tag_value.get();
gst::trace!(CAT, imp: self, "Received language code from tags: {:?}", lang);
// Language as ISO-639-2/T
if lang.len() == 3 && lang.chars().all(|c| c.is_ascii_lowercase()) {
let mut state = self.state.lock().unwrap();
let mut language_code: [u8; 3] = [0; 3];
for (out, c) in Iterator::zip(language_code.iter_mut(), lang.chars()) {
*out = c as u8;
}
state.language_code = Some(language_code);
}
}
self.parent_sink_event_pre_queue(aggregator_pad, event)
}
_ => self.parent_sink_event_pre_queue(aggregator_pad, event),
}
}
@ -1277,7 +1341,15 @@ impl AggregatorImpl for MP4Mux {
// ... and then create the ftyp box plus mdat box header so we can start outputting
// actual data
let ftyp = boxes::create_ftyp(self.obj().class().as_ref().variant).map_err(|err| {
let ftyp = boxes::create_ftyp(
self.obj().class().as_ref().variant,
&state
.streams
.iter()
.map(|s| s.caps.as_ref())
.collect::<Vec<_>>(),
)
.map_err(|err| {
gst::error!(CAT, imp: self, "Failed to create ftyp box: {err}");
gst::FlowError::Error
})?;
@ -1336,6 +1408,7 @@ impl AggregatorImpl for MP4Mux {
earliest_pts,
end_pts,
chunks: stream.chunks,
extra_header_data: stream.extra_header_data.clone(),
});
}
@ -1343,6 +1416,7 @@ impl AggregatorImpl for MP4Mux {
variant: self.obj().class().as_ref().variant,
movie_timescale: settings.movie_timescale,
streams,
language_code: state.language_code,
})
.map_err(|err| {
gst::error!(CAT, imp: self, "Failed to create moov box: {err}");
@ -1523,6 +1597,11 @@ impl ElementImpl for ISOMP4Mux {
.field("channels", gst::IntRange::new(1i32, 8))
.field("rate", gst::IntRange::new(1, i32::MAX))
.build(),
gst::Structure::builder("audio/x-flac")
.field("framed", true)
.field("channels", gst::IntRange::<i32>::new(1, 8))
.field("rate", gst::IntRange::<i32>::new(1, 10 * u16::MAX as i32))
.build(),
]
.into_iter()
.collect::<gst::Caps>(),


@ -11,6 +11,7 @@ use gst::prelude::*;
mod boxes;
mod imp;
mod obu;
glib::wrapper! {
pub(crate) struct MP4MuxPad(ObjectSubclass<imp::MP4MuxPad>) @extends gst_base::AggregatorPad, gst::Pad, gst::Object;
@ -126,6 +127,9 @@ pub(crate) struct Stream {
/// All the chunks stored for this stream
chunks: Vec<Chunk>,
/// More data to be included in the stream's sample entry (e.g. the AV1 sequence header OBU).
extra_header_data: Option<Vec<u8>>,
}
#[derive(Debug)]
@ -135,6 +139,7 @@ pub(crate) struct Header {
/// Pre-defined movie timescale if not 0.
movie_timescale: u32,
streams: Vec<Stream>,
language_code: Option<[u8; 3]>,
}
#[allow(clippy::upper_case_acronyms)]

mux/mp4/src/mp4mux/obu.rs Normal file

@ -0,0 +1,303 @@
//
// Copyright (C) 2022 Vivienne Watermeier <vwatermeier@igalia.com>
//
// This Source Code Form is subject to the terms of the Mozilla Public License, v2.0.
// If a copy of the MPL was not distributed with this file, You can obtain one at
// <https://mozilla.org/MPL/2.0/>.
//
// SPDX-License-Identifier: MPL-2.0
#![allow(non_camel_case_types)]
use bitstream_io::{BigEndian, BitRead, BitReader, Endianness};
use std::io::{self, Cursor, Read, Seek, SeekFrom};
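/// Reads an unsigned LEB128 value (7 data bits per byte, high bit is the
/// continuation flag) and returns `(value, bytes_read)`; e.g. `0x96 0x01`
/// decodes to `(150, 2)`.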
pub fn parse_leb128<R, E>(reader: &mut BitReader<R, E>) -> io::Result<(u32, u32)>
where
R: Read + Seek,
E: Endianness,
{
let mut value = 0;
let mut num_bytes = 0;
for i in 0..8 {
let byte = reader.read::<u32>(8)?;
value |= (byte & 0x7f) << (i * 7);
num_bytes += 1;
if byte & 0x80 == 0 {
break;
}
}
reader.byte_align();
Ok((value, num_bytes))
}
#[derive(Default, Debug, Clone, Copy, PartialEq, Eq)]
pub struct SizedObu {
pub obu_type: ObuType,
pub has_extension: bool,
/// Whether the OBU header is followed by a leb128 size field.
pub has_size_field: bool,
pub temporal_id: u8,
pub spatial_id: u8,
/// Size of the OBU payload in bytes.
/// This may refer to different sizes in different contexts, not always
/// to the entire OBU payload as it is in the AV1 bitstream.
pub size: u32,
/// The number of bytes the leb128 size field will take up
/// when written with write_leb128().
/// This does not imply `has_size_field`, and does not necessarily match with
/// the length of the internal size field if present.
pub leb_size: u32,
pub header_len: u32,
/// Indicates that only part of this OBU has been processed so far.
pub is_fragment: bool,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ObuType {
Reserved,
SequenceHeader,
TemporalDelimiter,
FrameHeader,
TileGroup,
Metadata,
Frame,
RedundantFrameHeader,
TileList,
Padding,
}
impl Default for ObuType {
fn default() -> Self {
Self::Reserved
}
}
impl SizedObu {
/// Parse an OBU header and size field. If the OBU is not expected to contain
/// a size field, but the size is known from external information,
/// parse as an `UnsizedObu` and use `to_sized`.
pub fn parse<R, E>(reader: &mut BitReader<R, E>) -> io::Result<Self>
where
R: Read + Seek,
E: Endianness,
{
// check the forbidden bit
if reader.read_bit()? {
return Err(io::Error::new(
io::ErrorKind::InvalidData,
"forbidden bit in OBU header is set",
));
}
let obu_type = reader.read::<u8>(4)?.into();
let has_extension = reader.read_bit()?;
// require a size field
if !reader.read_bit()? {
return Err(io::Error::new(
io::ErrorKind::InvalidData,
"expected a size field",
));
}
// ignore the reserved bit
let _ = reader.read_bit()?;
let (temporal_id, spatial_id) = if has_extension {
(reader.read::<u8>(3)?, reader.read::<u8>(2)?)
} else {
(0, 0)
};
reader.byte_align();
let (size, leb_size) = parse_leb128(reader)?;
Ok(Self {
obu_type,
has_extension,
has_size_field: true,
temporal_id,
spatial_id,
size,
leb_size,
header_len: has_extension as u32 + 1,
is_fragment: false,
})
}
/// The number of bytes this OBU will take up, including the space needed for
/// its leb128 size field.
pub fn full_size(&self) -> u32 {
self.size + self.leb_size + self.header_len
}
}
pub fn read_seq_header_obu_bytes(data: &[u8]) -> io::Result<Option<Vec<u8>>> {
let mut cursor = Cursor::new(data);
while cursor.position() < data.len() as u64 {
let obu_start = cursor.position();
let Ok(obu) = SizedObu::parse(&mut BitReader::endian(&mut cursor, BigEndian)) else {
break;
};
// set reader to the beginning of the OBU
cursor.seek(SeekFrom::Start(obu_start))?;
if obu.obu_type != ObuType::SequenceHeader {
// Skip the full OBU
cursor.seek(SeekFrom::Current(obu.full_size() as i64))?;
continue;
};
// read the full OBU
let mut bytes = vec![0; obu.full_size() as usize];
cursor.read_exact(&mut bytes)?;
return Ok(Some(bytes));
}
Ok(None)
}
impl From<u8> for ObuType {
fn from(n: u8) -> Self {
assert!(n < 16);
match n {
1 => Self::SequenceHeader,
2 => Self::TemporalDelimiter,
3 => Self::FrameHeader,
4 => Self::TileGroup,
5 => Self::Metadata,
6 => Self::Frame,
7 => Self::RedundantFrameHeader,
8 => Self::TileList,
15 => Self::Padding,
_ => Self::Reserved,
}
}
}
impl From<ObuType> for u8 {
fn from(ty: ObuType) -> Self {
match ty {
ObuType::Reserved => 0,
ObuType::SequenceHeader => 1,
ObuType::TemporalDelimiter => 2,
ObuType::FrameHeader => 3,
ObuType::TileGroup => 4,
ObuType::Metadata => 5,
ObuType::Frame => 6,
ObuType::RedundantFrameHeader => 7,
ObuType::TileList => 8,
ObuType::Padding => 15,
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use bitstream_io::{BigEndian, BitReader};
use once_cell::sync::Lazy;
use std::io::Cursor;
#[allow(clippy::type_complexity)]
static OBUS: Lazy<Vec<(SizedObu, Vec<u8>)>> = Lazy::new(|| {
vec![
(
SizedObu {
obu_type: ObuType::TemporalDelimiter,
has_extension: false,
has_size_field: true,
temporal_id: 0,
spatial_id: 0,
size: 0,
leb_size: 1,
header_len: 1,
is_fragment: false,
},
vec![0b0001_0010, 0b0000_0000],
),
(
SizedObu {
obu_type: ObuType::Padding,
has_extension: false,
has_size_field: true,
temporal_id: 0,
spatial_id: 0,
size: 10,
leb_size: 1,
header_len: 1,
is_fragment: false,
},
vec![0b0111_1010, 0b0000_1010, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
),
(
SizedObu {
obu_type: ObuType::SequenceHeader,
has_extension: true,
has_size_field: true,
temporal_id: 4,
spatial_id: 3,
size: 5,
leb_size: 1,
header_len: 2,
is_fragment: false,
},
vec![0b0000_1110, 0b1001_1000, 0b0000_0101, 1, 2, 3, 4, 5],
),
(
SizedObu {
obu_type: ObuType::Frame,
has_extension: true,
has_size_field: true,
temporal_id: 4,
spatial_id: 3,
size: 5,
leb_size: 1,
header_len: 2,
is_fragment: false,
},
vec![0b0011_0110, 0b1001_1000, 0b0000_0101, 1, 2, 3, 4, 5],
),
]
});
#[test]
fn test_parse_rtp_obu() {
for (idx, (sized_obu, raw_bytes)) in (*OBUS).iter().enumerate() {
println!("running test {idx}...");
let mut reader = BitReader::endian(Cursor::new(&raw_bytes), BigEndian);
let obu_parsed = SizedObu::parse(&mut reader).unwrap();
assert_eq!(&obu_parsed, sized_obu);
if let Some(seq_header_obu_bytes) = read_seq_header_obu_bytes(raw_bytes).unwrap() {
println!("validation of sequence header obu read/write...");
assert_eq!(&seq_header_obu_bytes, raw_bytes);
}
}
}
#[test]
fn test_read_seq_header_from_bitstream() {
let mut bitstream = Vec::new();
let mut seq_header_bytes_raw = None;
for (obu, raw_bytes) in (*OBUS).iter() {
bitstream.extend(raw_bytes);
if obu.obu_type == ObuType::SequenceHeader {
seq_header_bytes_raw = Some(raw_bytes.clone());
}
}
let seq_header_obu_bytes = read_seq_header_obu_bytes(&bitstream).unwrap().unwrap();
assert_eq!(seq_header_obu_bytes, seq_header_bytes_raw.unwrap());
}
}


@ -7,6 +7,8 @@
// SPDX-License-Identifier: MPL-2.0
//
use std::path::Path;
use gst::prelude::*;
use gst_pbutils::prelude::*;
@ -20,33 +22,57 @@ fn init() {
});
}
#[test]
fn test_basic() {
init();
struct Pipeline(gst::Pipeline);
impl std::ops::Deref for Pipeline {
type Target = gst::Pipeline;
struct Pipeline(gst::Pipeline);
impl std::ops::Deref for Pipeline {
type Target = gst::Pipeline;
fn deref(&self) -> &Self::Target {
&self.0
}
fn deref(&self) -> &Self::Target {
&self.0
}
impl Drop for Pipeline {
fn drop(&mut self) {
let _ = self.0.set_state(gst::State::Null);
}
}
impl Drop for Pipeline {
fn drop(&mut self) {
let _ = self.0.set_state(gst::State::Null);
}
}
let pipeline = match gst::parse::launch(
"videotestsrc num-buffers=99 ! x264enc ! mux. \
audiotestsrc num-buffers=140 ! fdkaacenc ! mux. \
isomp4mux name=mux ! filesink name=sink \
",
) {
Ok(pipeline) => Pipeline(pipeline.downcast::<gst::Pipeline>().unwrap()),
Err(_) => return,
impl Pipeline {
fn into_completion(self) {
self.set_state(gst::State::Playing)
.expect("Unable to set the pipeline to the `Playing` state");
for msg in self.bus().unwrap().iter_timed(gst::ClockTime::NONE) {
use gst::MessageView;
match msg.view() {
MessageView::Eos(..) => break,
MessageView::Error(err) => {
panic!(
"Error from {:?}: {} ({:?})",
err.src().map(|s| s.path_string()),
err.error(),
err.debug()
);
}
_ => (),
}
}
self.set_state(gst::State::Null)
.expect("Unable to set the pipeline to the `Null` state");
}
}
fn test_basic_with(video_enc: &str, audio_enc: &str, cb: impl FnOnce(&Path)) {
let Ok(pipeline) = gst::parse::launch(&format!(
"videotestsrc num-buffers=99 ! {video_enc} ! mux. \
audiotestsrc num-buffers=140 ! {audio_enc} ! mux. \
isomp4mux name=mux ! filesink name=sink"
)) else {
println!("could not build encoding pipeline");
return;
};
let pipeline = Pipeline(pipeline.downcast::<gst::Pipeline>().unwrap());
let dir = tempfile::TempDir::new().unwrap();
let mut location = dir.path().to_owned();
@ -54,73 +80,95 @@ fn test_basic() {
let sink = pipeline.by_name("sink").unwrap();
sink.set_property("location", location.to_str().expect("Non-UTF8 filename"));
pipeline.into_completion();
pipeline
.set_state(gst::State::Playing)
.expect("Unable to set the pipeline to the `Playing` state");
for msg in pipeline.bus().unwrap().iter_timed(gst::ClockTime::NONE) {
use gst::MessageView;
match msg.view() {
MessageView::Eos(..) => break,
MessageView::Error(err) => {
panic!(
"Error from {:?}: {} ({:?})",
err.src().map(|s| s.path_string()),
err.error(),
err.debug()
);
}
_ => (),
}
}
pipeline
.set_state(gst::State::Null)
.expect("Unable to set the pipeline to the `Null` state");
drop(pipeline);
let discoverer = gst_pbutils::Discoverer::new(gst::ClockTime::from_seconds(5))
.expect("Failed to create discoverer");
let info = discoverer
.discover_uri(
url::Url::from_file_path(&location)
.expect("Failed to convert filename to URL")
.as_str(),
)
.expect("Failed to discover MP4 file");
assert_eq!(info.duration(), Some(gst::ClockTime::from_mseconds(3_300)));
let audio_streams = info.audio_streams();
assert_eq!(audio_streams.len(), 1);
let audio_stream = &audio_streams[0];
assert_eq!(audio_stream.channels(), 1);
assert_eq!(audio_stream.sample_rate(), 44_100);
let caps = audio_stream.caps().unwrap();
assert!(
caps.can_intersect(
&gst::Caps::builder("audio/mpeg")
.any_features()
.field("mpegversion", 4i32)
.build()
),
"Unexpected audio caps {caps:?}"
);
let video_streams = info.video_streams();
assert_eq!(video_streams.len(), 1);
let video_stream = &video_streams[0];
assert_eq!(video_stream.width(), 320);
assert_eq!(video_stream.height(), 240);
assert_eq!(video_stream.framerate(), gst::Fraction::new(30, 1));
assert_eq!(video_stream.par(), gst::Fraction::new(1, 1));
assert!(!video_stream.is_interlaced());
let caps = video_stream.caps().unwrap();
assert!(
caps.can_intersect(&gst::Caps::builder("video/x-h264").any_features().build()),
"Unexpected video caps {caps:?}"
);
cb(&location)
}
#[test]
fn test_basic_x264_aac() {
init();
test_basic_with("x264enc", "fdkaacenc", |location| {
let discoverer = gst_pbutils::Discoverer::new(gst::ClockTime::from_seconds(5))
.expect("Failed to create discoverer");
let info = discoverer
.discover_uri(
url::Url::from_file_path(location)
.expect("Failed to convert filename to URL")
.as_str(),
)
.expect("Failed to discover MP4 file");
assert_eq!(info.duration(), Some(gst::ClockTime::from_mseconds(3_300)));
let audio_streams = info.audio_streams();
assert_eq!(audio_streams.len(), 1);
let audio_stream = &audio_streams[0];
assert_eq!(audio_stream.channels(), 1);
assert_eq!(audio_stream.sample_rate(), 44_100);
let caps = audio_stream.caps().unwrap();
assert!(
caps.can_intersect(
&gst::Caps::builder("audio/mpeg")
.any_features()
.field("mpegversion", 4i32)
.build()
),
"Unexpected audio caps {caps:?}"
);
let video_streams = info.video_streams();
assert_eq!(video_streams.len(), 1);
let video_stream = &video_streams[0];
assert_eq!(video_stream.width(), 320);
assert_eq!(video_stream.height(), 240);
assert_eq!(video_stream.framerate(), gst::Fraction::new(30, 1));
assert_eq!(video_stream.par(), gst::Fraction::new(1, 1));
assert!(!video_stream.is_interlaced());
let caps = video_stream.caps().unwrap();
assert!(
caps.can_intersect(&gst::Caps::builder("video/x-h264").any_features().build()),
"Unexpected video caps {caps:?}"
);
})
}
#[test]
fn test_roundtrip_vp9_flac() {
init();
test_basic_with("vp9enc ! vp9parse", "flacenc ! flacparse", |location| {
let Ok(pipeline) = gst::parse::launch(
"filesrc name=src ! qtdemux name=demux \
demux.audio_0 ! queue ! flacdec ! fakesink \
demux.video_0 ! queue ! vp9dec ! fakesink",
) else {
panic!("could not build decoding pipeline")
};
let pipeline = Pipeline(pipeline.downcast::<gst::Pipeline>().unwrap());
pipeline
.by_name("src")
.unwrap()
.set_property("location", location.display().to_string());
pipeline.into_completion();
})
}
#[test]
fn test_roundtrip_av1_aac() {
init();
test_basic_with("av1enc ! av1parse", "avenc_aac ! aacparse", |location| {
let Ok(pipeline) = gst::parse::launch(
"filesrc name=src ! qtdemux name=demux \
demux.audio_0 ! queue ! avdec_aac ! fakesink \
demux.video_0 ! queue ! av1dec ! fakesink",
) else {
panic!("could not build decoding pipeline")
};
let pipeline = Pipeline(pipeline.downcast::<gst::Pipeline>().unwrap());
pipeline
.by_name("src")
.unwrap()
.set_property("location", location.display().to_string());
pipeline.into_completion();
})
}


@ -12,7 +12,7 @@ rust-version.workspace = true
[dependencies]
async-stream = "0.3.4"
base32 = "0.4"
base32 = "0.5"
aws-config = "1.0"
aws-sdk-s3 = "1.0"
aws-sdk-transcribestreaming = "1.0"


@ -18,7 +18,7 @@ mod s3hlssink;
mod s3sink;
mod s3src;
mod s3url;
mod s3utils;
pub mod s3utils;
mod transcribe_parse;
mod transcriber;


@ -39,6 +39,7 @@ const S3_CHANNEL_SIZE: usize = 32;
const S3_ACL_DEFAULT: ObjectCannedAcl = ObjectCannedAcl::Private;
const DEFAULT_RETRY_ATTEMPTS: u32 = 5;
const DEFAULT_TIMEOUT_IN_MSECS: u64 = 15000;
const DEFAULT_FORCE_PATH_STYLE: bool = false;
struct Settings {
access_key: Option<String>,
@ -57,6 +58,7 @@ struct Settings {
video_sink: bool,
config: Option<SdkConfig>,
endpoint_uri: Option<String>,
force_path_style: bool,
}
impl Default for Settings {
@ -79,6 +81,7 @@ impl Default for Settings {
video_sink: false,
config: None,
endpoint_uri: None,
force_path_style: DEFAULT_FORCE_PATH_STYLE,
}
}
}
@ -276,10 +279,9 @@ impl S3HlsSink {
gst::error!(
CAT,
imp: self,
"Put object request for S3 key {} of data length {} failed with error {:?}",
"Put object request for S3 key {} of data length {} failed with error {err}",
s3_key,
s3_data_len,
err,
);
element_imp_error!(
self,
@ -320,9 +322,8 @@ impl S3HlsSink {
gst::error!(
CAT,
imp: self,
"Delete object request for S3 key {} failed with error {:?}",
"Delete object request for S3 key {} failed with error {err}",
s3_key,
err
);
element_imp_error!(
self,
@ -378,6 +379,7 @@ impl S3HlsSink {
let sdk_config = settings.config.as_ref().expect("SDK config must be set");
let config_builder = config::Builder::from(sdk_config)
.force_path_style(settings.force_path_style)
.region(settings.s3_region.clone())
.retry_config(RetryConfig::standard().with_max_attempts(settings.retry_attempts));
@ -531,6 +533,11 @@ impl ObjectImpl for S3HlsSink {
.blurb("The S3 endpoint URI to use")
.mutable_ready()
.build(),
glib::ParamSpecBoolean::builder("force-path-style")
.nick("Force path style")
.blurb("Force client to use path-style addressing for buckets")
.default_value(DEFAULT_FORCE_PATH_STYLE)
.build(),
]
});
@ -588,6 +595,9 @@ impl ObjectImpl for S3HlsSink {
.get::<Option<String>>()
.expect("type checked upstream");
}
"force-path-style" => {
settings.force_path_style = value.get::<bool>().expect("type checked upstream");
}
_ => unimplemented!(),
}
}
@ -608,6 +618,7 @@ impl ObjectImpl for S3HlsSink {
"request-timeout" => (settings.request_timeout.as_millis() as u64).to_value(),
"stats" => self.create_stats().to_value(),
"endpoint-uri" => settings.endpoint_uri.to_value(),
"force-path-style" => settings.force_path_style.to_value(),
_ => unimplemented!(),
}
}
@ -642,10 +653,7 @@ impl ObjectImpl for S3HlsSink {
self.hlssink.connect("get-playlist-stream", false, {
let self_weak = self.downgrade();
move |args| -> Option<glib::Value> {
let Some(self_) = self_weak.upgrade() else {
return None;
};
let self_ = self_weak.upgrade()?;
let s3client = self_.s3client_from_settings();
let settings = self_.settings.lock().unwrap();
let mut state = self_.state.lock().unwrap();
@ -676,10 +684,7 @@ impl ObjectImpl for S3HlsSink {
self.hlssink.connect("get-fragment-stream", false, {
let self_weak = self.downgrade();
move |args| -> Option<glib::Value> {
let Some(self_) = self_weak.upgrade() else {
return None;
};
let self_ = self_weak.upgrade()?;
let s3client = self_.s3client_from_settings();
let settings = self_.settings.lock().unwrap();
let mut state = self_.state.lock().unwrap();
@ -710,10 +715,7 @@ impl ObjectImpl for S3HlsSink {
self.hlssink.connect("delete-fragment", false, {
let self_weak = self.downgrade();
move |args| -> Option<glib::Value> {
let Some(self_) = self_weak.upgrade() else {
return None;
};
let self_ = self_weak.upgrade()?;
let s3_client = self_.s3client_from_settings();
let settings = self_.settings.lock().unwrap();


@ -14,6 +14,7 @@ use gst_base::subclass::prelude::*;
use aws_sdk_s3::{
config::{self, retry::RetryConfig, Credentials, Region},
error::ProvideErrorMetadata,
operation::{
abort_multipart_upload::builders::AbortMultipartUploadFluentBuilder,
complete_multipart_upload::builders::CompleteMultipartUploadFluentBuilder,
@ -37,6 +38,7 @@ use crate::s3utils::{self, duration_from_millis, duration_to_millis, WaitError};
use super::OnError;
const DEFAULT_FORCE_PATH_STYLE: bool = false;
const DEFAULT_RETRY_ATTEMPTS: u32 = 5;
const DEFAULT_BUFFER_SIZE: u64 = 5 * 1024 * 1024;
const DEFAULT_MULTIPART_UPLOAD_ON_ERROR: OnError = OnError::DoNothing;
@ -113,6 +115,7 @@ struct Settings {
multipart_upload_on_error: OnError,
request_timeout: Duration,
endpoint_uri: Option<String>,
force_path_style: bool,
}
impl Settings {
@ -167,6 +170,7 @@ impl Default for Settings {
multipart_upload_on_error: DEFAULT_MULTIPART_UPLOAD_ON_ERROR,
request_timeout: Duration::from_millis(DEFAULT_REQUEST_TIMEOUT_MSEC),
endpoint_uri: None,
force_path_style: DEFAULT_FORCE_PATH_STYLE,
}
}
}
@ -265,7 +269,7 @@ impl S3Sink {
self.flush_multipart_upload(state);
Some(gst::error_msg!(
gst::ResourceError::OpenWrite,
["Failed to upload part: {}", err]
["Failed to upload part: {err}: {}", err.meta()]
))
}
WaitError::Cancelled => None,
@ -407,7 +411,7 @@ impl S3Sink {
WaitError::FutureError(err) => {
gst::error_msg!(
gst::ResourceError::Write,
["Failed to abort multipart upload: {}.", err.to_string()]
["Failed to abort multipart upload: {err}: {}", err.meta()]
)
}
WaitError::Cancelled => {
@ -431,7 +435,7 @@ impl S3Sink {
.map_err(|err| match err {
WaitError::FutureError(err) => gst::error_msg!(
gst::ResourceError::Write,
["Failed to complete multipart upload: {}.", err.to_string()]
["Failed to complete multipart upload: {err}: {}", err.meta()]
),
WaitError::Cancelled => {
gst::error_msg!(
@ -512,7 +516,7 @@ impl S3Sink {
.map_err(|err| match err {
WaitError::FutureError(err) => gst::error_msg!(
gst::ResourceError::OpenWrite,
["Failed to create SDK config: {}", err]
["Failed to create SDK config: {err}"]
),
WaitError::Cancelled => {
gst::error_msg!(
@ -523,6 +527,7 @@ impl S3Sink {
})?;
let config_builder = config::Builder::from(&sdk_config)
.force_path_style(settings.force_path_style)
.retry_config(RetryConfig::standard().with_max_attempts(settings.retry_attempts));
let config = if let Some(ref uri) = settings.endpoint_uri {
@ -541,7 +546,7 @@ impl S3Sink {
|err| match err {
WaitError::FutureError(err) => gst::error_msg!(
gst::ResourceError::OpenWrite,
["Failed to create multipart upload: {}", err]
["Failed to create multipart upload: {err}: {}", err.meta()]
),
WaitError::Cancelled => {
gst::error_msg!(
@ -774,6 +779,11 @@ impl ObjectImpl for S3Sink {
.nick("content-disposition")
.blurb("Content-Disposition header to set for uploaded object")
.build(),
glib::ParamSpecBoolean::builder("force-path-style")
.nick("Force path style")
.blurb("Force client to use path-style addressing for buckets")
.default_value(DEFAULT_FORCE_PATH_STYLE)
.build(),
]
});
@ -887,6 +897,9 @@ impl ObjectImpl for S3Sink {
.get::<Option<String>>()
.expect("type checked upstream");
}
"force-path-style" => {
settings.force_path_style = value.get::<bool>().expect("type checked upstream");
}
_ => unimplemented!(),
}
}
@ -928,6 +941,7 @@ impl ObjectImpl for S3Sink {
"endpoint-uri" => settings.endpoint_uri.to_value(),
"content-type" => settings.content_type.to_value(),
"content-disposition" => settings.content_disposition.to_value(),
"force-path-style" => settings.force_path_style.to_value(),
_ => unimplemented!(),
}
}
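For instance, to target a path-style-only endpoint such as a local MinIO instance (hypothetical bucket and endpoint values):

gst-launch-1.0 filesrc location=in.mp4 ! awss3sink uri=s3://us-west-2/my-bucket/in.mp4 endpoint-uri=http://127.0.0.1:9000 force-path-style=true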


@ -15,6 +15,7 @@ use gst_base::subclass::prelude::*;
use aws_sdk_s3::{
config::{self, retry::RetryConfig, Credentials, Region},
error::ProvideErrorMetadata,
operation::put_object::builders::PutObjectFluentBuilder,
primitives::ByteStream,
Client,
@ -35,6 +36,7 @@ const DEFAULT_FLUSH_INTERVAL_BUFFERS: u64 = 1;
const DEFAULT_FLUSH_INTERVAL_BYTES: u64 = 0;
const DEFAULT_FLUSH_INTERVAL_TIME: gst::ClockTime = gst::ClockTime::from_nseconds(0);
const DEFAULT_FLUSH_ON_ERROR: bool = false;
const DEFAULT_FORCE_PATH_STYLE: bool = false;
// General setting for create / abort requests
const DEFAULT_REQUEST_TIMEOUT_MSEC: u64 = 15_000;
@ -79,6 +81,7 @@ struct Settings {
retry_attempts: u32,
request_timeout: Duration,
endpoint_uri: Option<String>,
force_path_style: bool,
flush_interval_buffers: u64,
flush_interval_bytes: u64,
flush_interval_time: Option<gst::ClockTime>,
@ -135,6 +138,7 @@ impl Default for Settings {
retry_attempts: DEFAULT_RETRY_ATTEMPTS,
request_timeout: Duration::from_millis(DEFAULT_REQUEST_TIMEOUT_MSEC),
endpoint_uri: None,
force_path_style: DEFAULT_FORCE_PATH_STYLE,
flush_interval_buffers: DEFAULT_FLUSH_INTERVAL_BUFFERS,
flush_interval_bytes: DEFAULT_FLUSH_INTERVAL_BYTES,
flush_interval_time: Some(DEFAULT_FLUSH_INTERVAL_TIME),
@ -202,7 +206,7 @@ impl S3PutObjectSink {
s3utils::wait(&self.canceller, put_object_req_future).map_err(|err| match err {
WaitError::FutureError(err) => Some(gst::error_msg!(
gst::ResourceError::OpenWrite,
["Failed to upload object: {}", err]
["Failed to upload object: {err}: {}", err.meta()]
)),
WaitError::Cancelled => None,
})?;
@ -292,6 +296,7 @@ impl S3PutObjectSink {
})?;
let config_builder = config::Builder::from(&sdk_config)
.force_path_style(settings.force_path_style)
.retry_config(RetryConfig::standard().with_max_attempts(settings.retry_attempts));
let config = if let Some(ref uri) = settings.endpoint_uri {
@ -445,6 +450,11 @@ impl ObjectImpl for S3PutObjectSink {
.blurb("Whether to write out the data on error (like stopping without an EOS)")
.default_value(DEFAULT_FLUSH_ON_ERROR)
.build(),
glib::ParamSpecBoolean::builder("force-path-style")
.nick("Force path style")
.blurb("Force client to use path-style addressing for buckets")
.default_value(DEFAULT_FORCE_PATH_STYLE)
.build(),
]
});
@ -541,6 +551,9 @@ impl ObjectImpl for S3PutObjectSink {
"flush-on-error" => {
settings.flush_on_error = value.get::<bool>().expect("type checked upstream");
}
"force-path-style" => {
settings.force_path_style = value.get::<bool>().expect("type checked upstream");
}
_ => unimplemented!(),
}
}
@ -574,6 +587,7 @@ impl ObjectImpl for S3PutObjectSink {
"flush-interval-bytes" => settings.flush_interval_bytes.to_value(),
"flush-interval-time" => settings.flush_interval_time.to_value(),
"flush-on-error" => settings.flush_on_error.to_value(),
"force-path-style" => settings.force_path_style.to_value(),
_ => unimplemented!(),
}
}


@ -14,6 +14,7 @@ use std::time::Duration;
use aws_sdk_s3::{
config::{self, retry::RetryConfig, Credentials},
error::ProvideErrorMetadata,
Client,
};
@ -28,6 +29,7 @@ use gst_base::subclass::prelude::*;
use crate::s3url::*;
use crate::s3utils::{self, duration_from_millis, duration_to_millis, WaitError};
const DEFAULT_FORCE_PATH_STYLE: bool = false;
const DEFAULT_RETRY_ATTEMPTS: u32 = 5;
const DEFAULT_REQUEST_TIMEOUT_MSEC: u64 = 15000;
const DEFAULT_RETRY_DURATION_MSEC: u64 = 60_000;
@ -52,6 +54,7 @@ struct Settings {
retry_attempts: u32,
request_timeout: Duration,
endpoint_uri: Option<String>,
force_path_style: bool,
}
impl Default for Settings {
@ -65,6 +68,7 @@ impl Default for Settings {
retry_attempts: DEFAULT_RETRY_ATTEMPTS,
request_timeout: duration,
endpoint_uri: None,
force_path_style: DEFAULT_FORCE_PATH_STYLE,
}
}
}
@ -127,6 +131,7 @@ impl S3Src {
})?;
let config_builder = config::Builder::from(&sdk_config)
.force_path_style(settings.force_path_style)
.retry_config(RetryConfig::standard().with_max_attempts(settings.retry_attempts));
let config = if let Some(ref uri) = settings.endpoint_uri {
@ -184,7 +189,7 @@ impl S3Src {
s3utils::wait(&self.canceller, head_object_future).map_err(|err| match err {
WaitError::FutureError(err) => gst::error_msg!(
gst::ResourceError::NotFound,
["Failed to get HEAD object: {:?}", err]
["Failed to get HEAD object: {err}: {}", err.meta()]
),
WaitError::Cancelled => {
gst::error_msg!(
@ -243,7 +248,7 @@ impl S3Src {
s3utils::wait(&self.canceller, get_object_future).map_err(|err| match err {
WaitError::FutureError(err) => Some(gst::error_msg!(
gst::ResourceError::Read,
["Could not read: {}", err]
["Could not read: {err}: {}", err.meta()]
)),
WaitError::Cancelled => None,
})?;
@ -253,7 +258,7 @@ impl S3Src {
s3utils::wait_stream(&self.canceller, &mut output.body).map_err(|err| match err {
WaitError::FutureError(err) => Some(gst::error_msg!(
gst::ResourceError::Read,
["Could not read: {}", err]
["Could not read: {err}"]
)),
WaitError::Cancelled => None,
})
@ -315,6 +320,11 @@ impl ObjectImpl for S3Src {
.nick("S3 endpoint URI")
.blurb("The S3 endpoint URI to use")
.build(),
glib::ParamSpecBoolean::builder("force-path-style")
.nick("Force path style")
.blurb("Force client to use path-style addressing for buckets")
.default_value(DEFAULT_FORCE_PATH_STYLE)
.build(),
]
});
@ -364,6 +374,9 @@ impl ObjectImpl for S3Src {
.get::<Option<String>>()
.expect("type checked upstream");
}
"force-path-style" => {
settings.force_path_style = value.get::<bool>().expect("type checked upstream");
}
_ => unimplemented!(),
}
}
@ -390,6 +403,7 @@ impl ObjectImpl for S3Src {
}
"retry-attempts" => settings.retry_attempts.to_value(),
"endpoint-uri" => settings.endpoint_uri.to_value(),
"force-path-style" => settings.force_path_style.to_value(),
_ => unimplemented!(),
}
}


@ -25,9 +25,10 @@ const FRAGMENT: &AsciiSet = &CONTROLS.add(b' ').add(b'"').add(b'<').add(b'>').ad
const PATH: &AsciiSet = &FRAGMENT.add(b'#').add(b'?').add(b'{').add(b'}');
const PATH_SEGMENT: &AsciiSet = &PATH.add(b'/').add(b'%');
impl ToString for GstS3Url {
fn to_string(&self) -> String {
format!(
impl std::fmt::Display for GstS3Url {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(
f,
"s3://{}/{}/{}{}",
self.region,
self.bucket,
@ -59,9 +60,9 @@ pub fn parse_s3_url(url_str: &str) -> Result<GstS3Url, String> {
.or_else(|_| {
let (name, endpoint) = host.split_once('+').ok_or(())?;
let name =
base32::decode(base32::Alphabet::RFC4648 { padding: true }, name).ok_or(())?;
base32::decode(base32::Alphabet::Rfc4648 { padding: true }, name).ok_or(())?;
let endpoint =
base32::decode(base32::Alphabet::RFC4648 { padding: true }, endpoint).ok_or(())?;
base32::decode(base32::Alphabet::Rfc4648 { padding: true }, endpoint).ok_or(())?;
let name = String::from_utf8(name).map_err(|_| ())?;
let endpoint = String::from_utf8(endpoint).map_err(|_| ())?;
Ok(format!("{name}{endpoint}"))


@ -9,6 +9,7 @@
use aws_config::meta::region::RegionProviderChain;
use aws_sdk_s3::{
config::{timeout::TimeoutConfig, Credentials, Region},
error::ProvideErrorMetadata,
primitives::{ByteStream, ByteStreamError},
};
use aws_types::sdk_config::SdkConfig;
@ -16,11 +17,15 @@ use aws_types::sdk_config::SdkConfig;
use bytes::{buf::BufMut, Bytes, BytesMut};
use futures::{future, Future};
use once_cell::sync::Lazy;
use std::fmt;
use std::sync::Mutex;
use std::time::Duration;
use tokio::runtime;
const DEFAULT_S3_REGION: &str = "us-west-2";
pub const DEFAULT_S3_REGION: &str = "us-west-2";
pub static AWS_BEHAVIOR_VERSION: Lazy<aws_config::BehaviorVersion> =
Lazy::new(aws_config::BehaviorVersion::v2023_11_09);
static RUNTIME: Lazy<runtime::Runtime> = Lazy::new(|| {
runtime::Builder::new_multi_thread()
@ -37,6 +42,15 @@ pub enum WaitError<E> {
FutureError(E),
}
impl<E: ProvideErrorMetadata + std::error::Error> fmt::Display for WaitError<E> {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
WaitError::Cancelled => f.write_str("Cancelled"),
WaitError::FutureError(err) => write!(f, "{err}: {}", err.meta()),
}
}
}
pub fn wait<F, T, E>(
canceller: &Mutex<Option<future::AbortHandle>>,
future: F,
@ -111,12 +125,12 @@ pub fn wait_config(
.or_default_provider()
.or_else(Region::new(DEFAULT_S3_REGION));
let config_future = match credentials {
Some(cred) => aws_config::defaults(aws_config::BehaviorVersion::latest())
Some(cred) => aws_config::defaults(AWS_BEHAVIOR_VERSION.clone())
.timeout_config(timeout_config)
.region(region_provider)
.credentials_provider(cred)
.load(),
None => aws_config::defaults(aws_config::BehaviorVersion::latest())
None => aws_config::defaults(AWS_BEHAVIOR_VERSION.clone())
.timeout_config(timeout_config)
.region(region_provider)
.load(),


@ -47,6 +47,9 @@ static RUNTIME: Lazy<runtime::Runtime> = Lazy::new(|| {
.unwrap()
});
static AWS_BEHAVIOR_VERSION: Lazy<aws_config::BehaviorVersion> =
Lazy::new(aws_config::BehaviorVersion::v2023_11_09);
const DEFAULT_TRANSCRIBER_REGION: &str = "us-east-1";
// Deprecated in 0.11.0: due to evolutions of the transcriber element,
@ -531,13 +534,14 @@ impl Transcriber {
fn prepare(&self) -> Result<(), gst::ErrorMessage> {
gst::debug!(CAT, imp: self, "Preparing");
let (access_key, secret_access_key, session_token);
{
let (access_key, secret_access_key, session_token) = {
let settings = self.settings.lock().unwrap();
access_key = settings.access_key.to_owned();
secret_access_key = settings.secret_access_key.to_owned();
session_token = settings.session_token.to_owned();
}
(
settings.access_key.clone(),
settings.secret_access_key.clone(),
settings.session_token.clone(),
)
};
gst::info!(CAT, imp: self, "Loading aws config...");
let _enter_guard = RUNTIME.enter();
@@ -545,7 +549,7 @@ impl Transcriber {
let config_loader = match (access_key, secret_access_key) {
(Some(key), Some(secret_key)) => {
gst::debug!(CAT, imp: self, "Using settings credentials");
- aws_config::ConfigLoader::default().credentials_provider(
+ aws_config::defaults(AWS_BEHAVIOR_VERSION.clone()).credentials_provider(
aws_transcribe::config::Credentials::new(
key,
secret_key,
@@ -557,7 +561,7 @@
}
_ => {
gst::debug!(CAT, imp: self, "Attempting to get credentials from env...");
- aws_config::defaults(aws_config::BehaviorVersion::latest())
+ aws_config::defaults(AWS_BEHAVIOR_VERSION.clone())
}
};


@@ -11,6 +11,7 @@ use gst::subclass::prelude::*;
use gst::{glib, prelude::*};
use aws_sdk_transcribestreaming as aws_transcribe;
+ use aws_sdk_transcribestreaming::error::ProvideErrorMetadata;
use aws_sdk_transcribestreaming::types;
use futures::channel::mpsc;
@@ -165,7 +166,7 @@ impl TranscriberStream {
.send()
.await
.map_err(|err| {
- let err = format!("Transcribe ws init error: {err}");
+ let err = format!("Transcribe ws init error: {err}: {}", err.meta());
gst::error!(CAT, imp: imp, "{err}");
gst::error_msg!(gst::LibraryError::Init, ["{err}"])
})?;
@@ -187,7 +188,7 @@ impl TranscriberStream {
.recv()
.await
.map_err(|err| {
- let err = format!("Transcribe ws stream error: {err}");
+ let err = format!("Transcribe ws stream error: {err}: {}", err.meta());
gst::error!(CAT, imp: self.imp, "{err}");
gst::error_msg!(gst::LibraryError::Failed, ["{err}"])
})?;

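Appending err.meta() to the formatted message surfaces the AWS error code and message that the plain Display output often omits. The shape of the helper this enables, as a hedged sketch using the same ProvideErrorMetadata trait:

    use aws_sdk_transcribestreaming::error::ProvideErrorMetadata;

    // Combine the human-readable error with the service's error metadata.
    fn describe<E: ProvideErrorMetadata + std::error::Error>(err: &E) -> String {
        format!("{err}: {}", err.meta())
    }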

@@ -10,6 +10,7 @@ use gst::glib;
use gst::subclass::prelude::*;
use aws_sdk_translate as aws_translate;
+ use aws_sdk_translate::error::ProvideErrorMetadata;
use futures::channel::mpsc;
use futures::prelude::*;
@@ -78,7 +79,10 @@ impl TranslateLoop {
pub async fn check_language(&self) -> Result<(), gst::ErrorMessage> {
let language_list = self.client.list_languages().send().await.map_err(|err| {
- let err = format!("Failed to call list_languages service: {err}");
+ let err = format!(
+ "Failed to call list_languages service: {err}: {}",
+ err.meta()
+ );
gst::info!(CAT, imp: self.pad, "{err}");
gst::error_msg!(gst::LibraryError::Failed, ["{err}"])
})?;
@@ -143,7 +147,7 @@ impl TranslateLoop {
.send()
.await
.map_err(|err| {
- let err = format!("Failed to call translation service: {err}");
+ let err = format!("Failed to call translation service: {err}: {}", err.meta());
gst::info!(CAT, imp: self.pad, "{err}");
gst::error_msg!(gst::LibraryError::Failed, ["{err}"])
})?


@@ -14,7 +14,7 @@
mod tests {
use gst::prelude::*;
- const DEFAULT_S3_REGION: &str = "us-west-2";
+ use gstaws::s3utils::{AWS_BEHAVIOR_VERSION, DEFAULT_S3_REGION};
fn init() {
use std::sync::Once;
@@ -40,7 +40,7 @@ mod tests {
)
.or_default_provider();
- let config = aws_config::defaults(aws_config::BehaviorVersion::latest())
+ let config = aws_config::defaults(AWS_BEHAVIOR_VERSION.clone())
.region(region_provider)
.load()
.await;


@@ -0,0 +1,557 @@
// Copyright (C) 2021 Rafael Caricio <rafael@caricio.com>
// Copyright (C) 2023 Seungha Yang <seungha@centricular.com>
//
// This Source Code Form is subject to the terms of the Mozilla Public License, v2.0.
// If a copy of the MPL was not distributed with this file, You can obtain one at
// <https://mozilla.org/MPL/2.0/>.
//
// SPDX-License-Identifier: MPL-2.0
use crate::playlist::Playlist;
use chrono::{DateTime, Duration, Utc};
use gio::prelude::*;
use gst::glib;
use gst::prelude::*;
use gst::subclass::prelude::*;
use m3u8_rs::MediaSegment;
use once_cell::sync::Lazy;
use std::fs;
use std::io::Write;
use std::path;
use std::sync::Mutex;
const DEFAULT_PLAYLIST_LOCATION: &str = "playlist.m3u8";
const DEFAULT_MAX_NUM_SEGMENT_FILES: u32 = 10;
const DEFAULT_PLAYLIST_LENGTH: u32 = 5;
const DEFAULT_PROGRAM_DATE_TIME_TAG: bool = false;
const DEFAULT_CLOCK_TRACKING_FOR_PDT: bool = true;
const DEFAULT_ENDLIST: bool = true;
const SIGNAL_GET_PLAYLIST_STREAM: &str = "get-playlist-stream";
const SIGNAL_GET_FRAGMENT_STREAM: &str = "get-fragment-stream";
const SIGNAL_DELETE_FRAGMENT: &str = "delete-fragment";
static CAT: Lazy<gst::DebugCategory> = Lazy::new(|| {
gst::DebugCategory::new(
"hlsbasesink",
gst::DebugColorFlags::empty(),
Some("HLS Base sink"),
)
});
struct Settings {
playlist_location: String,
playlist_root: Option<String>,
playlist_length: u32,
max_num_segment_files: usize,
enable_program_date_time: bool,
pdt_follows_pipeline_clock: bool,
enable_endlist: bool,
}
impl Default for Settings {
fn default() -> Self {
Self {
playlist_location: String::from(DEFAULT_PLAYLIST_LOCATION),
playlist_root: None,
playlist_length: DEFAULT_PLAYLIST_LENGTH,
max_num_segment_files: DEFAULT_MAX_NUM_SEGMENT_FILES as usize,
enable_program_date_time: DEFAULT_PROGRAM_DATE_TIME_TAG,
pdt_follows_pipeline_clock: DEFAULT_CLOCK_TRACKING_FOR_PDT,
enable_endlist: DEFAULT_ENDLIST,
}
}
}
pub struct PlaylistContext {
pdt_base_utc: Option<DateTime<Utc>>,
pdt_base_running_time: Option<gst::ClockTime>,
playlist: Playlist,
old_segment_locations: Vec<String>,
segment_template: String,
playlist_location: String,
max_num_segment_files: usize,
playlist_length: u32,
}
#[derive(Default)]
pub struct State {
context: Option<PlaylistContext>,
}
#[derive(Default)]
pub struct HlsBaseSink {
settings: Mutex<Settings>,
state: Mutex<State>,
}
#[glib::object_subclass]
impl ObjectSubclass for HlsBaseSink {
const NAME: &'static str = "GstHlsBaseSink";
type Type = super::HlsBaseSink;
type ParentType = gst::Bin;
}
pub trait HlsBaseSinkImpl: BinImpl {}
unsafe impl<T: HlsBaseSinkImpl> IsSubclassable<T> for super::HlsBaseSink {}
impl ObjectImpl for HlsBaseSink {
fn constructed(&self) {
self.parent_constructed();
let obj = self.obj();
obj.set_suppressed_flags(gst::ElementFlags::SINK | gst::ElementFlags::SOURCE);
obj.set_element_flags(gst::ElementFlags::SINK);
}
fn properties() -> &'static [glib::ParamSpec] {
static PROPERTIES: Lazy<Vec<glib::ParamSpec>> = Lazy::new(|| {
vec![
glib::ParamSpecString::builder("playlist-location")
.nick("Playlist Location")
.blurb("Location of the playlist to write.")
.default_value(Some(DEFAULT_PLAYLIST_LOCATION))
.build(),
glib::ParamSpecString::builder("playlist-root")
.nick("Playlist Root")
.blurb("Base path for the segments in the playlist file.")
.build(),
glib::ParamSpecUInt::builder("max-files")
.nick("Max files")
.blurb("Maximum number of files to keep on disk. Once the maximum is reached, old files start to be deleted to make room for new ones.")
.build(),
glib::ParamSpecUInt::builder("playlist-length")
.nick("Playlist length")
.blurb("Length of HLS playlist. To allow players to conform to section 6.3.3 of the HLS specification, this should be at least 3. If set to 0, the playlist will be infinite.")
.default_value(DEFAULT_PLAYLIST_LENGTH)
.build(),
glib::ParamSpecBoolean::builder("enable-program-date-time")
.nick("add EXT-X-PROGRAM-DATE-TIME tag")
.blurb("put EXT-X-PROGRAM-DATE-TIME tag in the playlist")
.default_value(DEFAULT_PROGRAM_DATE_TIME_TAG)
.build(),
glib::ParamSpecBoolean::builder("pdt-follows-pipeline-clock")
.nick("Whether Program-Date-Time should follow the pipeline clock")
.blurb("As there might be drift between the wallclock and pipeline clock, this controls whether the Program-Date-Time markers should follow the pipeline clock rate (true), or be skewed to match the wallclock rate (false).")
.default_value(DEFAULT_CLOCK_TRACKING_FOR_PDT)
.build(),
glib::ParamSpecBoolean::builder("enable-endlist")
.nick("Enable Endlist")
.blurb("Write \"EXT-X-ENDLIST\" tag to manifest at the end of stream")
.default_value(DEFAULT_ENDLIST)
.build(),
]
});
PROPERTIES.as_ref()
}
fn set_property(&self, _id: usize, value: &glib::Value, pspec: &glib::ParamSpec) {
let mut settings = self.settings.lock().unwrap();
match pspec.name() {
"playlist-location" => {
settings.playlist_location = value
.get::<Option<String>>()
.expect("type checked upstream")
.unwrap_or_else(|| String::from(DEFAULT_PLAYLIST_LOCATION));
}
"playlist-root" => {
settings.playlist_root = value
.get::<Option<String>>()
.expect("type checked upstream");
}
"max-files" => {
let max_files: u32 = value.get().expect("type checked upstream");
settings.max_num_segment_files = max_files as usize;
}
"playlist-length" => {
settings.playlist_length = value.get().expect("type checked upstream");
}
"enable-program-date-time" => {
settings.enable_program_date_time = value.get().expect("type checked upstream");
}
"pdt-follows-pipeline-clock" => {
settings.pdt_follows_pipeline_clock = value.get().expect("type checked upstream");
}
"enable-endlist" => {
settings.enable_endlist = value.get().expect("type checked upstream");
}
_ => unimplemented!(),
};
}
fn property(&self, _id: usize, pspec: &glib::ParamSpec) -> glib::Value {
let settings = self.settings.lock().unwrap();
match pspec.name() {
"playlist-location" => settings.playlist_location.to_value(),
"playlist-root" => settings.playlist_root.to_value(),
"max-files" => {
let max_files = settings.max_num_segment_files as u32;
max_files.to_value()
}
"playlist-length" => settings.playlist_length.to_value(),
"enable-program-date-time" => settings.enable_program_date_time.to_value(),
"pdt-follows-pipeline-clock" => settings.pdt_follows_pipeline_clock.to_value(),
"enable-endlist" => settings.enable_endlist.to_value(),
_ => unimplemented!(),
}
}
fn signals() -> &'static [glib::subclass::Signal] {
static SIGNALS: Lazy<Vec<glib::subclass::Signal>> = Lazy::new(|| {
vec![
glib::subclass::Signal::builder(SIGNAL_GET_PLAYLIST_STREAM)
.param_types([String::static_type()])
.return_type::<Option<gio::OutputStream>>()
.class_handler(|_, args| {
let elem = args[0].get::<super::HlsBaseSink>().expect("signal arg");
let playlist_location = args[1].get::<String>().expect("signal arg");
let imp = elem.imp();
Some(imp.new_file_stream(&playlist_location).ok().to_value())
})
.accumulator(|_hint, ret, value| {
// First signal handler wins
*ret = value.clone();
false
})
.build(),
glib::subclass::Signal::builder(SIGNAL_GET_FRAGMENT_STREAM)
.param_types([String::static_type()])
.return_type::<Option<gio::OutputStream>>()
.class_handler(|_, args| {
let elem = args[0].get::<super::HlsBaseSink>().expect("signal arg");
let fragment_location = args[1].get::<String>().expect("signal arg");
let imp = elem.imp();
Some(imp.new_file_stream(&fragment_location).ok().to_value())
})
.accumulator(|_hint, ret, value| {
// First signal handler wins
*ret = value.clone();
false
})
.build(),
glib::subclass::Signal::builder(SIGNAL_DELETE_FRAGMENT)
.param_types([String::static_type()])
.return_type::<bool>()
.class_handler(|_, args| {
let elem = args[0].get::<super::HlsBaseSink>().expect("signal arg");
let fragment_location = args[1].get::<String>().expect("signal arg");
let imp = elem.imp();
imp.delete_fragment(&fragment_location);
Some(true.to_value())
})
.accumulator(|_hint, ret, value| {
// First signal handler wins
*ret = value.clone();
false
})
.build(),
]
});
SIGNALS.as_ref()
}
}
impl GstObjectImpl for HlsBaseSink {}
impl ElementImpl for HlsBaseSink {
fn change_state(
&self,
transition: gst::StateChange,
) -> Result<gst::StateChangeSuccess, gst::StateChangeError> {
let ret = self.parent_change_state(transition)?;
match transition {
gst::StateChange::PlayingToPaused => {
let mut state = self.state.lock().unwrap();
if let Some(context) = state.context.as_mut() {
// Reset the mapping from running time to UTC during pause:
// running time stops while UTC keeps moving, so the mapping
// must be calculated again when playback resumes
context.pdt_base_running_time = None;
context.pdt_base_utc = None
}
}
gst::StateChange::PausedToReady => {
self.close_playlist();
}
_ => (),
}
Ok(ret)
}
}
impl BinImpl for HlsBaseSink {}
impl HlsBaseSinkImpl for HlsBaseSink {}
impl HlsBaseSink {
pub fn open_playlist(&self, playlist: Playlist, segment_template: String) {
let mut state = self.state.lock().unwrap();
let settings = self.settings.lock().unwrap();
state.context = Some(PlaylistContext {
pdt_base_utc: None,
pdt_base_running_time: None,
playlist,
old_segment_locations: Vec::new(),
segment_template,
playlist_location: settings.playlist_location.clone(),
max_num_segment_files: settings.max_num_segment_files,
playlist_length: settings.playlist_length,
});
}
fn close_playlist(&self) {
let mut state = self.state.lock().unwrap();
if let Some(mut context) = state.context.take() {
if context.playlist.is_rendering() {
context
.playlist
.stop(self.settings.lock().unwrap().enable_endlist);
let _ = self.write_playlist(&mut context);
}
}
}
pub fn get_fragment_stream(&self, fragment_id: u32) -> Option<(gio::OutputStream, String)> {
let mut state = self.state.lock().unwrap();
let context = match state.context.as_mut() {
Some(context) => context,
None => {
gst::error!(
CAT,
imp: self,
"Playlist is not configured",
);
return None;
}
};
let location = match sprintf::sprintf!(&context.segment_template, fragment_id) {
Ok(file_name) => file_name,
Err(err) => {
gst::error!(
CAT,
imp: self,
"Couldn't build file name, err: {:?}", err,
);
return None;
}
};
gst::trace!(
CAT,
imp: self,
"Segment location formatted: {}",
location
);
let stream = match self
.obj()
.emit_by_name::<Option<gio::OutputStream>>(SIGNAL_GET_FRAGMENT_STREAM, &[&location])
{
Some(stream) => stream,
None => return None,
};
Some((stream, location))
}
pub fn get_segment_uri(&self, location: &str) -> String {
let settings = self.settings.lock().unwrap();
let file_name = path::Path::new(&location)
.file_name()
.unwrap()
.to_str()
.unwrap();
if let Some(playlist_root) = &settings.playlist_root {
format!("{playlist_root}/{file_name}")
} else {
file_name.to_string()
}
}
pub fn add_segment(
&self,
location: &str,
running_time: Option<gst::ClockTime>,
mut segment: MediaSegment,
) -> Result<gst::FlowSuccess, gst::FlowError> {
let mut state = self.state.lock().unwrap();
let context = match state.context.as_mut() {
Some(context) => context,
None => {
gst::error!(
CAT,
imp: self,
"Playlist is not configured",
);
return Err(gst::FlowError::Error);
}
};
if let Some(running_time) = running_time {
if context.pdt_base_running_time.is_none() {
context.pdt_base_running_time = Some(running_time);
}
let settings = self.settings.lock().unwrap();
// Calculate the mapping from running time to UTC.
// When pdt-follows-pipeline-clock is disabled, recalculate pdt_base_utc for
// every segment; this avoids drift between the PDT tag and the external
// clock (if the GStreamer clock has skew w.r.t. the external clock)
if context.pdt_base_utc.is_none() || !settings.pdt_follows_pipeline_clock {
let obj = self.obj();
let now_utc = Utc::now();
let now_gst = obj.clock().unwrap().time().unwrap();
let pts_clock_time = running_time + obj.base_time().unwrap();
let diff = now_gst.nseconds() as i64 - pts_clock_time.nseconds() as i64;
let pts_utc = now_utc
.checked_sub_signed(Duration::nanoseconds(diff))
.expect("offsetting the utc with gstreamer clock-diff overflow");
context.pdt_base_utc = Some(pts_utc);
}
if settings.enable_program_date_time {
// Add the diff of running time to UTC time
// date_time = first_segment_utc + (current_seg_running_time - first_seg_running_time)
let date_time =
context
.pdt_base_utc
.unwrap()
.checked_add_signed(Duration::nanoseconds(
running_time
.opt_checked_sub(context.pdt_base_running_time)
.unwrap()
.unwrap()
.nseconds() as i64,
));
if let Some(date_time) = date_time {
segment.program_date_time = Some(date_time.into());
}
}
}
context.playlist.add_segment(segment);
if context.playlist.is_type_undefined() {
context.old_segment_locations.push(location.to_string());
}
self.write_playlist(context)
}
fn write_playlist(
&self,
context: &mut PlaylistContext,
) -> Result<gst::FlowSuccess, gst::FlowError> {
gst::info!(CAT, imp: self, "Preparing to write new playlist, COUNT {}", context.playlist.len());
context
.playlist
.update_playlist_state(context.playlist_length as usize);
// Acquires the playlist file handle so we can update it with new content. By default, this
// is expected to be the same file every time.
let mut playlist_stream = self
.obj()
.emit_by_name::<Option<gio::OutputStream>>(
SIGNAL_GET_PLAYLIST_STREAM,
&[&context.playlist_location],
)
.ok_or_else(|| {
gst::error!(
CAT,
imp: self,
"Could not get stream to write playlist content",
);
gst::FlowError::Error
})?
.into_write();
context
.playlist
.write_to(&mut playlist_stream)
.map_err(|err| {
gst::error!(
CAT,
imp: self,
"Could not write new playlist: {}",
err.to_string()
);
gst::FlowError::Error
})?;
playlist_stream.flush().map_err(|err| {
gst::error!(
CAT,
imp: self,
"Could not flush playlist: {}",
err.to_string()
);
gst::FlowError::Error
})?;
if context.playlist.is_type_undefined() && context.max_num_segment_files > 0 {
// Cleanup old segments from filesystem
while context.old_segment_locations.len() > context.max_num_segment_files {
let old_segment_location = context.old_segment_locations.remove(0);
if !self
.obj()
.emit_by_name::<bool>(SIGNAL_DELETE_FRAGMENT, &[&old_segment_location])
{
gst::error!(CAT, imp: self, "Could not delete fragment");
}
}
}
gst::debug!(CAT, imp: self, "Wrote new playlist file!");
Ok(gst::FlowSuccess::Ok)
}
pub fn new_file_stream<P>(&self, location: &P) -> Result<gio::OutputStream, String>
where
P: AsRef<path::Path>,
{
let file = fs::File::create(location).map_err(move |err| {
let error_msg = gst::error_msg!(
gst::ResourceError::OpenWrite,
[
"Could not open file {} for writing: {}",
location.as_ref().to_str().unwrap(),
err.to_string(),
]
);
self.post_error_message(error_msg);
err.to_string()
})?;
Ok(gio::WriteOutputStream::new(file).upcast())
}
fn delete_fragment<P>(&self, location: &P)
where
P: AsRef<path::Path>,
{
let _ = fs::remove_file(location).map_err(|err| {
gst::warning!(
CAT,
imp: self,
"Could not delete segment file: {}",
err.to_string()
);
});
}
}

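Since playlist and fragment I/O go through the get-playlist-stream, get-fragment-stream and delete-fragment signals with a first-handler-wins accumulator, an application can redirect all output away from the filesystem. A sketch of overriding the playlist stream from application code (how the element is retrieved is assumed):

    use gst::prelude::*;

    // Redirect playlist writes into memory instead of the default file stream.
    fn hook_playlist_stream(sink: &gst::Element) {
        sink.connect("get-playlist-stream", false, |args| {
            let location = args[1].get::<String>().expect("signal arg");
            println!("playlist {location} requested; writing to memory");
            let stream = gio::MemoryOutputStream::new_resizable();
            Some(stream.upcast::<gio::OutputStream>().to_value())
        });
    }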

@@ -0,0 +1,527 @@
// Copyright (C) 2023 Seungha Yang <seungha@centricular.com>
//
// This Source Code Form is subject to the terms of the Mozilla Public License, v2.0.
// If a copy of the MPL was not distributed with this file, You can obtain one at
// <https://mozilla.org/MPL/2.0/>.
//
// SPDX-License-Identifier: MPL-2.0
use crate::hlsbasesink::HlsBaseSinkImpl;
use crate::hlssink3::HlsSink3PlaylistType;
use crate::playlist::Playlist;
use crate::HlsBaseSink;
use gio::prelude::*;
use gst::glib;
use gst::prelude::*;
use gst::subclass::prelude::*;
use m3u8_rs::{MediaPlaylist, MediaPlaylistType, MediaSegment};
use once_cell::sync::Lazy;
use std::io::Write;
use std::sync::Mutex;
const DEFAULT_INIT_LOCATION: &str = "init%05d.mp4";
const DEFAULT_CMAF_LOCATION: &str = "segment%05d.m4s";
const DEFAULT_TARGET_DURATION: u32 = 15;
const DEFAULT_PLAYLIST_TYPE: HlsSink3PlaylistType = HlsSink3PlaylistType::Unspecified;
const DEFAULT_SYNC: bool = true;
const DEFAULT_LATENCY: gst::ClockTime =
gst::ClockTime::from_mseconds((DEFAULT_TARGET_DURATION * 500) as u64);
const SIGNAL_GET_INIT_STREAM: &str = "get-init-stream";
static CAT: Lazy<gst::DebugCategory> = Lazy::new(|| {
gst::DebugCategory::new(
"hlscmafsink",
gst::DebugColorFlags::empty(),
Some("HLS CMAF sink"),
)
});
macro_rules! base_imp {
($i:expr) => {
$i.obj().upcast_ref::<HlsBaseSink>().imp()
};
}
struct HlsCmafSinkSettings {
init_location: String,
location: String,
target_duration: u32,
playlist_type: Option<MediaPlaylistType>,
sync: bool,
latency: gst::ClockTime,
cmafmux: gst::Element,
appsink: gst_app::AppSink,
}
impl Default for HlsCmafSinkSettings {
fn default() -> Self {
let cmafmux = gst::ElementFactory::make("cmafmux")
.name("muxer")
.property(
"fragment-duration",
gst::ClockTime::from_seconds(DEFAULT_TARGET_DURATION as u64),
)
.property("latency", DEFAULT_LATENCY)
.build()
.expect("Could not make element cmafmux");
let appsink = gst_app::AppSink::builder()
.buffer_list(true)
.sync(DEFAULT_SYNC)
.name("sink")
.build();
Self {
init_location: String::from(DEFAULT_INIT_LOCATION),
location: String::from(DEFAULT_CMAF_LOCATION),
target_duration: DEFAULT_TARGET_DURATION,
playlist_type: None,
sync: DEFAULT_SYNC,
latency: DEFAULT_LATENCY,
cmafmux,
appsink,
}
}
}
#[derive(Default)]
struct HlsCmafSinkState {
init_idx: u32,
segment_idx: u32,
init_segment: Option<m3u8_rs::Map>,
new_header: bool,
}
#[derive(Default)]
pub struct HlsCmafSink {
settings: Mutex<HlsCmafSinkSettings>,
state: Mutex<HlsCmafSinkState>,
}
#[glib::object_subclass]
impl ObjectSubclass for HlsCmafSink {
const NAME: &'static str = "GstHlsCmafSink";
type Type = super::HlsCmafSink;
type ParentType = HlsBaseSink;
}
impl ObjectImpl for HlsCmafSink {
fn properties() -> &'static [glib::ParamSpec] {
static PROPERTIES: Lazy<Vec<glib::ParamSpec>> = Lazy::new(|| {
vec![
glib::ParamSpecString::builder("init-location")
.nick("Init Location")
.blurb("Location of the init fragment file to write")
.default_value(Some(DEFAULT_INIT_LOCATION))
.build(),
glib::ParamSpecString::builder("location")
.nick("Location")
.blurb("Location of the fragment file to write")
.default_value(Some(DEFAULT_CMAF_LOCATION))
.build(),
glib::ParamSpecUInt::builder("target-duration")
.nick("Target duration")
.blurb("The target duration in seconds of a segment/file. (0 - disabled, useful for management of segment duration by the streaming server)")
.default_value(DEFAULT_TARGET_DURATION)
.mutable_ready()
.build(),
glib::ParamSpecEnum::builder_with_default("playlist-type", DEFAULT_PLAYLIST_TYPE)
.nick("Playlist Type")
.blurb("The type of the playlist to use. When VOD type is set, the playlist will be live until the pipeline ends execution.")
.mutable_ready()
.build(),
glib::ParamSpecBoolean::builder("sync")
.nick("Sync")
.blurb("Sync on the clock")
.default_value(DEFAULT_SYNC)
.build(),
glib::ParamSpecUInt64::builder("latency")
.nick("Latency")
.blurb(
"Additional latency to allow upstream to take longer to \
produce buffers for the current position (in nanoseconds)",
)
.maximum(i64::MAX as u64)
.default_value(DEFAULT_LATENCY.nseconds())
.build(),
]
});
PROPERTIES.as_ref()
}
fn set_property(&self, _id: usize, value: &glib::Value, pspec: &glib::ParamSpec) {
let mut settings = self.settings.lock().unwrap();
match pspec.name() {
"init-location" => {
settings.init_location = value
.get::<Option<String>>()
.expect("type checked upstream")
.unwrap_or_else(|| DEFAULT_INIT_LOCATION.into());
}
"location" => {
settings.location = value
.get::<Option<String>>()
.expect("type checked upstream")
.unwrap_or_else(|| DEFAULT_CMAF_LOCATION.into());
}
"target-duration" => {
settings.target_duration = value.get().expect("type checked upstream");
settings.cmafmux.set_property(
"fragment-duration",
gst::ClockTime::from_seconds(settings.target_duration as u64),
);
}
"playlist-type" => {
settings.playlist_type = value
.get::<HlsSink3PlaylistType>()
.expect("type checked upstream")
.into();
}
"sync" => {
settings.sync = value.get().expect("type checked upstream");
settings.appsink.set_property("sync", settings.sync);
}
"latency" => {
settings.latency = value.get().expect("type checked upstream");
settings.cmafmux.set_property("latency", settings.latency);
}
_ => unimplemented!(),
};
}
fn property(&self, _id: usize, pspec: &glib::ParamSpec) -> glib::Value {
let settings = self.settings.lock().unwrap();
match pspec.name() {
"init-location" => settings.init_location.to_value(),
"location" => settings.location.to_value(),
"target-duration" => settings.target_duration.to_value(),
"playlist-type" => {
let playlist_type: HlsSink3PlaylistType = settings.playlist_type.as_ref().into();
playlist_type.to_value()
}
"sync" => settings.sync.to_value(),
"latency" => settings.latency.to_value(),
_ => unimplemented!(),
}
}
fn signals() -> &'static [glib::subclass::Signal] {
static SIGNALS: Lazy<Vec<glib::subclass::Signal>> = Lazy::new(|| {
vec![glib::subclass::Signal::builder(SIGNAL_GET_INIT_STREAM)
.param_types([String::static_type()])
.return_type::<Option<gio::OutputStream>>()
.class_handler(|_, args| {
let elem = args[0].get::<HlsBaseSink>().expect("signal arg");
let init_location = args[1].get::<String>().expect("signal arg");
let imp = elem.imp();
Some(imp.new_file_stream(&init_location).ok().to_value())
})
.accumulator(|_hint, ret, value| {
// First signal handler wins
*ret = value.clone();
false
})
.build()]
});
SIGNALS.as_ref()
}
fn constructed(&self) {
self.parent_constructed();
let obj = self.obj();
let settings = self.settings.lock().unwrap();
obj.add_many([&settings.cmafmux, settings.appsink.upcast_ref()])
.unwrap();
settings.cmafmux.link(&settings.appsink).unwrap();
let sinkpad = settings.cmafmux.static_pad("sink").unwrap();
let gpad = gst::GhostPad::with_target(&sinkpad).unwrap();
obj.add_pad(&gpad).unwrap();
let self_weak = self.downgrade();
settings.appsink.set_callbacks(
gst_app::AppSinkCallbacks::builder()
.new_sample(move |sink| {
let Some(imp) = self_weak.upgrade() else {
return Err(gst::FlowError::Eos);
};
let sample = sink.pull_sample().map_err(|_| gst::FlowError::Eos)?;
imp.on_new_sample(sample)
})
.build(),
);
}
}
impl GstObjectImpl for HlsCmafSink {}
impl ElementImpl for HlsCmafSink {
fn metadata() -> Option<&'static gst::subclass::ElementMetadata> {
static ELEMENT_METADATA: Lazy<gst::subclass::ElementMetadata> = Lazy::new(|| {
gst::subclass::ElementMetadata::new(
"HTTP Live Streaming CMAF Sink",
"Sink/Muxer",
"HTTP Live Streaming CMAF Sink",
"Seungha Yang <seungha@centricular.com>",
)
});
Some(&*ELEMENT_METADATA)
}
fn pad_templates() -> &'static [gst::PadTemplate] {
static PAD_TEMPLATES: Lazy<Vec<gst::PadTemplate>> = Lazy::new(|| {
let pad_template = gst::PadTemplate::new(
"sink",
gst::PadDirection::Sink,
gst::PadPresence::Always,
&[
gst::Structure::builder("video/x-h264")
.field("stream-format", gst::List::new(["avc", "avc3"]))
.field("alignment", "au")
.field("width", gst::IntRange::new(1, u16::MAX as i32))
.field("height", gst::IntRange::new(1, u16::MAX as i32))
.build(),
gst::Structure::builder("video/x-h265")
.field("stream-format", gst::List::new(["hvc1", "hev1"]))
.field("alignment", "au")
.field("width", gst::IntRange::new(1, u16::MAX as i32))
.field("height", gst::IntRange::new(1, u16::MAX as i32))
.build(),
gst::Structure::builder("audio/mpeg")
.field("mpegversion", 4i32)
.field("stream-format", "raw")
.field("channels", gst::IntRange::new(1, u16::MAX as i32))
.field("rate", gst::IntRange::new(1, i32::MAX))
.build(),
]
.into_iter()
.collect::<gst::Caps>(),
)
.unwrap();
vec![pad_template]
});
PAD_TEMPLATES.as_ref()
}
fn change_state(
&self,
transition: gst::StateChange,
) -> Result<gst::StateChangeSuccess, gst::StateChangeError> {
if transition == gst::StateChange::ReadyToPaused {
let (target_duration, playlist_type, segment_template) = {
let settings = self.settings.lock().unwrap();
(
settings.target_duration,
settings.playlist_type.clone(),
settings.location.clone(),
)
};
let playlist = self.start(target_duration, playlist_type);
base_imp!(self).open_playlist(playlist, segment_template);
}
self.parent_change_state(transition)
}
}
impl BinImpl for HlsCmafSink {}
impl HlsBaseSinkImpl for HlsCmafSink {}
impl HlsCmafSink {
fn start(&self, target_duration: u32, playlist_type: Option<MediaPlaylistType>) -> Playlist {
gst::info!(CAT, imp: self, "Starting");
let mut state = self.state.lock().unwrap();
*state = HlsCmafSinkState::default();
let (turn_vod, playlist_type) = if playlist_type == Some(MediaPlaylistType::Vod) {
(true, Some(MediaPlaylistType::Event))
} else {
(false, playlist_type)
};
let playlist = MediaPlaylist {
version: Some(6),
target_duration: target_duration as f32,
playlist_type,
independent_segments: true,
..Default::default()
};
Playlist::new(playlist, turn_vod, true)
}
fn on_init_segment(&self) -> Result<gio::OutputStreamWrite<gio::OutputStream>, String> {
let settings = self.settings.lock().unwrap();
let mut state = self.state.lock().unwrap();
let location = match sprintf::sprintf!(&settings.init_location, state.init_idx) {
Ok(location) => location,
Err(err) => {
gst::error!(
CAT,
imp: self,
"Couldn't build file name, err: {:?}", err,
);
return Err(String::from("Invalid init segment file pattern"));
}
};
let stream = self
.obj()
.emit_by_name::<Option<gio::OutputStream>>(SIGNAL_GET_INIT_STREAM, &[&location])
.ok_or_else(|| String::from("Error while getting fragment stream"))?
.into_write();
let uri = base_imp!(self).get_segment_uri(&location);
state.init_segment = Some(m3u8_rs::Map {
uri,
..Default::default()
});
state.new_header = true;
state.init_idx += 1;
Ok(stream)
}
fn on_new_fragment(
&self,
) -> Result<(gio::OutputStreamWrite<gio::OutputStream>, String), String> {
let mut state = self.state.lock().unwrap();
let (stream, location) = base_imp!(self)
.get_fragment_stream(state.segment_idx)
.ok_or_else(|| String::from("Error while getting fragment stream"))?;
state.segment_idx += 1;
Ok((stream.into_write(), location))
}
fn add_segment(
&self,
duration: f32,
running_time: Option<gst::ClockTime>,
location: String,
) -> Result<gst::FlowSuccess, gst::FlowError> {
let uri = base_imp!(self).get_segment_uri(&location);
let mut state = self.state.lock().unwrap();
let map = if state.new_header {
state.new_header = false;
state.init_segment.clone()
} else {
None
};
base_imp!(self).add_segment(
&location,
running_time,
MediaSegment {
uri,
duration,
map,
..Default::default()
},
)
}
fn on_new_sample(&self, sample: gst::Sample) -> Result<gst::FlowSuccess, gst::FlowError> {
let mut buffer_list = sample.buffer_list_owned().unwrap();
let mut first = buffer_list.get(0).unwrap();
if first
.flags()
.contains(gst::BufferFlags::DISCONT | gst::BufferFlags::HEADER)
{
let mut stream = self.on_init_segment().map_err(|err| {
gst::error!(
CAT,
imp: self,
"Couldn't get output stream for init segment, {err}",
);
gst::FlowError::Error
})?;
let map = first.map_readable().unwrap();
stream.write(&map).map_err(|_| {
gst::error!(
CAT,
imp: self,
"Couldn't write init segment to output stream",
);
gst::FlowError::Error
})?;
stream.flush().map_err(|_| {
gst::error!(
CAT,
imp: self,
"Couldn't flush output stream",
);
gst::FlowError::Error
})?;
drop(map);
buffer_list.make_mut().remove(0, 1);
if buffer_list.is_empty() {
return Ok(gst::FlowSuccess::Ok);
}
first = buffer_list.get(0).unwrap();
}
let segment = sample
.segment()
.unwrap()
.downcast_ref::<gst::ClockTime>()
.unwrap();
let running_time = segment.to_running_time(first.pts().unwrap());
let dur = first.duration().unwrap();
let (mut stream, location) = self.on_new_fragment().map_err(|err| {
gst::error!(
CAT,
imp: self,
"Couldn't get output stream for segment, {err}",
);
gst::FlowError::Error
})?;
for buffer in &*buffer_list {
let map = buffer.map_readable().unwrap();
stream.write(&map).map_err(|_| {
gst::error!(
CAT,
imp: self,
"Couldn't write segment to output stream",
);
gst::FlowError::Error
})?;
}
stream.flush().map_err(|_| {
gst::error!(
CAT,
imp: self,
"Couldn't flush output stream",
);
gst::FlowError::Error
})?;
self.add_segment(dur.mseconds() as f32 / 1_000f32, running_time, location)
}
}

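In on_new_sample() above, a buffer flagged with both DISCONT and HEADER marks the start of a new init segment. That check in isolation, as a small sketch:

    // A buffer carrying both DISCONT and HEADER starts a new CMAF init
    // segment, mirroring the check in HlsCmafSink::on_new_sample().
    fn is_init_header(buffer: &gst::BufferRef) -> bool {
        buffer
            .flags()
            .contains(gst::BufferFlags::DISCONT | gst::BufferFlags::HEADER)
    }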

@@ -0,0 +1,34 @@
// Copyright (C) 2023 Seungha Yang <seungha@centricular.com>
//
// This Source Code Form is subject to the terms of the Mozilla Public License, v2.0.
// If a copy of the MPL was not distributed with this file, You can obtain one at
// <https://mozilla.org/MPL/2.0/>.
//
// SPDX-License-Identifier: MPL-2.0
#![allow(clippy::non_send_fields_in_send_ty, unused_doc_comments)]
use crate::HlsBaseSink;
/**
* plugin-hlssink3:
*
* Since: plugins-rs-0.8.0
*/
use gst::glib;
use gst::prelude::*;
mod imp;
glib::wrapper! {
pub struct HlsCmafSink(ObjectSubclass<imp::HlsCmafSink>) @extends HlsBaseSink, gst::Bin, gst::Element, gst::Object;
}
pub fn register(plugin: &gst::Plugin) -> Result<(), glib::BoolError> {
gst::Element::register(
Some(plugin),
"hlscmafsink",
gst::Rank::NONE,
HlsCmafSink::static_type(),
)?;
Ok(())
}

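A usage sketch for the newly registered element; the property names come from the ParamSpecs in imp.rs above, and the file layout is assumed:

    use gst::prelude::*;

    // Build an hlscmafsink that writes CMAF fragments plus an HLS playlist.
    fn make_hls_cmaf_sink() -> Result<gst::Element, glib::BoolError> {
        gst::ElementFactory::make("hlscmafsink")
            .property("playlist-location", "hls/media.m3u8") // from HlsBaseSink
            .property("init-location", "hls/init%05d.mp4")
            .property("location", "hls/segment%05d.m4s")
            .property("target-duration", 6u32)
            .build()
    }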

@@ -0,0 +1,581 @@
// Copyright (C) 2021 Rafael Caricio <rafael@caricio.com>
// Copyright (C) 2023 Seungha Yang <seungha@centricular.com>
//
// This Source Code Form is subject to the terms of the Mozilla Public License, v2.0.
// If a copy of the MPL was not distributed with this file, You can obtain one at
// <https://mozilla.org/MPL/2.0/>.
//
// SPDX-License-Identifier: MPL-2.0
use crate::hlsbasesink::HlsBaseSinkImpl;
use crate::hlssink3::HlsSink3PlaylistType;
use crate::playlist::Playlist;
use crate::HlsBaseSink;
use gio::prelude::*;
use gst::glib;
use gst::prelude::*;
use gst::subclass::prelude::*;
use m3u8_rs::{MediaPlaylist, MediaPlaylistType, MediaSegment};
use once_cell::sync::Lazy;
use std::sync::Mutex;
const DEFAULT_TS_LOCATION: &str = "segment%05d.ts";
const DEFAULT_TARGET_DURATION: u32 = 15;
const DEFAULT_PLAYLIST_TYPE: HlsSink3PlaylistType = HlsSink3PlaylistType::Unspecified;
const DEFAULT_I_FRAMES_ONLY_PLAYLIST: bool = false;
const DEFAULT_SEND_KEYFRAME_REQUESTS: bool = true;
static CAT: Lazy<gst::DebugCategory> = Lazy::new(|| {
gst::DebugCategory::new("hlssink3", gst::DebugColorFlags::empty(), Some("HLS sink"))
});
macro_rules! base_imp {
($i:expr) => {
$i.obj().upcast_ref::<HlsBaseSink>().imp()
};
}
impl From<HlsSink3PlaylistType> for Option<MediaPlaylistType> {
fn from(pl_type: HlsSink3PlaylistType) -> Self {
use HlsSink3PlaylistType::*;
match pl_type {
Unspecified => None,
Event => Some(MediaPlaylistType::Event),
Vod => Some(MediaPlaylistType::Vod),
}
}
}
impl From<Option<&MediaPlaylistType>> for HlsSink3PlaylistType {
fn from(inner_pl_type: Option<&MediaPlaylistType>) -> Self {
use HlsSink3PlaylistType::*;
match inner_pl_type {
None | Some(MediaPlaylistType::Other(_)) => Unspecified,
Some(MediaPlaylistType::Event) => Event,
Some(MediaPlaylistType::Vod) => Vod,
}
}
}
struct HlsSink3Settings {
location: String,
target_duration: u32,
playlist_type: Option<MediaPlaylistType>,
i_frames_only: bool,
send_keyframe_requests: bool,
splitmuxsink: gst::Element,
giostreamsink: gst::Element,
video_sink: bool,
audio_sink: bool,
}
impl Default for HlsSink3Settings {
fn default() -> Self {
let muxer = gst::ElementFactory::make("mpegtsmux")
.name("mpeg-ts_mux")
.build()
.expect("Could not make element mpegtsmux");
let giostreamsink = gst::ElementFactory::make("giostreamsink")
.name("giostream_sink")
.build()
.expect("Could not make element giostreamsink");
let splitmuxsink = gst::ElementFactory::make("splitmuxsink")
.name("split_mux_sink")
.property("muxer", &muxer)
.property("reset-muxer", false)
.property("send-keyframe-requests", DEFAULT_SEND_KEYFRAME_REQUESTS)
.property(
"max-size-time",
gst::ClockTime::from_seconds(DEFAULT_TARGET_DURATION as u64),
)
.property("sink", &giostreamsink)
.build()
.expect("Could not make element splitmuxsink");
// giostreamsink doesn't let go of its stream until the element is finalized, which might
// be too late for the calling application. Let's try to force it to close while tearing
// down the pipeline.
if giostreamsink.has_property("close-on-stop", Some(bool::static_type())) {
giostreamsink.set_property("close-on-stop", true);
} else {
gst::warning!(
CAT,
"hlssink3 may sometimes fail to write out the final playlist update. This can be fixed by using giostreamsink from GStreamer 1.24 or later."
)
}
Self {
location: String::from(DEFAULT_TS_LOCATION),
target_duration: DEFAULT_TARGET_DURATION,
playlist_type: None,
send_keyframe_requests: DEFAULT_SEND_KEYFRAME_REQUESTS,
i_frames_only: DEFAULT_I_FRAMES_ONLY_PLAYLIST,
splitmuxsink,
giostreamsink,
video_sink: false,
audio_sink: false,
}
}
}
#[derive(Default)]
struct HlsSink3State {
fragment_opened_at: Option<gst::ClockTime>,
fragment_running_time: Option<gst::ClockTime>,
current_segment_location: Option<String>,
}
#[derive(Default)]
pub struct HlsSink3 {
settings: Mutex<HlsSink3Settings>,
state: Mutex<HlsSink3State>,
}
#[glib::object_subclass]
impl ObjectSubclass for HlsSink3 {
const NAME: &'static str = "GstHlsSink3";
type Type = super::HlsSink3;
type ParentType = HlsBaseSink;
}
impl ObjectImpl for HlsSink3 {
fn properties() -> &'static [glib::ParamSpec] {
static PROPERTIES: Lazy<Vec<glib::ParamSpec>> = Lazy::new(|| {
vec![
glib::ParamSpecString::builder("location")
.nick("File Location")
.blurb("Location of the file to write")
.default_value(Some(DEFAULT_TS_LOCATION))
.build(),
glib::ParamSpecUInt::builder("target-duration")
.nick("Target duration")
.blurb("The target duration in seconds of a segment/file. (0 - disabled, useful for management of segment duration by the streaming server)")
.default_value(DEFAULT_TARGET_DURATION)
.build(),
glib::ParamSpecEnum::builder_with_default("playlist-type", DEFAULT_PLAYLIST_TYPE)
.nick("Playlist Type")
.blurb("The type of the playlist to use. When VOD type is set, the playlist will be live until the pipeline ends execution.")
.build(),
glib::ParamSpecBoolean::builder("i-frames-only")
.nick("I-Frames only playlist")
.blurb("Each video segments is single iframe, So put EXT-X-I-FRAMES-ONLY tag in the playlist")
.default_value(DEFAULT_I_FRAMES_ONLY_PLAYLIST)
.build(),
glib::ParamSpecBoolean::builder("send-keyframe-requests")
.nick("Send Keyframe Requests")
.blurb("Send keyframe requests to ensure correct fragmentation. If this is disabled then the input must have keyframes in regular intervals.")
.default_value(DEFAULT_SEND_KEYFRAME_REQUESTS)
.build(),
]
});
PROPERTIES.as_ref()
}
fn set_property(&self, _id: usize, value: &glib::Value, pspec: &glib::ParamSpec) {
let mut settings = self.settings.lock().unwrap();
match pspec.name() {
"location" => {
settings.location = value
.get::<Option<String>>()
.expect("type checked upstream")
.unwrap_or_else(|| DEFAULT_TS_LOCATION.into());
settings
.splitmuxsink
.set_property("location", &settings.location);
}
"target-duration" => {
settings.target_duration = value.get().expect("type checked upstream");
settings.splitmuxsink.set_property(
"max-size-time",
gst::ClockTime::from_seconds(settings.target_duration as u64),
);
}
"playlist-type" => {
settings.playlist_type = value
.get::<HlsSink3PlaylistType>()
.expect("type checked upstream")
.into();
}
"i-frames-only" => {
settings.i_frames_only = value.get().expect("type checked upstream");
if settings.i_frames_only && settings.audio_sink {
gst::element_error!(
self.obj(),
gst::StreamError::WrongType,
("Invalid configuration"),
["Audio not allowed for i-frames-only-stream"]
);
}
}
"send-keyframe-requests" => {
settings.send_keyframe_requests = value.get().expect("type checked upstream");
settings
.splitmuxsink
.set_property("send-keyframe-requests", settings.send_keyframe_requests);
}
_ => unimplemented!(),
};
}
fn property(&self, _id: usize, pspec: &glib::ParamSpec) -> glib::Value {
let settings = self.settings.lock().unwrap();
match pspec.name() {
"location" => settings.location.to_value(),
"target-duration" => settings.target_duration.to_value(),
"playlist-type" => {
let playlist_type: HlsSink3PlaylistType = settings.playlist_type.as_ref().into();
playlist_type.to_value()
}
"i-frames-only" => settings.i_frames_only.to_value(),
"send-keyframe-requests" => settings.send_keyframe_requests.to_value(),
_ => unimplemented!(),
}
}
fn constructed(&self) {
self.parent_constructed();
let obj = self.obj();
let settings = self.settings.lock().unwrap();
obj.add(&settings.splitmuxsink).unwrap();
settings
.splitmuxsink
.connect("format-location-full", false, {
let imp_weak = self.downgrade();
move |args| {
let Some(imp) = imp_weak.upgrade() else {
return Some(None::<String>.to_value());
};
let fragment_id = args[1].get::<u32>().unwrap();
gst::info!(CAT, imp: imp, "Got fragment-id: {}", fragment_id);
let sample = args[2].get::<gst::Sample>().unwrap();
let buffer = sample.buffer();
let running_time = if let Some(buffer) = buffer {
let segment = sample
.segment()
.expect("segment not available")
.downcast_ref::<gst::ClockTime>()
.expect("no time segment");
segment.to_running_time(buffer.pts().unwrap())
} else {
gst::warning!(
CAT,
imp: imp,
"buffer null for fragment-id: {}",
fragment_id
);
None
};
match imp.on_format_location(fragment_id, running_time) {
Ok(segment_location) => Some(segment_location.to_value()),
Err(err) => {
gst::error!(CAT, imp: imp, "on format-location handler: {}", err);
Some("unknown_segment".to_value())
}
}
}
});
}
}
impl GstObjectImpl for HlsSink3 {}
impl ElementImpl for HlsSink3 {
fn metadata() -> Option<&'static gst::subclass::ElementMetadata> {
static ELEMENT_METADATA: Lazy<gst::subclass::ElementMetadata> = Lazy::new(|| {
gst::subclass::ElementMetadata::new(
"HTTP Live Streaming sink",
"Sink/Muxer",
"HTTP Live Streaming sink",
"Alessandro Decina <alessandro.d@gmail.com>, \
Sebastian Dröge <sebastian@centricular.com>, \
Rafael Caricio <rafael@caricio.com>",
)
});
Some(&*ELEMENT_METADATA)
}
fn pad_templates() -> &'static [gst::PadTemplate] {
static PAD_TEMPLATES: Lazy<Vec<gst::PadTemplate>> = Lazy::new(|| {
let caps = gst::Caps::new_any();
let video_pad_template = gst::PadTemplate::new(
"video",
gst::PadDirection::Sink,
gst::PadPresence::Request,
&caps,
)
.unwrap();
let caps = gst::Caps::new_any();
let audio_pad_template = gst::PadTemplate::new(
"audio",
gst::PadDirection::Sink,
gst::PadPresence::Request,
&caps,
)
.unwrap();
vec![video_pad_template, audio_pad_template]
});
PAD_TEMPLATES.as_ref()
}
fn change_state(
&self,
transition: gst::StateChange,
) -> Result<gst::StateChangeSuccess, gst::StateChangeError> {
if transition == gst::StateChange::ReadyToPaused {
let (target_duration, playlist_type, i_frames_only, segment_template) = {
let settings = self.settings.lock().unwrap();
(
settings.target_duration,
settings.playlist_type.clone(),
settings.i_frames_only,
settings.location.clone(),
)
};
let playlist = self.start(target_duration, playlist_type, i_frames_only);
base_imp!(self).open_playlist(playlist, segment_template);
}
self.parent_change_state(transition)
}
fn request_new_pad(
&self,
templ: &gst::PadTemplate,
_name: Option<&str>,
_caps: Option<&gst::Caps>,
) -> Option<gst::Pad> {
let mut settings = self.settings.lock().unwrap();
match templ.name_template() {
"audio" => {
if settings.audio_sink {
gst::debug!(
CAT,
imp: self,
"requested_new_pad: audio pad is already set"
);
return None;
}
if settings.i_frames_only {
gst::element_error!(
self.obj(),
gst::StreamError::WrongType,
("Invalid configuration"),
["Audio not allowed for i-frames-only-stream"]
);
return None;
}
let peer_pad = settings.splitmuxsink.request_pad_simple("audio_0").unwrap();
let sink_pad = gst::GhostPad::from_template_with_target(templ, &peer_pad).unwrap();
self.obj().add_pad(&sink_pad).unwrap();
sink_pad.set_active(true).unwrap();
settings.audio_sink = true;
Some(sink_pad.upcast())
}
"video" => {
if settings.video_sink {
gst::debug!(
CAT,
imp: self,
"requested_new_pad: video pad is already set"
);
return None;
}
let peer_pad = settings.splitmuxsink.request_pad_simple("video").unwrap();
let sink_pad = gst::GhostPad::from_template_with_target(templ, &peer_pad).unwrap();
self.obj().add_pad(&sink_pad).unwrap();
sink_pad.set_active(true).unwrap();
settings.video_sink = true;
Some(sink_pad.upcast())
}
other_name => {
gst::debug!(
CAT,
imp: self,
"requested_new_pad: name \"{}\" is not audio or video",
other_name
);
None
}
}
}
fn release_pad(&self, pad: &gst::Pad) {
let mut settings = self.settings.lock().unwrap();
if !settings.audio_sink && !settings.video_sink {
return;
}
let ghost_pad = pad.downcast_ref::<gst::GhostPad>().unwrap();
if let Some(peer) = ghost_pad.target() {
settings.splitmuxsink.release_request_pad(&peer);
}
pad.set_active(false).unwrap();
self.obj().remove_pad(pad).unwrap();
if "audio" == ghost_pad.name() {
settings.audio_sink = false;
} else {
settings.video_sink = false;
}
}
}
impl BinImpl for HlsSink3 {
#[allow(clippy::single_match)]
fn handle_message(&self, msg: gst::Message) {
use gst::MessageView;
match msg.view() {
MessageView::Element(msg) => {
let event_is_from_splitmuxsink = {
let settings = self.settings.lock().unwrap();
msg.src() == Some(settings.splitmuxsink.upcast_ref())
};
if !event_is_from_splitmuxsink {
return;
}
let s = msg.structure().unwrap();
match s.name().as_str() {
"splitmuxsink-fragment-opened" => {
if let Ok(new_fragment_opened_at) = s.get::<gst::ClockTime>("running-time")
{
let mut state = self.state.lock().unwrap();
state.fragment_opened_at = Some(new_fragment_opened_at);
}
}
"splitmuxsink-fragment-closed" => {
let s = msg.structure().unwrap();
if let Ok(fragment_closed_at) = s.get::<gst::ClockTime>("running-time") {
self.on_fragment_closed(fragment_closed_at);
}
}
_ => {}
}
}
_ => self.parent_handle_message(msg),
}
}
}
impl HlsBaseSinkImpl for HlsSink3 {}
impl HlsSink3 {
fn start(
&self,
target_duration: u32,
playlist_type: Option<MediaPlaylistType>,
i_frames_only: bool,
) -> Playlist {
gst::info!(CAT, imp: self, "Starting");
let mut state = self.state.lock().unwrap();
*state = HlsSink3State::default();
let (turn_vod, playlist_type) = if playlist_type == Some(MediaPlaylistType::Vod) {
(true, Some(MediaPlaylistType::Event))
} else {
(false, playlist_type)
};
let playlist = MediaPlaylist {
version: if i_frames_only { Some(4) } else { Some(3) },
target_duration: target_duration as f32,
playlist_type,
i_frames_only,
..Default::default()
};
Playlist::new(playlist, turn_vod, false)
}
fn on_format_location(
&self,
fragment_id: u32,
running_time: Option<gst::ClockTime>,
) -> Result<String, String> {
gst::info!(
CAT,
imp: self,
"Starting the formatting of the fragment-id: {}",
fragment_id
);
let (fragment_stream, segment_file_location) = base_imp!(self)
.get_fragment_stream(fragment_id)
.ok_or_else(|| String::from("Error while getting fragment stream"))?;
let mut state = self.state.lock().unwrap();
state.current_segment_location = Some(segment_file_location.clone());
state.fragment_running_time = running_time;
let settings = self.settings.lock().unwrap();
settings
.giostreamsink
.set_property("stream", &fragment_stream);
gst::info!(
CAT,
imp: self,
"New segment location: {:?}",
state.current_segment_location.as_ref()
);
Ok(segment_file_location)
}
fn on_fragment_closed(&self, closed_at: gst::ClockTime) {
let mut state = self.state.lock().unwrap();
let location = match state.current_segment_location.take() {
Some(location) => location,
None => {
gst::error!(CAT, imp: self, "Unknown segment location");
return;
}
};
let opened_at = match state.fragment_opened_at.take() {
Some(opened_at) => opened_at,
None => {
gst::error!(CAT, imp: self, "Unknown segment duration");
return;
}
};
let duration = ((closed_at - opened_at).mseconds() as f32) / 1_000f32;
let running_time = state.fragment_running_time;
drop(state);
let obj = self.obj();
let base_imp = obj.upcast_ref::<HlsBaseSink>().imp();
let uri = base_imp.get_segment_uri(&location);
let _ = base_imp.add_segment(
&location,
running_time,
MediaSegment {
uri,
duration,
..Default::default()
},
);
}
}

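hlssink3 exposes request pads named audio and video that ghost through to splitmuxsink. A sketch of requesting and linking the video pad from application code (the upstream element is assumed):

    use gst::prelude::*;

    // Request hlssink3's video pad and link an upstream source pad to it.
    fn link_video(upstream: &gst::Element, hlssink: &gst::Element) -> Result<(), glib::BoolError> {
        let sinkpad = hlssink
            .request_pad_simple("video")
            .ok_or_else(|| glib::bool_error!("no video request pad"))?;
        let srcpad = upstream
            .static_pad("src")
            .ok_or_else(|| glib::bool_error!("no src pad"))?;
        srcpad
            .link(&sinkpad)
            .map_err(|_| glib::bool_error!("link failed"))?;
        Ok(())
    }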

@@ -0,0 +1,63 @@
// Copyright (C) 2021 Rafael Caricio <rafael@caricio.com>
//
// This Source Code Form is subject to the terms of the Mozilla Public License, v2.0.
// If a copy of the MPL was not distributed with this file, You can obtain one at
// <https://mozilla.org/MPL/2.0/>.
//
// SPDX-License-Identifier: MPL-2.0
#![allow(clippy::non_send_fields_in_send_ty, unused_doc_comments)]
use crate::HlsBaseSink;
/**
* plugin-hlssink3:
*
* Since: plugins-rs-0.8.0
*/
use gst::glib;
use gst::prelude::*;
mod imp;
#[derive(Debug, Eq, PartialEq, Ord, PartialOrd, Hash, Clone, Copy, glib::Enum)]
#[repr(u32)]
#[enum_type(name = "GstHlsSink3PlaylistType")]
#[non_exhaustive]
pub enum HlsSink3PlaylistType {
#[enum_value(
name = "Unspecified: The tag `#EXT-X-PLAYLIST-TYPE` won't be present in the playlist during the pipeline processing.",
nick = "unspecified"
)]
Unspecified = 0,
#[enum_value(
name = "Event: No segments will be removed from the playlist. At the end of the processing, the tag `#EXT-X-ENDLIST` is added to the playlist. The tag `#EXT-X-PLAYLIST-TYPE:EVENT` will be present in the playlist.",
nick = "event"
)]
Event = 1,
#[enum_value(
name = "Vod: The playlist behaves like the `event` option (a live event), but at the end of the processing, the playlist will be set to `#EXT-X-PLAYLIST-TYPE:VOD`.",
nick = "vod"
)]
Vod = 2,
}
glib::wrapper! {
pub struct HlsSink3(ObjectSubclass<imp::HlsSink3>) @extends HlsBaseSink, gst::Bin, gst::Element, gst::Object;
}
pub fn register(plugin: &gst::Plugin) -> Result<(), glib::BoolError> {
#[cfg(feature = "doc")]
{
HlsSink3PlaylistType::static_type().mark_as_plugin_api(gst::PluginAPIFlags::empty());
}
gst::Element::register(
Some(plugin),
"hlssink3",
gst::Rank::NONE,
HlsSink3::static_type(),
)?;
Ok(())
}
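
The playlist-type enum above can also be set by its nick from application code; a short sketch (element name as registered above):

    use gst::prelude::*;

    // Ask hlssink3 for a VOD playlist: live while running, rewritten to
    // #EXT-X-PLAYLIST-TYPE:VOD at the end of the stream.
    fn make_vod_hlssink() -> Result<gst::Element, glib::BoolError> {
        gst::ElementFactory::make("hlssink3")
            .property_from_str("playlist-type", "vod")
            .build()
    }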

File diff suppressed because it is too large.


@@ -13,67 +13,25 @@
* Since: plugins-rs-0.8.0
*/
use gst::glib;
- use gst::prelude::*;
- mod imp;
+ mod hlsbasesink;
+ pub mod hlscmafsink;
+ pub mod hlssink3;
mod playlist;
- #[derive(Debug, Eq, PartialEq, Ord, PartialOrd, Hash, Clone, Copy, glib::Enum)]
- #[repr(u32)]
- #[enum_type(name = "GstHlsSink3PlaylistType")]
- #[non_exhaustive]
- pub enum HlsSink3PlaylistType {
- #[enum_value(
- name = "Unspecified: The tag `#EXT-X-PLAYLIST-TYPE` won't be present in the playlist during the pipeline processing.",
- nick = "unspecified"
- )]
- Unspecified = 0,
- #[enum_value(
- name = "Event: No segments will be removed from the playlist. At the end of the processing, the tag `#EXT-X-ENDLIST` is added to the playlist. The tag `#EXT-X-PLAYLIST-TYPE:EVENT` will be present in the playlist.",
- nick = "event"
- )]
- Event = 1,
- #[enum_value(
- name = "Vod: The playlist behaves like the `event` option (a live event), but at the end of the processing, the playlist will be set to `#EXT-X-PLAYLIST-TYPE:VOD`.",
- nick = "vod"
- )]
- Vod = 2,
- }
glib::wrapper! {
- pub struct HlsBaseSink(ObjectSubclass<imp::HlsBaseSink>) @extends gst::Bin, gst::Element, gst::Object;
- }
- glib::wrapper! {
- pub struct HlsSink3(ObjectSubclass<imp::HlsSink3>) @extends HlsBaseSink, gst::Bin, gst::Element, gst::Object;
- }
- glib::wrapper! {
- pub struct HlsCmafSink(ObjectSubclass<imp::HlsCmafSink>) @extends HlsBaseSink, gst::Bin, gst::Element, gst::Object;
+ pub struct HlsBaseSink(ObjectSubclass<hlsbasesink::HlsBaseSink>) @extends gst::Bin, gst::Element, gst::Object;
}
pub fn plugin_init(plugin: &gst::Plugin) -> Result<(), glib::BoolError> {
#[cfg(feature = "doc")]
{
- HlsSink3PlaylistType::static_type().mark_as_plugin_api(gst::PluginAPIFlags::empty());
+ use gst::prelude::*;
+ HlsBaseSink::static_type().mark_as_plugin_api(gst::PluginAPIFlags::empty());
}
- gst::Element::register(
- Some(plugin),
- "hlssink3",
- gst::Rank::NONE,
- HlsSink3::static_type(),
- )?;
- gst::Element::register(
- Some(plugin),
- "hlscmafsink",
- gst::Rank::NONE,
- HlsCmafSink::static_type(),
- )?;
+ hlssink3::register(plugin)?;
+ hlscmafsink::register(plugin)?;
Ok(())
}


@@ -59,7 +59,7 @@ impl Playlist {
while self.inner.segments.len() > max_playlist_length {
let to_remove = self.inner.segments.remove(0);
if self.inner.segments[0].map.is_none() {
- self.inner.segments[0].map = to_remove.map.clone()
+ self.inner.segments[0].map.clone_from(&to_remove.map)
}
}
} else if self.inner.segments.len() > max_playlist_length {

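clone_from here is behavior-identical to assigning a clone(), but it may reuse the destination's existing allocation, which is what clippy's assigning_clones lint suggests. A self-contained sketch of the pattern:

    // clone_from can reuse dst's allocation where possible;
    // `*dst = src.clone()` would drop and reallocate instead.
    fn adopt_if_empty(dst: &mut Option<String>, src: &Option<String>) {
        if dst.is_none() {
            dst.clone_from(src);
        }
    }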

@@ -8,7 +8,7 @@
use gio::prelude::*;
use gst::prelude::*;
- use gsthlssink3::HlsSink3PlaylistType;
+ use gsthlssink3::hlssink3::HlsSink3PlaylistType;
use once_cell::sync::Lazy;
use std::io::Write;
use std::sync::{mpsc, Arc, Mutex};


@@ -153,6 +153,14 @@ impl Default for State {
}
}
+ impl Drop for State {
+ fn drop(&mut self) {
+ if let Some(clock_wait) = self.clock_wait.take() {
+ clock_wait.unschedule();
+ }
+ }
+ }
pub struct OnvifMetadataParse {
srcpad: gst::Pad,
sinkpad: gst::Pad,
@@ -1595,14 +1603,18 @@ impl ElementImpl for OnvifMetadataParse {
) -> Result<gst::StateChangeSuccess, gst::StateChangeError> {
gst::trace!(CAT, imp: self, "Changing state {:?}", transition);
- if matches!(
- transition,
- gst::StateChange::PausedToReady | gst::StateChange::ReadyToPaused
- ) {
+ if matches!(transition, gst::StateChange::ReadyToPaused) {
let mut state = self.state.lock().unwrap();
*state = State::default();
}
- self.parent_change_state(transition)
+ let res = self.parent_change_state(transition)?;
+ if matches!(transition, gst::StateChange::PausedToReady) {
+ let mut state = self.state.lock().unwrap();
+ *state = State::default();
+ }
+ Ok(res)
}
}

net/quinn/Cargo.toml (new file, 56 lines)

@@ -0,0 +1,56 @@
[package]
name = "gst-plugin-quinn"
version.workspace = true
authors = ["Sanchayan Maity <sanchayan@asymptotic.io>"]
repository.workspace = true
license = "MPL-2.0"
edition.workspace = true
description = "GStreamer Plugin for QUIC"
rust-version.workspace = true
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
gst.workspace = true
gst-base.workspace = true
once_cell.workspace = true
tokio = { version = "1.36.0", default-features = false, features = ["time", "rt-multi-thread"] }
futures = "0.3.30"
quinn = { version = "0.11", default-features = true, features = ["ring"]}
rustls = { version = "0.23", default-features = false, features = ["ring", "std"]}
rustls-pemfile = "2"
rustls-pki-types = "1"
rcgen = "0.13"
bytes = "1.5.0"
thiserror = "1"
[dev-dependencies]
gst-check = { workspace = true, features = ["v1_20"] }
serial_test = "3"
[lib]
name = "gstquinn"
crate-type = ["cdylib", "rlib"]
path = "src/lib.rs"
[build-dependencies]
gst-plugin-version-helper.workspace = true
[features]
static = []
capi = []
doc = []
[package.metadata.capi]
min_version = "0.9.21"
[package.metadata.capi.header]
enabled = false
[package.metadata.capi.library]
install_subdir = "gstreamer-1.0"
versioning = false
import_library = false
[package.metadata.capi.pkg_config]
requires_private = "gstreamer-1.0, gstreamer-base-1.0, gobject-2.0, glib-2.0"

net/quinn/build.rs (new file, 3 lines)

@@ -0,0 +1,3 @@
fn main() {
gst_plugin_version_helper::info()
}

net/quinn/src/lib.rs (new file, 38 lines)

@@ -0,0 +1,38 @@
// Copyright (C) 2024, Asymptotic Inc.
// Author: Sanchayan Maity <sanchayan@asymptotic.io>
//
// This Source Code Form is subject to the terms of the Mozilla Public License, v2.0.
// If a copy of the MPL was not distributed with this file, You can obtain one at
// <https://mozilla.org/MPL/2.0/>.
//
// SPDX-License-Identifier: MPL-2.0
#![allow(clippy::non_send_fields_in_send_ty, unused_doc_comments)]
/**
* plugin-quinn:
*
* Since: plugins-rs-0.13.0
*/
use gst::glib;
mod quinnquicsink;
mod quinnquicsrc;
mod utils;
fn plugin_init(plugin: &gst::Plugin) -> Result<(), glib::BoolError> {
quinnquicsink::register(plugin)?;
quinnquicsrc::register(plugin)?;
Ok(())
}
gst::plugin_define!(
quinn,
env!("CARGO_PKG_DESCRIPTION"),
plugin_init,
concat!(env!("CARGO_PKG_VERSION"), "-", env!("COMMIT_ID")),
"MPL",
env!("CARGO_PKG_NAME"),
env!("CARGO_PKG_NAME"),
env!("CARGO_PKG_REPOSITORY"),
env!("BUILD_REL_DATE")
);

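For applications that link the plugin statically, gst::plugin_define! above also generates a plugin_register_static() entry point; a sketch of calling it, assuming the crate is linked as gstquinn:

    // Register the statically linked quinn plugin before building pipelines.
    fn init_quinn() -> Result<(), Box<dyn std::error::Error>> {
        gst::init()?;
        gstquinn::plugin_register_static()?;
        Ok(())
    }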

@@ -0,0 +1,592 @@
// Copyright (C) 2024, Asymptotic Inc.
// Author: Sanchayan Maity <sanchayan@asymptotic.io>
//
// This Source Code Form is subject to the terms of the Mozilla Public License, v2.0.
// If a copy of the MPL was not distributed with this file, You can obtain one at
// <https://mozilla.org/MPL/2.0/>.
//
// SPDX-License-Identifier: MPL-2.0
use crate::utils::{
client_endpoint, make_socket_addr, wait, WaitError, CONNECTION_CLOSE_CODE, CONNECTION_CLOSE_MSG,
};
use bytes::Bytes;
use futures::future;
use gst::{glib, prelude::*, subclass::prelude::*};
use gst_base::subclass::prelude::*;
use once_cell::sync::Lazy;
use quinn::{Connection, SendStream};
use std::path::PathBuf;
use std::sync::Mutex;
static DEFAULT_SERVER_NAME: &str = "localhost";
static DEFAULT_SERVER_ADDR: &str = "127.0.0.1";
static DEFAULT_SERVER_PORT: u16 = 5000;
static DEFAULT_CLIENT_ADDR: &str = "127.0.0.1";
static DEFAULT_CLIENT_PORT: u16 = 5001;
/*
* For QUIC transport parameters
* <https://datatracker.ietf.org/doc/html/rfc9000#section-7.4>
*
* A HTTP client might specify "http/1.1" and/or "h2" or "h3".
* Other well-known values are listed in the IANA registry at
* <https://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xhtml#alpn-protocol-ids>.
*/
const DEFAULT_ALPN: &str = "gst-quinn";
const DEFAULT_TIMEOUT: u32 = 15;
const DEFAULT_SECURE_CONNECTION: bool = true;
static CAT: Lazy<gst::DebugCategory> = Lazy::new(|| {
gst::DebugCategory::new(
"quinnquicsink",
gst::DebugColorFlags::empty(),
Some("Quinn QUIC Sink"),
)
});
struct Started {
connection: Connection,
stream: Option<SendStream>,
}
#[derive(Default)]
enum State {
#[default]
Stopped,
Started(Started),
}
#[derive(Clone, Debug)]
struct Settings {
client_address: String,
client_port: u16,
server_address: String,
server_port: u16,
server_name: String,
alpns: Vec<String>,
timeout: u32,
keep_alive_interval: u64,
secure_conn: bool,
use_datagram: bool,
certificate_file: Option<PathBuf>,
private_key_file: Option<PathBuf>,
}
impl Default for Settings {
fn default() -> Self {
Settings {
client_address: DEFAULT_CLIENT_ADDR.to_string(),
client_port: DEFAULT_CLIENT_PORT,
server_address: DEFAULT_SERVER_ADDR.to_string(),
server_port: DEFAULT_SERVER_PORT,
server_name: DEFAULT_SERVER_NAME.to_string(),
alpns: vec![DEFAULT_ALPN.to_string()],
timeout: DEFAULT_TIMEOUT,
keep_alive_interval: 0,
secure_conn: DEFAULT_SECURE_CONNECTION,
use_datagram: false,
certificate_file: None,
private_key_file: None,
}
}
}
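
// Illustrative sketch, not part of the original file: how the
// keep-alive-interval setting is meant to map onto quinn's TransportConfig
// (quinn 0.11, as pinned in Cargo.toml above).
fn keep_alive_transport(settings: &Settings) -> quinn::TransportConfig {
    let mut transport = quinn::TransportConfig::default();
    if settings.keep_alive_interval > 0 {
        // A value of 0 disables keep-alive, matching the property blurb.
        transport.keep_alive_interval(Some(std::time::Duration::from_millis(
            settings.keep_alive_interval,
        )));
    }
    transport
}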
pub struct QuinnQuicSink {
settings: Mutex<Settings>,
state: Mutex<State>,
canceller: Mutex<Option<future::AbortHandle>>,
}
impl Default for QuinnQuicSink {
fn default() -> Self {
Self {
settings: Mutex::new(Settings::default()),
state: Mutex::new(State::default()),
canceller: Mutex::new(None),
}
}
}
impl GstObjectImpl for QuinnQuicSink {}
impl ElementImpl for QuinnQuicSink {
fn metadata() -> Option<&'static gst::subclass::ElementMetadata> {
static ELEMENT_METADATA: Lazy<gst::subclass::ElementMetadata> = Lazy::new(|| {
gst::subclass::ElementMetadata::new(
"Quinn QUIC Sink",
"Source/Network/QUIC",
"Send data over the network via QUIC",
"Sanchayan Maity <sanchayan@asymptotic.io>",
)
});
Some(&*ELEMENT_METADATA)
}
fn pad_templates() -> &'static [gst::PadTemplate] {
static PAD_TEMPLATES: Lazy<Vec<gst::PadTemplate>> = Lazy::new(|| {
let sink_pad_template = gst::PadTemplate::new(
"sink",
gst::PadDirection::Sink,
gst::PadPresence::Always,
&gst::Caps::new_any(),
)
.unwrap();
vec![sink_pad_template]
});
PAD_TEMPLATES.as_ref()
}
fn change_state(
&self,
transition: gst::StateChange,
) -> Result<gst::StateChangeSuccess, gst::StateChangeError> {
if transition == gst::StateChange::NullToReady {
let settings = self.settings.lock().unwrap();
/*
* Fail the state change if a secure connection was requested but
* no certificate path was provided.
*/
if settings.secure_conn
&& (settings.certificate_file.is_none() || settings.private_key_file.is_none())
{
gst::error!(
CAT,
imp: self,
"Certificate or private key file not provided for secure connection"
);
return Err(gst::StateChangeError);
}
}
self.parent_change_state(transition)
}
}
impl ObjectImpl for QuinnQuicSink {
fn constructed(&self) {
self.parent_constructed();
}
fn properties() -> &'static [glib::ParamSpec] {
static PROPERTIES: Lazy<Vec<glib::ParamSpec>> = Lazy::new(|| {
vec![
glib::ParamSpecString::builder("server-name")
.nick("QUIC server name")
.blurb("Name of the QUIC server which is in server certificate")
.build(),
glib::ParamSpecString::builder("server-address")
.nick("QUIC server address")
.blurb("Address of the QUIC server to connect to e.g. 127.0.0.1")
.build(),
glib::ParamSpecUInt::builder("server-port")
.nick("QUIC server port")
.blurb("Port of the QUIC server to connect to e.g. 5000")
.maximum(65535)
.default_value(DEFAULT_SERVER_PORT as u32)
.readwrite()
.build(),
glib::ParamSpecString::builder("client-address")
.nick("QUIC client address")
.blurb("Address to be used by this QUIC client e.g. 127.0.0.1")
.build(),
glib::ParamSpecUInt::builder("client-port")
.nick("QUIC client port")
.blurb("Port to be used by this QUIC client e.g. 5001")
.maximum(65535)
.default_value(DEFAULT_CLIENT_PORT as u32)
.readwrite()
.build(),
gst::ParamSpecArray::builder("alpn-protocols")
.nick("QUIC ALPN values")
.blurb("QUIC connection Application-Layer Protocol Negotiation (ALPN) values")
.element_spec(&glib::ParamSpecString::builder("alpn-protocol").build())
.build(),
glib::ParamSpecUInt::builder("timeout")
.nick("Timeout")
.blurb("Value in seconds to timeout QUIC endpoint requests (0 = No timeout).")
.maximum(3600)
.default_value(DEFAULT_TIMEOUT)
.readwrite()
.build(),
glib::ParamSpecUInt64::builder("keep-alive-interval")
.nick("QUIC connection keep alive interval in ms")
.blurb("Keeps QUIC connection alive by periodically pinging the server. Value set in ms, 0 disables this feature")
.default_value(0)
.readwrite()
.build(),
glib::ParamSpecBoolean::builder("secure-connection")
.nick("Use secure connection")
.blurb("Use certificates for QUIC connection. False: Insecure connection, True: Secure connection.")
.default_value(DEFAULT_SECURE_CONNECTION)
.build(),
glib::ParamSpecString::builder("certificate-file")
.nick("Certificate file")
.blurb("Path to certificate chain in single file")
.build(),
glib::ParamSpecString::builder("private-key-file")
.nick("Private key file")
.blurb("Path to a PKCS8 or RSA private key file")
.build(),
glib::ParamSpecBoolean::builder("use-datagram")
.nick("Use datagram")
.blurb("Use datagram for lower latency, unreliable messaging")
.default_value(false)
.build(),
]
});
PROPERTIES.as_ref()
}
fn set_property(&self, _id: usize, value: &glib::Value, pspec: &glib::ParamSpec) {
let mut settings = self.settings.lock().unwrap();
match pspec.name() {
"server-name" => {
settings.server_name = value.get::<String>().expect("type checked upstream");
}
"server-address" => {
settings.server_address = value.get::<String>().expect("type checked upstream");
}
"server-port" => {
settings.server_port = value.get::<u32>().expect("type checked upstream") as u16;
}
"client-address" => {
settings.client_address = value.get::<String>().expect("type checked upstream");
}
"client-port" => {
settings.client_port = value.get::<u32>().expect("type checked upstream") as u16;
}
"alpn-protocols" => {
settings.alpns = value
.get::<gst::ArrayRef>()
.expect("type checked upstream")
.as_slice()
.iter()
.map(|alpn| {
alpn.get::<&str>()
.expect("type checked upstream")
.to_string()
})
.collect::<Vec<_>>();
}
"timeout" => {
settings.timeout = value.get().expect("type checked upstream");
}
"keep-alive-interval" => {
settings.keep_alive_interval = value.get().expect("type checked upstream");
}
"secure-connection" => {
settings.secure_conn = value.get().expect("type checked upstream");
}
"certificate-file" => {
let value: String = value.get().unwrap();
settings.certificate_file = Some(value.into());
}
"private-key-file" => {
let value: String = value.get().unwrap();
settings.private_key_file = Some(value.into());
}
"use-datagram" => {
settings.use_datagram = value.get().expect("type checked upstream");
}
_ => unimplemented!(),
}
}
fn property(&self, _id: usize, pspec: &glib::ParamSpec) -> glib::Value {
let settings = self.settings.lock().unwrap();
match pspec.name() {
"server-name" => settings.server_name.to_value(),
"server-address" => settings.server_address.to_string().to_value(),
"server-port" => {
let port = settings.server_port as u32;
port.to_value()
}
"client-address" => settings.client_address.to_string().to_value(),
"client-port" => {
let port = settings.client_port as u32;
port.to_value()
}
"alpn-protocols" => {
let alpns = settings.alpns.iter().map(|v| v.as_str());
gst::Array::new(alpns).to_value()
}
"timeout" => settings.timeout.to_value(),
"keep-alive-interval" => settings.keep_alive_interval.to_value(),
"secure-connection" => settings.secure_conn.to_value(),
"certificate-file" => {
let certfile = settings.certificate_file.as_ref();
certfile.and_then(|file| file.to_str()).to_value()
}
"private-key-file" => {
let privkey = settings.private_key_file.as_ref();
privkey.and_then(|file| file.to_str()).to_value()
}
"use-datagram" => settings.use_datagram.to_value(),
_ => unimplemented!(),
}
}
}
#[glib::object_subclass]
impl ObjectSubclass for QuinnQuicSink {
const NAME: &'static str = "GstQuinnQuicSink";
type Type = super::QuinnQuicSink;
type ParentType = gst_base::BaseSink;
}
impl BaseSinkImpl for QuinnQuicSink {
fn start(&self) -> Result<(), gst::ErrorMessage> {
let settings = self.settings.lock().unwrap();
let timeout = settings.timeout;
drop(settings);
let mut state = self.state.lock().unwrap();
if let State::Started { .. } = *state {
unreachable!("QuicSink is already started");
}
match wait(&self.canceller, self.establish_connection(), timeout) {
Ok(Ok((c, s))) => {
*state = State::Started(Started {
connection: c,
stream: s,
});
gst::info!(CAT, imp: self, "Started");
Ok(())
}
Ok(Err(e)) => match e {
WaitError::FutureAborted => {
gst::warning!(CAT, imp: self, "Connection aborted");
Ok(())
}
WaitError::FutureError(err) => {
gst::error!(CAT, imp: self, "Connection request failed: {}", err);
Err(gst::error_msg!(
gst::ResourceError::Failed,
["Connection request failed: {}", err]
))
}
},
Err(e) => {
gst::error!(CAT, imp: self, "Failed to establish a connection: {:?}", e);
Err(gst::error_msg!(
gst::ResourceError::Failed,
["Failed to establish a connection: {:?}", e]
))
}
}
}
fn stop(&self) -> Result<(), gst::ErrorMessage> {
let settings = self.settings.lock().unwrap();
let timeout = settings.timeout;
let use_datagram = settings.use_datagram;
drop(settings);
let mut state = self.state.lock().unwrap();
if let State::Started(ref mut state) = *state {
let connection = &state.connection;
let mut close_msg = CONNECTION_CLOSE_MSG.to_string();
if !use_datagram {
let send = &mut state.stream.as_mut().unwrap();
// Shut down the stream gracefully.
// send.finish() may fail, but the error is harmless.
let _ = send.finish();
match wait(&self.canceller, send.stopped(), timeout) {
Ok(r) => {
if let Err(e) = r {
close_msg = format!("Stream finish request error: {}", e);
gst::error!(CAT, imp: self, "{}", close_msg);
}
}
Err(e) => match e {
WaitError::FutureAborted => {
close_msg = "Stream finish request aborted".to_string();
gst::warning!(CAT, imp: self, "{}", close_msg);
}
WaitError::FutureError(e) => {
close_msg = format!("Stream finish request future error: {}", e);
gst::error!(CAT, imp: self, "{}", close_msg);
}
},
};
}
connection.close(CONNECTION_CLOSE_CODE.into(), close_msg.as_bytes());
}
*state = State::Stopped;
gst::info!(CAT, imp: self, "Stopped");
Ok(())
}
fn render(&self, buffer: &gst::Buffer) -> Result<gst::FlowSuccess, gst::FlowError> {
if let State::Stopped = *self.state.lock().unwrap() {
gst::element_imp_error!(self, gst::CoreError::Failed, ["Not started yet"]);
return Err(gst::FlowError::Error);
}
gst::trace!(CAT, imp: self, "Rendering {:?}", buffer);
let map = buffer.map_readable().map_err(|_| {
gst::element_imp_error!(self, gst::CoreError::Failed, ["Failed to map buffer"]);
gst::FlowError::Error
})?;
match self.send_buffer(&map) {
Ok(_) => Ok(gst::FlowSuccess::Ok),
Err(err) => match err {
Some(error_message) => {
gst::error!(CAT, imp: self, "Data sending failed: {}", error_message);
self.post_error_message(error_message);
Err(gst::FlowError::Error)
}
_ => {
gst::info!(CAT, imp: self, "Send interrupted. Flushing...");
Err(gst::FlowError::Flushing)
}
},
}
}
}
impl QuinnQuicSink {
fn send_buffer(&self, src: &[u8]) -> Result<(), Option<gst::ErrorMessage>> {
let settings = self.settings.lock().unwrap();
let timeout = settings.timeout;
let use_datagram = settings.use_datagram;
drop(settings);
let mut state = self.state.lock().unwrap();
let (conn, stream) = match *state {
State::Started(Started {
ref connection,
ref mut stream,
}) => (connection, stream),
State::Stopped => {
return Err(Some(gst::error_msg!(
gst::LibraryError::Failed,
["Cannot send before start()"]
)));
}
};
if use_datagram {
match conn.send_datagram(Bytes::copy_from_slice(src)) {
Ok(_) => Ok(()),
Err(e) => Err(Some(gst::error_msg!(
gst::ResourceError::Failed,
["Sending data failed: {}", e]
))),
}
} else {
let send = &mut stream.as_mut().unwrap();
match wait(&self.canceller, send.write_all(src), timeout) {
Ok(Ok(_)) => Ok(()),
Ok(Err(e)) => Err(Some(gst::error_msg!(
gst::ResourceError::Failed,
["Sending data failed: {}", e]
))),
Err(e) => match e {
WaitError::FutureAborted => {
gst::warning!(CAT, imp: self, "Sending aborted");
Ok(())
}
WaitError::FutureError(e) => Err(Some(gst::error_msg!(
gst::ResourceError::Failed,
["Sending data failed: {}", e]
))),
},
}
}
}
async fn establish_connection(&self) -> Result<(Connection, Option<SendStream>), WaitError> {
let client_addr;
let server_addr;
let server_name;
let alpns;
let use_datagram;
let keep_alive_interval;
let secure_conn;
let cert_file;
let private_key_file;
{
let settings = self.settings.lock().unwrap();
client_addr = make_socket_addr(
format!("{}:{}", settings.client_address, settings.client_port).as_str(),
)?;
server_addr = make_socket_addr(
format!("{}:{}", settings.server_address, settings.server_port).as_str(),
)?;
server_name = settings.server_name.clone();
alpns = settings.alpns.clone();
use_datagram = settings.use_datagram;
keep_alive_interval = settings.keep_alive_interval;
secure_conn = settings.secure_conn;
cert_file = settings.certificate_file.clone();
private_key_file = settings.private_key_file.clone();
}
let endpoint = client_endpoint(
client_addr,
secure_conn,
alpns,
cert_file,
private_key_file,
keep_alive_interval,
)
.map_err(|err| {
WaitError::FutureError(gst::error_msg!(
gst::ResourceError::Failed,
["Failed to configure endpoint: {}", err]
))
})?;
let connection = endpoint
.connect(server_addr, &server_name)
.unwrap()
.await
.map_err(|err| {
WaitError::FutureError(gst::error_msg!(
gst::ResourceError::Failed,
["Connection error: {}", err]
))
})?;
let stream = if !use_datagram {
let res = connection.open_uni().await.map_err(|err| {
WaitError::FutureError(gst::error_msg!(
gst::ResourceError::Failed,
["Failed to open stream: {}", err]
))
})?;
Some(res)
} else {
None
};
Ok((connection, stream))
}
}
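
For reference, a sketch (placeholder addresses and certificate paths) of setting the properties defined above from application code; the ALPN list is passed as a `gst::Array`, matching the `alpn-protocols` pspec:

```rust
use gst::prelude::*;

// Placeholder values throughout; adjust to your deployment.
fn make_sink() -> Result<gst::Element, gst::glib::BoolError> {
    gst::ElementFactory::make("quinnquicsink")
        .property("server-name", "quic.net") // must match the server certificate
        .property("server-address", "127.0.0.1")
        .property("server-port", 6000u32)
        .property("certificate-file", "certificates/fullchain.pem")
        .property("private-key-file", "certificates/privkey.pem")
        .property("keep-alive-interval", 2000u64) // ping every 2 s
        .property("alpn-protocols", gst::Array::new(["gst-quinn"]))
        .build()
}
```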

View file

@ -0,0 +1,39 @@
// Copyright (C) 2024, Asymptotic Inc.
// Author: Sanchayan Maity <sanchayan@asymptotic.io>
//
// This Source Code Form is subject to the terms of the Mozilla Public License, v2.0.
// If a copy of the MPL was not distributed with this file, You can obtain one at
// <https://mozilla.org/MPL/2.0/>.
//
// SPDX-License-Identifier: MPL-2.0
/**
* element-quinnquicsink:
* @short-description: Send data over the network via QUIC
*
* ## Example sender pipeline
* ```bash
* gst-launch-1.0 -v -e audiotestsrc num-buffers=512 ! \
* audio/x-raw,format=S16LE,rate=48000,channels=2,layout=interleaved ! opusenc ! \
* quinnquicsink server-name="quic.net" client-address="127.0.0.1" client-port=6001 \
* server-address="127.0.0.1" server-port=6000 certificate-file="certificates/fullchain.pem" \
* private-key-file="certificates/privkey.pem"
* ```
*/
use gst::glib;
use gst::prelude::*;
pub mod imp;
glib::wrapper! {
pub struct QuinnQuicSink(ObjectSubclass<imp::QuinnQuicSink>) @extends gst_base::BaseSink, gst::Element, gst::Object;
}
pub fn register(plugin: &gst::Plugin) -> Result<(), glib::BoolError> {
gst::Element::register(
Some(plugin),
"quinnquicsink",
gst::Rank::MARGINAL,
QuinnQuicSink::static_type(),
)
}
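
A rough Rust equivalent of the sender pipeline above, as a sketch using the same placeholder certificate paths; the parse-launch syntax is identical to the `gst-launch-1.0` invocation:

```rust
use gst::prelude::*;

fn run_sender() -> Result<(), Box<dyn std::error::Error>> {
    gst::init()?;
    let pipeline = gst::parse_launch(
        "audiotestsrc num-buffers=512 ! \
         audio/x-raw,format=S16LE,rate=48000,channels=2,layout=interleaved ! \
         opusenc ! quinnquicsink server-name=quic.net \
         server-address=127.0.0.1 server-port=6000 \
         certificate-file=certificates/fullchain.pem \
         private-key-file=certificates/privkey.pem",
    )?;
    pipeline.set_state(gst::State::Playing)?;
    Ok(())
}
```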

View file

@ -0,0 +1,629 @@
// Copyright (C) 2024, Asymptotic Inc.
// Author: Sanchayan Maity <sanchayan@asymptotic.io>
//
// This Source Code Form is subject to the terms of the Mozilla Public License, v2.0.
// If a copy of the MPL was not distributed with this file, You can obtain one at
// <https://mozilla.org/MPL/2.0/>.
//
// SPDX-License-Identifier: MPL-2.0
use crate::utils::{
make_socket_addr, server_endpoint, wait, WaitError, CONNECTION_CLOSE_CODE, CONNECTION_CLOSE_MSG,
};
use bytes::Bytes;
use futures::future;
use gst::{glib, prelude::*, subclass::prelude::*};
use gst_base::prelude::*;
use gst_base::subclass::base_src::CreateSuccess;
use gst_base::subclass::prelude::*;
use once_cell::sync::Lazy;
use quinn::{Connection, ConnectionError, ReadError, RecvStream};
use std::path::PathBuf;
use std::sync::Mutex;
static DEFAULT_SERVER_NAME: &str = "localhost";
static DEFAULT_SERVER_ADDR: &str = "127.0.0.1";
static DEFAULT_SERVER_PORT: u16 = 5000;
/*
* For QUIC transport parameters
* <https://datatracker.ietf.org/doc/html/rfc9000#section-7.4>
*
* An HTTP client might specify "http/1.1", "h2", and/or "h3".
* Other well-known values are listed in the IANA registry at
* <https://www.iana.org/assignments/tls-extensiontype-values/tls-extensiontype-values.xhtml#alpn-protocol-ids>.
*/
const DEFAULT_ALPN: &str = "gst-quinn";
const DEFAULT_TIMEOUT: u32 = 15;
const DEFAULT_SECURE_CONNECTION: bool = true;
static CAT: Lazy<gst::DebugCategory> = Lazy::new(|| {
gst::DebugCategory::new(
"quinnquicsrc",
gst::DebugColorFlags::empty(),
Some("Quinn QUIC Source"),
)
});
struct Started {
connection: Connection,
stream: Option<RecvStream>,
}
#[derive(Default)]
enum State {
#[default]
Stopped,
Started(Started),
}
#[derive(Clone, Debug)]
struct Settings {
server_address: String,
server_port: u16,
server_name: String,
alpns: Vec<String>,
timeout: u32,
secure_conn: bool,
caps: gst::Caps,
use_datagram: bool,
certificate_file: Option<PathBuf>,
private_key_file: Option<PathBuf>,
}
impl Default for Settings {
fn default() -> Self {
Settings {
server_address: DEFAULT_SERVER_ADDR.to_string(),
server_port: DEFAULT_SERVER_PORT,
server_name: DEFAULT_SERVER_NAME.to_string(),
alpns: vec![DEFAULT_ALPN.to_string()],
timeout: DEFAULT_TIMEOUT,
secure_conn: DEFAULT_SECURE_CONNECTION,
caps: gst::Caps::new_any(),
use_datagram: false,
certificate_file: None,
private_key_file: None,
}
}
}
pub struct QuinnQuicSrc {
settings: Mutex<Settings>,
state: Mutex<State>,
canceller: Mutex<Option<future::AbortHandle>>,
}
impl Default for QuinnQuicSrc {
fn default() -> Self {
Self {
settings: Mutex::new(Settings::default()),
state: Mutex::new(State::default()),
canceller: Mutex::new(None),
}
}
}
impl GstObjectImpl for QuinnQuicSrc {}
impl ElementImpl for QuinnQuicSrc {
fn metadata() -> Option<&'static gst::subclass::ElementMetadata> {
static ELEMENT_METADATA: Lazy<gst::subclass::ElementMetadata> = Lazy::new(|| {
gst::subclass::ElementMetadata::new(
"Quinn QUIC Source",
"Source/Network/QUIC",
"Receive data over the network via QUIC",
"Sanchayan Maity <sanchayan@asymptotic.io>",
)
});
Some(&*ELEMENT_METADATA)
}
fn pad_templates() -> &'static [gst::PadTemplate] {
static PAD_TEMPLATES: Lazy<Vec<gst::PadTemplate>> = Lazy::new(|| {
let src_pad_template = gst::PadTemplate::new(
"src",
gst::PadDirection::Src,
gst::PadPresence::Always,
&gst::Caps::new_any(),
)
.unwrap();
vec![src_pad_template]
});
PAD_TEMPLATES.as_ref()
}
fn change_state(
&self,
transition: gst::StateChange,
) -> Result<gst::StateChangeSuccess, gst::StateChangeError> {
if transition == gst::StateChange::NullToReady {
let settings = self.settings.lock().unwrap();
/*
* Fail the state change if a secure connection was requested but
* no certificate path was provided.
*/
if settings.secure_conn
&& (settings.certificate_file.is_none() || settings.private_key_file.is_none())
{
gst::error!(
CAT,
imp: self,
"Certificate or private key file not provided for secure connection"
);
return Err(gst::StateChangeError);
}
}
self.parent_change_state(transition)
}
}
impl ObjectImpl for QuinnQuicSrc {
fn constructed(&self) {
self.parent_constructed();
self.obj().set_format(gst::Format::Bytes);
}
fn properties() -> &'static [glib::ParamSpec] {
static PROPERTIES: Lazy<Vec<glib::ParamSpec>> = Lazy::new(|| {
vec![
glib::ParamSpecString::builder("server-name")
.nick("QUIC server name")
.blurb("Name of the QUIC server which is in server certificate")
.build(),
glib::ParamSpecString::builder("server-address")
.nick("QUIC server address")
.blurb("Address of the QUIC server e.g. 127.0.0.1")
.build(),
glib::ParamSpecUInt::builder("server-port")
.nick("QUIC server port")
.blurb("Port of the QUIC server e.g. 5000")
.maximum(65535)
.default_value(DEFAULT_SERVER_PORT as u32)
.readwrite()
.build(),
gst::ParamSpecArray::builder("alpn-protocols")
.nick("QUIC ALPN values")
.blurb("QUIC connection Application-Layer Protocol Negotiation (ALPN) values")
.element_spec(&glib::ParamSpecString::builder("alpn-protocol").build())
.build(),
glib::ParamSpecUInt::builder("timeout")
.nick("Timeout")
.blurb("Value in seconds to timeout QUIC endpoint requests (0 = No timeout).")
.maximum(3600)
.default_value(DEFAULT_TIMEOUT)
.readwrite()
.build(),
glib::ParamSpecBoolean::builder("secure-connection")
.nick("Use secure connection")
.blurb("Use certificates for QUIC connection. False: Insecure connection, True: Secure connection.")
.default_value(DEFAULT_SECURE_CONNECTION)
.build(),
glib::ParamSpecString::builder("certificate-file")
.nick("Certificate file")
.blurb("Path to certificate chain in single file")
.build(),
glib::ParamSpecString::builder("private-key-file")
.nick("Private key file")
.blurb("Path to a PKCS8 or RSA private key file")
.build(),
glib::ParamSpecBoxed::builder::<gst::Caps>("caps")
.nick("caps")
.blurb("The caps of the source pad")
.build(),
glib::ParamSpecBoolean::builder("use-datagram")
.nick("Use datagram")
.blurb("Use datagram for lower latency, unreliable messaging")
.default_value(false)
.build(),
]
});
PROPERTIES.as_ref()
}
fn set_property(&self, _id: usize, value: &glib::Value, pspec: &glib::ParamSpec) {
let mut settings = self.settings.lock().unwrap();
match pspec.name() {
"server-name" => {
settings.server_name = value.get::<String>().expect("type checked upstream");
}
"server-address" => {
settings.server_address = value.get::<String>().expect("type checked upstream");
}
"server-port" => {
settings.server_port = value.get::<u32>().expect("type checked upstream") as u16;
}
"alpn-protocols" => {
settings.alpns = value
.get::<gst::ArrayRef>()
.expect("type checked upstream")
.as_slice()
.iter()
.map(|alpn| {
alpn.get::<&str>()
.expect("type checked upstream")
.to_string()
})
.collect::<Vec<String>>()
}
"caps" => {
settings.caps = value
.get::<Option<gst::Caps>>()
.expect("type checked upstream")
.unwrap_or_else(gst::Caps::new_any);
let srcpad = self.obj().static_pad("src").expect("source pad expected");
srcpad.mark_reconfigure();
}
"timeout" => {
settings.timeout = value.get().expect("type checked upstream");
}
"secure-connection" => {
settings.secure_conn = value.get().expect("type checked upstream");
}
"certificate-file" => {
let value: String = value.get().unwrap();
settings.certificate_file = Some(value.into());
}
"private-key-file" => {
let value: String = value.get().unwrap();
settings.private_key_file = Some(value.into());
}
"use-datagram" => {
settings.use_datagram = value.get().expect("type checked upstream");
}
_ => unimplemented!(),
}
}
fn property(&self, _id: usize, pspec: &glib::ParamSpec) -> glib::Value {
let settings = self.settings.lock().unwrap();
match pspec.name() {
"server-name" => settings.server_name.to_value(),
"server-address" => settings.server_address.to_string().to_value(),
"server-port" => {
let port = settings.server_port as u32;
port.to_value()
}
"alpn-protocols" => {
let alpns = settings.alpns.iter().map(|v| v.as_str());
gst::Array::new(alpns).to_value()
}
"caps" => settings.caps.to_value(),
"timeout" => settings.timeout.to_value(),
"secure-connection" => settings.secure_conn.to_value(),
"certificate-file" => {
let certfile = settings.certificate_file.as_ref();
certfile.and_then(|file| file.to_str()).to_value()
}
"private-key-file" => {
let privkey = settings.private_key_file.as_ref();
privkey.and_then(|file| file.to_str()).to_value()
}
"use-datagram" => settings.use_datagram.to_value(),
_ => unimplemented!(),
}
}
}
#[glib::object_subclass]
impl ObjectSubclass for QuinnQuicSrc {
const NAME: &'static str = "GstQuinnQuicSrc";
type Type = super::QuinnQuicSrc;
type ParentType = gst_base::BaseSrc;
}
impl BaseSrcImpl for QuinnQuicSrc {
fn is_seekable(&self) -> bool {
false
}
fn start(&self) -> Result<(), gst::ErrorMessage> {
let settings = self.settings.lock().unwrap();
let timeout = settings.timeout;
drop(settings);
let mut state = self.state.lock().unwrap();
if let State::Started { .. } = *state {
unreachable!("QuicSrc already started");
}
match wait(&self.canceller, self.wait_for_connection(), timeout) {
Ok(Ok((c, s))) => {
*state = State::Started(Started {
connection: c,
stream: s,
});
gst::info!(CAT, imp: self, "Started");
Ok(())
}
Ok(Err(e)) | Err(e) => match e {
WaitError::FutureAborted => {
gst::warning!(CAT, imp: self, "Connection aborted");
Ok(())
}
WaitError::FutureError(err) => {
gst::error!(CAT, imp: self, "Connection request failed: {}", err);
Err(gst::error_msg!(
gst::ResourceError::Failed,
["Connection request failed: {}", err]
))
}
},
}
}
fn stop(&self) -> Result<(), gst::ErrorMessage> {
self.cancel();
let mut state = self.state.lock().unwrap();
if let State::Started(ref mut state) = *state {
let connection = &state.connection;
connection.close(
CONNECTION_CLOSE_CODE.into(),
CONNECTION_CLOSE_MSG.as_bytes(),
);
}
*state = State::Stopped;
Ok(())
}
fn query(&self, query: &mut gst::QueryRef) -> bool {
if let gst::QueryViewMut::Scheduling(q) = query.view_mut() {
q.set(
gst::SchedulingFlags::SEQUENTIAL | gst::SchedulingFlags::BANDWIDTH_LIMITED,
1,
-1,
0,
);
q.add_scheduling_modes(&[gst::PadMode::Pull, gst::PadMode::Push]);
return true;
}
BaseSrcImplExt::parent_query(self, query)
}
fn create(
&self,
offset: u64,
buffer: Option<&mut gst::BufferRef>,
length: u32,
) -> Result<CreateSuccess, gst::FlowError> {
let data = self.get(offset, u64::from(length));
match data {
Ok(bytes) => {
if bytes.is_empty() {
gst::debug!(CAT, imp: self, "End of stream");
return Err(gst::FlowError::Eos);
}
if let Some(buffer) = buffer {
if let Err(copied_bytes) = buffer.copy_from_slice(0, bytes.as_ref()) {
buffer.set_size(copied_bytes);
}
Ok(CreateSuccess::FilledBuffer)
} else {
Ok(CreateSuccess::NewBuffer(gst::Buffer::from_slice(bytes)))
}
}
Err(None) => Err(gst::FlowError::Flushing),
Err(Some(err)) => {
gst::error!(CAT, imp: self, "Could not GET: {}", err);
Err(gst::FlowError::Error)
}
}
}
fn unlock(&self) -> Result<(), gst::ErrorMessage> {
self.cancel();
Ok(())
}
fn caps(&self, filter: Option<&gst::Caps>) -> Option<gst::Caps> {
let settings = self.settings.lock().unwrap();
let mut tmp_caps = settings.caps.clone();
gst::debug!(CAT, imp: self, "Advertising our own caps: {:?}", &tmp_caps);
if let Some(filter_caps) = filter {
gst::debug!(
CAT,
imp: self,
"Intersecting with filter caps: {:?}",
&filter_caps
);
tmp_caps = filter_caps.intersect_with_mode(&tmp_caps, gst::CapsIntersectMode::First);
};
gst::debug!(CAT, imp: self, "Returning caps: {:?}", &tmp_caps);
Some(tmp_caps)
}
}
impl QuinnQuicSrc {
fn get(&self, _offset: u64, length: u64) -> Result<Bytes, Option<gst::ErrorMessage>> {
let settings = self.settings.lock().unwrap();
let timeout = settings.timeout;
let use_datagram = settings.use_datagram;
drop(settings);
let mut state = self.state.lock().unwrap();
let (conn, stream) = match *state {
State::Started(Started {
ref connection,
ref mut stream,
}) => (connection, stream),
State::Stopped => {
return Err(Some(gst::error_msg!(
gst::LibraryError::Failed,
["Cannot get data before start"]
)));
}
};
let future = async {
if use_datagram {
match conn.read_datagram().await {
Ok(bytes) => Ok(bytes),
Err(err) => match err {
ConnectionError::ApplicationClosed(ac) => {
gst::info!(CAT, imp: self, "Application closed connection, {}", ac);
Ok(Bytes::new())
}
ConnectionError::ConnectionClosed(cc) => {
gst::info!(CAT, imp: self, "Transport closed connection, {}", cc);
Ok(Bytes::new())
}
_ => Err(WaitError::FutureError(gst::error_msg!(
gst::ResourceError::Failed,
["Datagram read error: {}", err]
))),
},
}
} else {
let recv = stream.as_mut().unwrap();
match recv.read_chunk(length as usize, true).await {
Ok(Some(chunk)) => Ok(chunk.bytes),
Ok(None) => Ok(Bytes::new()),
Err(err) => match err {
ReadError::ConnectionLost(conn_err) => match conn_err {
ConnectionError::ConnectionClosed(cc) => {
gst::info!(CAT, imp: self, "Transport closed connection, {}", cc);
Ok(Bytes::new())
}
ConnectionError::ApplicationClosed(ac) => {
gst::info!(CAT, imp: self, "Application closed connection, {}", ac);
Ok(Bytes::new())
}
_ => Err(WaitError::FutureError(gst::error_msg!(
gst::ResourceError::Failed,
["Stream read error: {}", conn_err]
))),
},
ReadError::ClosedStream => {
gst::info!(CAT, imp: self, "Stream closed");
Ok(Bytes::new())
}
_ => Err(WaitError::FutureError(gst::error_msg!(
gst::ResourceError::Failed,
["Stream read error: {}", err]
))),
},
}
}
};
match wait(&self.canceller, future, timeout) {
Ok(Ok(bytes)) => Ok(bytes),
Ok(Err(e)) | Err(e) => match e {
WaitError::FutureAborted => {
gst::warning!(CAT, imp: self, "Read from stream request aborted");
Err(None)
}
WaitError::FutureError(e) => {
gst::error!(CAT, imp: self, "Failed to read from stream: {}", e);
Err(Some(e))
}
},
}
}
fn cancel(&self) {
let mut canceller = self.canceller.lock().unwrap();
if let Some(c) = canceller.take() {
c.abort()
};
}
async fn wait_for_connection(&self) -> Result<(Connection, Option<RecvStream>), WaitError> {
let server_addr;
let server_name;
let alpns;
let use_datagram;
let secure_conn;
let cert_file;
let private_key_file;
{
let settings = self.settings.lock().unwrap();
server_addr = make_socket_addr(
format!("{}:{}", settings.server_address, settings.server_port).as_str(),
)?;
server_name = settings.server_name.clone();
alpns = settings.alpns.clone();
use_datagram = settings.use_datagram;
secure_conn = settings.secure_conn;
cert_file = settings.certificate_file.clone();
private_key_file = settings.private_key_file.clone();
}
let endpoint = server_endpoint(
server_addr,
&server_name,
secure_conn,
alpns,
cert_file,
private_key_file,
)
.map_err(|err| {
WaitError::FutureError(gst::error_msg!(
gst::ResourceError::Failed,
["Failed to configure endpoint: {}", err]
))
})?;
let incoming_conn = endpoint.accept().await.unwrap();
let connection = incoming_conn.await.map_err(|err| {
WaitError::FutureError(gst::error_msg!(
gst::ResourceError::Failed,
["Connection error: {}", err]
))
})?;
let stream = if !use_datagram {
let res = connection.accept_uni().await.map_err(|err| {
WaitError::FutureError(gst::error_msg!(
gst::ResourceError::Failed,
["Failed to open stream: {}", err]
))
})?;
Some(res)
} else {
None
};
gst::info!(
CAT,
imp: self,
"Remote connection accepted: {}",
connection.remote_address()
);
Ok((connection, stream))
}
}
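
Note the EOS convention above: `get()` maps a closed stream or connection to empty `Bytes`, which `create()` turns into `gst::FlowError::Eos`. A configuration sketch (placeholder values) that pins the source caps so negotiation does not have to rely on ANY caps:

```rust
use gst::prelude::*;

// Placeholder values; secure-connection=false skips certificate setup
// and is only appropriate for local testing.
fn make_src() -> Result<gst::Element, gst::glib::BoolError> {
    gst::ElementFactory::make("quinnquicsrc")
        .property("server-address", "127.0.0.1")
        .property("server-port", 6000u32)
        .property("secure-connection", false)
        .property("caps", gst::Caps::builder("audio/x-opus").build())
        .build()
}
```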

View file

@ -0,0 +1,39 @@
// Copyright (C) 2024, Asymptotic Inc.
// Author: Sanchayan Maity <sanchayan@asymptotic.io>
//
// This Source Code Form is subject to the terms of the Mozilla Public License, v2.0.
// If a copy of the MPL was not distributed with this file, You can obtain one at
// <https://mozilla.org/MPL/2.0/>.
//
// SPDX-License-Identifier: MPL-2.0
/**
* element-quinnquicsrc:
* @short-description: Receive data over the network via QUIC
*
* ## Example receiver pipeline
* ```bash
* gst-launch-1.0 -v -e quinnquicsrc caps=audio/x-opus server-name="quic.net" \
* certificate-file="certificates/fullchain.pem" private-key-file="certificates/privkey.pem" \
* server-address="127.0.0.1" server-port=6000 ! opusparse ! opusdec ! \
* audio/x-raw,format=S16LE,rate=48000,channels=2,layout=interleaved ! \
* audioconvert ! autoaudiosink
* ```
*/
use gst::glib;
use gst::prelude::*;
mod imp;
glib::wrapper! {
pub struct QuinnQuicSrc(ObjectSubclass<imp::QuinnQuicSrc>) @extends gst_base::BaseSrc, gst::Element, gst::Object;
}
pub fn register(plugin: &gst::Plugin) -> Result<(), glib::BoolError> {
gst::Element::register(
Some(plugin),
"quinnquicsrc",
gst::Rank::MARGINAL,
QuinnQuicSrc::static_type(),
)
}
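
A rough Rust equivalent of the receiver pipeline above (a sketch with the same placeholder paths), including a blocking wait for EOS or an error on the bus:

```rust
use gst::prelude::*;

fn run_receiver() -> Result<(), Box<dyn std::error::Error>> {
    gst::init()?;
    let pipeline = gst::parse_launch(
        "quinnquicsrc caps=audio/x-opus server-name=quic.net \
         server-address=127.0.0.1 server-port=6000 \
         certificate-file=certificates/fullchain.pem \
         private-key-file=certificates/privkey.pem ! \
         opusparse ! opusdec ! audioconvert ! autoaudiosink",
    )?;
    pipeline.set_state(gst::State::Playing)?;
    let bus = pipeline.bus().expect("pipeline has a bus");
    // Block until EOS or an error, then shut down.
    let _ = bus.timed_pop_filtered(
        gst::ClockTime::NONE,
        &[gst::MessageType::Eos, gst::MessageType::Error],
    );
    pipeline.set_state(gst::State::Null)?;
    Ok(())
}
```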

net/quinn/src/utils.rs Normal file
View file

@ -0,0 +1,367 @@
// Copyright (C) 2024, Asymptotic Inc.
// Author: Sanchayan Maity <sanchayan@asymptotic.io>
//
// This Source Code Form is subject to the terms of the Mozilla Public License, v2.0.
// If a copy of the MPL was not distributed with this file, You can obtain one at
// <https://mozilla.org/MPL/2.0/>.
//
// SPDX-License-Identifier: MPL-2.0
use futures::future;
use futures::prelude::*;
use gst::ErrorMessage;
use once_cell::sync::Lazy;
use quinn::{
crypto::rustls::QuicClientConfig, crypto::rustls::QuicServerConfig, ClientConfig, Endpoint,
ServerConfig, TransportConfig,
};
use std::error::Error;
use std::fs::File;
use std::io::BufReader;
use std::net::SocketAddr;
use std::path::PathBuf;
use std::sync::{Arc, Mutex};
use std::time::Duration;
use thiserror::Error;
use tokio::runtime;
pub const CONNECTION_CLOSE_CODE: u32 = 0;
pub const CONNECTION_CLOSE_MSG: &str = "Stopped";
#[derive(Error, Debug)]
pub enum WaitError {
#[error("Future aborted")]
FutureAborted,
#[error("Future returned an error: {0}")]
FutureError(ErrorMessage),
}
pub static RUNTIME: Lazy<runtime::Runtime> = Lazy::new(|| {
runtime::Builder::new_multi_thread()
.enable_all()
.worker_threads(1)
.thread_name("gst-quic-runtime")
.build()
.unwrap()
});
pub fn wait<F, T>(
canceller: &Mutex<Option<future::AbortHandle>>,
future: F,
timeout: u32,
) -> Result<T, WaitError>
where
F: Send + Future<Output = T>,
T: Send + 'static,
{
let mut canceller_guard = canceller.lock().unwrap();
let (abort_handle, abort_registration) = future::AbortHandle::new_pair();
if canceller_guard.is_some() {
return Err(WaitError::FutureError(gst::error_msg!(
gst::ResourceError::Failed,
["Old Canceller should not exist"]
)));
}
canceller_guard.replace(abort_handle);
drop(canceller_guard);
let future = async {
if timeout == 0 {
Ok(future.await)
} else {
let res = tokio::time::timeout(Duration::from_secs(timeout.into()), future).await;
match res {
Ok(r) => Ok(r),
Err(e) => Err(gst::error_msg!(
gst::ResourceError::Read,
["Request timeout, elapsed: {}", e.to_string()]
)),
}
}
};
let future = async {
match future::Abortable::new(future, abort_registration).await {
Ok(Ok(res)) => Ok(res),
Ok(Err(err)) => Err(WaitError::FutureError(gst::error_msg!(
gst::ResourceError::Failed,
["Future resolved with an error {:?}", err]
))),
Err(future::Aborted) => Err(WaitError::FutureAborted),
}
};
let res = RUNTIME.block_on(future);
canceller_guard = canceller.lock().unwrap();
*canceller_guard = None;
res
}
pub fn make_socket_addr(addr: &str) -> Result<SocketAddr, WaitError> {
match addr.parse::<SocketAddr>() {
Ok(address) => Ok(address),
Err(e) => Err(WaitError::FutureError(gst::error_msg!(
gst::ResourceError::Failed,
["Invalid address: {}", e]
))),
}
}
/*
* Following functions are taken from Quinn documentation/repository
*/
#[derive(Debug)]
struct SkipServerVerification;
impl SkipServerVerification {
pub fn new() -> Arc<Self> {
Arc::new(Self)
}
}
impl rustls::client::danger::ServerCertVerifier for SkipServerVerification {
fn verify_server_cert(
&self,
_end_entity: &rustls_pki_types::CertificateDer,
_intermediates: &[rustls_pki_types::CertificateDer],
_server_name: &rustls::pki_types::ServerName,
_ocsp_response: &[u8],
_now: rustls::pki_types::UnixTime,
) -> Result<rustls::client::danger::ServerCertVerified, rustls::Error> {
Ok(rustls::client::danger::ServerCertVerified::assertion())
}
fn verify_tls12_signature(
&self,
_: &[u8],
_: &rustls_pki_types::CertificateDer<'_>,
_: &rustls::DigitallySignedStruct,
) -> Result<rustls::client::danger::HandshakeSignatureValid, rustls::Error> {
Ok(rustls::client::danger::HandshakeSignatureValid::assertion())
}
fn verify_tls13_signature(
&self,
_: &[u8],
_: &rustls_pki_types::CertificateDer<'_>,
_: &rustls::DigitallySignedStruct,
) -> Result<rustls::client::danger::HandshakeSignatureValid, rustls::Error> {
Ok(rustls::client::danger::HandshakeSignatureValid::assertion())
}
fn supported_verify_schemes(&self) -> Vec<rustls::SignatureScheme> {
vec![
rustls::SignatureScheme::RSA_PKCS1_SHA1,
rustls::SignatureScheme::ECDSA_SHA1_Legacy,
rustls::SignatureScheme::RSA_PKCS1_SHA256,
rustls::SignatureScheme::ECDSA_NISTP256_SHA256,
rustls::SignatureScheme::RSA_PKCS1_SHA384,
rustls::SignatureScheme::ECDSA_NISTP384_SHA384,
rustls::SignatureScheme::RSA_PKCS1_SHA512,
rustls::SignatureScheme::ECDSA_NISTP521_SHA512,
rustls::SignatureScheme::RSA_PSS_SHA256,
rustls::SignatureScheme::RSA_PSS_SHA384,
rustls::SignatureScheme::RSA_PSS_SHA512,
rustls::SignatureScheme::ED25519,
rustls::SignatureScheme::ED448,
]
}
}
fn configure_client(
secure_conn: bool,
certificate_file: Option<PathBuf>,
private_key_file: Option<PathBuf>,
alpns: Vec<String>,
keep_alive_interval_ms: u64,
) -> Result<ClientConfig, Box<dyn Error>> {
let mut crypto = if secure_conn {
let (certs, key) = read_certs_from_file(certificate_file, private_key_file)?;
let mut cert_store = rustls::RootCertStore::empty();
cert_store.add_parsable_certificates(certs.clone());
rustls::ClientConfig::builder()
.with_root_certificates(Arc::new(cert_store))
.with_client_auth_cert(certs, key)?
} else {
rustls::ClientConfig::builder()
.dangerous()
.with_custom_certificate_verifier(SkipServerVerification::new())
.with_no_client_auth()
};
let alpn_protocols: Vec<Vec<u8>> = alpns
.iter()
.map(|x| x.as_bytes().to_vec())
.collect::<Vec<_>>();
crypto.alpn_protocols = alpn_protocols;
crypto.key_log = Arc::new(rustls::KeyLogFile::new());
let mut client_config = ClientConfig::new(Arc::new(QuicClientConfig::try_from(crypto)?));
if keep_alive_interval_ms > 0 {
let mut transport_config = TransportConfig::default();
transport_config.keep_alive_interval(Some(Duration::from_millis(keep_alive_interval_ms)));
client_config.transport_config(Arc::new(transport_config));
}
Ok(client_config)
}
fn read_certs_from_file(
certificate_file: Option<PathBuf>,
private_key_file: Option<PathBuf>,
) -> Result<
(
Vec<rustls_pki_types::CertificateDer<'static>>,
rustls_pki_types::PrivateKeyDer<'static>,
),
Box<dyn Error>,
> {
/*
* NOTE:
*
* Certificate file here should correspond to fullchain.pem where
* fullchain.pem = cert.pem + chain.pem.
* fullchain.pem DOES NOT include a CA's Root Certificates.
*
* One typically uses chain.pem (or the first certificate in it) when asked
* for a CA bundle or CA certificate.
*
* One typically uses fullchain.pem when asked for the entire certificate
* chain in a single file. For example, this is the case for modern-day
* Apache and nginx.
*/
let cert_file = certificate_file
.clone()
.expect("Expected path to certificate to be valid");
let key_file = private_key_file.expect("Expected path to private key to be valid");
let certs: Vec<rustls_pki_types::CertificateDer<'static>> = {
let cert_file = File::open(cert_file.as_path())?;
let mut cert_file_rdr = BufReader::new(cert_file);
let cert_vec = rustls_pemfile::certs(&mut cert_file_rdr);
cert_vec.into_iter().map(|c| c.unwrap()).collect()
};
let key: rustls_pki_types::PrivateKeyDer<'static> = {
let key_file = File::open(key_file.as_path())?;
let mut key_file_rdr = BufReader::new(key_file);
let keys_iter = rustls_pemfile::read_all(&mut key_file_rdr);
let key_item = keys_iter
.into_iter()
.map(|c| c.unwrap())
.next()
.ok_or("Certificate should have at least one private key")?;
match key_item {
rustls_pemfile::Item::Pkcs1Key(key) => rustls_pki_types::PrivateKeyDer::from(key),
rustls_pemfile::Item::Pkcs8Key(key) => rustls_pki_types::PrivateKeyDer::from(key),
_ => unimplemented!(),
}
};
Ok((certs, key))
}
fn configure_server(
server_name: &str,
secure_conn: bool,
certificate_file: Option<PathBuf>,
private_key_file: Option<PathBuf>,
alpns: Vec<String>,
) -> Result<(ServerConfig, Vec<rustls_pki_types::CertificateDer>), Box<dyn Error>> {
let (certs, key) = if secure_conn {
read_certs_from_file(certificate_file, private_key_file)?
} else {
let rcgen::CertifiedKey { cert, key_pair } =
rcgen::generate_simple_self_signed(vec![server_name.into()]).unwrap();
// Present the generated certificate as the chain; the key pair DER is the
// private key, not the certificate.
let priv_key = rustls_pki_types::PrivateKeyDer::try_from(key_pair.serialize_der()).unwrap();
let cert_chain = vec![cert.der().clone()];
(cert_chain, priv_key)
};
let mut crypto = if secure_conn {
let mut cert_store = rustls::RootCertStore::empty();
cert_store.add_parsable_certificates(certs.clone());
let auth_client = rustls::server::WebPkiClientVerifier::builder(Arc::new(cert_store))
.build()
.unwrap();
rustls::ServerConfig::builder()
.with_client_cert_verifier(auth_client)
.with_single_cert(certs.clone(), key)
} else {
rustls::ServerConfig::builder()
.with_no_client_auth()
.with_single_cert(certs.clone(), key)
}?;
let alpn_protocols: Vec<Vec<u8>> = alpns
.iter()
.map(|x| x.as_bytes().to_vec())
.collect::<Vec<_>>();
crypto.alpn_protocols = alpn_protocols;
crypto.key_log = Arc::new(rustls::KeyLogFile::new());
let mut server_config =
ServerConfig::with_crypto(Arc::new(QuicServerConfig::try_from(crypto)?));
Arc::get_mut(&mut server_config.transport)
.unwrap()
.max_concurrent_bidi_streams(0_u8.into())
.max_concurrent_uni_streams(1_u8.into());
Ok((server_config, certs))
}
pub fn server_endpoint(
server_addr: SocketAddr,
server_name: &str,
secure_conn: bool,
alpns: Vec<String>,
certificate_file: Option<PathBuf>,
private_key_file: Option<PathBuf>,
) -> Result<Endpoint, Box<dyn Error>> {
let (server_config, _) = configure_server(
server_name,
secure_conn,
certificate_file,
private_key_file,
alpns,
)?;
let endpoint = Endpoint::server(server_config, server_addr)?;
Ok(endpoint)
}
pub fn client_endpoint(
client_addr: SocketAddr,
secure_conn: bool,
alpns: Vec<String>,
certificate_file: Option<PathBuf>,
private_key_file: Option<PathBuf>,
keep_alive_interval_ms: u64,
) -> Result<Endpoint, Box<dyn Error>> {
let client_cfg = configure_client(
secure_conn,
certificate_file,
private_key_file,
alpns,
keep_alive_interval_ms,
)?;
let mut endpoint = Endpoint::client(client_addr)?;
endpoint.set_default_client_config(client_cfg);
Ok(endpoint)
}
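
The `wait()` helper above is the bridge between GStreamer's blocking element callbacks and the shared tokio runtime: it parks the calling thread in `block_on()` while publishing an `AbortHandle` through the canceller, so `unlock()`/`cancel()` can interrupt a pending network operation. A standalone sketch of the same pattern, with the GStreamer error plumbing stripped out:

```rust
// Minimal cancellable-wait sketch (tokio + futures only).
use futures::future::{self, Aborted};
use std::sync::Mutex;
use std::time::Duration;

fn wait_with_cancel<F, T>(
    canceller: &Mutex<Option<future::AbortHandle>>,
    rt: &tokio::runtime::Runtime,
    fut: F,
) -> Result<T, Aborted>
where
    F: std::future::Future<Output = T>,
{
    let (handle, registration) = future::AbortHandle::new_pair();
    *canceller.lock().unwrap() = Some(handle);
    // Blocks this thread; another thread may call abort() meanwhile.
    let res = rt.block_on(future::Abortable::new(fut, registration));
    *canceller.lock().unwrap() = None;
    res
}

// The unlock()/cancel() side: abort whatever is currently in flight.
fn cancel(canceller: &Mutex<Option<future::AbortHandle>>) {
    if let Some(handle) = canceller.lock().unwrap().take() {
        handle.abort();
    }
}

fn main() {
    let rt = tokio::runtime::Builder::new_multi_thread()
        .enable_all()
        .build()
        .unwrap();
    let canceller = Mutex::new(None);
    // The sleep stands in for a network read; nothing aborts it here.
    let out = wait_with_cancel(&canceller, &rt, async {
        tokio::time::sleep(Duration::from_millis(10)).await;
        42
    });
    assert_eq!(out.unwrap(), 42);
    cancel(&canceller); // no-op: nothing in flight
}
```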

View file

@ -0,0 +1,112 @@
// Copyright (C) 2024, Asymptotic Inc.
// Author: Sanchayan Maity <sanchayan@asymptotic.io>
//
// This Source Code Form is subject to the terms of the Mozilla Public License, v2.0.
// If a copy of the MPL was not distributed with this file, You can obtain one at
// <https://mozilla.org/MPL/2.0/>.
//
// SPDX-License-Identifier: MPL-2.0
use gst::prelude::*;
use serial_test::serial;
use std::thread;
fn init() {
use std::sync::Once;
static INIT: Once = Once::new();
INIT.call_once(|| {
gst::init().unwrap();
gstquinn::plugin_register_static().expect("QUIC source sink send receive tests");
});
}
fn make_buffer(content: &[u8]) -> gst::Buffer {
let mut buf = gst::Buffer::from_slice(content.to_owned());
buf.make_mut().set_pts(gst::ClockTime::from_mseconds(200));
buf
}
#[test]
#[serial]
fn test_send_receive_without_datagram() {
init();
let content = "Hello, world!\n".as_bytes();
thread::spawn(move || {
let mut h1 = gst_check::Harness::new_empty();
h1.add_parse("quinnquicsink secure-connection=false");
h1.set_src_caps(gst::Caps::builder("text/plain").build());
h1.play();
assert!(h1.push(make_buffer(content)) == Ok(gst::FlowSuccess::Ok));
h1.push_event(gst::event::Eos::new());
h1.element().unwrap().set_state(gst::State::Null).unwrap();
drop(h1);
});
let mut h2 = gst_check::Harness::new_empty();
h2.add_parse("quinnquicsrc secure-connection=false");
h2.play();
let buf = h2.pull_until_eos().unwrap().unwrap();
assert_eq!(
content,
buf.into_mapped_buffer_readable().unwrap().as_slice()
);
h2.element().unwrap().set_state(gst::State::Null).unwrap();
drop(h2);
}
#[test]
#[serial]
fn test_send_receive_with_datagram() {
init();
let content = "Hello, world!\n".as_bytes();
// Use a different port from the default used in the other test;
// otherwise we get an address-already-in-use error.
thread::spawn(move || {
let mut h1 = gst_check::Harness::new_empty();
h1.add_parse("quinnquicsrc use-datagram=true server-address=127.0.0.1 server-port=6000 secure-connection=false");
h1.play();
let buf = h1.pull_until_eos().unwrap().unwrap();
assert_eq!(
content,
buf.into_mapped_buffer_readable().unwrap().as_slice()
);
h1.element().unwrap().set_state(gst::State::Null).unwrap();
drop(h1);
});
let mut h2 = gst_check::Harness::new_empty();
h2.add_parse("quinnquicsink use-datagram=true client-address=127.0.0.1 client-port=6001 server-address=127.0.0.1 server-port=6000 secure-connection=false");
h2.set_src_caps(gst::Caps::builder("text/plain").build());
h2.play();
assert!(h2.push(make_buffer(content)) == Ok(gst::FlowSuccess::Ok));
h2.push_event(gst::event::Eos::new());
h2.element().unwrap().set_state(gst::State::Null).unwrap();
drop(h2);
}

View file

@ -10,9 +10,9 @@ rust-version.workspace = true
[dependencies]
url = "2.1"
reqwest = { version = "0.11", features = ["cookies", "gzip"] }
reqwest = { version = "0.12", features = ["cookies", "gzip"] }
futures = "0.3"
headers = "0.3"
headers = "0.4"
mime = "0.3"
gst.workspace = true
gst-base.workspace = true
@ -20,7 +20,10 @@ tokio = { version = "1.0", default-features = false, features = ["time", "rt-mul
once_cell.workspace = true
[dev-dependencies]
hyper = { version = "0.14", features = ["server"] }
hyper = { version = "1.0", features = ["server"] }
http-body-util = "0.1.1"
bytes = "1.0"
pin-project-lite = "0.2"
gst.workspace = true
[lib]

View file

@ -1045,7 +1045,7 @@ impl BaseSrcImpl for ReqwestHttpSrc {
.ok_or_else(|| {
gst::error_msg!(gst::CoreError::StateChange, ["Can't start without an URI"])
})
.map(|uri| uri.clone())?;
.cloned()?;
gst::debug!(CAT, imp: self, "Starting for URI {}", uri);

View file

@ -10,8 +10,9 @@
#![allow(clippy::single_match)]
use gst::glib;
use gst::prelude::*;
use gst::{glib, prelude::*};
use http_body_util::combinators::BoxBody;
use tokio::io::{AsyncBufReadExt, AsyncWriteExt};
use std::sync::mpsc;
@ -33,7 +34,7 @@ struct Harness {
src: gst::Element,
pad: gst::Pad,
receiver: Option<mpsc::Receiver<Message>>,
rt: Option<tokio::runtime::Runtime>,
rt: tokio::runtime::Runtime,
}
/// Messages sent from our test harness
@ -46,20 +47,34 @@ enum Message {
ServerError(String),
}
fn full_body(s: impl Into<bytes::Bytes>) -> BoxBody<bytes::Bytes, hyper::Error> {
use http_body_util::{BodyExt, Full};
Full::new(s.into()).map_err(|never| match never {}).boxed()
}
fn empty_body() -> BoxBody<bytes::Bytes, hyper::Error> {
use http_body_util::{BodyExt, Empty};
Empty::new().map_err(|never| match never {}).boxed()
}
impl Harness {
/// Creates a new HTTP source and test harness around it
///
/// `http_func`: Function to generate HTTP responses based on a request
/// `setup_func`: Setup function for the HTTP source, should only set properties and similar
fn new<
F: FnMut(hyper::Request<hyper::Body>) -> hyper::Response<hyper::Body> + Send + 'static,
F: FnMut(
hyper::Request<hyper::body::Incoming>,
) -> hyper::Response<BoxBody<bytes::Bytes, hyper::Error>>
+ Send
+ 'static,
G: FnOnce(&gst::Element),
>(
http_func: F,
setup_func: G,
) -> Harness {
use hyper::service::{make_service_fn, service_fn};
use hyper::Server;
use hyper::server::conn::http1;
use hyper::service::service_fn;
use std::sync::{Arc, Mutex};
// Create the HTTP source
@ -112,21 +127,15 @@ impl Harness {
.unwrap();
// Create an HTTP sever that listens on localhost on some random, free port
let addr = ([127, 0, 0, 1], 0).into();
let addr = std::net::SocketAddr::from(([127, 0, 0, 1], 0));
// Whenever a new client is connecting, a new service function is requested. For each
// client we use the same service function, which simply calls the function used by the
// test
let http_func = Arc::new(Mutex::new(http_func));
let make_service = make_service_fn(move |_ctx| {
let service = service_fn(move |req: hyper::Request<hyper::body::Incoming>| {
let http_func = http_func.clone();
async move {
let http_func = http_func.clone();
Ok::<_, hyper::Error>(service_fn(move |req| {
let http_func = http_func.clone();
async move { Ok::<_, hyper::Error>((*http_func.lock().unwrap())(req)) }
}))
}
async move { Ok::<_, hyper::Error>((*http_func.lock().unwrap())(req)) }
});
let (local_addr_sender, local_addr_receiver) = tokio::sync::oneshot::channel();
@ -135,13 +144,22 @@ impl Harness {
rt.spawn(async move {
// Bind the server, retrieve the local port that was selected in the end and set this as
// the location property on the source
let server = Server::bind(&addr).serve(make_service);
let local_addr = server.local_addr();
let listener = tokio::net::TcpListener::bind(addr).await.unwrap();
let local_addr = listener.local_addr().unwrap();
local_addr_sender.send(local_addr).unwrap();
if let Err(e) = server.await {
let _ = sender.send(Message::ServerError(format!("{e:?}")));
loop {
let (stream, _) = listener.accept().await.unwrap();
let io = tokio_io::TokioIo::new(stream);
let service = service.clone();
let sender = sender.clone();
tokio::task::spawn(async move {
let http = http1::Builder::new().serve_connection(io, service);
if let Err(e) = http.await {
let _ = sender.send(Message::ServerError(format!("{e}")));
}
});
}
});
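
The hunk above is the core of the hyper 0.14 to 1.0 port: `Server` and `make_service_fn` are gone, so the test now binds a `tokio::net::TcpListener` and serves each accepted connection with `http1::Builder`. A standalone sketch of that pattern (assuming `hyper` 1.x with the `http1` and `server` features, plus `http-body-util`, `hyper-util`, and `tokio` as dependencies; the tests here use a vendored `TokioIo` instead of the `hyper-util` one):

```rust
use http_body_util::Full;
use hyper::service::service_fn;
use hyper::{body::Bytes, server::conn::http1, Request, Response};
use hyper_util::rt::TokioIo;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let listener = tokio::net::TcpListener::bind("127.0.0.1:0").await?;
    println!("listening on {}", listener.local_addr()?);
    loop {
        let (stream, _) = listener.accept().await?;
        // One task per connection; each gets its own service instance.
        tokio::task::spawn(async move {
            let service = service_fn(|_req: Request<hyper::body::Incoming>| async {
                Ok::<_, hyper::Error>(Response::new(Full::new(Bytes::from("Hello World"))))
            });
            if let Err(e) = http1::Builder::new()
                .serve_connection(TokioIo::new(stream), service)
                .await
            {
                eprintln!("connection error: {e}");
            }
        });
    }
}
```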
@ -155,7 +173,7 @@ impl Harness {
src,
pad,
receiver: Some(receiver),
rt: Some(rt),
rt,
}
}
@ -337,28 +355,25 @@ impl Drop for Harness {
self.pad.set_active(false).unwrap();
self.src.set_state(gst::State::Null).unwrap();
self.rt.take().unwrap();
}
}
#[test]
fn test_basic_request() {
use std::io::{Cursor, Read};
init();
// Set up a harness that returns "Hello World" for any HTTP request and checks if the
// default headers are all sent
let mut h = Harness::new(
|req| {
use hyper::{Body, Response};
let headers = req.headers();
assert_eq!(headers.get("connection").unwrap(), "keep-alive");
assert_eq!(headers.get("accept-encoding").unwrap(), "identity");
assert_eq!(headers.get("icy-metadata").unwrap(), "1");
Response::new(Body::from("Hello World"))
hyper::Response::new(full_body("Hello World"))
},
|_src| {
// No additional setup needed here
@ -399,21 +414,20 @@ fn test_basic_request() {
#[test]
fn test_basic_request_inverted_defaults() {
use std::io::{Cursor, Read};
init();
// Set up a harness that returns "Hello World" for any HTTP request and override various
// default properties to check if the corresponding headers are set correctly
let mut h = Harness::new(
|req| {
use hyper::{Body, Response};
let headers = req.headers();
assert_eq!(headers.get("connection").unwrap(), "close");
assert_eq!(headers.get("accept-encoding").unwrap(), "gzip");
assert_eq!(headers.get("icy-metadata"), None);
assert_eq!(headers.get("user-agent").unwrap(), "test user-agent");
Response::new(Body::from("Hello World"))
hyper::Response::new(full_body("Hello World"))
},
|src| {
src.set_property("keep-alive", false);
@ -457,14 +471,13 @@ fn test_basic_request_inverted_defaults() {
#[test]
fn test_extra_headers() {
use std::io::{Cursor, Read};
init();
// Set up a harness that returns "Hello World" for any HTTP request and check if the
// extra-headers property works correctly for setting additional headers
let mut h = Harness::new(
|req| {
use hyper::{Body, Response};
let headers = req.headers();
assert_eq!(headers.get("foo").unwrap(), "bar");
assert_eq!(headers.get("baz").unwrap(), "1");
@ -485,7 +498,7 @@ fn test_extra_headers() {
vec!["1", "2"]
);
Response::new(Body::from("Hello World"))
hyper::Response::new(full_body("Hello World"))
},
|src| {
src.set_property(
@ -534,18 +547,17 @@ fn test_extra_headers() {
#[test]
fn test_cookies_property() {
use std::io::{Cursor, Read};
init();
// Set up a harness that returns "Hello World" for any HTTP request and check if the
// cookies property can be used to set cookies correctly
let mut h = Harness::new(
|req| {
use hyper::{Body, Response};
let headers = req.headers();
assert_eq!(headers.get("cookie").unwrap(), "foo=1; bar=2; baz=3");
Response::new(Body::from("Hello World"))
hyper::Response::new(full_body("Hello World"))
},
|src| {
src.set_property(
@ -593,6 +605,7 @@ fn test_cookies_property() {
#[test]
fn test_iradio_mode() {
use std::io::{Cursor, Read};
init();
// Set up a harness that returns "Hello World" for any HTTP request and check if the
@ -600,18 +613,16 @@ fn test_iradio_mode() {
// and put into caps/tags
let mut h = Harness::new(
|req| {
use hyper::{Body, Response};
let headers = req.headers();
assert_eq!(headers.get("icy-metadata").unwrap(), "1");
Response::builder()
hyper::Response::builder()
.header("icy-metaint", "8192")
.header("icy-name", "Name")
.header("icy-genre", "Genre")
.header("icy-url", "http://www.example.com")
.header("Content-Type", "audio/mpeg; rate=44100")
.body(Body::from("Hello World"))
.body(full_body("Hello World"))
.unwrap()
},
|_src| {
@ -677,17 +688,16 @@ fn test_iradio_mode() {
#[test]
fn test_audio_l16() {
use std::io::{Cursor, Read};
init();
// Set up a harness that returns "Hello World" for any HTTP request and check if the
// audio/L16 content type is parsed correctly and put into the caps
let mut h = Harness::new(
|_req| {
use hyper::{Body, Response};
Response::builder()
hyper::Response::builder()
.header("Content-Type", "audio/L16; rate=48000; channels=2")
.body(Body::from("Hello World"))
.body(full_body("Hello World"))
.unwrap()
},
|_src| {
@ -741,25 +751,23 @@ fn test_audio_l16() {
#[test]
fn test_authorization() {
use std::io::{Cursor, Read};
init();
// Set up a harness that returns "Hello World" for any HTTP request
// but requires authentication first
let mut h = Harness::new(
|req| {
use hyper::{Body, Response};
use reqwest::StatusCode;
let headers = req.headers();
if let Some(authorization) = headers.get("authorization") {
assert_eq!(authorization, "Basic dXNlcjpwYXNzd29yZA==");
Response::new(Body::from("Hello World"))
hyper::Response::new(full_body("Hello World"))
} else {
Response::builder()
.status(StatusCode::UNAUTHORIZED.as_u16())
hyper::Response::builder()
.status(reqwest::StatusCode::UNAUTHORIZED.as_u16())
.header("WWW-Authenticate", "Basic realm=\"realm\"")
.body(Body::empty())
.body(empty_body())
.unwrap()
}
},
@ -802,17 +810,14 @@ fn test_authorization() {
#[test]
fn test_404_error() {
use reqwest::StatusCode;
init();
// Harness that always returns 404 and we check if that is mapped to the correct error code
let mut h = Harness::new(
|_req| {
use hyper::{Body, Response};
Response::builder()
.status(StatusCode::NOT_FOUND.as_u16())
.body(Body::empty())
hyper::Response::builder()
.status(reqwest::StatusCode::NOT_FOUND.as_u16())
.body(empty_body())
.unwrap()
},
|_src| {},
@ -830,17 +835,14 @@ fn test_404_error() {
#[test]
fn test_403_error() {
use reqwest::StatusCode;
init();
// Harness that always returns 403 and we check if that is mapped to the correct error code
let mut h = Harness::new(
|_req| {
use hyper::{Body, Response};
Response::builder()
.status(StatusCode::FORBIDDEN.as_u16())
.body(Body::empty())
hyper::Response::builder()
.status(reqwest::StatusCode::FORBIDDEN.as_u16())
.body(empty_body())
.unwrap()
},
|_src| {},
@ -881,13 +883,12 @@ fn test_network_error() {
#[test]
fn test_seek_after_ready() {
use std::io::{Cursor, Read};
init();
// Harness that checks if seeking in Ready state works correctly
let mut h = Harness::new(
|req| {
use hyper::{Body, Response};
let headers = req.headers();
if let Some(range) = headers.get("Range") {
if range == "bytes=123-" {
@ -896,11 +897,11 @@ fn test_seek_after_ready() {
*d = ((i + 123) % 256) as u8;
}
Response::builder()
hyper::Response::builder()
.header("content-length", 8192 - 123)
.header("accept-ranges", "bytes")
.header("content-range", "bytes 123-8192/8192")
.body(Body::from(data_seek))
.body(full_body(data_seek))
.unwrap()
} else {
panic!("Received an unexpected Range header")
@ -916,10 +917,10 @@ fn test_seek_after_ready() {
*d = (i % 256) as u8;
}
Response::builder()
hyper::Response::builder()
.header("content-length", 8192)
.header("accept-ranges", "bytes")
.body(Body::from(data_full))
.body(full_body(data_full))
.unwrap()
}
},
@ -961,14 +962,13 @@ fn test_seek_after_ready() {
#[test]
fn test_seek_after_buffer_received() {
use std::io::{Cursor, Read};
init();
// Harness that checks if seeking in Playing state after having received a buffer works
// correctly
let mut h = Harness::new(
|req| {
use hyper::{Body, Response};
let headers = req.headers();
if let Some(range) = headers.get("Range") {
if range == "bytes=123-" {
@ -977,11 +977,11 @@ fn test_seek_after_buffer_received() {
*d = ((i + 123) % 256) as u8;
}
Response::builder()
hyper::Response::builder()
.header("content-length", 8192 - 123)
.header("accept-ranges", "bytes")
.header("content-range", "bytes 123-8192/8192")
.body(Body::from(data_seek))
.body(full_body(data_seek))
.unwrap()
} else {
panic!("Received an unexpected Range header")
@ -992,10 +992,10 @@ fn test_seek_after_buffer_received() {
*d = (i % 256) as u8;
}
Response::builder()
hyper::Response::builder()
.header("content-length", 8192)
.header("accept-ranges", "bytes")
.body(Body::from(data_full))
.body(full_body(data_full))
.unwrap()
}
},
@ -1038,14 +1038,13 @@ fn test_seek_after_buffer_received() {
#[test]
fn test_seek_with_stop_position() {
use std::io::{Cursor, Read};
init();
// Harness that checks if seeking in Playing state after having received a buffer works
// correctly
let mut h = Harness::new(
|req| {
use hyper::{Body, Response};
let headers = req.headers();
if let Some(range) = headers.get("Range") {
if range == "bytes=123-130" {
@ -1054,11 +1053,11 @@ fn test_seek_with_stop_position() {
*d = ((i + 123) % 256) as u8;
}
Response::builder()
hyper::Response::builder()
.header("content-length", 8)
.header("accept-ranges", "bytes")
.header("content-range", "bytes 123-130/8192")
.body(Body::from(data_seek))
.body(full_body(data_seek))
.unwrap()
} else {
panic!("Received an unexpected Range header")
@ -1069,10 +1068,10 @@ fn test_seek_with_stop_position() {
*d = (i % 256) as u8;
}
Response::builder()
hyper::Response::builder()
.header("content-length", 8192)
.header("accept-ranges", "bytes")
.body(Body::from(data_full))
.body(full_body(data_full))
.unwrap()
}
},
@ -1131,11 +1130,9 @@ fn test_cookies() {
// client
let mut h = Harness::new(
|_req| {
use hyper::{Body, Response};
Response::builder()
hyper::Response::builder()
.header("Set-Cookie", "foo=bar")
.body(Body::from("Hello World"))
.body(full_body("Hello World"))
.unwrap()
},
|_src| {
@ -1158,8 +1155,6 @@ fn test_cookies() {
// client provides the cookie that was set in the previous request
let mut h2 = Harness::new(
|req| {
use hyper::{Body, Response};
let headers = req.headers();
let cookies = headers
.get("Cookie")
@ -1167,8 +1162,8 @@ fn test_cookies() {
.to_str()
.unwrap();
assert!(cookies.split(';').any(|c| c == "foo=bar"));
Response::builder()
.body(Body::from("Hello again!"))
hyper::Response::builder()
.body(full_body("Hello again!"))
.unwrap()
},
|_src| {
@ -1224,63 +1219,76 @@ fn test_proxy_prop_souphttpsrc_compatibility() {
fn test_proxy() {
init();
// Simplest possible implementation of naive oneshot proxy server?
// Listen on socket before spawning thread (we won't error out with connection refused).
let incoming = std::net::TcpListener::bind("127.0.0.1:0").unwrap();
let proxy_addr = incoming.local_addr().unwrap();
println!("listening on {proxy_addr}, starting proxy server");
let proxy_server = std::thread::spawn(move || {
use std::io::*;
println!("awaiting connection to proxy server");
let (mut conn, _addr) = incoming.accept().unwrap();
println!("client connected, reading request line");
let mut reader = BufReader::new(conn.try_clone().unwrap());
let mut buf = String::new();
reader.read_line(&mut buf).unwrap();
let parts: Vec<&str> = buf.split(' ').collect();
let url = reqwest::Url::parse(parts[1]).unwrap();
let host = format!(
"{}:{}",
url.host_str().unwrap(),
url.port_or_known_default().unwrap()
);
println!("connecting to target server {host}");
let mut server_connection = std::net::TcpStream::connect(host).unwrap();
println!("connected to target server, sending modified request line");
server_connection
.write_all(format!("{} {} {}\r\n", parts[0], url.path(), parts[2]).as_bytes())
.unwrap();
println!("sent modified request line, forwarding data in both directions");
let send_join_handle = {
let mut server_connection = server_connection.try_clone().unwrap();
std::thread::spawn(move || {
copy(&mut reader, &mut server_connection).unwrap();
})
};
copy(&mut server_connection, &mut conn).unwrap();
send_join_handle.join().unwrap();
println!("shutting down proxy server");
});
let mut h = Harness::new(
|_req| {
use hyper::{Body, Response};
Response::builder()
.body(Body::from("Hello Proxy World"))
hyper::Response::builder()
.body(full_body("Hello Proxy World"))
.unwrap()
},
|src| {
src.set_property("proxy", proxy_addr.to_string());
},
|_src| {},
);
// Simplest possible implementation of a naive oneshot proxy server.
// Bind the listener before the source starts so we won't error out with connection refused.
let (proxy_handle, proxy_addr) = {
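// The listener is bound inside the async task below, so its address is
// handed back to the test thread over a oneshot channel before the
// harness starts the source.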
let (proxy_addr_sender, proxy_addr_receiver) = tokio::sync::oneshot::channel();
let proxy_handle = h.rt.spawn(async move {
let listener = tokio::net::TcpListener::bind("127.0.0.1:0").await.unwrap();
let proxy_addr = listener.local_addr().unwrap();
println!("listening on {proxy_addr}, starting proxy server");
proxy_addr_sender.send(proxy_addr).unwrap();
println!("awaiting connection to proxy server");
let (conn, _addr) = listener.accept().await.unwrap();
let (conn_reader, mut conn_writer) = tokio::io::split(conn);
println!("client connected, reading request line");
let mut reader = tokio::io::BufReader::new(conn_reader);
let mut buf = String::new();
reader.read_line(&mut buf).await.unwrap();
let parts: Vec<&str> = buf.split(' ').collect();
let url = reqwest::Url::parse(parts[1]).unwrap();
let host = format!(
"{}:{}",
url.host_str().unwrap(),
url.port_or_known_default().unwrap()
);
println!("connecting to target server {host}");
let mut server_connection = tokio::net::TcpStream::connect(host).await.unwrap();
println!("connected to target server, sending modified request line");
server_connection
.write_all(format!("{} {} {}", parts[0], url.path(), parts[2]).as_bytes())
.await
.unwrap();
let (mut server_reader, mut server_writer) = tokio::io::split(server_connection);
println!("sent modified request line, forwarding data in both directions");
let send_join_handle = tokio::task::spawn(async move {
tokio::io::copy(&mut reader, &mut server_writer)
.await
.unwrap();
});
tokio::io::copy(&mut server_reader, &mut conn_writer)
.await
.unwrap();
send_join_handle.await.unwrap();
println!("shutting down proxy server");
});
(
proxy_handle,
futures::executor::block_on(proxy_addr_receiver).unwrap(),
)
};
// Set the HTTP source to Playing so that everything can start.
h.run(|src| {
h.run(move |src| {
src.set_property("proxy", proxy_addr.to_string());
src.set_state(gst::State::Playing).unwrap();
});
@ -1292,5 +1300,90 @@ fn test_proxy() {
assert_eq!(num_bytes, "Hello Proxy World".len());
// Don't leave threads hanging around.
proxy_server.join().unwrap();
proxy_handle.abort();
let _ = futures::executor::block_on(proxy_handle);
}
/// Adapter from tokio IO traits to hyper IO traits.
mod tokio_io {
use pin_project_lite::pin_project;
use std::pin::Pin;
use std::task::{Context, Poll};
pin_project! {
#[derive(Debug)]
pub struct TokioIo<T> {
#[pin]
inner: T,
}
}
impl<T> TokioIo<T> {
pub fn new(inner: T) -> Self {
Self { inner }
}
}
impl<T> hyper::rt::Read for TokioIo<T>
where
T: tokio::io::AsyncRead,
{
fn poll_read(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
mut buf: hyper::rt::ReadBufCursor<'_>,
) -> Poll<Result<(), std::io::Error>> {
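// SAFETY: `ReadBufCursor::as_mut()` exposes possibly-uninitialized memory.
// We only let tokio's `ReadBuf` write into it and advance the cursor by
// the number of bytes actually filled, so uninitialized bytes are never
// read.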
let n = unsafe {
let mut tbuf = tokio::io::ReadBuf::uninit(buf.as_mut());
match tokio::io::AsyncRead::poll_read(self.project().inner, cx, &mut tbuf) {
Poll::Ready(Ok(())) => tbuf.filled().len(),
other => return other,
}
};
unsafe {
buf.advance(n);
}
Poll::Ready(Ok(()))
}
}
impl<T> hyper::rt::Write for TokioIo<T>
where
T: tokio::io::AsyncWrite,
{
fn poll_write(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: &[u8],
) -> Poll<Result<usize, std::io::Error>> {
tokio::io::AsyncWrite::poll_write(self.project().inner, cx, buf)
}
fn poll_flush(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
) -> Poll<Result<(), std::io::Error>> {
tokio::io::AsyncWrite::poll_flush(self.project().inner, cx)
}
fn poll_shutdown(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
) -> Poll<Result<(), std::io::Error>> {
tokio::io::AsyncWrite::poll_shutdown(self.project().inner, cx)
}
fn is_write_vectored(&self) -> bool {
tokio::io::AsyncWrite::is_write_vectored(&self.inner)
}
fn poll_write_vectored(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
bufs: &[std::io::IoSlice<'_>],
) -> Poll<Result<usize, std::io::Error>> {
tokio::io::AsyncWrite::poll_write_vectored(self.project().inner, cx, bufs)
}
}
}
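// For context, a sketch (not part of this patch) of how such an adapter is
// typically used with hyper 1.x: wrap the accepted tokio `TcpStream` before
// handing it to the connection builder. Service and response are placeholders.
async fn serve_one_connection(listener: tokio::net::TcpListener) {
    use http_body_util::Full;
    use hyper::body::Bytes;

    let (stream, _addr) = listener.accept().await.unwrap();
    let service = hyper::service::service_fn(|_req| async {
        Ok::<_, std::convert::Infallible>(hyper::Response::new(Full::new(Bytes::from("hello"))))
    });
    hyper::server::conn::http1::Builder::new()
        .serve_connection(tokio_io::TokioIo::new(stream), service)
        .await
        .unwrap();
}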


@ -9,14 +9,27 @@ description = "GStreamer Rust RTP Plugin"
rust-version.workspace = true
[dependencies]
bitstream-io = "2.0"
gst = { workspace = true, features = ["v1_20"] }
gst-rtp = { workspace = true, features = ["v1_20"]}
anyhow = "1"
atomic_refcell = "0.1"
bitstream-io = "2.1"
byte-slice-cast = "1.2"
chrono = { version = "0.4", default-features = false }
gst = { workspace = true, features = ["v1_20"] }
gst-audio = { workspace = true, features = ["v1_20"] }
gst-rtp = { workspace = true, features = ["v1_20"] }
gst-video = { workspace = true, features = ["v1_20"] }
hex = "0.4.3"
once_cell.workspace = true
rand = { version = "0.8", default-features = false, features = ["std", "std_rng" ] }
rtp-types = { version = "0.1" }
slab = "0.4.9"
smallvec = { version = "1.11", features = ["union", "write", "const_generics", "const_new"] }
thiserror = "1"
time = { version = "0.3", default-features = false, features = ["std"] }
[dev-dependencies]
gst-check = { workspace = true, features = ["v1_20"] }
gst-app = { workspace = true, features = ["v1_20"] }
[build-dependencies]
gst-plugin-version-helper.workspace = true


@ -0,0 +1,193 @@
// SPDX-License-Identifier: MPL-2.0
use gst::{
glib::{self, prelude::*},
prelude::*,
};
#[derive(Debug, Default)]
pub struct AudioDiscont {
/// If last processing detected a discontinuity.
discont_pending: bool,
/// Base PTS to which the offsets below are relative.
base_pts: Option<gst::ClockTime>,
/// Next output sample offset, i.e. offset of the first sample of the queued buffers.
///
/// This is only set once the packet with the base PTS is output.
next_out_offset: Option<u64>,
/// Next expected input sample offset.
next_in_offset: Option<u64>,
/// PTS of the last buffer that was above the alignment threshold.
///
/// This is reset whenever the next buffer is actually below the alignment threshold again.
/// FIXME: Should this be running time?
discont_time: Option<gst::ClockTime>,
/// Last known sample rate.
last_rate: Option<u32>,
}
impl AudioDiscont {
pub fn process_input(
&mut self,
settings: &AudioDiscontConfiguration,
discont: bool,
rate: u32,
pts: gst::ClockTime,
num_samples: usize,
) -> bool {
if self.discont_pending {
return true;
}
if self.last_rate.map_or(false, |last_rate| last_rate != rate) {
self.discont_pending = true;
}
self.last_rate = Some(rate);
if discont {
self.discont_pending = true;
return true;
}
// If we have no base PTS yet, this is the first buffer and there's a discont
let Some(base_pts) = self.base_pts else {
self.discont_pending = true;
return true;
};
// Never detect a discont if alignment threshold is not set
let Some(alignment_threshold) = settings.alignment_threshold else {
return false;
};
let expected_pts = base_pts
+ gst::ClockTime::from_nseconds(
self.next_in_offset
.unwrap_or(0)
.mul_div_ceil(gst::ClockTime::SECOND.nseconds(), rate as u64)
.unwrap(),
);
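// Worked example (assumed numbers): at rate = 48000 with
// next_in_offset = 4800, expected_pts is base_pts + 100ms; a PTS that
// deviates from this by at least the alignment threshold feeds into the
// discont-wait logic below.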
let mut discont = false;
let diff = pts.into_positive() - expected_pts.into_positive();
if diff.abs() >= alignment_threshold {
let mut resync = false;
if settings.discont_wait.is_zero() {
resync = true;
} else if let Some(discont_time) = self.discont_time {
if (discont_time.into_positive() - pts.into_positive()).abs()
>= settings.discont_wait.into_positive()
{
resync = true;
}
} else if (expected_pts.into_positive() - pts.into_positive()).abs()
>= settings.discont_wait.into_positive()
{
resync = true;
} else {
self.discont_time = Some(expected_pts);
}
if resync {
discont = true;
}
} else {
self.discont_time = None;
}
self.next_in_offset = Some(self.next_in_offset.unwrap_or(0) + num_samples as u64);
if discont {
self.discont_pending = true;
}
discont
}
pub fn resync(&mut self, base_pts: gst::ClockTime, num_samples: usize) {
self.discont_pending = false;
self.base_pts = Some(base_pts);
self.next_in_offset = Some(num_samples as u64);
self.next_out_offset = None;
self.discont_time = None;
self.last_rate = None;
}
pub fn reset(&mut self) {
*self = AudioDiscont::default();
}
pub fn base_pts(&self) -> Option<gst::ClockTime> {
self.base_pts
}
pub fn next_output_offset(&self) -> Option<u64> {
self.next_out_offset
}
pub fn process_output(&mut self, num_samples: usize) {
self.next_out_offset = Some(self.next_out_offset.unwrap_or(0) + num_samples as u64);
}
}
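// Intended call pattern (a sketch, not part of this file): a payloader runs
// every input buffer through `process_input()`; when that returns true the
// caller drains anything already queued, resyncs, and later accounts for
// emitted packets via `process_output()`:
//
//     if discont.process_input(&config, flagged_discont, rate, pts, n_samples) {
//         // drain queued data first, then:
//         discont.resync(pts, n_samples);
//     }
//     // ... after emitting a packet of `n` frames:
//     discont.process_output(n);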
#[derive(Debug, Copy, Clone)]
pub struct AudioDiscontConfiguration {
pub alignment_threshold: Option<gst::ClockTime>,
pub discont_wait: gst::ClockTime,
}
impl Default for AudioDiscontConfiguration {
fn default() -> Self {
AudioDiscontConfiguration {
alignment_threshold: Some(gst::ClockTime::from_mseconds(40)),
discont_wait: gst::ClockTime::from_seconds(1),
}
}
}
impl AudioDiscontConfiguration {
pub fn create_pspecs() -> Vec<glib::ParamSpec> {
vec![
glib::ParamSpecUInt64::builder("alignment-threshold")
.nick("Alignment Threshold")
.blurb("Timestamp alignment threshold in nanoseconds")
.default_value(
Self::default()
.alignment_threshold
.map(gst::ClockTime::nseconds)
.unwrap_or(u64::MAX),
)
.mutable_playing()
.build(),
glib::ParamSpecUInt64::builder("discont-wait")
.nick("Discont Wait")
.blurb("Window of time in nanoseconds to wait before creating a discontinuity")
.default_value(Self::default().discont_wait.nseconds())
.mutable_playing()
.build(),
]
}
pub fn set_property(&mut self, value: &glib::Value, pspec: &glib::ParamSpec) -> bool {
match pspec.name() {
"alignment-threshold" => {
self.alignment_threshold = value.get().unwrap();
true
}
"discont-wait" => {
self.discont_wait = value.get().unwrap();
true
}
_ => false,
}
}
pub fn property(&self, pspec: &glib::ParamSpec) -> Option<glib::Value> {
match pspec.name() {
"alignment-threshold" => Some(self.alignment_threshold.to_value()),
"discont-wait" => Some(self.discont_wait.to_value()),
_ => None,
}
}
}

View file

@ -7,28 +7,36 @@
//
// SPDX-License-Identifier: MPL-2.0
use gst::{glib, subclass::prelude::*};
use gst_rtp::prelude::*;
use gst_rtp::subclass::prelude::*;
use atomic_refcell::AtomicRefCell;
use gst::{glib, prelude::*, subclass::prelude::*};
use std::{
cmp::Ordering,
io::{Cursor, Read, Seek, SeekFrom},
sync::Mutex,
};
use bitstream_io::{BitReader, BitWriter};
use once_cell::sync::Lazy;
use crate::av1::common::{
err_flow, leb128_size, parse_leb128, write_leb128, AggregationHeader, ObuType, SizedObu,
UnsizedObu, CLOCK_RATE, ENDIANNESS,
use crate::{
av1::common::{
err_flow, leb128_size, parse_leb128, write_leb128, AggregationHeader, ObuType, SizedObu,
UnsizedObu, CLOCK_RATE, ENDIANNESS,
},
basedepay::PacketToBufferRelation,
};
use crate::basedepay::RtpBaseDepay2Ext;
// TODO: handle internal size fields in RTP OBUs
#[derive(Debug)]
struct PendingFragment {
ext_seqnum: u64,
obu: UnsizedObu,
bytes: Vec<u8>,
}
struct State {
last_timestamp: Option<u32>,
last_timestamp: Option<u64>,
/// if true, the last packet of a temporal unit has been received
marked_packet: bool,
/// if the next output buffer needs the DISCONT flag set
@ -36,7 +44,7 @@ struct State {
/// if we saw a valid OBU since the last reset
found_valid_obu: bool,
/// holds data for a fragment
obu_fragment: Option<(UnsizedObu, Vec<u8>)>,
obu_fragment: Option<PendingFragment>,
}
impl Default for State {
@ -51,9 +59,9 @@ impl Default for State {
}
}
#[derive(Debug, Default)]
#[derive(Default)]
pub struct RTPAv1Depay {
state: Mutex<State>,
state: AtomicRefCell<State>,
}
static CAT: Lazy<gst::DebugCategory> = Lazy::new(|| {
@ -78,7 +86,7 @@ impl RTPAv1Depay {
impl ObjectSubclass for RTPAv1Depay {
const NAME: &'static str = "GstRtpAv1Depay";
type Type = super::RTPAv1Depay;
type ParentType = gst_rtp::RTPBaseDepayload;
type ParentType = crate::basedepay::RtpBaseDepay2;
}
impl ObjectImpl for RTPAv1Depay {}
@ -107,7 +115,6 @@ impl ElementImpl for RTPAv1Depay {
gst::PadPresence::Always,
&gst::Caps::builder("application/x-rtp")
.field("media", "video")
.field("payload", gst::IntRange::new(96, 127))
.field("clock-rate", CLOCK_RATE as i32)
.field("encoding-name", "AV1")
.build(),
@ -131,87 +138,66 @@ impl ElementImpl for RTPAv1Depay {
PAD_TEMPLATES.as_ref()
}
fn change_state(
&self,
transition: gst::StateChange,
) -> Result<gst::StateChangeSuccess, gst::StateChangeError> {
gst::debug!(CAT, imp: self, "changing state: {}", transition);
if matches!(transition, gst::StateChange::ReadyToPaused) {
let mut state = self.state.lock().unwrap();
self.reset(&mut state);
}
let ret = self.parent_change_state(transition);
if matches!(transition, gst::StateChange::PausedToReady) {
let mut state = self.state.lock().unwrap();
self.reset(&mut state);
}
ret
}
}
impl RTPBaseDepayloadImpl for RTPAv1Depay {
fn set_caps(&self, _caps: &gst::Caps) -> Result<(), gst::LoggableError> {
let element = self.obj();
let src_pad = element.src_pad();
let src_caps = src_pad.pad_template_caps();
src_pad.push_event(gst::event::Caps::builder(&src_caps).build());
impl crate::basedepay::RtpBaseDepay2Impl for RTPAv1Depay {
const ALLOWED_META_TAGS: &'static [&'static str] = &["video"];
fn start(&self) -> Result<(), gst::ErrorMessage> {
let mut state = self.state.borrow_mut();
self.reset(&mut state);
Ok(())
}
fn handle_event(&self, event: gst::Event) -> bool {
match event.view() {
gst::EventView::Eos(_) | gst::EventView::FlushStop(_) => {
let mut state = self.state.lock().unwrap();
self.reset(&mut state);
}
_ => (),
}
fn stop(&self) -> Result<(), gst::ErrorMessage> {
let mut state = self.state.borrow_mut();
self.reset(&mut state);
self.parent_handle_event(event)
Ok(())
}
fn process_rtp_packet(
fn set_sink_caps(&self, _caps: &gst::Caps) -> bool {
self.obj()
.set_src_caps(&self.obj().src_pad().pad_template_caps());
true
}
fn flush(&self) {
let mut state = self.state.borrow_mut();
self.reset(&mut state);
}
fn handle_packet(
&self,
rtp: &gst_rtp::RTPBuffer<gst_rtp::rtp_buffer::Readable>,
) -> Option<gst::Buffer> {
if let Err(err) = self.handle_rtp_packet(rtp) {
packet: &crate::basedepay::Packet,
) -> Result<gst::FlowSuccess, gst::FlowError> {
let res = self.handle_rtp_packet(packet);
if let Err(err) = res {
gst::warning!(CAT, imp: self, "Failed to handle RTP packet: {err:?}");
self.reset(&mut self.state.lock().unwrap());
self.reset(&mut self.state.borrow_mut());
}
None
res
}
}
impl RTPAv1Depay {
fn handle_rtp_packet(
&self,
rtp: &gst_rtp::RTPBuffer<gst_rtp::rtp_buffer::Readable>,
) -> Result<(), gst::FlowError> {
gst::log!(
packet: &crate::basedepay::Packet,
) -> Result<gst::FlowSuccess, gst::FlowError> {
gst::trace!(
CAT,
imp: self,
"processing RTP packet with payload type {} and size {}",
rtp.payload_type(),
rtp.buffer().size(),
"Processing RTP packet {packet:?}",
);
let payload = rtp.payload().map_err(err_flow!(self, payload_buf))?;
let mut state = self.state.borrow_mut();
let mut state = self.state.lock().unwrap();
if rtp.buffer().flags().contains(gst::BufferFlags::DISCONT) {
gst::debug!(CAT, imp: self, "buffer discontinuity");
self.reset(&mut state);
}
let mut reader = Cursor::new(payload);
let mut reader = Cursor::new(packet.payload());
let mut ready_obus = Vec::new();
let aggr_header = {
@ -223,7 +209,7 @@ impl RTPAv1Depay {
};
// handle new temporal units
if state.marked_packet || state.last_timestamp != Some(rtp.timestamp()) {
if state.marked_packet || state.last_timestamp != Some(packet.ext_timestamp()) {
if state.last_timestamp.is_some() && state.obu_fragment.is_some() {
gst::error!(
CAT,
@ -242,8 +228,8 @@ impl RTPAv1Depay {
// the next temporal unit starts with a temporal delimiter OBU
ready_obus.extend_from_slice(&TEMPORAL_DELIMITER);
}
state.marked_packet = rtp.is_marker();
state.last_timestamp = Some(rtp.timestamp());
state.marked_packet = packet.marker_bit();
state.last_timestamp = Some(packet.ext_timestamp());
// parse and prepare the received OBUs
let mut idx = 0;
@ -258,10 +244,20 @@ impl RTPAv1Depay {
self.reset(&mut state);
}
if let Some((obu, ref mut bytes)) = &mut state.obu_fragment {
// If we finish an OBU here, it will start with the ext seqnum of this packet
// but if it also extends a fragment then the start will be set to the start
// of the fragment instead.
let mut start_ext_seqnum = packet.ext_seqnum();
if let Some(PendingFragment {
ext_seqnum,
obu,
ref mut bytes,
}) = state.obu_fragment
{
assert!(aggr_header.leading_fragment);
let (element_size, is_last_obu) = self
.find_element_info(rtp, &mut reader, &aggr_header, idx)
.find_element_info(&mut reader, &aggr_header, idx)
.map_err(err_flow!(self, find_element))?;
let bytes_end = bytes.len();
@ -283,6 +279,7 @@ impl RTPAv1Depay {
&full_obu,
&mut ready_obus,
)?;
start_ext_seqnum = ext_seqnum;
state.obu_fragment = None;
}
@ -290,9 +287,9 @@ impl RTPAv1Depay {
}
// handle other OBUs, including trailing fragments
while reader.position() < rtp.payload_size() as u64 {
while (reader.position() as usize) < reader.get_ref().len() {
let (element_size, is_last_obu) =
self.find_element_info(rtp, &mut reader, &aggr_header, idx)?;
self.find_element_info(&mut reader, &aggr_header, idx)?;
if idx == 0 && aggr_header.leading_fragment {
if state.found_valid_obu {
@ -330,13 +327,17 @@ impl RTPAv1Depay {
// trailing OBU fragments are stored in the state
if is_last_obu && aggr_header.trailing_fragment {
let bytes_left = rtp.payload_size() - (reader.position() as u32);
let mut bytes = vec![0; bytes_left as usize];
let bytes_left = reader.get_ref().len() - (reader.position() as usize);
let mut bytes = vec![0; bytes_left];
reader
.read_exact(bytes.as_mut_slice())
.map_err(err_flow!(self, buf_read))?;
state.obu_fragment = Some((obu, bytes));
state.obu_fragment = Some(PendingFragment {
ext_seqnum: packet.ext_seqnum(),
obu,
bytes,
});
}
// full OBU elements are translated and appended to the ready OBUs
else {
@ -396,10 +397,13 @@ impl RTPAv1Depay {
drop(state);
if let Some(buffer) = buffer {
self.obj().push(buffer)?;
self.obj().queue_buffer(
PacketToBufferRelation::Seqnums(start_ext_seqnum..=packet.ext_seqnum()),
buffer,
)
} else {
Ok(gst::FlowSuccess::Ok)
}
Ok(())
}
/// Find out the next OBU element's size, and if it is the last OBU in the packet.
@ -408,7 +412,6 @@ impl RTPAv1Depay {
/// and will be at the first byte past the element's size field afterwards.
fn find_element_info(
&self,
rtp: &gst_rtp::RTPBuffer<gst_rtp::rtp_buffer::Readable>,
reader: &mut Cursor<&[u8]>,
aggr_header: &AggregationHeader,
index: u32,
@ -418,7 +421,7 @@ impl RTPAv1Depay {
let element_size = if let Some(count) = aggr_header.obu_count {
is_last_obu = index + 1 == count as u32;
if is_last_obu {
rtp.payload_size() - (reader.position() as u32)
(reader.get_ref().len() - reader.position() as usize) as u32
} else {
let mut bitreader = BitReader::endian(reader, ENDIANNESS);
let (size, _) = parse_leb128(&mut bitreader).map_err(err_flow!(self, leb_read))?;
@ -427,7 +430,11 @@ impl RTPAv1Depay {
} else {
let (size, _) = parse_leb128(&mut BitReader::endian(&mut *reader, ENDIANNESS))
.map_err(err_flow!(self, leb_read))?;
is_last_obu = match rtp.payload_size().cmp(&(reader.position() as u32 + size)) {
is_last_obu = match reader
.get_ref()
.len()
.cmp(&(reader.position() as usize + size as usize))
{
Ordering::Greater => false,
Ordering::Equal => true,
Ordering::Less => {
@ -545,7 +552,9 @@ mod tests {
)
];
let element = <RTPAv1Depay as ObjectSubclass>::Type::new();
// Element exists just for logging purposes
let element = glib::Object::new::<crate::av1::depay::RTPAv1Depay>();
for (idx, (obu, rtp_bytes, out_bytes)) in test_data.into_iter().enumerate() {
println!("running test {idx}...");
let mut reader = Cursor::new(rtp_bytes.as_slice());
@ -563,40 +572,35 @@ mod tests {
fn test_find_element_info() {
gst::init().unwrap();
let test_data: [(Vec<(u32, bool)>, u32, Vec<u8>, AggregationHeader); 4] = [
let test_data: [(Vec<(u32, bool)>, Vec<u8>, AggregationHeader); 4] = [
(
vec![(1, false)], // expected results
100, // RTP payload size
vec![0b0000_0001, 0b0001_0000],
vec![0b0000_0001, 0b0001_0000, 0],
AggregationHeader { obu_count: None, ..AggregationHeader::default() },
), (
vec![(5, true)],
5,
vec![0b0111_1000, 0, 0, 0, 0],
AggregationHeader { obu_count: Some(1), ..AggregationHeader::default() },
), (
vec![(7, true)],
8,
vec![0b0000_0111, 0b0011_0110, 0b0010_1000, 0b0000_1010, 1, 2, 3, 4],
AggregationHeader { obu_count: None, ..AggregationHeader::default() },
), (
vec![(6, false), (4, true)],
11,
vec![0b0000_0110, 0b0111_1000, 1, 2, 3, 4, 5, 0b0011_0000, 1, 2, 3],
AggregationHeader { obu_count: Some(2), ..AggregationHeader::default() },
)
];
let element = <RTPAv1Depay as ObjectSubclass>::Type::new();
// Element exists just for logging purposes
let element = glib::Object::new::<crate::av1::depay::RTPAv1Depay>();
for (idx, (
info,
payload_size,
rtp_bytes,
aggr_header,
)) in test_data.into_iter().enumerate() {
println!("running test {idx}...");
let buffer = gst::Buffer::new_rtp_with_sizes(payload_size, 0, 0).unwrap();
let rtp = gst_rtp::RTPBuffer::from_buffer_readable(&buffer).unwrap();
let mut reader = Cursor::new(rtp_bytes.as_slice());
let mut element_size = 0;
@ -607,7 +611,7 @@ mod tests {
println!("testing element {} with reader position {}...", obu_idx, reader.position());
let actual = element.imp().find_element_info(&rtp, &mut reader, &aggr_header, obu_idx as u32);
let actual = element.imp().find_element_info(&mut reader, &aggr_header, obu_idx as u32);
assert_eq!(actual, Ok(expected));
element_size = actual.unwrap().0;
}


@ -6,22 +6,18 @@
// <https://mozilla.org/MPL/2.0/>.
//
// SPDX-License-Identifier: MPL-2.0
#![allow(clippy::new_without_default)]
use gst::glib;
use gst::prelude::*;
pub mod imp;
#[cfg(test)]
mod tests;
glib::wrapper! {
pub struct RTPAv1Depay(ObjectSubclass<imp::RTPAv1Depay>)
@extends gst_rtp::RTPBaseDepayload, gst::Element, gst::Object;
}
impl RTPAv1Depay {
pub fn new() -> Self {
glib::Object::new()
}
@extends crate::basedepay::RtpBaseDepay2, gst::Element, gst::Object;
}
pub fn register(plugin: &gst::Plugin) -> Result<(), glib::BoolError> {


@ -0,0 +1,117 @@
//
// Copyright (C) 2022 Vivienne Watermeier <vwatermeier@igalia.com>
//
// This Source Code Form is subject to the terms of the Mozilla Public License, v2.0.
// If a copy of the MPL was not distributed with this file, You can obtain one at
// <https://mozilla.org/MPL/2.0/>.
//
// SPDX-License-Identifier: MPL-2.0
use gst::{event::Eos, Caps};
use gst_check::Harness;
fn init() {
use std::sync::Once;
static INIT: Once = Once::new();
INIT.call_once(|| {
gst::init().unwrap();
crate::plugin_register_static().expect("rtpav1 test");
});
}
#[test]
fn test_depayloader() {
#[rustfmt::skip]
let test_packets: [(Vec<u8>, gst::ClockTime, bool, u32); 4] = [
( // simple packet, complete TU
vec![ // RTP payload
0b0001_1000,
0b0011_0000, 1, 2, 3, 4, 5, 6,
],
gst::ClockTime::from_seconds(0),
true, // marker bit
100_000, // timestamp
), ( // 2 OBUs, last is fragmented
vec![
0b0110_0000,
0b0000_0110, 0b0111_1000, 1, 2, 3, 4, 5,
0b0011_0000, 1, 2, 3,
],
gst::ClockTime::from_seconds(1),
false,
190_000,
), ( // continuation of the last OBU
vec![
0b1100_0000,
0b0000_0100, 4, 5, 6, 7,
],
gst::ClockTime::from_seconds(1),
false,
190_000,
), ( // finishing the OBU fragment
vec![
0b1001_0000,
8, 9, 10,
],
gst::ClockTime::from_seconds(1),
true,
190_000,
)
];
#[rustfmt::skip]
let expected: [(gst::ClockTime, Vec<u8>); 3] = [
(
gst::ClockTime::from_seconds(0),
vec![0b0001_0010, 0, 0b0011_0010, 0b0000_0110, 1, 2, 3, 4, 5, 6],
),
(
gst::ClockTime::from_seconds(1),
vec![0b0001_0010, 0, 0b0111_1010, 0b0000_0101, 1, 2, 3, 4, 5],
),
(
gst::ClockTime::from_seconds(1),
vec![0b0011_0010, 0b0000_1010, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
),
];
init();
let mut h = Harness::new("rtpav1depay");
h.play();
let caps = Caps::builder("application/x-rtp")
.field("media", "video")
.field("payload", 96)
.field("clock-rate", 90000)
.field("encoding-name", "AV1")
.build();
h.set_src_caps(caps);
for (idx, (bytes, pts, marker, timestamp)) in test_packets.iter().enumerate() {
let builder = rtp_types::RtpPacketBuilder::new()
.marker_bit(*marker)
.timestamp(*timestamp)
.payload_type(96)
.sequence_number(idx as u16)
.payload(bytes.as_slice());
let buf = builder.write_vec().unwrap();
let mut buf = gst::Buffer::from_mut_slice(buf);
{
buf.get_mut().unwrap().set_pts(*pts);
}
h.push(buf).unwrap();
}
h.push_event(Eos::new());
for (idx, (pts, ex)) in expected.iter().enumerate() {
println!("checking buffer {idx}...");
let buffer = h.pull().unwrap();
assert_eq!(buffer.pts(), Some(*pts));
let actual = buffer.into_mapped_buffer_readable().unwrap();
assert_eq!(actual.as_slice(), ex.as_slice());
}
}


@ -7,20 +7,19 @@
//
// SPDX-License-Identifier: MPL-2.0
use atomic_refcell::AtomicRefCell;
use gst::{glib, subclass::prelude::*};
use gst_rtp::{prelude::*, subclass::prelude::*};
use std::{
cmp,
collections::VecDeque,
io::{Cursor, Read, Seek, SeekFrom, Write},
sync::Mutex,
};
use bitstream_io::{BitReader, BitWriter};
use once_cell::sync::Lazy;
use crate::av1::common::{
err_flow, leb128_size, write_leb128, ObuType, SizedObu, CLOCK_RATE, ENDIANNESS,
use crate::{
av1::common::{err_flow, leb128_size, write_leb128, ObuType, SizedObu, CLOCK_RATE, ENDIANNESS},
basepay::{PacketToBufferRelation, RtpBasePay2Ext},
};
static CAT: Lazy<gst::DebugCategory> = Lazy::new(|| {
@ -31,8 +30,6 @@ static CAT: Lazy<gst::DebugCategory> = Lazy::new(|| {
)
});
// TODO: properly handle `max_ptime` and `min_ptime`
/// Information about the OBUs intended to be grouped into one packet
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
struct PacketOBUData {
@ -60,8 +57,7 @@ struct ObuData {
info: SizedObu,
bytes: Vec<u8>,
offset: usize,
dts: Option<gst::ClockTime>,
pts: Option<gst::ClockTime>,
id: u64,
}
#[derive(Clone, Debug, PartialEq, Eq)]
@ -78,18 +74,13 @@ struct State {
/// (Corresponds to `N` field in the aggregation header)
first_packet_in_seq: bool,
/// The last observed DTS if upstream does not provide DTS for each OBU
last_dts: Option<gst::ClockTime>,
/// The last observed PTS if upstream does not provide PTS for each OBU
last_pts: Option<gst::ClockTime>,
/// If the input is TU or frame aligned.
framed: bool,
}
#[derive(Debug, Default)]
pub struct RTPAv1Pay {
state: Mutex<State>,
state: AtomicRefCell<State>,
}
impl Default for State {
@ -98,8 +89,6 @@ impl Default for State {
obus: VecDeque::new(),
open_obu_fragment: false,
first_packet_in_seq: true,
last_dts: None,
last_pts: None,
framed: false,
}
}
@ -124,11 +113,10 @@ impl RTPAv1Pay {
fn handle_new_obus(
&self,
state: &mut State,
id: u64,
data: &[u8],
marker: bool,
dts: Option<gst::ClockTime>,
pts: Option<gst::ClockTime>,
) -> Result<gst::BufferList, gst::FlowError> {
) -> Result<gst::FlowSuccess, gst::FlowError> {
let mut reader = Cursor::new(data);
while reader.position() < data.len() as u64 {
@ -163,8 +151,7 @@ impl RTPAv1Pay {
info: obu,
bytes: Vec::new(),
offset: 0,
dts,
pts,
id,
});
}
@ -195,23 +182,17 @@ impl RTPAv1Pay {
info: obu,
bytes,
offset: 0,
dts,
pts,
id,
});
}
}
}
let mut list = gst::BufferList::new();
{
let list = list.get_mut().unwrap();
while let Some(packet_data) = self.consider_new_packet(state, false, marker) {
let buffer = self.generate_new_packet(state, packet_data)?;
list.add(buffer);
}
while let Some(packet_data) = self.consider_new_packet(state, false, marker) {
self.generate_new_packet(state, packet_data)?;
}
Ok(list)
Ok(gst::FlowSuccess::Ok)
}
/// Look at the size the currently stored OBUs would require,
@ -237,7 +218,7 @@ impl RTPAv1Pay {
marker,
);
let payload_limit = gst_rtp::calc_payload_len(self.obj().mtu(), 0, 0);
let payload_limit = self.obj().max_payload_size();
// Create information about the packet that can be created now while iterating over the
// OBUs and return this if a full packet can indeed be created now.
@ -361,7 +342,7 @@ impl RTPAv1Pay {
&self,
state: &mut State,
packet: PacketOBUData,
) -> Result<gst::Buffer, gst::FlowError> {
) -> Result<gst::FlowSuccess, gst::FlowError> {
gst::log!(
CAT,
imp: self,
@ -370,186 +351,134 @@ impl RTPAv1Pay {
);
// prepare the outgoing buffer
let mut outbuf =
gst::Buffer::new_rtp_with_sizes(packet.payload_size, 0, 0).map_err(|err| {
gst::element_imp_error!(
self,
gst::ResourceError::Write,
["Failed to allocate output buffer: {}", err]
);
gst::FlowError::Error
})?;
let mut payload = Vec::with_capacity(packet.payload_size as usize);
let mut writer = Cursor::new(&mut payload);
{
// this block enforces that outbuf_mut is dropped before pushing outbuf
let first_obu = state.obus.front().unwrap();
if let Some(dts) = first_obu.dts {
state.last_dts = Some(
state
.last_dts
.map_or(dts, |last_dts| cmp::max(last_dts, dts)),
);
}
if let Some(pts) = first_obu.pts {
state.last_pts = Some(
state
.last_pts
.map_or(pts, |last_pts| cmp::max(last_pts, pts)),
);
}
// construct aggregation header
let w = if packet.omit_last_size_field && packet.obu_count < 4 {
packet.obu_count
} else {
0
};
let outbuf_mut = outbuf
.get_mut()
.expect("Failed to get mutable reference to outbuf");
outbuf_mut.set_dts(state.last_dts);
outbuf_mut.set_pts(state.last_pts);
let mut rtp = gst_rtp::RTPBuffer::from_buffer_writable(outbuf_mut)
.expect("Failed to create RTPBuffer");
rtp.set_marker(packet.ends_temporal_unit);
let payload = rtp
.payload_mut()
.expect("Failed to get mutable reference to RTP payload");
let mut writer = Cursor::new(payload);
{
// construct aggregation header
let w = if packet.omit_last_size_field && packet.obu_count < 4 {
packet.obu_count
} else {
0
};
let aggr_header: [u8; 1] = [
let aggr_header: [u8; 1] = [
(state.open_obu_fragment as u8) << 7 | // Z
((packet.last_obu_fragment_size.is_some()) as u8) << 6 | // Y
(w as u8) << 4 | // W
(state.first_packet_in_seq as u8) << 3 // N
; 1];
writer
.write(&aggr_header)
.map_err(err_flow!(self, aggr_header_write))?;
writer
.write(&aggr_header)
.map_err(err_flow!(self, aggr_header_write))?;
state.first_packet_in_seq = false;
state.first_packet_in_seq = false;
}
let mut start_id = None;
let end_id;
// append OBUs to the buffer
for _ in 1..packet.obu_count {
let obu = loop {
let obu = state.obus.pop_front().unwrap();
// Drop temporal delimiter from here
if obu.info.obu_type != ObuType::TemporalDelimiter {
break obu;
}
};
if start_id.is_none() {
start_id = Some(obu.id);
}
// append OBUs to the buffer
for _ in 1..packet.obu_count {
let obu = loop {
let obu = state.obus.pop_front().unwrap();
write_leb128(
&mut BitWriter::endian(&mut writer, ENDIANNESS),
obu.info.size + obu.info.header_len,
)
.map_err(err_flow!(self, leb_write))?;
writer
.write(&obu.bytes[obu.offset..])
.map_err(err_flow!(self, obu_write))?;
}
state.open_obu_fragment = false;
if let Some(dts) = obu.dts {
state.last_dts = Some(
state
.last_dts
.map_or(dts, |last_dts| cmp::max(last_dts, dts)),
);
}
if let Some(pts) = obu.pts {
state.last_pts = Some(
state
.last_pts
.map_or(pts, |last_pts| cmp::max(last_pts, pts)),
);
}
{
let last_obu = loop {
let obu = state.obus.front_mut().unwrap();
// Drop temporal delimiter from here
if obu.info.obu_type != ObuType::TemporalDelimiter {
break obu;
}
};
// Drop temporal delimiter from here
if obu.info.obu_type != ObuType::TemporalDelimiter {
break obu;
}
let _ = state.obus.pop_front().unwrap();
};
write_leb128(
&mut BitWriter::endian(&mut writer, ENDIANNESS),
obu.info.size + obu.info.header_len,
)
.map_err(err_flow!(self, leb_write))?;
if start_id.is_none() {
start_id = Some(last_obu.id);
}
end_id = last_obu.id;
// do the last OBU separately
// in this instance `obu_size` includes the header length
let obu_size = if let Some(size) = packet.last_obu_fragment_size {
state.open_obu_fragment = true;
size
} else {
last_obu.bytes.len() as u32 - last_obu.offset as u32
};
if !packet.omit_last_size_field {
write_leb128(&mut BitWriter::endian(&mut writer, ENDIANNESS), obu_size)
.map_err(err_flow!(self, leb_write))?;
}
// if this OBU is not a fragment, handle it as usual
if packet.last_obu_fragment_size.is_none() {
writer
.write(&obu.bytes[obu.offset..])
.write(&last_obu.bytes[last_obu.offset..])
.map_err(err_flow!(self, obu_write))?;
let _ = state.obus.pop_front().unwrap();
}
state.open_obu_fragment = false;
// otherwise write only a slice, and update the element
// to only contain the unwritten bytes
else {
writer
.write(&last_obu.bytes[last_obu.offset..last_obu.offset + obu_size as usize])
.map_err(err_flow!(self, obu_write))?;
{
let last_obu = loop {
let obu = state.obus.front_mut().unwrap();
if let Some(dts) = obu.dts {
state.last_dts = Some(
state
.last_dts
.map_or(dts, |last_dts| cmp::max(last_dts, dts)),
);
}
if let Some(pts) = obu.pts {
state.last_pts = Some(
state
.last_pts
.map_or(pts, |last_pts| cmp::max(last_pts, pts)),
);
}
// Drop temporal delimiter from here
if obu.info.obu_type != ObuType::TemporalDelimiter {
break obu;
}
let _ = state.obus.pop_front().unwrap();
let new_size = last_obu.bytes.len() as u32 - last_obu.offset as u32 - obu_size;
last_obu.info = SizedObu {
size: new_size,
header_len: 0,
leb_size: leb128_size(new_size) as u32,
is_fragment: true,
..last_obu.info
};
// do the last OBU separately
// in this instance `obu_size` includes the header length
let obu_size = if let Some(size) = packet.last_obu_fragment_size {
state.open_obu_fragment = true;
size
} else {
last_obu.bytes.len() as u32 - last_obu.offset as u32
};
if !packet.omit_last_size_field {
write_leb128(&mut BitWriter::endian(&mut writer, ENDIANNESS), obu_size)
.map_err(err_flow!(self, leb_write))?;
}
// if this OBU is not a fragment, handle it as usual
if packet.last_obu_fragment_size.is_none() {
writer
.write(&last_obu.bytes[last_obu.offset..])
.map_err(err_flow!(self, obu_write))?;
let _ = state.obus.pop_front().unwrap();
}
// otherwise write only a slice, and update the element
// to only contain the unwritten bytes
else {
writer
.write(
&last_obu.bytes[last_obu.offset..last_obu.offset + obu_size as usize],
)
.map_err(err_flow!(self, obu_write))?;
let new_size = last_obu.bytes.len() as u32 - last_obu.offset as u32 - obu_size;
last_obu.info = SizedObu {
size: new_size,
header_len: 0,
leb_size: leb128_size(new_size) as u32,
is_fragment: true,
..last_obu.info
};
last_obu.offset += obu_size as usize;
}
last_obu.offset += obu_size as usize;
}
}
// OBUs were consumed above so start_id will be set now
let start_id = start_id.unwrap();
gst::log!(
CAT,
imp: self,
"generated RTP packet of size {}",
outbuf.size()
payload.len()
);
Ok(outbuf)
self.obj().queue_packet(
PacketToBufferRelation::Ids(start_id..=end_id),
rtp_types::RtpPacketBuilder::new()
.marker_bit(packet.ends_temporal_unit)
.payload(&payload),
)?;
Ok(gst::FlowSuccess::Ok)
}
}
@ -557,7 +486,7 @@ impl RTPAv1Pay {
impl ObjectSubclass for RTPAv1Pay {
const NAME: &'static str = "GstRtpAv1Pay";
type Type = super::RTPAv1Pay;
type ParentType = gst_rtp::RTPBasePayload;
type ParentType = crate::basepay::RtpBasePay2;
}
impl ObjectImpl for RTPAv1Pay {}
@ -610,64 +539,58 @@ impl ElementImpl for RTPAv1Pay {
PAD_TEMPLATES.as_ref()
}
fn change_state(
&self,
transition: gst::StateChange,
) -> Result<gst::StateChangeSuccess, gst::StateChangeError> {
gst::debug!(CAT, imp: self, "changing state: {}", transition);
if matches!(transition, gst::StateChange::ReadyToPaused) {
let mut state = self.state.lock().unwrap();
self.reset(&mut state, true);
}
let ret = self.parent_change_state(transition);
if matches!(transition, gst::StateChange::PausedToReady) {
let mut state = self.state.lock().unwrap();
self.reset(&mut state, true);
}
ret
}
}
impl RTPBasePayloadImpl for RTPAv1Pay {
fn set_caps(&self, caps: &gst::Caps) -> Result<(), gst::LoggableError> {
gst::debug!(CAT, imp: self, "received caps {caps:?}");
impl crate::basepay::RtpBasePay2Impl for RTPAv1Pay {
const ALLOWED_META_TAGS: &'static [&'static str] = &["video"];
{
let mut state = self.state.lock().unwrap();
let s = caps.structure(0).unwrap();
match s.get::<&str>("alignment").unwrap() {
"tu" | "frame" => {
state.framed = true;
}
_ => {
state.framed = false;
}
}
}
self.obj().set_options("video", true, "AV1", CLOCK_RATE);
fn start(&self) -> Result<(), gst::ErrorMessage> {
let mut state = self.state.borrow_mut();
self.reset(&mut state, true);
Ok(())
}
fn handle_buffer(&self, buffer: gst::Buffer) -> Result<gst::FlowSuccess, gst::FlowError> {
gst::trace!(CAT, imp: self, "received buffer of size {}", buffer.size());
fn stop(&self) -> Result<(), gst::ErrorMessage> {
let mut state = self.state.borrow_mut();
self.reset(&mut state, true);
let mut state = self.state.lock().unwrap();
Ok(())
}
if buffer.flags().contains(gst::BufferFlags::DISCONT) {
gst::debug!(CAT, imp: self, "buffer discontinuity");
self.reset(&mut state, false);
fn set_sink_caps(&self, caps: &gst::Caps) -> bool {
gst::debug!(CAT, imp: self, "received caps {caps:?}");
self.obj().set_src_caps(
&gst::Caps::builder("application/x-rtp")
.field("media", "video")
.field("clock-rate", CLOCK_RATE as i32)
.field("encoding-name", "AV1")
.build(),
);
let mut state = self.state.borrow_mut();
let s = caps.structure(0).unwrap();
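// "tu" or "frame" alignment means upstream delivers whole temporal units or
// frames, so every input buffer can be treated as ending a TU (see the
// marker handling in handle_buffer()).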
match s.get::<&str>("alignment").unwrap() {
"tu" | "frame" => {
state.framed = true;
}
_ => {
state.framed = false;
}
}
let dts = buffer.dts();
let pts = buffer.pts();
true
}
fn handle_buffer(
&self,
buffer: &gst::Buffer,
id: u64,
) -> Result<gst::FlowSuccess, gst::FlowError> {
gst::trace!(CAT, imp: self, "received buffer of size {}", buffer.size());
let mut state = self.state.borrow_mut();
let map = buffer.map_readable().map_err(|_| {
gst::element_imp_error!(
self,
@ -680,49 +603,31 @@ impl RTPBasePayloadImpl for RTPAv1Pay {
// Did the buffer finish a full TU?
let marker = buffer.flags().contains(gst::BufferFlags::MARKER) || state.framed;
let list = self.handle_new_obus(&mut state, map.as_slice(), marker, dts, pts)?;
let res = self.handle_new_obus(&mut state, id, map.as_slice(), marker)?;
drop(map);
drop(state);
if !list.is_empty() {
self.obj().push_list(list)
} else {
Ok(gst::FlowSuccess::Ok)
}
Ok(res)
}
fn sink_event(&self, event: gst::Event) -> bool {
gst::log!(CAT, imp: self, "sink event: {}", event.type_());
fn drain(&self) -> Result<gst::FlowSuccess, gst::FlowError> {
// flush all remaining OBUs
let mut res = Ok(gst::FlowSuccess::Ok);
match event.view() {
gst::EventView::Eos(_) => {
// flush all remaining OBUs
let mut list = gst::BufferList::new();
{
let mut state = self.state.lock().unwrap();
let list = list.get_mut().unwrap();
while let Some(packet_data) = self.consider_new_packet(&mut state, true, true) {
match self.generate_new_packet(&mut state, packet_data) {
Ok(buffer) => list.add(buffer),
Err(_) => break,
}
}
self.reset(&mut state, false);
}
if !list.is_empty() {
let _ = self.obj().push_list(list);
}
let mut state = self.state.borrow_mut();
while let Some(packet_data) = self.consider_new_packet(&mut state, true, true) {
res = self.generate_new_packet(&mut state, packet_data);
if res.is_err() {
break;
}
gst::EventView::FlushStop(_) => {
let mut state = self.state.lock().unwrap();
self.reset(&mut state, false);
}
_ => (),
}
self.parent_sink_event(event)
res
}
fn flush(&self) {
let mut state = self.state.borrow_mut();
self.reset(&mut state, false);
}
}
@ -927,12 +832,14 @@ mod tests {
),
];
let element = <RTPAv1Pay as ObjectSubclass>::Type::new();
// Element exists just for logging purposes
let element = glib::Object::new::<crate::av1::pay::RTPAv1Pay>();
let pay = element.imp();
for idx in 0..input_data.len() {
println!("running test {idx}...");
let mut state = pay.state.lock().unwrap();
let mut state = pay.state.borrow_mut();
*state = input_data[idx].1.clone();
assert_eq!(


@ -6,22 +6,17 @@
// <https://mozilla.org/MPL/2.0/>.
//
// SPDX-License-Identifier: MPL-2.0
#![allow(clippy::new_without_default)]
use gst::glib;
use gst::prelude::*;
pub mod imp;
#[cfg(test)]
mod tests;
glib::wrapper! {
pub struct RTPAv1Pay(ObjectSubclass<imp::RTPAv1Pay>)
@extends gst_rtp::RTPBasePayload, gst::Element, gst::Object;
}
impl RTPAv1Pay {
pub fn new() -> Self {
glib::Object::new()
}
@extends crate::basepay::RtpBasePay2, gst::Element, gst::Object;
}
pub fn register(plugin: &gst::Plugin) -> Result<(), glib::BoolError> {


@ -9,7 +9,6 @@
use gst::{event::Eos, prelude::*, Buffer, Caps, ClockTime};
use gst_check::Harness;
use gst_rtp::{rtp_buffer::RTPBufferExt, RTPBuffer};
fn init() {
use std::sync::Once;
@ -17,101 +16,13 @@ fn init() {
INIT.call_once(|| {
gst::init().unwrap();
gstrsrtp::plugin_register_static().expect("rtpav1 test");
crate::plugin_register_static().expect("rtpav1 test");
});
}
#[test]
#[rustfmt::skip]
fn test_depayloader() {
let test_packets: [(Vec<u8>, bool, u32); 4] = [
( // simple packet, complete TU
vec![ // RTP payload
0b0001_1000,
0b0011_0000, 1, 2, 3, 4, 5, 6,
],
true, // marker bit
100_000, // timestamp
), ( // 2 OBUs, last is fragmented
vec![
0b0110_0000,
0b0000_0110, 0b0111_1000, 1, 2, 3, 4, 5,
0b0011_0000, 1, 2, 3,
],
false,
190_000,
), ( // continuation of the last OBU
vec![
0b1100_0000,
0b0000_0100, 4, 5, 6, 7,
],
false,
190_000,
), ( // finishing the OBU fragment
vec![
0b1001_0000,
8, 9, 10,
],
true,
190_000,
)
];
let expected: [Vec<u8>; 3] = [
vec![
0b0001_0010, 0,
0b0011_0010, 0b0000_0110, 1, 2, 3, 4, 5, 6,
],
vec![
0b0001_0010, 0,
0b0111_1010, 0b0000_0101, 1, 2, 3, 4, 5,
],
vec![
0b0011_0010, 0b0000_1010, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
],
];
init();
let mut h = Harness::new("rtpav1depay");
h.play();
let caps = Caps::builder("application/x-rtp")
.field("media", "video")
.field("payload", 96)
.field("clock-rate", 90000)
.field("encoding-name", "AV1")
.build();
h.set_src_caps(caps);
for (idx, (bytes, marker, timestamp)) in test_packets.iter().enumerate() {
let mut buf = Buffer::new_rtp_with_sizes(bytes.len() as u32, 0, 0).unwrap();
{
let buf_mut = buf.get_mut().unwrap();
let mut rtp_mut = RTPBuffer::from_buffer_writable(buf_mut).unwrap();
rtp_mut.set_marker(*marker);
rtp_mut.set_timestamp(*timestamp);
rtp_mut.set_payload_type(96);
rtp_mut.set_seq(idx as u16);
rtp_mut.payload_mut().unwrap().copy_from_slice(bytes);
}
h.push(buf).unwrap();
}
h.push_event(Eos::new());
for (idx, ex) in expected.iter().enumerate() {
println!("checking buffer {idx}...");
let buffer = h.pull().unwrap();
let actual = buffer.into_mapped_buffer_readable().unwrap();
assert_eq!(actual.as_slice(), ex.as_slice());
}
}
#[test]
#[rustfmt::skip]
fn test_payloader() {
#[rustfmt::skip]
let test_buffers: [(u64, Vec<u8>); 3] = [
(
0,
@ -136,6 +47,7 @@ fn test_payloader() {
)
];
#[rustfmt::skip]
let expected = [
(
false, // marker bit
@ -183,7 +95,7 @@ fn test_payloader() {
let pay = h.element().unwrap();
pay.set_property(
"mtu",
gst_rtp::calc_packet_len(25, 0, 0)
25u32 + rtp_types::RtpPacket::MIN_RTP_PACKET_LEN as u32,
);
}
h.play();
@ -203,7 +115,10 @@ fn test_payloader() {
buffer.copy_from_slice(bytes);
let mut buffer = buffer.into_buffer();
buffer.get_mut().unwrap().set_pts(ClockTime::try_from(*pts).unwrap());
buffer
.get_mut()
.unwrap()
.set_pts(ClockTime::from_nseconds(*pts));
h.push(buffer).unwrap();
}
@ -214,13 +129,14 @@ fn test_payloader() {
println!("checking packet {idx}...");
let buffer = h.pull().unwrap();
let packet = RTPBuffer::from_buffer_readable(&buffer).unwrap();
let map = buffer.map_readable().unwrap();
let packet = rtp_types::RtpPacket::parse(&map).unwrap();
if base_ts.is_none() {
base_ts = Some(packet.timestamp());
}
assert_eq!(packet.payload().unwrap(), payload.as_slice());
assert_eq!(packet.is_marker(), *marker);
assert_eq!(packet.payload(), payload.as_slice());
assert_eq!(packet.marker_bit(), *marker);
assert_eq!(packet.timestamp(), base_ts.unwrap() + ts_offset);
}
}


@ -0,0 +1,518 @@
//
// Copyright (C) 2023 Sebastian Dröge <sebastian@centricular.com>
//
// This Source Code Form is subject to the terms of the Mozilla Public License, v2.0.
// If a copy of the MPL was not distributed with this file, You can obtain one at
// <https://mozilla.org/MPL/2.0/>.
//
// SPDX-License-Identifier: MPL-2.0
use std::{cmp, collections::VecDeque, sync::Mutex};
use atomic_refcell::AtomicRefCell;
use gst::{glib, prelude::*, subclass::prelude::*};
use once_cell::sync::Lazy;
use crate::{
audio_discont::{AudioDiscont, AudioDiscontConfiguration},
basepay::{PacketToBufferRelation, RtpBasePay2Ext, RtpBasePay2ImplExt, TimestampOffset},
};
#[derive(Clone)]
struct Settings {
max_ptime: Option<gst::ClockTime>,
min_ptime: gst::ClockTime,
ptime_multiple: gst::ClockTime,
audio_discont: AudioDiscontConfiguration,
}
impl Default for Settings {
fn default() -> Self {
Settings {
max_ptime: None,
min_ptime: gst::ClockTime::ZERO,
ptime_multiple: gst::ClockTime::ZERO,
audio_discont: AudioDiscontConfiguration::default(),
}
}
}
struct QueuedBuffer {
/// ID of the buffer.
id: u64,
/// The mapped buffer itself.
buffer: gst::MappedBuffer<gst::buffer::Readable>,
/// Offset into the buffer that was not consumed yet.
offset: usize,
}
#[derive(Default)]
struct State {
/// Currently configured clock rate.
clock_rate: Option<u32>,
/// Number of bytes per frame.
bpf: Option<usize>,
/// Desired "packet time", i.e. packet duration, from the caps, if set.
ptime: Option<gst::ClockTime>,
max_ptime: Option<gst::ClockTime>,
/// Currently queued buffers.
queued_buffers: VecDeque<QueuedBuffer>,
/// Currently queued number of bytes.
queued_bytes: usize,
audio_discont: AudioDiscont,
}
#[derive(Default)]
pub struct RtpBaseAudioPay2 {
settings: Mutex<Settings>,
state: AtomicRefCell<State>,
}
static CAT: Lazy<gst::DebugCategory> = Lazy::new(|| {
gst::DebugCategory::new(
"rtpbaseaudiopay2",
gst::DebugColorFlags::empty(),
Some("Base RTP Audio Payloader"),
)
});
#[glib::object_subclass]
impl ObjectSubclass for RtpBaseAudioPay2 {
const ABSTRACT: bool = true;
const NAME: &'static str = "GstRtpBaseAudioPay2";
type Type = super::RtpBaseAudioPay2;
type ParentType = crate::basepay::RtpBasePay2;
}
impl ObjectImpl for RtpBaseAudioPay2 {
fn properties() -> &'static [glib::ParamSpec] {
static PROPERTIES: Lazy<Vec<glib::ParamSpec>> = Lazy::new(|| {
let mut properties = vec![
// Using same type/semantics as C payloaders
glib::ParamSpecInt64::builder("max-ptime")
.nick("Maximum Packet Time")
.blurb("Maximum duration of the packet data in ns (-1 = unlimited up to MTU)")
.default_value(
Settings::default()
.max_ptime
.map(gst::ClockTime::nseconds)
.map(|x| x as i64)
.unwrap_or(-1),
)
.minimum(-1)
.maximum(i64::MAX)
.mutable_playing()
.build(),
// Using same type/semantics as C payloaders
glib::ParamSpecInt64::builder("min-ptime")
.nick("Minimum Packet Time")
.blurb("Minimum duration of the packet data in ns (can't go above MTU)")
.default_value(Settings::default().min_ptime.nseconds() as i64)
.minimum(0)
.maximum(i64::MAX)
.mutable_playing()
.build(),
// Using same type/semantics as C payloaders
glib::ParamSpecInt64::builder("ptime-multiple")
.nick("Packet Time Multiple")
.blurb("Force buffers to be multiples of this duration in ns (0 disables)")
.default_value(Settings::default().ptime_multiple.nseconds() as i64)
.minimum(0)
.maximum(i64::MAX)
.mutable_playing()
.build(),
];
properties.extend_from_slice(&AudioDiscontConfiguration::create_pspecs());
properties
});
PROPERTIES.as_ref()
}
fn set_property(&self, _id: usize, value: &glib::Value, pspec: &glib::ParamSpec) {
if self
.settings
.lock()
.unwrap()
.audio_discont
.set_property(value, pspec)
{
return;
}
match pspec.name() {
"max-ptime" => {
let v = value.get::<i64>().unwrap();
self.settings.lock().unwrap().max_ptime =
(v != -1).then_some(gst::ClockTime::from_nseconds(v as u64));
}
"min-ptime" => {
let v = gst::ClockTime::from_nseconds(value.get::<i64>().unwrap() as u64);
let mut settings = self.settings.lock().unwrap();
let changed = settings.min_ptime != v;
settings.min_ptime = v;
drop(settings);
if changed {
let _ = self
.obj()
.post_message(gst::message::Latency::builder().src(&*self.obj()).build());
}
}
"ptime-multiple" => {
self.settings.lock().unwrap().ptime_multiple =
gst::ClockTime::from_nseconds(value.get::<i64>().unwrap() as u64);
}
_ => unimplemented!(),
};
}
fn property(&self, _id: usize, pspec: &glib::ParamSpec) -> glib::Value {
if let Some(value) = self.settings.lock().unwrap().audio_discont.property(pspec) {
return value;
}
match pspec.name() {
"max-ptime" => (self
.settings
.lock()
.unwrap()
.max_ptime
.map(gst::ClockTime::nseconds)
.map(|x| x as i64)
.unwrap_or(-1))
.to_value(),
"min-ptime" => (self.settings.lock().unwrap().min_ptime.nseconds() as i64).to_value(),
"ptime-multiple" => {
(self.settings.lock().unwrap().ptime_multiple.nseconds() as i64).to_value()
}
_ => unimplemented!(),
}
}
}
impl GstObjectImpl for RtpBaseAudioPay2 {}
impl ElementImpl for RtpBaseAudioPay2 {}
impl crate::basepay::RtpBasePay2Impl for RtpBaseAudioPay2 {
const ALLOWED_META_TAGS: &'static [&'static str] = &["audio"];
fn start(&self) -> Result<(), gst::ErrorMessage> {
*self.state.borrow_mut() = State::default();
Ok(())
}
fn stop(&self) -> Result<(), gst::ErrorMessage> {
*self.state.borrow_mut() = State::default();
Ok(())
}
fn negotiate(&self, mut src_caps: gst::Caps) {
// Fixate here as a first step
src_caps.fixate();
let s = src_caps.structure(0).unwrap();
// Negotiate ptime/maxptime with downstream and use them in combination with the
// properties. See https://datatracker.ietf.org/doc/html/rfc4566#section-6
let ptime = s
.get::<u32>("ptime")
.ok()
.map(u64::from)
.map(gst::ClockTime::from_mseconds);
let max_ptime = s
.get::<u32>("maxptime")
.ok()
.map(u64::from)
.map(gst::ClockTime::from_mseconds);
let clock_rate = match s.get::<i32>("clock-rate") {
Ok(clock_rate) if clock_rate > 0 => clock_rate as u32,
_ => {
panic!("RTP caps {src_caps:?} without 'clock-rate'");
}
};
self.parent_negotiate(src_caps);
// Draining happened above if the clock rate has changed
let mut state = self.state.borrow_mut();
state.ptime = ptime;
state.max_ptime = max_ptime;
state.clock_rate = Some(clock_rate);
drop(state);
}
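// Example (assumed values): downstream caps such as
// "application/x-rtp, ptime=(uint)20, maxptime=(uint)60" pin packets to
// 20 ms and cap them at 60 ms; RFC 4566 specifies both attributes in
// milliseconds, hence the from_mseconds conversions above.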
fn drain(&self) -> Result<gst::FlowSuccess, gst::FlowError> {
let settings = self.settings.lock().unwrap().clone();
let mut state = self.state.borrow_mut();
self.drain_packets(&settings, &mut state, true)
}
fn flush(&self) {
let mut state = self.state.borrow_mut();
state.queued_buffers.clear();
state.queued_bytes = 0;
state.audio_discont.reset();
}
fn handle_buffer(
&self,
buffer: &gst::Buffer,
id: u64,
) -> Result<gst::FlowSuccess, gst::FlowError> {
let buffer = buffer.clone().into_mapped_buffer_readable().map_err(|_| {
gst::error!(CAT, imp: self, "Can't map buffer readable");
gst::FlowError::Error
})?;
let pts = buffer.buffer().pts().unwrap();
let settings = self.settings.lock().unwrap().clone();
let mut state = self.state.borrow_mut();
let Some(bpf) = state.bpf else {
return Err(gst::FlowError::NotNegotiated);
};
let Some(clock_rate) = state.clock_rate else {
return Err(gst::FlowError::NotNegotiated);
};
let num_samples = buffer.size() / bpf;
let discont = state.audio_discont.process_input(
&settings.audio_discont,
buffer.buffer().flags().contains(gst::BufferFlags::DISCONT),
clock_rate,
pts,
num_samples,
);
if discont {
if state.audio_discont.base_pts().is_some() {
gst::debug!(CAT, imp: self, "Draining because of discontinuity");
self.drain_packets(&settings, &mut state, true)?;
}
state.audio_discont.resync(pts, num_samples);
}
state.queued_bytes += buffer.buffer().size();
state.queued_buffers.push_back(QueuedBuffer {
id,
buffer,
offset: 0,
});
self.drain_packets(&settings, &mut state, false)
}
#[allow(clippy::single_match)]
fn src_query(&self, query: &mut gst::QueryRef) -> bool {
let res = self.parent_src_query(query);
if !res {
return false;
}
match query.view_mut() {
gst::QueryViewMut::Latency(query) => {
let (is_live, mut min, mut max) = query.result();
let min_ptime = self.settings.lock().unwrap().min_ptime;
min += min_ptime;
max.opt_add_assign(min_ptime);
query.set(is_live, min, max);
}
_ => (),
}
true
}
}
impl RtpBaseAudioPay2 {
/// Returns the minimum, maximum and chunk/multiple packet sizes
fn calculate_packet_sizes(
&self,
settings: &Settings,
state: &State,
clock_rate: u32,
bpf: usize,
) -> (usize, usize, usize) {
let min_pframes = settings
.min_ptime
.nseconds()
.mul_div_ceil(clock_rate as u64, gst::ClockTime::SECOND.nseconds())
.unwrap() as u32;
let max_ptime = match (settings.max_ptime, state.max_ptime) {
(Some(max_ptime), Some(caps_max_ptime)) => Some(cmp::min(max_ptime, caps_max_ptime)),
(None, Some(max_ptime)) => Some(max_ptime),
(Some(max_ptime), None) => Some(max_ptime),
_ => None,
};
let max_pframes = max_ptime.map(|max_ptime| {
max_ptime
.nseconds()
.mul_div_ceil(clock_rate as u64, gst::ClockTime::SECOND.nseconds())
.unwrap() as u32
});
let pframes_multiple = cmp::max(
1,
settings
.ptime_multiple
.nseconds()
.mul_div_ceil(clock_rate as u64, gst::ClockTime::SECOND.nseconds())
.unwrap() as u32,
);
gst::trace!(
CAT,
imp: self,
"min ptime {} (frames: {}), max ptime {} (frames: {}), ptime multiple {} (frames {})",
settings.min_ptime,
min_pframes,
settings.max_ptime.display(),
max_pframes.unwrap_or(0),
settings.ptime_multiple,
pframes_multiple,
);
let psize_multiple = pframes_multiple as usize * bpf;
let mut max_packet_size = self.obj().max_payload_size() as usize;
max_packet_size -= max_packet_size % psize_multiple;
if let Some(max_pframes) = max_pframes {
max_packet_size = cmp::min(max_pframes as usize * bpf, max_packet_size)
}
let mut min_packet_size = cmp::min(
cmp::max(min_pframes as usize * bpf, psize_multiple),
max_packet_size,
);
if let Some(ptime) = state.ptime {
let pframes = ptime
.nseconds()
.mul_div_ceil(clock_rate as u64, gst::ClockTime::SECOND.nseconds())
.unwrap() as u32;
let psize = pframes as usize * bpf;
min_packet_size = cmp::max(min_packet_size, psize);
max_packet_size = cmp::min(max_packet_size, psize);
}
(min_packet_size, max_packet_size, psize_multiple)
}
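// Worked example (assumed values): with min-ptime = 10ms, clock-rate = 48000
// and bpf = 4 (16-bit stereo), min_pframes = ceil(10_000_000 * 48000 /
// 1_000_000_000) = 480 frames, i.e. a minimum packet size of 480 * 4 = 1920
// bytes; max_packet_size and psize_multiple use the same frames-to-bytes
// conversion.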
fn drain_packets(
&self,
settings: &Settings,
state: &mut State,
force: bool,
) -> Result<gst::FlowSuccess, gst::FlowError> {
// Always set when caps are set
let Some(clock_rate) = state.clock_rate else {
return Ok(gst::FlowSuccess::Ok);
};
let bpf = state.bpf.unwrap();
let (min_packet_size, max_packet_size, psize_multiple) =
self.calculate_packet_sizes(settings, state, clock_rate, bpf);
gst::trace!(
CAT,
imp: self,
"Currently {} bytes queued, min packet size {min_packet_size}, max packet size {max_packet_size}, force {force}",
state.queued_bytes,
);
while state.queued_bytes >= min_packet_size || (force && state.queued_bytes > 0) {
let packet_size = {
let mut packet_size = cmp::min(max_packet_size, state.queued_bytes);
packet_size -= packet_size % psize_multiple;
packet_size
};
gst::trace!(
CAT,
imp: self,
"Creating packet of size {packet_size} ({} frames), marker {}",
packet_size / bpf,
state.audio_discont.next_output_offset().is_none(),
);
// Set marker bit on the first packet after a discontinuity
let mut packet_builder = rtp_types::RtpPacketBuilder::new()
.marker_bit(state.audio_discont.next_output_offset().is_none());
let front = state.queued_buffers.front().unwrap();
let start_id = front.id;
let mut end_id = front.id;
// Fill payload from all relevant buffers and collect start/end ids that apply.
let mut remaining_packet_size = packet_size;
for buffer in state.queued_buffers.iter() {
let this_buffer_payload_size =
cmp::min(buffer.buffer.size() - buffer.offset, remaining_packet_size);
end_id = buffer.id;
packet_builder = packet_builder
.payload(&buffer.buffer[buffer.offset..][..this_buffer_payload_size]);
remaining_packet_size -= this_buffer_payload_size;
if remaining_packet_size == 0 {
break;
}
}
// Then create the packet.
self.obj().queue_packet(
PacketToBufferRelation::IdsWithOffset {
ids: start_id..=end_id,
timestamp_offset: {
if let Some(next_out_offset) = state.audio_discont.next_output_offset() {
TimestampOffset::Rtp(next_out_offset)
} else {
TimestampOffset::Pts(gst::ClockTime::ZERO)
}
},
},
packet_builder,
)?;
// And finally dequeue or update all currently queued buffers.
let mut remaining_packet_size = packet_size;
while remaining_packet_size > 0 {
let buffer = state.queued_buffers.front_mut().unwrap();
if buffer.buffer.size() - buffer.offset > remaining_packet_size {
buffer.offset += remaining_packet_size;
remaining_packet_size = 0;
} else {
remaining_packet_size -= buffer.buffer.size() - buffer.offset;
let _ = state.queued_buffers.pop_front();
}
}
state.queued_bytes -= packet_size;
state.audio_discont.process_output(packet_size / bpf);
}
gst::trace!(CAT, imp: self, "Currently {} bytes / {} frames queued", state.queued_bytes, state.queued_bytes / bpf);
Ok(gst::FlowSuccess::Ok)
}
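// A worked sketch of the loop above, with assumed numbers: 1000 bytes queued,
// min_packet_size = 480, max_packet_size = 960, psize_multiple = 4. The first
// packet takes min(960, 1000) = 960 bytes (already a multiple of 4), leaving
// 40 bytes queued. Since 40 < 480 the loop stops there, unless `force` is set,
// in which case a final 40-byte packet is drained as well.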
}
/// Wrapper functions for public API.
#[allow(dead_code)]
impl RtpBaseAudioPay2 {
pub(super) fn set_bpf(&self, bpf: usize) {
let mut state = self.state.borrow_mut();
state.bpf = Some(bpf);
}
}

net/rtp/src/baseaudiopay/mod.rs Normal file

@@ -0,0 +1,40 @@
//
// Copyright (C) 2023 Sebastian Dröge <sebastian@centricular.com>
//
// This Source Code Form is subject to the terms of the Mozilla Public License, v2.0.
// If a copy of the MPL was not distributed with this file, You can obtain one at
// <https://mozilla.org/MPL/2.0/>.
//
// SPDX-License-Identifier: MPL-2.0
use gst::{glib, prelude::*, subclass::prelude::*};
use crate::basepay::RtpBasePay2Impl;
pub mod imp;
glib::wrapper! {
pub struct RtpBaseAudioPay2(ObjectSubclass<imp::RtpBaseAudioPay2>)
@extends crate::basepay::RtpBasePay2, gst::Element, gst::Object;
}
/// Trait containing extension methods for `RtpBaseAudioPay2`.
pub trait RtpBaseAudioPay2Ext: IsA<RtpBaseAudioPay2> {
/// Sets the number of bytes per frame.
///
/// Should always be called together with `RtpBasePay2Ext::set_src_caps()`.
fn set_bpf(&self, bpf: usize) {
self.upcast_ref::<RtpBaseAudioPay2>().imp().set_bpf(bpf)
}
}
impl<O: IsA<RtpBaseAudioPay2>> RtpBaseAudioPay2Ext for O {}
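// A hedged usage sketch (`gst_audio` and the caps fields are assumptions of
// this example, not requirements of this module): a subclass would typically
// derive `bpf` from the negotiated audio caps and configure both values from
// its `set_sink_caps()` implementation:
//
//     fn set_sink_caps(&self, caps: &gst::Caps) -> bool {
//         let Ok(info) = gst_audio::AudioInfo::from_caps(caps) else {
//             return false;
//         };
//         // Bytes per frame follow directly from the raw audio format.
//         self.obj().set_bpf(info.bpf() as usize);
//         let src_caps = gst::Caps::builder("application/x-rtp")
//             .field("media", "audio")
//             .field("clock-rate", info.rate() as i32)
//             .build();
//         self.obj().set_src_caps(&src_caps);
//         true
//     }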
/// Trait to implement in `RtpBaseAudioPay2` subclasses.
pub trait RtpBaseAudioPay2Impl: RtpBasePay2Impl {}
unsafe impl<T: RtpBaseAudioPay2Impl> IsSubclassable<T> for RtpBaseAudioPay2 {
fn class_init(class: &mut glib::Class<Self>) {
Self::parent_class_init::<T>(class);
}
}

net/rtp/src/basedepay/imp.rs Normal file (1934 lines)

File diff suppressed because it is too large.

net/rtp/src/basedepay/mod.rs Normal file

@@ -0,0 +1,544 @@
//
// Copyright (C) 2023 Sebastian Dröge <sebastian@centricular.com>
//
// This Source Code Form is subject to the terms of the Mozilla Public License, v2.0.
// If a copy of the MPL was not distributed with this file, You can obtain one at
// <https://mozilla.org/MPL/2.0/>.
//
// SPDX-License-Identifier: MPL-2.0
use std::ops::{Range, RangeBounds, RangeInclusive};
use gst::{glib, prelude::*, subclass::prelude::*};
pub mod imp;
glib::wrapper! {
pub struct RtpBaseDepay2(ObjectSubclass<imp::RtpBaseDepay2>)
@extends gst::Element, gst::Object;
}
/// Trait containing extension methods for `RtpBaseDepay2`.
pub trait RtpBaseDepay2Ext: IsA<RtpBaseDepay2> {
/// Sends a caps event with the given caps downstream before the next output buffer.
fn set_src_caps(&self, src_caps: &gst::Caps) {
assert!(src_caps.is_fixed());
self.upcast_ref::<RtpBaseDepay2>()
.imp()
.set_src_caps(src_caps);
}
/// Drop the packets of the given packet range.
///
/// This should be called when packets are dropped because they either don't make up any output
/// buffer or because they're corrupted in some way.
///
/// All pending packets up to the end of the range are dropped, i.e. the start of the range is
/// irrelevant.
///
/// It is not necessary to call this as part of `drain()` or `flush()` as all still pending packets are
/// considered dropped afterwards.
///
/// The next buffer that is finished will automatically get the `DISCONT` flag set.
// FIXME: Allow subclasses to drop explicit packet ranges while keeping older packets around to
// allow for continuing to reconstruct a frame despite broken packets in the middle.
fn drop_packets(&self, ext_seqnum: impl RangeBounds<u64>) {
self.upcast_ref::<RtpBaseDepay2>()
.imp()
.drop_packets(ext_seqnum)
}
/// Drop a single packet.
///
/// Convenience wrapper around `drop_packets()` for a single packet.
fn drop_packet(&self, packet: &Packet) {
self.drop_packets(packet.ext_seqnum()..=packet.ext_seqnum())
}
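// A hedged sketch (`payload_is_valid` is a hypothetical check, not part of
// this API): a subclass that detects a corrupt payload can drop everything up
// to and including that packet; as noted above, the start of the range is
// irrelevant:
//
//     if !payload_is_valid(packet.payload()) {
//         self.obj().drop_packets(..=packet.ext_seqnum());
//         return Ok(gst::FlowSuccess::Ok);
//     }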
/// Queue a buffer made from a given range of packet seqnums and timestamp offsets.
///
/// All buffers that are queued during one call of `handle_packet()` are collected in a
/// single buffer list and forwarded once `handle_packet()` has returned with a successful flow
/// return or `finish_pending_buffers()` was called.
///
/// All pending packets for which buffers were queued are released once `handle_packet()`
/// has returned, except for the last one. This makes it possible for subclasses to queue a
/// buffer and later combine the remaining chunk of that packet with data from the next packet.
///
/// If `OutOfBand` is passed then the buffer is assumed to be produced using some other data,
/// e.g. from the caps, and not associated with any packets. In that case it will be pushed
/// right before the next buffer with the timestamp of that buffer, or at EOS with the
/// timestamp of the previous buffer.
///
/// In all other cases a seqnum range is provided and needs to be valid.
///
/// If the seqnum range doesn't start with the first pending packet then all packets up to the
/// first one given in the range are considered dropped.
///
/// Together with the seqnum range it is possible to provide a timestamp offset relative to
/// which the outgoing buffers' timestamps should be set.
///
/// The timestamp offset is provided in nanoseconds relative to the PTS of the packet that the
/// first seqnum refers to. This mode is mostly useful for subclasses that consume a packet
/// with multiple frames and send out one buffer per frame.
///
/// Both a PTS and DTS offset can be provided. If no DTS offset is provided then no DTS will be
/// set at all. If no PTS offset is provided then the first buffer for a given start seqnum
/// will get the PTS of the corresponding packet and all following buffers that start with the
/// same seqnum will get no PTS set.
///
/// Note that the DTS offset is relative to the final PTS and as such should not include the
/// PTS offset.
fn queue_buffer(
&self,
packet_to_buffer_relation: PacketToBufferRelation,
buffer: gst::Buffer,
) -> Result<gst::FlowSuccess, gst::FlowError> {
self.upcast_ref::<RtpBaseDepay2>()
.imp()
.queue_buffer(packet_to_buffer_relation, buffer)
}
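// A hedged sketch of the multi-frame case described above (the 20ms frame
// duration and `frame_size` are assumed values, not part of this API): a
// subclass splitting one packet into fixed-size frames could queue each frame
// with a growing PTS offset:
//
//     for (i, frame) in packet.payload().chunks(frame_size).enumerate() {
//         let offset = gst::ClockTime::from_mseconds(20 * i as u64);
//         self.obj().queue_buffer(
//             PacketToBufferRelation::SeqnumsWithOffset {
//                 seqnums: packet.ext_seqnum()..=packet.ext_seqnum(),
//                 timestamp_offset: TimestampOffset::Pts(offset.into_positive()),
//             },
//             gst::Buffer::from_slice(frame.to_vec()),
//         )?;
//     }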
/// Finish currently pending buffers and push them downstream in a single buffer list.
#[allow(unused)]
fn finish_pending_buffers(&self) -> Result<gst::FlowSuccess, gst::FlowError> {
self.upcast_ref::<RtpBaseDepay2>()
.imp()
.finish_pending_buffers()
}
/// Returns a reference to the sink pad.
fn sink_pad(&self) -> &gst::Pad {
self.upcast_ref::<RtpBaseDepay2>().imp().sink_pad()
}
/// Returns a reference to the src pad.
fn src_pad(&self) -> &gst::Pad {
self.upcast_ref::<RtpBaseDepay2>().imp().src_pad()
}
}
impl<O: IsA<RtpBaseDepay2>> RtpBaseDepay2Ext for O {}
/// Trait to implement in `RtpBaseDepay2` subclasses.
pub trait RtpBaseDepay2Impl: ElementImpl {
/// By default only metas without any tags are copied. Adding tags here will also copy the
/// metas that have exactly one of these tags and no others.
///
/// If more complex copying of metas is needed then [`RtpBaseDepay2Impl::transform_meta`] has
/// to be implemented.
const ALLOWED_META_TAGS: &'static [&'static str] = &[];
/// Called when streaming starts (READY -> PAUSED state change)
///
/// Optional, can be used to initialise streaming state.
fn start(&self) -> Result<(), gst::ErrorMessage> {
self.parent_start()
}
/// Called after streaming has stopped (PAUSED -> READY state change)
///
/// Optional, can be used to clean up streaming state.
fn stop(&self) -> Result<(), gst::ErrorMessage> {
self.parent_stop()
}
/// Called when new caps are received on the sink pad.
///
/// Can be used to configure the caps on the src pad or to configure caps-specific state.
/// If draining is necessary because of the caps change then the subclass will have to do that.
///
/// Optional, by default does nothing.
fn set_sink_caps(&self, caps: &gst::Caps) -> bool {
self.parent_set_sink_caps(caps)
}
/// Called whenever a new packet is available.
fn handle_packet(&self, packet: &Packet) -> Result<gst::FlowSuccess, gst::FlowError> {
self.parent_handle_packet(packet)
}
/// Called whenever a discontinuity or EOS is observed.
///
/// The subclass should output any pending buffers it can output at this point.
///
/// This will be followed by a call to [`Self::flush`].
///
/// Optional, by default drops all still pending packets and forwards all still pending buffers
/// with the last known timestamp.
fn drain(&self) -> Result<gst::FlowSuccess, gst::FlowError> {
self.parent_drain()
}
/// Called on `FlushStop` or whenever all pending data should simply be discarded.
///
/// The subclass should reset its internal state as necessary.
///
/// Optional.
fn flush(&self) {
self.parent_flush()
}
/// Called whenever a new event arrives on the sink pad.
///
/// Optional, by default does the standard event handling of the base class.
fn sink_event(&self, event: gst::Event) -> Result<gst::FlowSuccess, gst::FlowError> {
self.parent_sink_event(event)
}
/// Called whenever a new event arrives on the src pad.
///
/// Optional, by default does the standard event handling of the base class.
fn src_event(&self, event: gst::Event) -> Result<gst::FlowSuccess, gst::FlowError> {
self.parent_src_event(event)
}
/// Called whenever a new query arrives on the sink pad.
///
/// Optional, by default does the standard query handling of the base class.
fn sink_query(&self, query: &mut gst::QueryRef) -> bool {
self.parent_sink_query(query)
}
/// Called whenever a new query arrives on the src pad.
///
/// Optional, by default does the standard query handling of the base class.
fn src_query(&self, query: &mut gst::QueryRef) -> bool {
self.parent_src_query(query)
}
/// Called whenever a meta from an input buffer has to be copied to the output buffer.
///
/// Optional, by default simply copies over all metas.
fn transform_meta(
&self,
in_buf: &gst::BufferRef,
meta: &gst::MetaRef<gst::Meta>,
out_buf: &mut gst::BufferRef,
) {
self.parent_transform_meta(in_buf, meta, out_buf);
}
}
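// A hedged minimal sketch of a subclass (`MyDepay` and its ObjectSubclass
// boilerplate are assumed, not shown): the simplest depayloader forwards each
// packet's payload as one buffer tied to that packet:
//
//     impl RtpBaseDepay2Impl for MyDepay {
//         const ALLOWED_META_TAGS: &'static [&'static str] = &["audio"];
//
//         fn handle_packet(&self, packet: &Packet) -> Result<gst::FlowSuccess, gst::FlowError> {
//             // `packet.into()` maps the output buffer 1:1 to this packet's seqnum.
//             self.obj().queue_buffer(packet.into(), packet.payload_buffer())
//         }
//     }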
/// Trait containing extension methods for `RtpBaseDepay2Impl`, specifically methods for chaining
/// up to the parent implementation of virtual methods.
pub trait RtpBaseDepay2ImplExt: RtpBaseDepay2Impl {
fn parent_set_sink_caps(&self, caps: &gst::Caps) -> bool {
unsafe {
let data = Self::type_data();
let parent_class = &*(data.as_ref().parent_class() as *mut Class);
(parent_class.set_sink_caps)(self.obj().unsafe_cast_ref(), caps)
}
}
fn parent_start(&self) -> Result<(), gst::ErrorMessage> {
unsafe {
let data = Self::type_data();
let parent_class = &*(data.as_ref().parent_class() as *mut Class);
(parent_class.start)(self.obj().unsafe_cast_ref())
}
}
fn parent_stop(&self) -> Result<(), gst::ErrorMessage> {
unsafe {
let data = Self::type_data();
let parent_class = &*(data.as_ref().parent_class() as *mut Class);
(parent_class.stop)(self.obj().unsafe_cast_ref())
}
}
fn parent_handle_packet(&self, packet: &Packet) -> Result<gst::FlowSuccess, gst::FlowError> {
unsafe {
let data = Self::type_data();
let parent_class = &*(data.as_ref().parent_class() as *mut Class);
(parent_class.handle_packet)(self.obj().unsafe_cast_ref(), packet)
}
}
fn parent_drain(&self) -> Result<gst::FlowSuccess, gst::FlowError> {
unsafe {
let data = Self::type_data();
let parent_class = &*(data.as_ref().parent_class() as *mut Class);
(parent_class.drain)(self.obj().unsafe_cast_ref())
}
}
fn parent_flush(&self) {
unsafe {
let data = Self::type_data();
let parent_class = &*(data.as_ref().parent_class() as *mut Class);
(parent_class.flush)(self.obj().unsafe_cast_ref());
}
}
fn parent_sink_event(&self, event: gst::Event) -> Result<gst::FlowSuccess, gst::FlowError> {
unsafe {
let data = Self::type_data();
let parent_class = &*(data.as_ref().parent_class() as *mut Class);
(parent_class.sink_event)(self.obj().unsafe_cast_ref(), event)
}
}
fn parent_src_event(&self, event: gst::Event) -> Result<gst::FlowSuccess, gst::FlowError> {
unsafe {
let data = Self::type_data();
let parent_class = &*(data.as_ref().parent_class() as *mut Class);
(parent_class.src_event)(self.obj().unsafe_cast_ref(), event)
}
}
fn parent_sink_query(&self, query: &mut gst::QueryRef) -> bool {
unsafe {
let data = Self::type_data();
let parent_class = &*(data.as_ref().parent_class() as *mut Class);
(parent_class.sink_query)(self.obj().unsafe_cast_ref(), query)
}
}
fn parent_src_query(&self, query: &mut gst::QueryRef) -> bool {
unsafe {
let data = Self::type_data();
let parent_class = &*(data.as_ref().parent_class() as *mut Class);
(parent_class.src_query)(self.obj().unsafe_cast_ref(), query)
}
}
fn parent_transform_meta(
&self,
in_buf: &gst::BufferRef,
meta: &gst::MetaRef<gst::Meta>,
out_buf: &mut gst::BufferRef,
) {
unsafe {
let data = Self::type_data();
let parent_class = &*(data.as_ref().parent_class() as *mut Class);
(parent_class.transform_meta)(self.obj().unsafe_cast_ref(), in_buf, meta, out_buf)
}
}
}
impl<T: RtpBaseDepay2Impl> RtpBaseDepay2ImplExt for T {}
#[derive(Debug)]
pub struct Packet {
buffer: gst::MappedBuffer<gst::buffer::Readable>,
discont: bool,
ext_seqnum: u64,
ext_timestamp: u64,
marker: bool,
payload_range: Range<usize>,
}
impl Packet {
pub fn ext_seqnum(&self) -> u64 {
self.ext_seqnum
}
pub fn ext_timestamp(&self) -> u64 {
self.ext_timestamp
}
pub fn marker_bit(&self) -> bool {
self.marker
}
pub fn discont(&self) -> bool {
self.discont
}
pub fn pts(&self) -> Option<gst::ClockTime> {
self.buffer.buffer().pts()
}
pub fn payload(&self) -> &[u8] {
&self.buffer[self.payload_range.clone()]
}
pub fn payload_buffer(&self) -> gst::Buffer {
self.buffer
.buffer()
.copy_region(
gst::BufferCopyFlags::MEMORY,
self.payload_range.start..self.payload_range.end,
)
.expect("Failed copying buffer")
}
/// Note: This function will panic if the range is out of bounds.
pub fn payload_subbuffer(&self, range: impl RangeBounds<usize>) -> gst::Buffer {
let range_start = match range.start_bound() {
std::ops::Bound::Included(&start) => start,
std::ops::Bound::Excluded(&start) => start.checked_add(1).unwrap(),
std::ops::Bound::Unbounded => 0,
}
.checked_add(self.payload_range.start)
.unwrap();
let range_end = match range.end_bound() {
std::ops::Bound::Included(&range_end) => range_end
.checked_add(self.payload_range.start)
.and_then(|v| v.checked_add(1))
.unwrap(),
std::ops::Bound::Excluded(&range_end) => {
range_end.checked_add(self.payload_range.start).unwrap()
}
std::ops::Bound::Unbounded => self.payload_range.end,
};
self.buffer
.buffer()
.copy_region(gst::BufferCopyFlags::MEMORY, range_start..range_end)
.expect("Failed to create subbuffer")
}
/// Note: For an offset with unspecified length just use `payload_subbuffer(off..)`.
/// Note: This function will panic if the offset or length are out of bounds.
pub fn payload_subbuffer_from_offset_with_length(
&self,
start: usize,
length: usize,
) -> gst::Buffer {
self.payload_subbuffer(start..start + length)
}
}
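// A worked example of the payload indexing above, with assumed numbers: for a
// packet whose `payload_range` is 12..172 (a 160-byte payload after the RTP
// header), `payload_subbuffer(4..8)` copies bytes 16..20 of the underlying
// buffer, i.e. bytes 4..8 of the payload.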
/// Class struct for `RtpBaseDepay2`.
#[repr(C)]
pub struct Class {
parent: gst::ffi::GstElementClass,
start: fn(&RtpBaseDepay2) -> Result<(), gst::ErrorMessage>,
stop: fn(&RtpBaseDepay2) -> Result<(), gst::ErrorMessage>,
set_sink_caps: fn(&RtpBaseDepay2, caps: &gst::Caps) -> bool,
handle_packet: fn(&RtpBaseDepay2, packet: &Packet) -> Result<gst::FlowSuccess, gst::FlowError>,
drain: fn(&RtpBaseDepay2) -> Result<gst::FlowSuccess, gst::FlowError>,
flush: fn(&RtpBaseDepay2),
sink_event: fn(&RtpBaseDepay2, event: gst::Event) -> Result<gst::FlowSuccess, gst::FlowError>,
src_event: fn(&RtpBaseDepay2, event: gst::Event) -> Result<gst::FlowSuccess, gst::FlowError>,
sink_query: fn(&RtpBaseDepay2, query: &mut gst::QueryRef) -> bool,
src_query: fn(&RtpBaseDepay2, query: &mut gst::QueryRef) -> bool,
transform_meta: fn(
&RtpBaseDepay2,
in_buf: &gst::BufferRef,
meta: &gst::MetaRef<gst::Meta>,
out_buf: &mut gst::BufferRef,
),
allowed_meta_tags: &'static [&'static str],
}
unsafe impl ClassStruct for Class {
type Type = imp::RtpBaseDepay2;
}
impl std::ops::Deref for Class {
type Target = glib::Class<<<Self as ClassStruct>::Type as ObjectSubclass>::ParentType>;
fn deref(&self) -> &Self::Target {
unsafe { &*(&self.parent as *const _ as *const _) }
}
}
unsafe impl<T: RtpBaseDepay2Impl> IsSubclassable<T> for RtpBaseDepay2 {
fn class_init(class: &mut glib::Class<Self>) {
Self::parent_class_init::<T>(class);
let class = class.as_mut();
class.start = |obj| unsafe {
let imp = obj.unsafe_cast_ref::<T::Type>().imp();
imp.start()
};
class.stop = |obj| unsafe {
let imp = obj.unsafe_cast_ref::<T::Type>().imp();
imp.stop()
};
class.set_sink_caps = |obj, caps| unsafe {
let imp = obj.unsafe_cast_ref::<T::Type>().imp();
imp.set_sink_caps(caps)
};
class.handle_packet = |obj, packet| unsafe {
let imp = obj.unsafe_cast_ref::<T::Type>().imp();
imp.handle_packet(packet)
};
class.drain = |obj| unsafe {
let imp = obj.unsafe_cast_ref::<T::Type>().imp();
imp.drain()
};
class.flush = |obj| unsafe {
let imp = obj.unsafe_cast_ref::<T::Type>().imp();
imp.flush()
};
class.sink_event = |obj, event| unsafe {
let imp = obj.unsafe_cast_ref::<T::Type>().imp();
imp.sink_event(event)
};
class.src_event = |obj, event| unsafe {
let imp = obj.unsafe_cast_ref::<T::Type>().imp();
imp.src_event(event)
};
class.sink_query = |obj, query| unsafe {
let imp = obj.unsafe_cast_ref::<T::Type>().imp();
imp.sink_query(query)
};
class.src_query = |obj, query| unsafe {
let imp = obj.unsafe_cast_ref::<T::Type>().imp();
imp.src_query(query)
};
class.transform_meta = |obj, in_buf, meta, out_buf| unsafe {
let imp = obj.unsafe_cast_ref::<T::Type>().imp();
imp.transform_meta(in_buf, meta, out_buf)
};
class.allowed_meta_tags = T::ALLOWED_META_TAGS;
}
}
/// Timestamp offset between this buffer and the reference packet.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
#[allow(dead_code)]
pub enum TimestampOffset {
/// Offset in nanoseconds relative to the first seqnum this buffer belongs to.
Pts(gst::Signed<gst::ClockTime>),
/// PTS and DTS offsets in nanoseconds relative to the first seqnum this buffer belongs to.
PtsAndDts(gst::Signed<gst::ClockTime>, gst::Signed<gst::ClockTime>),
}
/// Relation between queued buffer and input packet seqnums.
#[derive(Debug, Clone, PartialEq, Eq)]
#[allow(dead_code)]
pub enum PacketToBufferRelation {
Seqnums(RangeInclusive<u64>),
SeqnumsWithOffset {
seqnums: RangeInclusive<u64>,
timestamp_offset: TimestampOffset,
},
OutOfBand,
}
impl<'a> From<&'a Packet> for PacketToBufferRelation {
fn from(packet: &'a Packet) -> Self {
PacketToBufferRelation::Seqnums(packet.ext_seqnum()..=packet.ext_seqnum())
}
}
impl From<u64> for PacketToBufferRelation {
fn from(ext_seqnum: u64) -> Self {
PacketToBufferRelation::Seqnums(ext_seqnum..=ext_seqnum)
}
}
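// A hedged sketch of the `OutOfBand` case (`codec_data_buffer` is an assumed
// buffer built from the caps, not part of this API): buffers synthesized from
// out-of-band data rather than from packets are queued without a seqnum range:
//
//     self.obj()
//         .queue_buffer(PacketToBufferRelation::OutOfBand, codec_data_buffer)?;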

net/rtp/src/basepay/imp.rs Normal file (2117 lines)

File diff suppressed because it is too large.

Some files were not shown because too many files have changed in this diff.