We would create another chunk that ends after the fragment end, and
from then on would consider the stream always filled for that chunk
because the chunk starts after the current fragment end (i.e. nothing
more would go into this fragment).
This is obviously wrong because the actual fragment end has moved
further ahead due to the additional chunk.
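
As a hypothetical sketch of the boundary arithmetic (illustrative only,
not the element's actual code), the fragment effectively ends at the
end of its last chunk, which can lie past the nominal fragment end
whenever the chunk duration does not evenly divide the fragment
duration:

    // Cover the nominal fragment duration with whole chunks,
    // rounding up.
    fn effective_fragment_end(start: u64, fragment_dur: u64, chunk_dur: u64) -> u64 {
        let n_chunks = (fragment_dur + chunk_dur - 1) / chunk_dur;
        start + n_chunks * chunk_dur
    }

    fn main() {
        // Nominal end 2000ms, 600ms chunks: the last chunk ends at
        // 2400ms, so the fragment end moved ahead by 400ms.
        assert_eq!(effective_fragment_end(0, 2_000, 600), 2_400);
        // Comparing chunk starts against the stale 2000ms end instead
        // of 2400ms is what made the stream look permanently filled.
    }
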
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1285>
This works around non-determinism in aggregator: depending on timing,
it either consumes all buffers from both pads, or waits for another
buffer on one pad while the other pad already has one queued.
The effect in this test was that it sometimes timed out. By providing
one more buffer it is now guaranteed that at this point the muxer is
beyond the end of the first fragment.
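
A toy illustration of the counting (made-up numbers, not the test's
actual code):

    fn main() {
        // With 100ms buffers and a 1s first fragment, 10 buffers per
        // pad end exactly at the fragment boundary; depending on which
        // pad aggregator drains first, that may not push the muxer
        // past it. One extra buffer per pad guarantees that it is.
        let buffer_ms = 100u64;
        let fragment_ms = 1_000u64;
        let exactly_at_boundary = fragment_ms / buffer_ms; // 10
        let supplied = exactly_at_boundary + 1;
        assert!(supplied * buffer_ms > fragment_ms);
    }
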
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1086>
Allow outputting sub-fragments (chunks in CMAF terms) that are shorter
than the fragment duration and that usually don't start on a keyframe.
This reduces the element's buffering requirements, and its latency, to
one chunk duration.
This is used for low-latency streaming formats like LL-HLS and DASH.
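
A minimal configuration sketch, assuming the chunking is exposed as a
chunk-duration property next to the existing fragment-duration one on
the cmafmux element:

    use gstreamer as gst;
    use gst::prelude::*;

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        gst::init()?;
        // 2s fragments, each split into 500ms CMAF chunks.
        let muxer = gst::ElementFactory::make("cmafmux")
            .property("fragment-duration", gst::ClockTime::from_seconds(2))
            .property("chunk-duration", gst::ClockTime::from_mseconds(500))
            .build()?;
        println!("configured {}", muxer.name());
        Ok(())
    }
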
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1086>
Due to how aggregator works, whether aggregate() is called again or the
element waits for a timeout or EOS depends on how buffers are consumed
(see the toy model after the list):
works:
- pad 1: 4 buffers, pad 2: 4 buffers
- aggregate ready: take all 4/4 buffers
- pad 1: 1 buffer, pad 2: 1 buffer
- aggregate ready: take all 1/1 buffers
waits:
- pad 1: 5 buffers, pad 2: 4 buffers
- aggregate ready: take all 5/4 buffers
- pad 1: 0 buffers, pad 2: 1 buffer
- aggregate not ready: waiting for timeout or EOS
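
A toy model of that readiness rule (not aggregator's actual code):

    // aggregate() runs while every sink pad either has a buffer queued
    // or is EOS; otherwise aggregator waits for more data, a timeout,
    // or EOS.
    fn aggregate_ready(pads: &[(usize, bool)]) -> bool {
        pads.iter().all(|&(queued, eos)| queued > 0 || eos)
    }

    fn main() {
        // "works": both pads still have buffers after each round.
        assert!(aggregate_ready(&[(4, false), (4, false)]));
        assert!(aggregate_ready(&[(1, false), (1, false)]));
        // "waits": after taking 5/4 buffers, the pads hold 0 and 1.
        assert!(!aggregate_ready(&[(0, false), (1, false)]));
    }
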
Also don't manually set the clock time as that's unnecessary.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/issues/274
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/999>