Currently only an output duration can be specified for buffer
splitting. Allow specifying an output buffer size in bytes as well.
Consider the use case of the following pipeline:
appsrc ! rtpL16pay ! capsfilter ! rtpbin ! udpsink
Staying within the MTU for RTP transfer is desirable, but the
buffers being pushed into appsrc might not adhere to it. Placing an
audiobuffersplit element between appsrc and rtpL16pay, with an
output buffer size chosen with the MTU in mind, helps mitigate
this.
While rtpL16pay already has an MTU setting, an incoming buffer
whose size is close to the MTU gets split unevenly: e.g. with an
MTU of 1280 and the 12-byte RTP header, a buffer of 1276 bytes
would be split into two buffers of 1268 and 8 bytes. Putting
audiobuffersplit between appsrc and rtpL16pay takes care of this.
While the buffer duration could still be used for this, being able
to specify the size in bytes directly is more convenient here.
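A test pipeline reusing the numbers above could look like this
(a sketch, assuming the new property is exposed as
output-buffer-size):

  gst-launch-1.0 audiotestsrc ! audioconvert ! \
      audiobuffersplit output-buffer-size=1268 ! \
      rtpL16pay mtu=1280 ! udpsink host=127.0.0.1 port=5004

Every buffer entering rtpL16pay is then at most 1268 bytes, which
together with the 12-byte RTP header stays within the 1280-byte
MTU.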
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1578>
Also set/unset the RESYNC flag accordingly.
It can happen that the flag is preserved by GstAdapter from the input
buffer. For example, if a big input buffer is split into many small
ones, each of the small ones would have the flag set.
All other buffer flags seem safe to keep here if they were set,
including the GAP flag.
Also ensure that the buffer is actually writable before changing any
flags or metadata on it.
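A minimal sketch of the intended handling (not the element's exact
code; is_resync is a hypothetical flag tracked by the caller):

  /* decide the flag per output buffer instead of inheriting it */
  buffer = gst_buffer_make_writable (buffer);
  if (is_resync)
    GST_BUFFER_FLAG_SET (buffer, GST_BUFFER_FLAG_RESYNC);
  else
    GST_BUFFER_FLAG_UNSET (buffer, GST_BUFFER_FLAG_RESYNC);
  /* all other flags, including GAP, are left as they were */

Making the buffer writable first matters because the flag macros
modify the buffer in place, which must not leak into other
references to a shared buffer.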
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1298>
We might have to drain already queued input based on the old
segment before forwarding the new segment event. The new segment is
only forwarded after a discont, as otherwise we might cause
unnecessary timestamp jumps, since we timestamp output buffers
based on sample counts.
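A hedged sketch of the resulting ordering in the SEGMENT event
handler (drain_queued_input and segment_pending are hypothetical
names):

  case GST_EVENT_SEGMENT:
    /* flush samples still timed against the old segment */
    drain_queued_input (self);
    gst_event_copy_segment (event, &self->in_segment);
    /* push the segment downstream only at the next discont */
    self->segment_pending = TRUE;
    break;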
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1254>
If we drain after a discont, the discont time given by the stream
synchronizer is already the time after the discontinuity. But we need to
drain all pending data based on the previous discont time instead.
This happens if we had no CAPS event yet but e.g. got an EOS event.
We would then try to output a 0-sized buffer, but taking that from
the adapter hits an assertion and returns NULL, and we then crash
on the NULL buffer.
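gst_adapter_take_buffer() asserts on a zero-byte request and
returns NULL, so the drain path has to bail out early. A sketch of
the guard (variable names hypothetical):

  if (avail == 0)
    return GST_FLOW_OK;     /* nothing queued, nothing to push */
  buffer = gst_adapter_take_buffer (self->adapter, avail);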
This is useful e.g. if audio buffers should be exactly the duration
of a video frame, or if audio buffers should never be too large
because of latency constraints.
The element takes a fractional buffer duration to allow working
with e.g. 1001/30000 as the output duration. It accumulates the
rounding errors in the buffer durations and compensates for them by
making some buffers one sample larger than the others.
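For example, at a sample rate of 48000 Hz an output duration of
1001/30000 s corresponds to 48000 * 1001 / 30000 = 1601.6 samples,
so the element emits 1601-sample buffers while carrying the
0.6-sample remainder along, and a 1602-sample buffer whenever the
accumulated error reaches a full sample; no drift builds up over
time. Assuming the duration is exposed as the
output-buffer-duration property, matching audio buffers to
30000/1001 fps video frames could look like:

  gst-launch-1.0 audiotestsrc ! \
      audiobuffersplit output-buffer-duration=1001/30000 ! fakesink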
https://bugzilla.gnome.org/show_bug.cgi?id=774689