Currently, only an output buffer duration can be specified for buffer splitting.
Allow specifying an output buffer size in bytes as well.
Consider the use case of the following pipeline:
appsrc ! rtpL16pay ! capsfilter ! rtpbin ! udpsink
Maintaining the MTU for RTP transfer is desirable, but the buffers being
pushed into appsrc may not adhere to it. In that scenario, an
audiobuffersplit element placed between appsrc and rtpL16pay, with an
output buffer size chosen with the MTU in mind, can help mitigate this.
While rtpL16pay already has an MTU setting, an incoming buffer whose size
is close to the MTU ends up split inefficiently: e.g. with an MTU of 1280,
a buffer of 1276 bytes would be split into two buffers of 1268 and 8 bytes,
given the 12-byte RTP header. Putting audiobuffersplit between appsrc and
rtpL16pay takes care of this.
While the buffer duration could still be used, being able to specify the
size in bytes is more convenient here.
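A minimal sketch of the intended usage (the "output-buffer-size" property
name and the 1268-byte value are assumptions here, chosen so that the
payload plus 12-byte RTP header stays within a 1280-byte MTU; rtpbin is
left out for brevity):

  #include <gst/gst.h>

  int
  main (int argc, char **argv)
  {
    GstElement *pipeline;
    GError *error = NULL;

    gst_init (&argc, &argv);

    /* audiobuffersplit caps the size of the buffers reaching rtpL16pay so
     * that no payloaded RTP packet exceeds the MTU. */
    pipeline = gst_parse_launch (
        "appsrc name=src ! audiobuffersplit output-buffer-size=1268 ! "
        "rtpL16pay mtu=1280 ! udpsink host=127.0.0.1 port=5004", &error);
    if (pipeline == NULL) {
      g_printerr ("Failed to build pipeline: %s\n", error->message);
      g_clear_error (&error);
      return 1;
    }

    gst_element_set_state (pipeline, GST_STATE_PLAYING);
    /* ... push audio buffers into appsrc, run a main loop, clean up ... */
    gst_element_set_state (pipeline, GST_STATE_NULL);
    gst_object_unref (pipeline);
    return 0;
  }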
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1578>
For the Dolby AC4 audio experience, parse the audio preselection
descriptors (APD) from the PMTs at the transport stream layer for all
available presentations.
Refer to ETSI EN 300 468 V1.16.1 (2019-05):
1. 6.4.1 Audio preselection descriptor
2. Table M.1: Mapping of codec specific values to the audio preselection descriptor
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1555>
Add a vp9parse element to parse various stream information such as
resolution, profile, and so on. If upstream does not provide resolution
and/or profile, this is useful in a decodebin pipeline for autoplugging a
suitable decoder element depending on the template caps of each decoder
element.
In addition, the vp9parse element supports unpacking a superframe into
single frames for decoders. A VP9 superframe is a frame that consists of
multiple frames (a superframe with a single frame is also allowed)
followed by a superframe index block. Each unpacked frame is then handled
as a normal frame by the decoder. The decision to unpack is based on the
downstream element's "alignment" caps field, which can be "super-frame"
or "frame". If downstream specifies the "alignment" as "frame", vp9parse
splits an incoming superframe into single frames and discards the
superframe index (located at the end of the superframe).
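A minimal sketch of forcing per-frame alignment with a capsfilter (the
filesrc location, matroskademux and vp9dec are illustrative assumptions;
gst_init() is assumed to have been called):

  #include <gst/gst.h>

  /* The capsfilter downstream of vp9parse requests alignment=frame, so
   * vp9parse unpacks superframes and drops the superframe index before
   * the decoder sees the data. */
  static GstElement *
  build_vp9_frame_pipeline (GError ** error)
  {
    return gst_parse_launch (
        "filesrc location=video.webm ! matroskademux ! vp9parse ! "
        "video/x-vp9,alignment=frame ! vp9dec ! videoconvert ! autovideosink",
        error);
  }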
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1041>
Making the thread receiving the stats wait on the loop to respond was
not a good idea, as the latter can get blocked on the streaming thread.
Have get_stats read the values directly, adding a lock to ensure we
don't read garbage.
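A minimal sketch of the resulting pattern, with hypothetical type and
field names: the streaming thread updates the counters under a mutex and
get_stats takes a consistent snapshot directly instead of waiting on the
loop:

  #include <gst/gst.h>

  typedef struct {
    GMutex stats_lock;
    guint64 bytes_sent;
    guint64 packets_sent;
  } Sender;

  /* Called from the streaming thread. */
  static void
  sender_update_stats (Sender * self, guint64 bytes)
  {
    g_mutex_lock (&self->stats_lock);
    self->bytes_sent += bytes;
    self->packets_sent++;
    g_mutex_unlock (&self->stats_lock);
  }

  /* Called from the thread requesting the stats; no loop round-trip. */
  static GstStructure *
  sender_get_stats (Sender * self)
  {
    GstStructure *s;

    g_mutex_lock (&self->stats_lock);
    s = gst_structure_new ("sender/stats",
        "bytes-sent", G_TYPE_UINT64, self->bytes_sent,
        "packets-sent", G_TYPE_UINT64, self->packets_sent, NULL);
    g_mutex_unlock (&self->stats_lock);

    return s;
  }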
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1550>
Ensure we take the object lock while accessing `GstElement.sinkpads`.
Where the code isn't simple, use an iterator instead, to avoid deadlocks.
When we find the best pad, take a reference so a concurrent pad
release doesn't destroy the pad before we're done with it.
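A minimal sketch of the simple case, with a hypothetical helper name (the
less trivial call sites use gst_element_iterate_sink_pads() instead of
walking the list under the lock):

  #include <gst/gst.h>

  static GstPad *
  find_best_sink_pad (GstElement * element)
  {
    GstPad *best = NULL;
    GList *l;

    /* GstElement.sinkpads may only be accessed with the object lock held. */
    GST_OBJECT_LOCK (element);
    for (l = element->sinkpads; l != NULL; l = l->next) {
      GstPad *pad = GST_PAD (l->data);

      if (best == NULL /* || pad_is_better (pad, best) */)
        best = pad;
    }
    /* Keep a reference so a concurrent gst_element_release_request_pad()
     * can't destroy the pad once the lock is dropped. */
    if (best != NULL)
      gst_object_ref (best);
    GST_OBJECT_UNLOCK (element);

    return best;                /* caller unrefs when done */
  }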
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1553>
There are quite a few reserved PIDs in the various MPEG-TS (and derived)
specifications which we should definitely not use. Those PIDs have a
specific meaning and purpose.
Furthermore, a lot of the code in the muxer implementation also makes
assumptions about the purpose of streams based on their PID.
Therefore, when requesting a pad with a specific PID, make sure it is not a
restricted PID.
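A minimal sketch of the check, with a hypothetical helper name:

  #include <glib.h>

  /* PIDs 0x0000-0x000F are reserved by ISO/IEC 13818-1 (PAT, CAT, TSDT,
   * ...), 0x0010-0x001F are used for DVB SI tables (NIT, SDT, EIT, ...)
   * and 0x1FFF is the null packet, so none of them may be handed out for
   * an elementary stream. */
  static gboolean
  tsmux_pid_is_restricted (guint16 pid)
  {
    return pid <= 0x001F || pid == 0x1FFF;
  }

With a check like this in place, a pad request that maps to such a PID can
be refused.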
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1561>
Regardless of whether it's multicast or not, set the address property to
match the element's address. This is the address of the interface to
listen on, which is expected to be ANY in most cases, but should be
honored even in the non-multicast RTCP case.
This also fixes an assertion if the address is not a parsable IPv4 or
IPv6 string.
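A minimal sketch of the resulting behaviour, with hypothetical helper and
variable names: the address is forwarded to the RTCP udpsrc
unconditionally, not only in the multicast case:

  #include <gst/gst.h>

  /* `address` is the element's address property (the interface to listen
   * on, usually ANY).  It is set on the RTCP udpsrc whether or not it is
   * a multicast address. */
  static void
  configure_rtcp_src (GstElement * rtcp_src, const gchar * address,
      gint rtcp_port)
  {
    g_object_set (rtcp_src, "address", address, "port", rtcp_port, NULL);
  }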
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1433>
Previously, "en" (should have actually been "eng") was assumed
for the ISO-639 language descriptor if no language was explicitely given.
Neither ETSI EN 300 468 nor ATSC A/52 mandate for a language descriptor,
so we should simply not set it, if it's unknown.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1386>
The audio/mpeg,mpegversion=2 caps in GStreamer refer to
MPEG-2 AAC (ISO 13818-7), not to the extended MP3 (ISO 13818-3),
which is audio/mpeg,mpegversion=1,mpegaudioversion=2/3.
Fix the caps, and add handling for MPEG-2 AAC in both ADTS and raw
form, adding ADTS headers for the latter.
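For illustration (not code from the patch), the two caps in question can
be constructed like this:

  #include <gst/gst.h>

  static void
  show_caps_distinction (void)
  {
    /* MPEG-2 AAC (ISO/IEC 13818-7), here in ADTS form */
    GstCaps *aac = gst_caps_new_simple ("audio/mpeg",
        "mpegversion", G_TYPE_INT, 2,
        "stream-format", G_TYPE_STRING, "adts", NULL);

    /* Extended MP3 (ISO/IEC 13818-3) */
    GstCaps *mp3 = gst_caps_new_simple ("audio/mpeg",
        "mpegversion", G_TYPE_INT, 1,
        "mpegaudioversion", G_TYPE_INT, 2,
        "layer", G_TYPE_INT, 3, NULL);

    gst_println ("AAC caps: %" GST_PTR_FORMAT, aac);
    gst_println ("MP3 caps: %" GST_PTR_FORMAT, mp3);

    gst_caps_unref (aac);
    gst_caps_unref (mp3);
  }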
rtpbin can still emit signals while it is being disposed, and even though
rtpbin lives inside ristsrc/ristsink it can still outlive them.
So we either have to disconnect all signals at some point, or let GObject
take care of that automatically.
Related to !1412
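A minimal sketch of the "let GObject take care of it" option, with
hypothetical callback and variable names:

  #include <gst/gst.h>

  static void
  on_rtpbin_pad_added (GstElement * rtpbin, GstPad * pad, gpointer user_data)
  {
    GST_DEBUG ("rtpbin added pad %s", GST_PAD_NAME (pad));
  }

  static void
  connect_rtpbin_signals (GstElement * self, GstElement * rtpbin)
  {
    /* g_signal_connect_object() (rather than g_signal_connect()) makes
     * GObject disconnect the handler automatically when `self` goes away,
     * so a signal emitted by a longer-lived rtpbin cannot call back into
     * a disposed ristsrc/ristsink. */
    g_signal_connect_object (rtpbin, "pad-added",
        G_CALLBACK (on_rtpbin_pad_added), self, 0);
  }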
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1413>
rtpbin can still emit signals while it is being disposed, and even though
rtpbin lives inside rtpsrc/rtpsink it can still outlive them.
So we either have to disconnect all signals at some point, or let GObject
take care of that automatically.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1412>