The current optimization when the input alignment and the output
alignment are the same is not correct. We simply copy the data from
the input buffer to the output buffer, but we fail to consider the
dropping of OBUs. When we need to drop some OBUs (such as filtering
out the OBUs of some temporal ID), a plain copy is not possible. So
we need to always copy the input OBUs into a cache.
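A minimal sketch of the cache approach, assuming a hypothetical
should_drop_obu() filter and a GstAdapter as the OBU cache (names are
illustrative, not the actual av1parse code):

  #include <gst/base/gstadapter.h>

  static gboolean should_drop_obu (GstBuffer * obu);  /* hypothetical filter */

  /* Always accumulate the kept OBUs in a cache instead of passing the
   * input buffer through unchanged. */
  static void
  cache_obus (GstAdapter * cache, GPtrArray * obus)
  {
    guint i;

    for (i = 0; i < obus->len; i++) {
      GstBuffer *obu = g_ptr_array_index (obus, i);

      if (should_drop_obu (obu))        /* e.g. unwanted temporal ID */
        continue;

      gst_adapter_push (cache, gst_buffer_ref (obu));
    }
    /* The output buffer is assembled from the cache later, so the
     * dropped OBUs never reach downstream. */
  }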
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1979>
1. Add mono_chrome to identify the 4:0:0 chroma-format.
2. Correct the mapping between subsampling_x/y and the chroma-format.
   There is no 4:4:0 format definition in AV1, and 4:4:4 requires
   both subsampling_x and subsampling_y to be 0.
3. Send the chroma-format when the color space is not RGB.
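A minimal sketch of the corrected mapping, following the AV1 sequence
header semantics (not the actual parser code):

  #include <glib.h>

  /* Chroma-format implied by mono_chrome and subsampling_x/y. */
  static const gchar *
  av1_chroma_format (gboolean mono_chrome, guint subsampling_x,
      guint subsampling_y)
  {
    if (mono_chrome)
      return "4:0:0";
    if (subsampling_x == 1 && subsampling_y == 1)
      return "4:2:0";
    if (subsampling_x == 1 && subsampling_y == 0)
      return "4:2:2";
    if (subsampling_x == 0 && subsampling_y == 0)
      return "4:4:4";
    /* subsampling_x == 0 && subsampling_y == 1 (4:4:0) is not
     * defined in AV1. */
    return NULL;
  }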
Fixes: #1502
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1974>
This AV1 parse element implements conversion between the obu, tu and
frame alignments, and conversion between the obu-stream and annexb
stream-formats.
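A minimal usage sketch, assuming the element is registered as av1parse
and that a capsfilter is used to request the converted output (caps
field values taken from the description above):

  #include <gst/gst.h>

  /* Request frame-aligned obu-stream output downstream of av1parse:
   * ... ! av1parse ! capsfilter ! decoder */
  static GstElement *
  make_av1_alignment_filter (void)
  {
    GstElement *filter = gst_element_factory_make ("capsfilter", NULL);
    GstCaps *caps = gst_caps_from_string (
        "video/x-av1,stream-format=obu-stream,alignment=frame");

    g_object_set (filter, "caps", caps, NULL);
    gst_caps_unref (caps);
    return filter;
  }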
TODO:
1. May need an operating_point property to filter the OBUs.
2. May add a property to disable deep parse.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1614>
When carrying over existing GstStreams to a new GstStreamCollection we
need to check whether they were *actually* being used in the previous
collection.
This avoids adding unknown streams (metadata, PSI, etc.) to the
collection on updates.
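A minimal sketch of such a check, assuming streams are matched by
their stream-id (illustrative, not necessarily the actual code):

  #include <gst/gst.h>

  /* Return TRUE if @stream was actually part of @old_collection. */
  static gboolean
  stream_in_collection (GstStreamCollection * old_collection,
      GstStream * stream)
  {
    const gchar *sid = gst_stream_get_stream_id (stream);
    guint i, len = gst_stream_collection_get_size (old_collection);

    for (i = 0; i < len; i++) {
      GstStream *old = gst_stream_collection_get_stream (old_collection, i);

      if (!g_strcmp0 (gst_stream_get_stream_id (old), sid))
        return TRUE;
    }
    return FALSE;
  }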
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1880>
Otherwise, when rtmp2src cancels an in-flight operation that has a
queued message stored, the RTMP connection operation is not stopped.
If the cancellation occurs during RTMP connection start-up, then
rtmp2src does not have any way of accessing the connection object, as
it has not been returned yet. As a result, rtmp2src will cancel, the
connection will still be processing things, and the
GMainContext/GMainLoop associated with the outstanding operation will
be destroyed. All outstanding operations and the rtmpconnection object
will therefore be leaked in this case.
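A minimal sketch of the shutdown idea: on cancellation, stop the
connection on its own GMainContext before tearing the context down.
gst_rtmp_connection_close() and the context handle are assumptions
here, not necessarily the actual fix:

  #include <gst/gst.h>

  /* Invoked on the connection's own main context. */
  static gboolean
  stop_connection (gpointer user_data)
  {
    GstRtmpConnection *connection = user_data;

    gst_rtmp_connection_close (connection);  /* assumed close call */
    return G_SOURCE_REMOVE;
  }

  static void
  cancel_connect_operation (GMainContext * connection_context,
      GstRtmpConnection * connection)
  {
    /* Schedule the close before destroying the GMainContext/GMainLoop
     * of the outstanding operation. */
    g_main_context_invoke (connection_context, stop_connection, connection);
  }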
Fixes: https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/1425
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1862>
Add the clocksync element as a filter right away instead of waiting;
otherwise we won't know the pipeline is live, as it won't return
NO_PREROLL as one would expect in that case.
Adding it right away shouldn't create any issue; both ways are fine.
A lot of content producers out there targeting "adaptive streaming"
are riddled with non-compliant PCR streams (essentially all the
players out there just use PTS/DTS and don't care about the PCR).
In order to cope with these gracefully, we detect them appropriately,
and any small (< 15 s) PCR reset is gracefully ignored.
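A minimal sketch of the threshold check, with PCR values in 27 MHz
units (constant and helper names are illustrative):

  #include <glib.h>

  #define PCR_SECOND G_GUINT64_CONSTANT (27000000)      /* PCR runs at 27 MHz */
  #define SMALL_PCR_RESET_THRESHOLD (15 * PCR_SECOND)   /* 15 seconds */

  /* A PCR that jumps backwards by less than 15 seconds is treated as
   * a harmless reset and ignored instead of triggering
   * resynchronization. */
  static gboolean
  is_small_pcr_reset (guint64 last_pcr, guint64 new_pcr)
  {
    return new_pcr < last_pcr &&
        last_pcr - new_pcr < SMALL_PCR_RESET_THRESHOLD;
  }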
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1785>
Currently, only an output duration can be specified for buffer
splitting. Allow specifying a buffer size in bytes as well.
Consider the use case of the pipeline below:
appsrc ! rtpL16pay ! capsfilter ! rtpbin ! udpsink
Maintaining the MTU for RTP transfer is desirable, but in a scenario
where the buffers being pushed to appsrc do not adhere to it, an
audiobuffersplit element placed between appsrc and rtpL16pay, with an
output buffer size chosen with the MTU in mind, can help mitigate
this.
While rtpL16pay already has an MTU setting, an incoming buffer with a
size close to the MTU gets split unevenly: e.g. with an MTU of 1280, a
buffer of 1276 bytes would be split into two buffers, one of 1268
bytes and one of 8 bytes, considering the RTP header size of 12 bytes.
Putting audiobuffersplit between appsrc and rtpL16pay takes care of
this.
While buffer duration could still be used, being able to specify the
size in bytes is helpful here.
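A minimal usage sketch, assuming the byte-based splitting is exposed
as an "output-buffer-size" property (the property name is an
assumption):

  #include <gst/gst.h>

  /* Split incoming audio into MTU-friendly chunks before payloading:
   * 1268 payload bytes + 12 RTP header bytes = 1280 bytes MTU.
   * Pipeline: appsrc ! audiobuffersplit ! rtpL16pay ! ... */
  static GstElement *
  make_mtu_friendly_split (void)
  {
    GstElement *split = gst_element_factory_make ("audiobuffersplit", NULL);

    g_object_set (split, "output-buffer-size", 1268, NULL);
    return split;
  }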
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1578>
For the Dolby AC4 audio experience, parse PMTs/APD from the transport
stream layer for all available presentations.
Refer to ETSI EN 300 468 V1.16.1 (2019-05):
1. 6.4.1 Audio preselection descriptor
2. Table M.1: Mapping of codec specific values to the audio
   preselection descriptor
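A minimal sketch of spotting the APD among a PMT stream's descriptors;
the tag values (0x7F extension descriptor, 0x19 audio preselection)
are my reading of ETSI EN 300 468 and should be double-checked:

  #include <glib.h>

  #define DVB_EXTENSION_DESCRIPTOR   0x7F  /* assumed tag value */
  #define DVB_EXT_AUDIO_PRESELECTION 0x19  /* assumed tag extension */

  /* desc[0] = tag, desc[1] = length, desc[2] = tag extension. */
  static gboolean
  is_audio_preselection_descriptor (const guint8 * desc, gsize len)
  {
    return len >= 3 && desc[0] == DVB_EXTENSION_DESCRIPTOR &&
        desc[2] == DVB_EXT_AUDIO_PRESELECTION;
  }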
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1555>
Adding a vp9parse element to parse various stream information such as
resolution, profile, and so on. If upstream does not provide
resolution and/or profile, this is useful in a decodebin pipeline for
autoplugging a suitable decoder element depending on the template caps
of each decoder element.
In addition, the vp9parse element supports unpacking a superframe into
single frames for decoders. A vp9 superframe is a frame which consists
of multiple frames (a superframe with a single frame is also allowed)
followed by a superframe index block. Each unpacked frame will then be
treated as a normal frame by the decoder. The decision to unpack is
based on the downstream element's "alignment" caps field, which can be
"super-frame" or "frame". If downstream specifies "alignment" as
"frame", the vp9parse element will split an incoming superframe into
single frames and discard the superframe index data (located at the
end of the superframe).
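A minimal sketch of detecting the superframe index, per the marker
layout in the VP9 bitstream specification (not the actual vp9parse
code):

  #include <glib.h>

  /* The last byte of a superframe is a marker: 0b110 in the top three
   * bits, bytes-per-frame-size minus one in bits 3-4, and the frame
   * count minus one in bits 0-2. */
  static gboolean
  parse_superframe_marker (const guint8 * data, gsize size,
      guint * n_frames, gsize * index_size)
  {
    guint8 marker;
    guint bytes_per_size;

    if (size < 1)
      return FALSE;

    marker = data[size - 1];
    if ((marker & 0xe0) != 0xc0)
      return FALSE;             /* not a superframe */

    *n_frames = (marker & 0x07) + 1;
    bytes_per_size = ((marker >> 3) & 0x03) + 1;
    /* Marker byte + per-frame sizes + repeated marker byte: this is
     * the index block that gets discarded when splitting to frames. */
    *index_size = 2 + (gsize) *n_frames * bytes_per_size;
    return TRUE;
  }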
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1041>
Making the thread receiving the stats wait on the loop to respond was
not a good idea, as the latter can get blocked on the streaming thread.
Have get_stats read the values directly, adding a lock to ensure we
don't read garbage.
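A minimal sketch of the locking pattern, with illustrative field names
(not the actual element code):

  #include <gst/gst.h>

  typedef struct {
    GMutex stats_lock;          /* protects the counters below */
    guint64 bytes_sent;
    guint64 packets_sent;
  } Stats;

  /* Writer side, called from the streaming thread. */
  static void
  stats_update (Stats * stats, guint64 bytes)
  {
    g_mutex_lock (&stats->stats_lock);
    stats->bytes_sent += bytes;
    stats->packets_sent++;
    g_mutex_unlock (&stats->stats_lock);
  }

  /* Reader side: get_stats reads the values directly under the lock
   * instead of waiting on the loop to respond. */
  static GstStructure *
  stats_read (Stats * stats)
  {
    GstStructure *s;

    g_mutex_lock (&stats->stats_lock);
    s = gst_structure_new ("stats",
        "bytes-sent", G_TYPE_UINT64, stats->bytes_sent,
        "packets-sent", G_TYPE_UINT64, stats->packets_sent, NULL);
    g_mutex_unlock (&stats->stats_lock);
    return s;
  }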
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1550>