Currently, only an output duration can be specified for buffer
splitting. Allow specifying an output buffer size in bytes as well.
Consider a use case with the following pipeline:
appsrc ! rtpL16pay ! capsfilter ! rtpbin ! udpsink
Maintaining the MTU for RTP transfer is desirable, but when the
buffers pushed to appsrc do not adhere to it, an audiobuffersplit
element placed between appsrc and rtpL16pay, with an output buffer
size chosen according to the MTU, can mitigate this.
While rtpL16pay already has an MTU setting, an incoming buffer with a
size close to the MTU can still be split poorly: for example, with an
MTU of 1280 and an RTP header size of 12 bytes, a buffer of 1276 bytes
would be split into two buffers, one of 1268 bytes and the other of
8 bytes. Placing audiobuffersplit between appsrc and rtpL16pay takes
care of this.
While a buffer duration could still be used, being able to specify the
size in bytes directly is more convenient here.
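A sketch of the resulting pipeline, assuming the new property is
exposed as output-buffer-size (in bytes):
appsrc ! audiobuffersplit output-buffer-size=1268 ! rtpL16pay ! capsfilter ! rtpbin ! udpsink
With an MTU of 1280 and a 12 byte RTP header, 1268 byte buffers fit in
a single RTP packet.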
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1578>
For the Dolby AC4 audio experience, parse the PMTs/APD from the
transport stream layer for all available presentations.
Refer to ETSI EN 300 468 V1.16.1 (2019-05):
1. 6.4.1 Audio preselection descriptor
2. Table M.1: Mapping of codec specific values to the audio preselection descriptor
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1555>
Add a vp9parse element to parse various stream information such as
resolution, profile, and so on. If upstream does not provide the
resolution and/or profile, this is useful in a decodebin pipeline for
autoplugging a suitable decoder element depending on the template caps
of each decoder element.
In addition, the vp9parse element supports unpacking a superframe into
single frames for decoders. A VP9 superframe is a frame which consists
of multiple frames (a superframe with one frame is also allowed)
followed by a superframe index block. Each unpacked frame can then be
consumed as a normal frame by the decoder. The decision to unpack is
based on the downstream element's "alignment" caps field, which can be
"super-frame" or "frame". If downstream specifies the "alignment" as
"frame", the vp9parse element will split an incoming superframe into
single frames and discard the superframe index data (located at the
end of the superframe).
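For example, a decoder that requires frame alignment can be fed
through a capsfilter; a minimal sketch (demuxer and decoder depend on
the actual stream and platform):
gst-launch-1.0 filesrc location=video.webm ! matroskademux ! vp9parse ! video/x-vp9,alignment=frame ! vp9dec ! fakesink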
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1041>
Making the thread receiving the stats wait on the loop to respond was
not a good idea, as the latter can get blocked on the streaming thread.
Have get_stats read the values directly, adding a lock to ensure we
don't read garbage.
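A minimal sketch of the pattern, with hypothetical type and field
names:
  /* Read the shared counters directly under a dedicated lock; the
   * streaming thread updates them under the same lock. */
  static GstStructure *
  get_stats (MyElement * self)
  {
    GstStructure *stats;

    g_mutex_lock (&self->stats_lock);
    stats = gst_structure_new ("stats",
        "bytes-received", G_TYPE_UINT64, self->bytes_received,
        "packets-received", G_TYPE_UINT64, self->packets_received, NULL);
    g_mutex_unlock (&self->stats_lock);

    return stats;
  }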
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1550>
Ensure we take the object lock while accessing `GstElement.sinkpads`.
Use an iterator when the code isn't simple to avoid deadlock.
When we find the best pad, take a reference so a concurrent pad
release doesn't destroy the pad before we're done with it.
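A sketch of the simple-iteration case (the comparison predicate is
hypothetical):
  GstPad *best = NULL;
  GList *l;

  GST_OBJECT_LOCK (element);
  for (l = GST_ELEMENT_CAST (element)->sinkpads; l != NULL; l = l->next) {
    GstPad *pad = GST_PAD (l->data);

    if (pad_is_better (pad, best))      /* hypothetical predicate */
      best = pad;
  }
  if (best != NULL)
    gst_object_ref (best);      /* survives a concurrent pad release */
  GST_OBJECT_UNLOCK (element);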
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1553>
There are quite a few reserved PIDs in the various MPEG-TS (and
derived) specifications which we should definitely not use. Those PIDs
have a specific meaning and purpose.
Furthermore, a lot of the code in the muxer implementation also makes
assumptions about the purpose of streams based on their PID.
Therefore, when a pad is requested with a specific PID, make sure it is
not a restricted PID.
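Illustrative only (in mpegtsmux, the requested pad name encodes the
PID):
  GstElement *mux = gst_element_factory_make ("mpegtsmux", NULL);
  GstPad *pad;

  /* PID 0x0000 is the PAT; the request is now refused. */
  pad = gst_element_get_request_pad (mux, "sink_0");
  g_assert (pad == NULL);

  /* A non-reserved PID is fine. */
  pad = gst_element_get_request_pad (mux, "sink_64");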
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1561>
Regardless of whether it's multicast or not, set the address property
to match the element's address. This is the address of the interface
to listen on, which is expected to be ANY in most cases, but should be
honored even in the non-multicast RTCP case.
This also fixes an assertion if the address is not a parsable IPv4 or
IPv6 string.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1433>
Previously, "en" (should have actually been "eng") was assumed
for the ISO-639 language descriptor if no language was explicitely given.
Neither ETSI EN 300 468 nor ATSC A/52 mandate for a language descriptor,
so we should simply not set it, if it's unknown.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1386>
The audio/mpeg,mpegversion=2 caps in GStreamer refer to
MPEG-2 AAC (ISO 13818-7), not to the extended MP3 (ISO 13818-3),
which is audio/mpeg,mpegversion=1,mpegaudioversion=2/3.
Fix the caps, and add handling for MPEG-2 AAC in both ADTS and raw
form, adding ADTS headers for the latter.
rtpbin can still emit signals while it is being disposed, and even
though rtpbin lives inside ristsrc/ristsink, it can outlive them.
So we either have to disconnect all signals at some point, or let
GObject take care of that automatically.
Related to !1412
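The latter is what g_signal_connect_object() provides; a sketch with a
hypothetical callback:
  /* The handler is disconnected automatically when `self` is freed. */
  g_signal_connect_object (rtpbin, "on-new-ssrc",
      G_CALLBACK (on_new_ssrc_cb), self, 0);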
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1413>
rtpbin can still emit signals while it is being disposed, and even
though rtpbin lives inside rtpsrc/rtpsink, it can outlive them.
So we either have to disconnect all signals at some point, or let
GObject take care of that automatically.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1412>
Try to negotiate when the framerates on either side of the interlace
element are fixed using capsfilters and the framerates are consistent
with the field pattern. Otherwise the following pipelines would fail
to negotiate:
gst-launch-1.0 videotestsrc !
video/x-raw,framerate=24/1,interlace-mode=progressive ! interlace
field-pattern=2 ! video/x-raw,framerate=30/1 ! fakesink
gst-launch-1.0 videotestsrc !
video/x-raw,framerate=60/1,interlace-mode=progressive ! interlace
field-pattern=0 ! video/x-raw,framerate=30/1 ! fakesink
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1349>
This reverts commit b75a61342f.
The parser would only set the mode to progressive or mixed, missing the
cases where it should have been interleaved. Interleaved is more
difficult to detect because in h264 it happens per frame. On the other
hand, h264 decoders detect the interlacing information per-frame and set
the caps correctly. If the parser already provides potentially
incorrect interlacing information, it gets enforced downstream even
after decoding, breaking some use cases (e.g. an encoder can't properly
mark the stream as TFF or BFF). Moreover, there's no valid use case for
having interlacing information on the caps at the parsing stage, so
after a lot of discussion, it was decided to revert this.
Initial commit message:
=========================
Those are the rules:
In the SPS:
* if frame_mbs_only_flag=1 => all frames are progressive
* if frame_mbs_only_flag=0 => field_pic_flag defines if each frame is
progressive or interlaced, thus the mode is 'mixed' in GStreamer
terms.
https://bugzilla.gnome.org/show_bug.cgi?id=779309
=========================
Fixes #1313
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1335>
Add an element that converts AYUV video frames to a DVB
subpicture stream.
It's fairly simple for now. Later it would be good to support
input via a stream that contains only GstVideoOverlayComposition
meta.
The element searches each input video frame for the largest
sub-region containing non-transparent pixels and encodes that
as a single DVB subpicture region. It can also do palette
reduction of the input frames using code taken from
libimagequant.
There are various FIXMEs noting potential improvements, but it works.
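A minimal usage sketch, assuming the element is registered as
dvbsubenc and accepts AYUV input:
gst-launch-1.0 videotestsrc ! video/x-raw,format=AYUV ! dvbsubenc ! fakesink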
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1227>
If the src_peer_caps are EMPTY (e.g. negotiation failed somewhere), the
assertion inside gst_video_info_from_caps would fail and the whole
pipeline would crash. Check with gst_caps_is_empty before calling
gst_video_info_from_caps and gracefully fail if the caps are empty.
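A sketch of the guard (surrounding code omitted):
  GstVideoInfo info;

  if (gst_caps_is_empty (src_peer_caps) ||
      !gst_video_info_from_caps (&info, src_peer_caps)) {
    GST_ERROR_OBJECT (self, "could not get video info from caps");
    return FALSE;
  }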
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1333>
34af8ed66a changed the code to use the
packetizer's packets instead of the incoming buffers, but mpegtsbase
didn't actually push all packets to the subclass. As a result, padding
(PID 0x1FFF) packets got lost.
Add a new boolean to toggle pushing unknown packets to mpegtsbase and
have mpegtsparse make use of it.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1300>
And also set/unset the RESYNC flag accordingly.
It can happen that the flag is preserved by GstAdapter from the input
buffer. For example if a big input buffer is split into many small ones,
each of the small ones would have the flag set.
All other buffer flags seem safe to keep here if they were set,
including the GAP flag.
Also ensure that the buffer is actually writable before changing any
flags or metadata on it.
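A sketch of the handling (the condition is hypothetical):
  buffer = gst_buffer_make_writable (buffer);

  if (stream_was_resynced)      /* hypothetical condition */
    GST_BUFFER_FLAG_SET (buffer, GST_BUFFER_FLAG_RESYNC);
  else
    GST_BUFFER_FLAG_UNSET (buffer, GST_BUFFER_FLAG_RESYNC);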
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1298>
tsparse leaked input buffers quite badly:
GST_TRACERS=leaks GST_DEBUG=GST_TRACER:9 gst-launch-1.0 audiotestsrc num-buffers=3 ! avenc_aac ! mpegtsmux ! tsparse ! fakesink
The input_done vfunc was passed the input buffer, which it had to
consume. For this reason, the base class takes a reference on the buffer
if and only if input_done is not NULL.
Before 34af8ed66a, input_done was used in
tsparse to push the input buffer on the "src" pad. That commit
changed the code to packetize for that pad as well and removed the use
of input_done.
Afterwards, 0d2e908523 set input_done
again in order to handle automatic alignment of the output buffers to
the input buffers. However, it ignored the provided buffer and did not
even unref it, causing a leak.
Since no code makes use of the buffer provided with input_done, just
remove the argument in order to simplify things a bit.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1274>
We might have to drain already queued input based on the old segment
before forwarding the new segment event. The new segment is only
forwarded after a discont as otherwise we might cause unnecessary
timestamp jumps as we output buffers timestamped based on sample counts.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1254>
If the input has a misplaced filler zero byte (e.g. a filler without a
4-byte start code on the next NAL), we would end up using the same
timestamp twice. Ask the base class to read the timestamp from the
buffer where the NAL actually starts.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1251>
This stops stripping the four-byte start codes. This was fixed and
broken again as it was causing a timestamp shift. We now call
gst_base_parse_set_ts_at_offset() with the offset of the first NAL to
ensure that fixing a moderately broken input stream won't affect the
timestamps. We also fix the unit test, removing a comment about the
stripping behaviour not being correct.
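The call in question, with the offset pointing at the first NAL
(variable name hypothetical):
  /* Take the timestamps from nal_offset into the frame instead of
   * from offset 0, skipping any leading filler bytes. */
  gst_base_parse_set_ts_at_offset (parse, nal_offset);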
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1251>
The volatile is not needed here and causes compiler warnings
with newer GLib versions.
gstautoconvert.c: In function ‘gst_auto_convert_dispose’ (and elsewhere):
glib/gatomic.h:108:3: warning: initialization discards ‘volatile’ qualifier from pointer target type [-Wdiscarded-qualifiers]
gstautoconvert.c:224:24: note: in expansion of macro ‘g_atomic_pointer_get’
224 | GList *factories = g_atomic_pointer_get (&autoconvert->factories);
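A sketch of the kind of change involved (the exact declaration site
may differ):
  /* before (warns with newer GLib): volatile GList *factories; */
  GList *factories;     /* atomicity comes from g_atomic_* accessors */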
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1237>
Otherwise we may end up pushing incomplete caps, which would cause a
renegotiation. Note that this has the effect that caps are no longer
pushed twice in the presence of a valid framerate in the headers.
Otherwise we may end up pushing incomplete caps. Note that this has
the side effect that caps are no longer pushed twice in the presence
of a VUI with a valid framerate.
There is some code that uses the SEI location to fix up broken
streams; this code is meant to locate SUFFIX SEIs only. This should
prevent unwanted side effects when a SUFFIX SEI is used.
Waiting for the next NAL increases the latency. If alignment=nal/au
has been negotiated, assume that the buffer contains a complete
NAL and don't expect a second start code. This way, nal -> nal,
au -> au and au -> nal no longer introduce latency.
As a side effect, the collect_pad() function can no longer poke at the
following NAL. This call is now moved before processing the NAL, so
it looks at the current NAL before it's ingested into the parser
state in order to determine if the end of an AU has been reached. The
AUD injection state has been adapted to support this.
This change will break pipelines if alignment=nal is used without
respecting the alignment. Effectively, the parser will no longer fix
the broken alignment, which will result in a parser error and the
termination of the pipeline. Such an issue existed in the tsdemux
element and might exist in any forks of that code.
Related to https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1193
Waiting for the next NAL increases the latency. If alignment=nal/au
has been negotiated, assume that the buffer contains a complete
NAL and don't expect a second start code. This way, nal -> nal,
au -> au and au -> nal no longer introduce latency.
As a side effect, the collect_pad() function can no longer poke at the
following NAL. This call is now moved before processing the NAL, so
it looks at the current NAL before it's ingested into the parser
state in order to determine if the end of an AU has been reached. The
AUD injection state has been adapted to support this.
This change will break pipelines if alignment=nal is used without
respecting the alignment. Effectively, the parser will no longer fix
the broken alignment, which will result in a parser error and the
termination of the pipeline. Such an issue existed in the tsdemux
element and might exist in any forks of that code.
Related to https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1193
Until now, any stream in tsmux had to be present before the element
produced its first buffer. Now streams can appear at any point during
the stream, or even disappear and reappear later using the same PID.
Per the specification, section 2.14.2: "For PES packetization, no
specific data alignment constraints apply". So we should not advertise
NAL alignment.
This bug was introduced at the same moment the alignment field was
introduced 10 years ago. The plan was for alignment=none (or no
alignment field) to be used for mpegtsdemux, but no one noticed the
error, because at the same moment everything dealing with H264 started
defaulting to AU alignment.
https://bugzilla.gnome.org/show_bug.cgi?id=606662#c22
This patch has the side effect that a parser is now needed after the
tsdemux element. The following pipeline will no longer negotiate, as
the mpegtsmux element requires alignment={nal,au}:
... ! tsdemux ! mpegtsmux ! ...
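For an H264 stream, inserting a parser restores negotiation:
... ! tsdemux ! h264parse ! mpegtsmux ! ...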
Likewise, anyone who forked tsdemux should update their code to fix
this bug.
The comparison was not testing anything meaningful. Fix the comparison
so we now update the caps whenever the values differ. This was
detected by Coverity.
CID 1461291
We need to do this without holding the lock as the `g_async_queue_pop`
waits on the loop thread to deliver the stats. The loop thread might
attempt to take the lock as well, leading to a deadlock.
Taking a reference to the connection should be enough to keep this
safe.
Add a new "update-timecode" property to allow updating the timecode in
the picture timing SEI based on the timecode meta. Since the picture
timing SEI message requires a proper VUI setting, but we don't support
rewriting the SPS, this might not work for some streams.
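A usage sketch, assuming the property lands on h264parse:
... ! h264parse update-timecode=true ! ...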
The rtpbin sends signals for all SSRCs. Don't send an EOS when the SSRC
does not match the stream SSRC.
This avoids problems when an SSRC from another receiver times out.
In some scenarios fakevideosink shouldn't advertise the
overlay-composition meta, for instance so that overlay elements
perform subtitle blending themselves.
According to the specification, the adaptation field length must be 183 if
there is no payload data and < 183 if the packet contains an adaptation
field and payload data.
Unfortunately some payloaders always set the flag for payload data, even if
the adaptation field length is 183.
Don't return with an error in this case. Clear the payload data flag
instead and parse the adaptation field as usual. This avoids visual
artefacts for such streams.
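A sketch of the tolerant parsing (field names hypothetical):
  if (has_payload_flag && af_length == 183) {
    /* Broken payloader: a 183-byte adaptation field leaves no room
     * for payload data, so clear the flag and keep parsing. */
    has_payload_flag = FALSE;
  }
  /* ... parse the adaptation field as usual ... */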
For interlaced video:
* set the interlace mode in the src caps
* double the height from the SPS in the caps
* set field latency instead of frame latency
Fixes #778
Add a new property to signal that there is no incoming data from the
peer. This can be useful if users want to stop streaming when the
connection is alive but no packets are arriving.
Both 2 and 4 are supported versions of the AAC ADTS stream format, so
we need to set the correct version to help negotiation, especially
for non-autopluggable pipelines.
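The two caps forms in question, for reference:
audio/mpeg, mpegversion=(int)2, stream-format=(string)adts
audio/mpeg, mpegversion=(int)4, stream-format=(string)adts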