Add a new property "do-aggregate"* to the H.264 RTP payloader which
enables STAP-A aggregation as per [RFC-6184][1]. With aggregation enabled,
NAL units are bundled instead of sent immediately, up to the MTU size.
Bundles also end at access unit boundaries or when packets have to be
fragmented.
*: The property name is kept generic since it might apply more widely
later, e.g. to STAP-B or MTAP aggregation.
[1]: https://tools.ietf.org/html/rfc6184#section-5.7
Closes https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/issues/434
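A minimal usage sketch, assuming the property lands as a plain boolean
on rtph264pay:

    #include <gst/gst.h>

    int
    main (int argc, char **argv)
    {
      GstElement *pay;

      gst_init (&argc, &argv);

      pay = gst_element_factory_make ("rtph264pay", NULL);

      /* bundle NAL units into STAP-A packets up to the MTU instead of
       * sending each NAL unit in its own RTP packet */
      g_object_set (pay, "do-aggregate", TRUE, NULL);

      gst_object_unref (pay);
      return 0;
    }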
This macro is no longer used. It was secretly checking whether the NAL
was a slice, and was so confusingly named that one might think it was
checking whether the NAL is an AUD.
The code was reading the timestamp from the adapter before pushing the
new buffer into it. As a side effect, if the adapter was empty, we'd end
up using an older timestamp. With alignment=au this meant the timestamp
was likely one frame in the past, while with alignment=nal and multiple
slices per frame, the first slice would get the timestamp of the
previous one.
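A rough sketch of the corrected ordering (helper name hypothetical):

    #include <gst/base/gstadapter.h>

    /* Push the new buffer first, then ask the adapter for the timestamp.
     * Querying the empty adapter before the push returned the PTS of data
     * that was already consumed, i.e. one frame (or one slice) too old. */
    static GstClockTime
    queue_nal_and_get_pts (GstAdapter * adapter, GstBuffer * buffer)
    {
      gst_adapter_push (adapter, buffer);
      return gst_adapter_prev_pts (adapter, NULL);
    }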
The marker bit is used for efficient decoding. The assumption that it
should be set on the AUD is wrong, since the AUD conceptually starts the
frame, while the marker indicates its end.
So properly set the marker bit as soon as we know we are ending an AU,
and also whenever upstream has set the GST_BUFFER_FLAG_MARKER flag.
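A minimal sketch of the rule (helper name hypothetical):

    #include <gst/rtp/gstrtpbuffer.h>

    /* Set the RTP marker on the packet that ends an access unit, or when
     * upstream already flagged the input buffer as a marker. */
    static void
    maybe_set_marker (GstRTPBuffer * rtp, GstBuffer * inbuf, gboolean end_of_au)
    {
      if (end_of_au || GST_BUFFER_FLAG_IS_SET (inbuf, GST_BUFFER_FLAG_MARKER))
        gst_rtp_buffer_set_marker (rtp, TRUE);
    }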
Don't allow an external encoder to use one of the reserved NAL types
used for NAL aggregation. These out-of-spec NAL types, if passed in from
the outside world, will lead to an invalid RTP payload being created.
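A sketch of the guard, based on the NAL unit types RFC 6184 reserves
for its RTP payload structures (24-29: STAP-A, STAP-B, MTAP16, MTAP24,
FU-A, FU-B); helper name hypothetical:

    #include <glib.h>

    /* These types only exist as RTP payload structures and must never
     * appear in the encoded stream handed to the payloader. */
    static gboolean
    nal_type_is_payloadable (guint8 nal_type)
    {
      return nal_type < 24 || nal_type > 29;
    }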
rtph264pay and rtph265pay skip updating the parameter set timestamp if
the units they see contain no new configuration. This can result in
them injecting duplicate parameters.
https://bugzilla.gnome.org/show_bug.cgi?id=796748
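A sketch of the intended behaviour (names hypothetical): refresh the
timestamp whenever parameter sets are seen, even if nothing changed, so
the interval-based insertion does not add duplicates right next to them:

    #include <gst/gst.h>

    static void
    note_parameter_sets_seen (GstClockTime running_time, gboolean updated,
        GstClockTime * last_parameter_sets_time)
    {
      /* previously this was only done when 'updated' was TRUE */
      (void) updated;
      *last_parameter_sets_time = running_time;
    }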
This would happen if the input is byte-stream with four-byte sync
markers instead of three-byte ones. The code that scans for sync markers
places the start of the NALU at the third-last byte of the sync marker,
which
means that any additional zeros may be counted as belonging
to the previous NALU instead of being part of the next sync
marker. Fix that so we don't send SPS/PPS with trailing
zeros in this case.
https://bugzilla.gnome.org/show_bug.cgi?id=732758
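A sketch of the trim applied before payloading SPS/PPS (helper name
hypothetical):

    #include <glib.h>

    /* With a four-byte start code (00 00 00 01) the scanner stops at its
     * last three bytes, so the extra leading zero may get counted into
     * the preceding NALU; strip such trailing zeros before sending. */
    static gsize
    trim_trailing_zeros (const guint8 * data, gsize size)
    {
      while (size > 0 && data[size - 1] == 0x00)
        size--;
      return size;
    }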
Every g_quark_from_static_string() is a hash table lookup serialised
on the global quark lock in GLib. Let's just look up the two quarks
we need once and cache them locally for future use. While we're at it,
add new utility functions for the two most commonly used tags
(audio + video). Make the first argument a gpointer so we don't have to
cast and make the code ugly; these are used for logging purposes only
anyway.
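A minimal sketch of the caching pattern for one of the tags:

    #include <gst/gst.h>

    /* Look the quark up once; every g_quark_from_static_string() call
     * would otherwise take the global quark lock for a hash lookup. */
    static GQuark
    video_meta_tag_quark (void)
    {
      static GQuark quark = 0;

      if (G_UNLIKELY (quark == 0))
        quark = g_quark_from_static_string (GST_META_TAG_VIDEO_STR);

      return quark;
    }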
Always intersect with the filter caps in the getcaps function
to make sure we return a subset of what was requested.
Other payloaders also have this problem and need fixing
in future commits.
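A sketch of the rule in a getcaps handler (helper name hypothetical,
taking ownership of the caps passed in):

    #include <gst/gst.h>

    /* Whatever the payloader could accept, intersect it with the filter
     * caps so the result is always a subset of what was requested. */
    static GstCaps *
    apply_filter_caps (GstCaps * caps, GstCaps * filter)
    {
      if (filter != NULL) {
        GstCaps *tmp;

        tmp = gst_caps_intersect_full (filter, caps, GST_CAPS_INTERSECT_FIRST);
        gst_caps_unref (caps);
        caps = tmp;
      }

      return caps;
    }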
The SPS/PPS parameter sets are supposed to be either in the codec_data
(avc stream format) or inside the stream, and we extract them from
there. They should not be set from a property as they are stream
specific.
https://bugzilla.gnome.org/show_bug.cgi?id=767789
Timeout- or event-based sending of SPS/PPS information in RTP packets
is not always enough. In some scenarios key frames may appear more
frequently than once a second, in which case the minimum
"config-interval" of 1 second for sending SPS/PPS is not sufficient.
It might also be desirable in general to make sure the SPS/PPS is
available with every keyframe (packet loss aside), so receivers can
actually pick up decoding immediately from the first keyframe if
SPS/PPS is not signaled out of band.
This patch adds the possibility to send SPS/PPS with every key frame.
The mode is enabled by setting the "config-interval" property to -1, in
which case the payloader adds SPS and PPS before every key (IDR) frame.
https://bugzilla.gnome.org/show_bug.cgi?id=757892
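A simplified sketch of the resulting decision, assuming the property
becomes a signed integer (helper name hypothetical):

    #include <gst/gst.h>

    static gboolean
    should_send_sps_pps (gint config_interval, gboolean is_idr,
        GstClockTime running_time, GstClockTime last_spspps_time)
    {
      /* -1: send with every IDR frame */
      if (config_interval == -1)
        return is_idr;

      /* > 0: send on an IDR frame at most once per interval */
      if (config_interval > 0 && is_idr)
        return !GST_CLOCK_TIME_IS_VALID (last_spspps_time)
            || (running_time - last_spspps_time) / GST_SECOND >=
            (GstClockTime) config_interval;

      return FALSE;
    }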
Changing the property from unsigned to signed lets us use -1 as a
special value, which is nicer than MAXUINT. This is backwards compatible
even with the GValue API, as shown by a unit test.
https://bugzilla.gnome.org/show_bug.cgi?id=757892
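A sketch of the corresponding property registration (nick, blurb and
upper bound are illustrative):

    #include <gst/gst.h>

    #define PROP_CONFIG_INTERVAL 1  /* property id, illustrative */

    static void
    install_config_interval_property (GObjectClass * gobject_class)
    {
      /* gint instead of guint so that -1 is representable */
      g_object_class_install_property (gobject_class, PROP_CONFIG_INTERVAL,
          g_param_spec_int ("config-interval", "SPS PPS Send Interval",
              "Send SPS/PPS insertion interval in seconds "
              "(0 = disabled, -1 = send with every IDR frame)",
              -1, 3600, 0, G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));
    }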
So far the payloader didn't copy anything, while the depayloader copied
every possible meta. Let's make it consistent and just copy all metas
that have no tags, or only the video tag.
https://bugzilla.gnome.org/show_bug.cgi?id=751774
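A sketch of the filter used to decide which metas to copy (helper name
hypothetical):

    #include <gst/gst.h>

    /* copy a meta if it carries no tags at all, or if its only tag is
     * the "video" tag */
    static gboolean
    meta_is_copyable (GstMeta * meta)
    {
      const GstMetaInfo *info = meta->info;
      const gchar *const *tags = gst_meta_api_type_get_tags (info->api);

      if (tags == NULL || tags[0] == NULL)
        return TRUE;

      return tags[1] == NULL
          && gst_meta_api_type_has_tag (info->api,
          g_quark_from_static_string (GST_META_TAG_VIDEO_STR));
    }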
A race condition in the state change function may cause buffers
to be unreffed while they are still used by the streaming thread
in gst_rtp_h264_pay_send_sps_pps() resulting in a crash. Chain
up to the parent class first in the state change function to
make sure streaming has stopped and only then free those buffers.
https://bugzilla.gnome.org/show_bug.cgi?id=741381
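A sketch of the fixed ordering, assuming the usual parent_class set up
by G_DEFINE_TYPE:

    #include <gst/gst.h>

    static GstStateChangeReturn
    gst_rtp_h264_pay_change_state (GstElement * element,
        GstStateChange transition)
    {
      GstStateChangeReturn ret;

      /* chain up first: once this returns, the streaming thread has
       * stopped and cannot be inside gst_rtp_h264_pay_send_sps_pps() */
      ret = GST_ELEMENT_CLASS (parent_class)->change_state (element, transition);

      switch (transition) {
        case GST_STATE_CHANGE_PAUSED_TO_READY:
          /* now it is safe to unref the cached SPS/PPS buffers */
          break;
        default:
          break;
      }

      return ret;
    }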
Similarly to what we did with the DELTA_UNIT flag, this patch
propagates the DISCONT flag to the first RTP packet being used to transfer a
DISCONT buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=730563
Downstream elements may be interested in knowing whether an RTP packet
is the start of a key frame (to implement an RTP extension as defined in
the ONVIF Streaming Spec, for example).
We do this by checking the GST_BUFFER_FLAG_DELTA_UNIT flag we receive
from upstream and propagating it to the *first* RTP packet output to
transfer this
buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=730563
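A sketch of the propagation (helper name hypothetical); the same
pattern covers the DISCONT flag from the change above:

    #include <gst/gst.h>

    static void
    propagate_input_flags (GstBuffer * outbuf, GstBuffer * inbuf,
        gboolean first_packet_of_buffer)
    {
      if (!first_packet_of_buffer)
        return;

      /* lets downstream spot the start of a key frame, e.g. for the RTP
       * extension defined in the ONVIF Streaming Spec */
      if (GST_BUFFER_FLAG_IS_SET (inbuf, GST_BUFFER_FLAG_DELTA_UNIT))
        GST_BUFFER_FLAG_SET (outbuf, GST_BUFFER_FLAG_DELTA_UNIT);

      if (GST_BUFFER_IS_DISCONT (inbuf))
        GST_BUFFER_FLAG_SET (outbuf, GST_BUFFER_FLAG_DISCONT);
    }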
This fixes an issue with gst-rtsp-server where no SPS and PPS are sent
for the first intra frame. The payloader already starts working when
DESCRIBE is received, but there are no transports yet, so its attempt to
send SPS and PPS fails with a FLUSHING flow return. However, the time of
the last sent SPS/PPS would still be set, so when PLAY arrives and the
first intra frame is to be sent, no SPS/PPS are sent because the time
since the last SPS/PPS is less than spspps_interval.
https://bugzilla.gnome.org/show_bug.cgi?id=724213
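A sketch of the fix idea (names hypothetical): only record the send
time when SPS/PPS actually went out:

    #include <gst/gst.h>

    static void
    record_spspps_send_time (GstFlowReturn send_ret, GstClockTime running_time,
        GstClockTime * last_spspps_time)
    {
      /* a FLUSHING (or otherwise failed) push means nothing was sent, so
       * don't pretend SPS/PPS went out; the first IDR after PLAY will
       * then still get them in-band */
      if (send_ret == GST_FLOW_OK)
        *last_spspps_time = running_time;
    }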