Offset and size are stored as 32-bit guint values and might overflow when
adding the nal_length_size, so let's avoid that.
For the size this would happen if the AVC/HEVC NAL unit size happens to
be stored in 4 bytes and is 4294967292 or higher, which is likely
corrupted data anyway.
For the offset this is something for the caller of these functions to
take care of, but it is unlikely to happen as it would require parsing a
>4GB buffer.
Allowing these overflows causes all kinds of follow-up bugs in the
h2645parse elements, ranging from infinite loops and memory leaks to
potential memory corruptions.
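A minimal sketch of the kind of guard this implies (illustrative only; the
function and variable names below are not the actual h2645 parser code):

    #include <glib.h>

    /* Illustrative: reject values that would wrap a 32-bit guint once
     * nal_length_size is added to them. */
    static gboolean
    nal_offset_and_size_are_sane (guint32 offset, guint32 size,
        guint8 nal_length_size)
    {
      if (offset > G_MAXUINT32 - nal_length_size)
        return FALSE;   /* offset + nal_length_size would overflow */
      if (size > G_MAXUINT32 - nal_length_size)
        return FALSE;   /* size + nal_length_size would overflow */
      return TRUE;
    }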
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2103>
Prior to that, cccombiner's behaviour was essentially that of
a funnel: it strictly looked at input timestamps to associate
video and caption buffers.
This patch instead exposes a "schedule" property, with a default
of TRUE, to control whether caption buffers should be smoothly
scheduled, in order to have exactly one per output video buffer.
This can involve rewriting input captions, for example when the
input is CDP: sequence counters are rewritten, time codes are dropped
and potentially re-injected if the input video frame had a time code
meta.
Caption buffers may also get split up in order to assign captions to
the correct field when the input is interlaced.
This can also mean that the captions will drift out of sync with the
video when there isn't enough padding in the input stream to catch up.
In that case the element will start dropping old caption buffers once
the number of buffers in its internal queue reaches a certain
configurable limit.
The property is exposed so that existing users of cccombiner can
revert to the original behaviour, but it should eventually be
removed, as that behaviour was simply inadequate.
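For existing users, a minimal sketch of opting back into the previous
behaviour (only the "schedule" property is taken from this change; the
rest is illustrative):

    #include <gst/gst.h>

    /* Illustrative: disable smooth scheduling and fall back to the old
     * funnel-like behaviour. "schedule" defaults to TRUE. */
    static GstElement *
    make_legacy_cccombiner (void)
    {
      GstElement *combiner = gst_element_factory_make ("cccombiner", NULL);

      g_object_set (combiner, "schedule", FALSE, NULL);
      return combiner;
    }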
This commit also disallows changing the input caption type, as
this would needlessly complicate implementation, and removes
the corresponding test.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2076>
It was possible to generate an SDP that had an RTX payload type
that matched one of the media payload types when providing caps via
codec_preferences without any sink pads.
This fixes SDP output such as:
m=video 9 UDP/TLS/RTP/SAVPF 96
...
a=rtpmap:96 VP8/90000
a=rtcp-fb:96 nack pli
a=fmtp:96 apt=96
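For context, this is roughly how codec preferences end up being provided
without a sink pad (a sketch; the caps and direction are illustrative, and
error handling is omitted):

    #include <gst/gst.h>
    #define GST_USE_UNSTABLE_API
    #include <gst/webrtc/webrtc.h>

    /* Sketch: request a transceiver with VP8 codec preferences but no
     * corresponding sink pad, the situation in which the RTX payload
     * type could previously collide with the media payload type. */
    static void
    add_video_transceiver (GstElement * webrtcbin)
    {
      GstWebRTCRTPTransceiver *trans = NULL;
      GstCaps *caps = gst_caps_from_string (
          "application/x-rtp,media=video,encoding-name=VP8,"
          "clock-rate=90000,payload=96");

      g_signal_emit_by_name (webrtcbin, "add-transceiver",
          GST_WEBRTC_RTP_TRANSCEIVER_DIRECTION_SENDRECV, caps, &trans);

      gst_caps_unref (caps);
      g_clear_object (&trans);
    }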
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2046>
- default min port == 0, max port == 65535; if min port == 0, the existing
  random port selection is used (range ignored)
- add a 'gathering_started' flag to avoid changing ports after gathering
  has started
- validity checks: min port <= max port is enforced, otherwise an error is
  thrown (see the sketch below these notes)
- include tests to ensure the port range is being utilized (by @hhardy)
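A sketch of the rules described above (not the actual ICE agent code; the
helper below only restates the documented behaviour):

    #include <glib.h>

    /* Sketch of the port-range rules:
     * - min_port defaults to 0, max_port to 65535
     * - min_port == 0 keeps the existing random port selection
     * - min_port > max_port is rejected
     * - the range must not change once gathering has started */
    static gboolean
    set_port_range (guint min_port, guint max_port,
        gboolean gathering_started, guint * out_min, guint * out_max)
    {
      if (gathering_started)
        return FALSE;
      if (min_port > max_port)
        return FALSE;

      *out_min = min_port;
      *out_max = max_port;
      return TRUE;
    }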
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/119>
Note that the newly added formats (YUY2, UYVY, and VYUY) are not supported
as render target view formats, so they can only be used as input to
d3d11convert or d3d11videosink. Note also that YUY2 is a very common
format for hardware encoders/decoders on Windows.
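As an illustration of the resulting constraint (a sketch; the pipeline is
only an example and is not taken from this change):

    #include <gst/gst.h>

    /* Sketch: YUY2 can be fed *into* d3d11convert/d3d11videosink, but it
     * cannot be requested as d3d11convert's output, since it is not a
     * render target view format. */
    static GstElement *
    make_yuy2_display_pipeline (void)
    {
      return gst_parse_launch (
          "videotestsrc ! video/x-raw,format=YUY2 ! "
          "d3d11upload ! d3d11convert ! d3d11videosink", NULL);
    }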
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1581>
Add a vp9parse element to parse stream information such as resolution,
profile, and so on. If upstream does not provide resolution and/or profile,
this is useful for a decodebin pipeline to autoplug a suitable decoder
element depending on the template caps of each decoder element.
In addition, vp9parse supports unpacking a superframe into single frames
for decoders. A VP9 superframe is a frame which consists of multiple frames
(a superframe with a single frame is also allowed) followed by a superframe
index block. Each unpacked frame is then handled by the decoder as a normal
frame. The decision to unpack is based on the downstream element's
"alignment" caps field, which can be "super-frame" or "frame".
If downstream specifies "alignment" as "frame", vp9parse splits an incoming
superframe into single frames and discards the superframe index data
(located at the end of the superframe).
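For example, forcing frame alignment with an explicit capsfilter can be
sketched as follows (illustrative; a decoder would normally request the
alignment through caps negotiation, and the file name is a placeholder):

    #include <gst/gst.h>

    /* Sketch: ask vp9parse to split superframes by requiring
     * alignment=frame downstream of it. */
    static GstElement *
    make_frame_aligned_vp9_pipeline (void)
    {
      return gst_parse_launch (
          "filesrc location=video.ivf ! ivfparse ! vp9parse ! "
          "video/x-vp9,alignment=frame ! fakesink", NULL);
    }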
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1041>
Call gst_aggregator_selected_samples() after identifying the
caption buffers that will be added as a meta on the next video
buffer.
Implement GstAggregator.peek_next_sample.
Add an example that demonstrates usage of the new API in
combination with the existing buffer-consumed signal.
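A minimal sketch of the new API from the application side (the callback
wiring is illustrative; the added example shows the full combination with
"buffer-consumed"):

    #include <gst/gst.h>
    #include <gst/base/gstaggregator.h>

    /* Sketch: once samples have been selected for the next output buffer,
     * peek at the sample queued on a given pad without consuming it. */
    static void
    samples_selected_cb (GstAggregator * agg, GstSegment * segment,
        GstClockTime pts, GstClockTime dts, GstClockTime duration,
        GstStructure * info, gpointer user_data)
    {
      GstAggregatorPad *pad = GST_AGGREGATOR_PAD (user_data);
      GstSample *sample = gst_aggregator_peek_next_sample (agg, pad);

      if (sample) {
        /* inspect the buffer that will be aggregated on this pad */
        gst_sample_unref (sample);
      }
    }

    /* connected with:
     * g_signal_connect (combiner, "samples-selected",
     *     G_CALLBACK (samples_selected_cb), caption_pad); */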
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1390>
A pipeline like this:
closedcaption/x-cea-708,format=cdp,framerate=30000/1001 ! ccconverter ! closedcaption/x-cea-708,format=cc_data
would produce a critical/assert:
GStreamer-CRITICAL **: 14:21:11.509: gst_util_fraction_multiply: assertion 'a_d != 0' failed
because there would be no framerate field on ccconverter's output.
Fixed by always fixating a framerate if the input has a framerate.
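The fixation referred to above boils down to something like this (a sketch,
not the actual ccconverter code):

    #include <gst/gst.h>

    /* Sketch: when the input caps carry a framerate, make sure the output
     * structure ends up with a fixed framerate too, so the fraction math
     * downstream never sees a 0 denominator. */
    static void
    fixate_output_framerate (GstStructure * in_s, GstStructure * out_s)
    {
      gint fps_n, fps_d;

      if (!gst_structure_get_fraction (in_s, "framerate", &fps_n, &fps_d))
        return;

      if (!gst_structure_fixate_field_nearest_fraction (out_s, "framerate",
              fps_n, fps_d))
        gst_structure_set (out_s, "framerate", GST_TYPE_FRACTION,
            fps_n, fps_d, NULL);
    }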
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1393>
Extensions and layers can be enabled before calling
gst_vulkan_instance_open() but after calling
gst_vulkan_instance_fill_info().
Use the list of available Vulkan extensions to better choose a default
display implementation for surface output.
Defaults are still the same.
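For applications, the resulting window looks roughly like this (a sketch;
the extension name is just an example and error handling is trimmed):

    #define GST_USE_UNSTABLE_API
    #include <gst/vulkan/vulkan.h>

    /* Sketch: fill_info() makes the available extensions known, so extra
     * surface-related extensions can be enabled before open(). */
    static GstVulkanInstance *
    create_instance (void)
    {
      GstVulkanInstance *instance = gst_vulkan_instance_new ();

      if (gst_vulkan_instance_fill_info (instance, NULL)) {
        /* enabling fails if the extension is not available */
        gst_vulkan_instance_enable_extension (instance,
            "VK_EXT_swapchain_colorspace");
      }

      if (!gst_vulkan_instance_open (instance, NULL)) {
        gst_object_unref (instance);
        return NULL;
      }

      return instance;
    }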
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1341>