Offset and size are stored as 32-bit guints and might overflow when
adding the nal_length_size, so let's avoid that.
For the size, this would happen if the AVC/HEVC NAL unit size happens
to be stored in 4 bytes and is 4294967292 or higher, which is likely
corrupted data anyway.
For the offset, this is something for the caller of these functions to
take care of, but it is unlikely to happen as it would require parsing
a >4GB buffer.
Allowing these overflows causes all kinds of follow-up bugs in the
h2645parse elements, ranging from infinite loops and memory leaks to
potential memory corruptions.
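A minimal sketch of the guard this implies (illustrative, not the
exact patch):

  #include <glib.h>

  /* Reject NAL sizes where adding nal_length_size would wrap the
   * 32-bit guint, e.g. 4294967292 + 4 == 0 with a 4-byte size field. */
  static gboolean
  nal_size_valid (guint32 size, guint8 nal_length_size)
  {
    return size <= G_MAXUINT32 - nal_length_size;
  }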
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2103>
Prior to this change, cccombiner's behaviour was essentially that of
a funnel: it strictly looked at input timestamps to associate
video and caption buffers.
This patch instead exposes a "schedule" property, with a default
of TRUE, to control whether caption buffers should be smoothly
scheduled, in order to have exactly one per output video buffer.
This can involve rewriting the input captions: for example, when the
input is CDP, sequence counters are rewritten, and time codes are
dropped and potentially re-injected if the input video frame had a
time code meta.
Caption buffers may also get split up in order to assign captions to
the correct field when the input is interlaced.
This can also imply that the captions will drift out of
synchronization when there isn't enough padding in the input stream
to catch up. In that case the element will start dropping old caption
buffers once the number of buffers in its internal queue reaches a
certain, configurable limit.
The property is exposed so that existing users of cccombiner can
revert to the original behaviour, but it should eventually be
removed, as that behaviour was simply inadequate.
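For existing users, reverting is a one-liner (sketch, using the
property added here):

  /* Disable scheduling to get the previous funnel-like behaviour back */
  GstElement *cc = gst_element_factory_make ("cccombiner", NULL);
  g_object_set (cc, "schedule", FALSE, NULL);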
This commit also disallows changing the input caption type, as
this would needlessly complicate the implementation, and removes
the corresponding test.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2076>
If the driver supports it (only iHD, so far) and the -d parameter is
set, the direction of the video will be changed randomly. In the code
you can select, at compilation time, whether the direction change is
done through the element's property or through pipeline events.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2074>
It was possible to generate an SDP that had an RTX payload type
matching one of the media payload types when caps were provided via
codec_preferences without any sink pads.
Fixes SDP output like:
m=video 9 UDP/TLS/RTP/SAVPF 96
...
a=rtpmap:96 VP8/90000
a=rtcp-fb:96 nack pli
a=fmtp:96 apt=96
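With the fix, the RTX stream gets its own payload type, distinct from
the media payload type it refers to, e.g.:
m=video 9 UDP/TLS/RTP/SAVPF 96 97
...
a=rtpmap:96 VP8/90000
a=rtcp-fb:96 nack pli
a=rtpmap:97 rtx/90000
a=fmtp:97 apt=96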
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2046>
- Default min port == 0, max port == 65535; if min port == 0, the
  existing random port selection is used (the range is ignored).
- Add a 'gathering_started' flag to avoid changing ports after
  gathering has started.
- Validity checks: min port <= max port is enforced, otherwise an
  error is thrown.
- Include tests to ensure the port range is actually used (by @hhardy).
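A sketch of how an application might set the range (the "ice-agent"
property on webrtcbin and the "min-rtp-port"/"max-rtp-port" names are
assumptions here; check the element documentation):

  /* Constrain the ICE candidate port range */
  static void
  set_port_range (GstElement * webrtcbin)
  {
    GObject *ice = NULL;

    g_object_get (webrtcbin, "ice-agent", &ice, NULL);
    if (ice != NULL) {
      /* min == 0 would keep the existing random port selection */
      g_object_set (ice, "min-rtp-port", 50000, "max-rtp-port", 50100,
          NULL);
      g_object_unref (ice);
    }
  }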
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/119>
Add two examples to demonstrate "draw-on-shared-texture" use cases.
d3d11videosink will draw the application's own texture without any
copy by:
- enabling the "draw-on-shared-texture" property
- making use of the "begin-draw" and "draw" signals
The application then renders its shared texture to the swapchain's
backbuffer using either
1) Direct3D11 APIs, or
2) Direct3D9Ex + interop APIs
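A rough sketch of the application side (the App struct is
hypothetical; see the element documentation for the exact "draw"
signal arguments):

  typedef struct
  {
    gpointer shared_handle;     /* the app's shared D3D11 texture handle */
    guint misc_flags;
    guint64 acquire_key;
    guint64 release_key;
  } App;

  /* "begin-draw" is emitted by the sink when it is ready; the
   * application answers by emitting the "draw" action signal with its
   * shared texture handle. */
  static void
  on_begin_draw (GstElement * sink, App * app)
  {
    gboolean ret = FALSE;

    g_signal_emit_by_name (sink, "draw", app->shared_handle,
        app->misc_flags, app->acquire_key, app->release_key, &ret);
  }

  static void
  setup_sink (GstElement * sink, App * app)
  {
    g_object_set (sink, "draw-on-shared-texture", TRUE, NULL);
    g_signal_connect (sink, "begin-draw", G_CALLBACK (on_begin_draw), app);
  }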
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1873>
Note that the newly added formats (YUY2, UYVY, and VYUY) are not
supported as render target view formats, so they can only be used as
input to d3d11convert or d3d11videosink. Also note that YUY2 is a very
common format for hardware encoders/decoders on Windows.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1581>
Add a vp9parse element to parse various stream information such as
resolution, profile, and so on. If upstream does not provide the
resolution and/or profile, this is useful in a decodebin pipeline for
autoplugging a suitable decoder element based on the template caps of
each decoder element.
In addition, the vp9parse element supports unpacking a superframe into
single frames for decoders. A vp9 superframe is a frame that consists
of multiple frames (a superframe with a single frame is also allowed)
followed by a superframe index block. Each unpacked frame will then be
seen as a normal frame by the decoder. The decision to unpack is based
on the downstream element's "alignment" caps field, which can be
"super-frame" or "frame". If downstream specifies the "alignment" as
"frame", vp9parse will split an incoming superframe into single frames
and discard the superframe index data (located at the end of the
superframe).
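For instance, a decoder that only handles single frames would
advertise this in its sink pad template caps (sketch):

  /* With such downstream caps, vp9parse splits incoming superframes
   * into single frames and drops the trailing superframe index. */
  static GstStaticPadTemplate sink_templ = GST_STATIC_PAD_TEMPLATE ("sink",
      GST_PAD_SINK, GST_PAD_ALWAYS,
      GST_STATIC_CAPS ("video/x-vp9, alignment = (string) frame"));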
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1041>
This GTK+ example shares, through GstContext, a custom X11
VADisplay with a pipeline using vah264dec and appsink.
When the frames are processed for rendering, the VASurfaceID is
fetched from the buffer and rendered with vaPutSurface in an X11
widget.
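The rendering call boils down to something like this (sketch; error
handling omitted):

  #include <va/va_x11.h>

  /* va_dpy is the custom X11 VADisplay shared via GstContext; surface
   * is the VASurfaceID fetched from the buffer. */
  static void
  render_surface (VADisplay va_dpy, VASurfaceID surface, Drawable win,
      unsigned short width, unsigned short height)
  {
    vaPutSurface (va_dpy, surface, win,
        0, 0, width, height,    /* source rectangle */
        0, 0, width, height,    /* destination rectangle */
        NULL, 0, VA_FRAME_PICTURE);
  }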
Call gst_aggregator_selected_samples() after identifying the
caption buffers that will be added as a meta on the next video
buffer.
Implement GstAggregator.peek_next_sample.
Add an example that demonstrates usage of the new API in
combination with the existing buffer-consumed signal.
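A sketch of how a handler for the resulting "samples-selected" signal
can use the new peek API (user_data is assumed to be the pad of
interest):

  static void
  on_samples_selected (GstAggregator * agg, GstSegment * segment,
      GstClockTime pts, GstClockTime dts, GstClockTime duration,
      GstStructure * info, gpointer user_data)
  {
    GstAggregatorPad *pad = user_data;
    GstSample *sample = gst_aggregator_peek_next_sample (agg, pad);

    if (sample != NULL) {
      /* inspect the queued buffer/caps without consuming them */
      gst_sample_unref (sample);
    }
  }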
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1390>