As part of this:
- Add accessors for the stream ID and selection API based on the
stream ID
- Deprecate the old index-based APIs
- Remove playbin support
- Implement the track enable API based on stream selection
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7648>
The gst_srtp_dec_decode_buffer() function modifies the input buffer after making
it writable, so the pointer might change as well, depending on the refcount of
the buffer.
This issue was detected using a netsim element upstream of the decoder in a
WebRTC pipeline.
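A minimal sketch of the underlying pitfall (not the actual element code):

    /* gst_buffer_make_writable() may return a different GstBuffer if the
     * input had more than one reference, so the old pointer must not be
     * used afterwards. */
    static GstBuffer *
    decode_in_place (GstBuffer * buf)
    {
      GstMapInfo map;

      buf = gst_buffer_make_writable (buf);

      if (gst_buffer_map (buf, &map, GST_MAP_READWRITE)) {
        /* ... modify the payload in place ... */
        gst_buffer_unmap (buf, &map);
      }

      /* Only the returned pointer is valid from here on. */
      return buf;
    }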
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8198>
We can use gst_uri_from_string_with_base() to join the base URL
and the fragment URL path.
The previous method of forming the base URL in update_base_url(),
by looking for the string 'manifest' or 'Manifest', is insufficient:
a query may include these strings in its path, and an invalid
base URL string would then be kept.
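A rough sketch of the joining (the URLs are made up for illustration):

    /* gst_uri_from_string_with_base() resolves a relative reference
     * against a base URI, RFC 3986 style. Example URLs are hypothetical. */
    GstUri *base = gst_uri_from_string ("https://example.com/dash/manifest.mpd?token=abc");
    GstUri *frag = gst_uri_from_string_with_base (base, "segments/video_1.m4s");
    gchar *joined = gst_uri_to_string (frag);
    /* joined == "https://example.com/dash/segments/video_1.m4s" */

    g_free (joined);
    gst_uri_unref (frag);
    gst_uri_unref (base);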
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8193>
The main difference with the WIP av1-in-mpegts mapping is that the payload data
is not startcode-escaped. Most of the rest is sensible usage of it:
* Custom AV1G (AV1 GStreamer) registration descriptor instead of AV01
* AV1CodecConfigurationRecord is stored in the same 0x80 custom descriptor and
conforms fully to the ISOBMFF spec (i.e. it does not include the HDR fields from
the provisional MPEG-TS specification, which conflict with that one).
* Data is stored as OBU
* Access Unit is the frame level (same as provisional mpegts mapping)
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4442>
When a TD is being processed, it is not always pushed immediately. Resetting
the time information led to losing the timestamp in the TU to Frame conversion.
The TU would be formed by a buffer of [TD][Frame], and the timestamp taken from
the TU buffer was lost when the TD was handled.
Timestamp handling should be done entirely by these 3 functions:
- gst_av1_parse_handle_obu_to_obu(): direct input to output
- gst_av1_parse_handle_to_big_align(): reset DTS on detected TU or TD
- gst_av1_parse_handle_to_small_and_equal_align(): PTS on show frame, flat DTS
Fixes: 79312357a6
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8026>
V4L2 and DRM choose different, incompatible ways to represent
tiled/compressed etc. formats. While the latter uses combinations of
format fourccs and opaque, vendor/hardware-specific modifiers, for the
former every such combination is a distinct new format.
Traditionally Gst implemented each of the V4L2 formats if needed.
Given the large number of tiling and compression modes, this is
quite work intensive - and often actually not needed.
In many situations Gst just needs to pass buffers from V4L2 to DRM in
the form of EGL, VK, Wayland or KMS.
Thus implement a direct translation for some V4L2 formats to DRM ones,
limited to the DMA_DRM API, allowing much quicker enablement of formats
while requiring peers to use external implementations (usually Mesa or
KMS) for tiling etc.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7355>
Use GST_VIDEO_DECODER_ERROR instead of just erroring out
unconditionally, so that the error handling behaviour is
determined by the "max-errors" property and we'll just
continue after decoding errors instead of erroring out.
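A sketch of the pattern (the surrounding variables are illustrative):

    /* Unlike GST_ELEMENT_ERROR, GST_VIDEO_DECODER_ERROR only turns into a
     * fatal error once the error count exceeds the "max-errors" property. */
    GstFlowReturn ret = GST_FLOW_OK;

    if (decode_failed) {
      GST_VIDEO_DECODER_ERROR (decoder, 1, STREAM, DECODE,
          ("Failed to decode frame"), ("bitstream error"), ret);
      gst_video_decoder_drop_frame (decoder, frame);
      return ret;
    }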
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8163>
Allows reducing the initial stack size of GPU threads. CUDA should
automatically increase this value if a kernel requires a larger stack.
Can save roughly 40MB of GPU memory for a single nvh264enc instance.
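The driver API call involved is roughly the following (the value shown is
illustrative; CUDA grows the stack again on demand if a kernel needs more):

    #include <cuda.h>

    /* Shrink the default per-thread stack size of the current context. */
    CUresult res = cuCtxSetLimit (CU_LIMIT_STACK_SIZE, 1024);
    if (res != CUDA_SUCCESS) {
      /* handle/log the failure */
    }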
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8158>
Extended is identical to main but allows for FMO/ASO features to be
used, and prevents using CABAC. Using similar logic to "baseline",
assume that if we support main, we can also do extended.
This fixes the following fluster vectors, which otherwise would fail
when trying to link the parsebin pad.
- BA3_SVA_C
- MR6_BT_B
- MR7_BT_B
- MR8_BT_B
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8164>
Contains the following updates:
* New properties on avfvideosrc: screen-crop-*
* H265 and H265 Alpha support in vtdec and vtenc (VideoToolbox)
* ProRes support in vtenc
* New properties on vtenc elements: rate-control, data-rate-limits,
max-frame-delay
* New plugin atenc (AudioToolbox) with support for encoding AAC
* Plugin move: atdec moved from -bad to -good
* New property on osxaudio elements: unique-id
* OS X -> macOS
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8120>
1. Add some comments explaining what headers and libs are expected on
what systems
2. Only look in default incdirs if no incdir is specified
3. Require libnvbufsurface.so on Jetson when cuda-nvmm=enabled
4. Require libatomic on Jetson when cuda-nvmm=enabled
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8021>
Parsing the whole caps as SDP media only to retrieve the fmtp field afterwards
seems a bit superfluous. By looking up the a-fmtp attribute directly, the number
of allocations in this function goes down a bit.
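The direct lookup amounts to something like this (sketch, not the exact code):

    /* Read the fmtp string straight from the caps structure instead of
     * converting the whole caps into an SDP media description first. */
    GstStructure *s = gst_caps_get_structure (caps, 0);
    const gchar *fmtp = gst_structure_get_string (s, "a-fmtp");

    if (fmtp != NULL) {
      /* parse the "key=value; key=value" pairs in fmtp ... */
    }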
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8125>
People might have GST_TRACERS=leaks set in their environment
by default, which will now trigger criticals during the build
when calling g-ir-scanner, because we unset GST_PLUGIN_SYSTEM_PATH
so that the scanner doesn't load any plugins.
Fixes #4093
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8121>
This commit enables the usage of inline queries if, and only if, the provided
pNext structure in gst_vulkan_operation_enable_query() chains a
VK_STRUCTURE_TYPE_VIDEO_PROFILE_INFO_KHR typed structure.
It also guards the "gstvkvideo-private.h" include.
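The check boils down to walking the pNext chain, along these lines (sketch
only, not the actual implementation):

    #include <vulkan/vulkan.h>

    /* Enable inline queries only when a video profile info structure is
     * chained into the caller-provided pNext list. */
    static gboolean
    chain_has_video_profile (gconstpointer pnext)
    {
      const VkBaseInStructure *iter = pnext;

      for (; iter != NULL; iter = iter->pNext) {
        if (iter->sType == VK_STRUCTURE_TYPE_VIDEO_PROFILE_INFO_KHR)
          return TRUE;
      }
      return FALSE;
    }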
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8112>
This relation type defines relations between the components of two groups:
the first component of the first group relates to the first component of the
second group, the second component of the first group relates to the second
component of the second group, and so on. It's a denser way to express
relations in this context.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8087>
Behaviour differed depending on whether the proxy was configured through
properties or the environment. For properties, libcurl would be configured
with any auth, but for the environment libcurl would default to using basic.
Now any auth is set for both configuration methods.
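The relevant libcurl option, roughly (the handle and credentials are
illustrative):

    #include <curl/curl.h>

    /* Let libcurl negotiate the strongest authentication the proxy offers
     * instead of silently defaulting to Basic. */
    curl_easy_setopt (handle, CURLOPT_PROXYAUTH, (long) CURLAUTH_ANY);
    curl_easy_setopt (handle, CURLOPT_PROXYUSERPWD, "user:password");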
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7935>
The Content-Length header would unconditionally be included when the
proxy property was set. This would result in requests with both a
Content-Length and a Transfer-Encoding header. Now we rely on the
use-content-length property in the proxy case as well. This also makes
sure that Content-Type is set correctly, since before it would be
skipped if a proxy was used.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7935>
We get loads of warnings when parsing videos from users:
gsth264parser.c:1115:gst_h264_parser_parse_user_data_unregistered: No more remaining payload data to store
gsth264parse.c:646:gst_h264_parse_process_sei:<h264parse0> failed to parse one or more SEI message
Those are raised because of unregistered SEI without user data.
The spec does not explicitly state that unregistered SEI needs to have
data and I suppose the UUID by itself can carry valuable information.
FFmpeg also parses and exposes such SEI, so there is no reason for us not to
as well.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7931>
It creates a new structure for passing the codec quality structure at _start(),
where it will be filled. The quality level can be set or changed according to
encoder limits.
Later the quality level will be set at _update_session_parameters() and at each
frame encoding. That's why it has to be set at _start().
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007>
The algorithm for generating the current slot index is a simple round robin;
nonetheless, it's not assured that the next slot index is not still used by a
still-living encode picture.
This new way holds an array with the still-living encode pictures, and the next
slot index looks for a released index in that array.
Its downside is that a picture, when deallocated, needs to be removed from the
array, so the helper has to be passed to the uninit() function.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007>
In GStreamer that buffer information is decoupled, holding other structures to
describe the stream: GstCaps. So, to keep the GStreamer design, this patch
removes this information from GstVulkanEncoderPicture and passes a pointer to
GstVideoInfo to gst_vulkan_encoder_encode().
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007>
Instead of holding all headers in an external array and adding them into the
bitstream buffer before the encoding operation, which adds extra memory and
extra copy operations, the encoder picture should specify the offset where
Vulkan will start to add the bitstream slices/frame, because the element has
already written the headers up to that offset.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007>
That's the number of references that gst_vulkan_encoder_encode() receives to
process, so it has to go as a parameter, because it's part of the reference
list, not of the picture.
This commit also modifies the unit tests accordingly.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007>
The structure already stored the generic video capabilities and the specific
codec capabilities for both encoding and decoding. The generic decoder
capabilities weren't stored because they were only used internally in the
decoder helper object. Nonetheless, for the encoder, the elements will need the
generic encoder capabilities to configure the encoding. That's why it's
required to expose them as part of GstVulkanVideoCapabilities. The generic
decoder capabilities are included for the sake of symmetry.
While updating the API, the vkvideoencodeh265 test got some code-style fixes.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007>
First, remove validations, since they will fail if there isn't a write
operation. It's valid to pass data without write operations.
Finally, it should check for hasOverride in the feedback info. Nonetheless,
there's an NVIDIA bug where hasOverride is always FALSE, which is why we
currently force it to TRUE.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7993>
It can be used to discard closed captions from the input pad if the
matching video buffer already held closed captions.
It is useful in a scenario where captions are generated for an AV
stream, but the incoming stream already has embedded captions for
some intervals, and those original captions should be preferred.
It can also be used to make sure input CC meta is always dropped;
the default behavior remains to append aggregated CC to whatever
CC meta was already present on the input video buffer.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6072>
When the driver does not implement ENUM_FRAMESIZES for some specific
formats, the caps limiting the sizes may end up empty, which results in
assuming the driver can scale to any size.
Ensure that the original size is in the caps to prevent this assumption.
This happens with the Hantro driver, since it only replies to that call if the
format is postprocessed.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7849>
Instead of the libc ceil() and pow() machinery for double types, use a simple
math function for ceiling division and a bit left shift for integer powers of
two, since the library uses them for unsigned integers.
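For unsigned integers the two operations reduce to something like this (helper
names are illustrative, not the library's actual functions):

    /* Ceiling division for unsigned integers, no floating point involved. */
    static inline guint32
    ceil_div (guint32 num, guint32 den)
    {
      return (num + den - 1) / den;
    }

    /* 2^n as a left shift instead of pow (2, n). */
    static inline guint32
    pow2 (guint32 n)
    {
      return 1u << n;
    }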
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7869>
This is a custom mapping. Not much is needed beyond it to store VP9 in MPEG-TS,
since the bitstream is self-contained.
Since there is no official specification, we don't want people to be mistaken
into believing there is one. Therefore that mapping is only used in the muxer
if the (new) property `enable-custom-mappings` is set to TRUE.
* The MPEG-TS Stream Type is Private Data (0x6) with the registration descriptor
set to `VP09`.
* The Access Units are VP9 frames stored in PES packets.
* As there is no emulation prevention byte in a VP9 elementary stream, there can
be misdetection of the PES start code. To avoid this, the start of a PES packet
must be signalled using the Payload Unit Start Indicator in the transport packet
header.
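Enabling the mapping on the muxer would look something like this (sketch; the
property name comes from this change, the rest is illustrative):

    GstElement *mux = gst_element_factory_make ("mpegtsmux", NULL);

    /* Off by default, since VP9-in-TS is a custom, non-standard mapping. */
    g_object_set (mux, "enable-custom-mappings", TRUE, NULL);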
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7707>
When encoding an image to mpeg2 video, with something like:
gst-launch-1.0 encodebin name=e profile=mpegpsmux:video/mpeg,mpegversion=2,systemstream=false ! \
filesink location=sample.mpg filesrc num-buffers=1 blocksize=$(stat -c%s sample.png) \
location=sample/dts.png ! pngdec ! e.
The type of the only frame is set to an invalid value (0).
The consequence is that mpegvideoparse sets the delta unit flag on the buffer
because it is not an I-frame; decodebin3 then drops this only frame because the
delta unit flag is set, and the decoder receives EOS before it was able to
receive any encoded data.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7832>
A tensor can be row- or column-major, but it's also possible that the order in
which we need to read a tensor with more than two dimensions needs to be
described. The reserved field in GstTensorDim is there for this purpose. If we
need this, we can add GST_TENSOR_DIM_ORDER_INDEXED and follow an index defining
the order for each dimension.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6000>
GstTensor contained two fields (data, dims) that were dynamically allocated.
data is a GstBuffer, for which we have pools for efficient memory management.
dims is a small array storing the dimensions of the tensor. The dims field
can be allocated in place by moving it to the end of the structure. This will
allow better memory management when GstTensor is stored in an analytics meta,
which will take advantage of the _clear interface for re-use.
- New API to allocate and free GstTensor
To continue to support use-cases where GstTensor is not stored in an
analytics-meta we provide gst_tensor_alloc, gst_tensor_alloc_n and
gst_tensor_free that will facilitate memory management.
- Make GstTensor a boxed type
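A rough sketch of the in-place layout and what an allocator like
gst_tensor_alloc() has to do (struct and field names here are illustrative,
not the actual public header):

    /* Hypothetical layout: 'dims' lives at the end of the struct so one
     * allocation covers the tensor and its dimensions. */
    typedef struct
    {
      GstBuffer *data;          /* pooled buffer holding the tensor bytes */
      gsize num_dims;
      gsize dims[];             /* allocated in place, no separate array */
    } TensorSketch;

    static TensorSketch *
    tensor_sketch_alloc (gsize num_dims)
    {
      TensorSketch *t =
          g_malloc0 (sizeof (TensorSketch) + num_dims * sizeof (gsize));

      t->num_dims = num_dims;
      return t;
    }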
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6000>
Only in LTC mode do we introduce additional latency, and it depends only on a
property and not on the framerate, so waiting for the framerate is not
necessary.
In all other modes no latency is introduced at all and the latency query can
simply be proxied.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7831>
The gst_dep.get_variable('libexecdir') call may fail in some scenarios
(e.g. building a module alone inside an uninstalled devenv) and
it shouldn't really be reached in the first place if docs are
disabled via options.
This also avoids confusing meson messages when cross-compiling or
doing a static build.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7818>