We can use gst_uri_from_string_with_base () to join the base URL
and the fragment URL path.
The previous method of forming the base URL in update_base_url(),
by looking for the string 'manifest' or 'Manifest', is insufficient:
a query may include these strings in its path, in which case an
invalid base URL string would be kept.
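A minimal sketch of the intended joining behaviour (the URLs here are made
up for illustration):

  GstUri *base = gst_uri_from_string ("https://example.com/path/manifest?token=1");
  GstUri *frag = gst_uri_from_string_with_base (base, "QualityLevels(128000)/Fragments(audio=0)");
  gchar *joined = gst_uri_to_string (frag);
  /* -> https://example.com/path/QualityLevels(128000)/Fragments(audio=0) */
  g_free (joined);
  gst_uri_unref (frag);
  gst_uri_unref (base);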
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8193>
The main difference with the WIP av1-in-mpegts mapping is that the payload data
is not startcode-escaped. Most of the rest is sensible usage of it:
* Custom AV1G (AV1 Gstreamer) registration descriptor instead of AV01
* AV1CodecConfigurationRecord is stored in the same 0x80 custom descriptor and
conforms fully to the isobmff spec (i.e. it does not include the HDR fields
from the provisional mpegts specification, which conflict with that spec).
* Data is stored as OBU
* Access Unit is the frame level (same as provisional mpegts mapping)
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4442>
When a TD is being processed, it is not always pushed immediately. Resetting
the time information led to lost timestamps in the TU to Frame conversion. The
TU would be formed by buffers of [TD][Frame], and the timestamp taken from
the TU buffer was lost when the TD was handled.
The handling of TS should be entirely done by these 3 functions:
- gst_av1_parse_handle_obu_to_obu(): direct input to output
- gst_av1_parse_handle_to_big_align(): reset DTS on detected TU or TD
- gst_av1_parse_handle_to_small_and_equal_align(): PTS on show frame, flat DTS
Fixes: 79312357a6
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8026>
V4L2 and DRM choose different, incompatible ways to represent
tiled/compressed etc. formats. While the latter uses combinations of
format fourccs and opaque, vendor/hardware-specific modifiers, for the
former every such combination is a distinct new format.
Traditionally Gst implemented each of the V4L2 formats if needed.
Given the large number of tiling and compression modes, this is
quite work-intensive - and often actually not needed.
In many situations Gst just needs to pass buffers from V4L2 to DRM in
the form of EGL, VK, Wayland or KMS.
Thus implement a direct translation for some V4L2 formats to DRM ones,
limited to the DMA_DRM API, allowing much quicker enablement of formats
while requiring peers to use external implementations (usually Mesa or
KMS) for tiling etc.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7355>
Use GST_VIDEO_DECODER_ERROR instead of just erroring out
unconditionally, so that the error handling behaviour is
determined by the "max-errors" property and we'll just
continue after decoding errors now instead of erroring out.
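A sketch of the resulting pattern (variable names are illustrative);
GST_VIDEO_DECODER_ERROR only sets ret to GST_FLOW_ERROR once the configured
"max-errors" count is exceeded:

  GstFlowReturn ret = GST_FLOW_OK;

  if (decode_failed) {
    GST_VIDEO_DECODER_ERROR (decoder, 1, STREAM, DECODE,
        ("Failed to decode frame"), ("decoder error %d", err), ret);
    return ret;   /* stays GST_FLOW_OK until "max-errors" is exceeded */
  }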
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8163>
Allows reducing the initial stack size of GPU threads. CUDA should
automatically increase this value if a kernel requires a larger stack.
This can save roughly 40 MB of GPU memory for a single nvh264enc instance.
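For reference, the underlying CUDA driver API calls look roughly like this
(a sketch; the element presumably goes through GStreamer's CUDA wrappers
rather than calling them directly, and the value is only illustrative):

  size_t stack_size = 0;

  /* query the current per-thread stack size, then shrink it; CUDA grows
   * it again automatically if a kernel needs a larger stack */
  cuCtxGetLimit (&stack_size, CU_LIMIT_STACK_SIZE);
  cuCtxSetLimit (CU_LIMIT_STACK_SIZE, 1024);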
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8158>
Extended is identical to main but allows the FMO/ASO features to be
used, and prevents the use of CABAC. Using similar logic to "baseline",
assume that if we support main, we can also do extended.
This fixes the following fluster vectors, which would otherwise fail
when trying to link the parsebin pad:
- BA3_SVA_C
- MR6_BT_B
- MR7_BT_B
- MR8_BT_B
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8164>
Contains the following updates:
* New properties on avfvideosrc: screen-crop-*
* H265 and H265 Alpha support in vtdec and vtenc (VideoToolbox)
* ProRes support in vtenc
* New properties on vtenc elements: rate-control, data-rate-limits,
max-frame-delay
* New plugin atenc (AudioToolbox) with support for encoding AAC
* Plugin move: atdec moved from -bad to -good
* New property on osxaudio elements: unique-id
* OS X -> macOS
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8120>
1. Add some comments explaining what headers and libs are expected on
what systems
2. Only look in default incdirs if no incdir is specified
3. Require libnvbufsurface.so on Jetson when cuda-nvmm=enabled
4. Require libatomic on Jetson when cuda-nvmm=enabled
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8021>
Parsing the whole caps as an SDP media only to retrieve the fmtp field afterwards
seems a bit superfluous. By looking up the a-fmtp attribute directly, the number
of allocations in this function goes down a bit.
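A rough sketch of the direct lookup (parsing of the individual fmtp
key/value pairs omitted):

  GstStructure *s = gst_caps_get_structure (caps, 0);
  const gchar *fmtp = gst_structure_get_string (s, "a-fmtp");

  if (fmtp != NULL) {
    /* split the "key=value;key=value" pairs here instead of building a
     * whole GstSDPMedia just to read them back */
  }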
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8125>
People might have GST_TRACERS=leaks set in their environment
by default, which will now trigger criticals during the build
when calling g-ir-scanner, because we unset GST_PLUGIN_SYSTEM_PATH
so that the scanner doesn't load any plugins.
Fixes #4093
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8121>
This commit enables the usage of inline queries if, and only if, the
pNext structure provided to gst_vulkan_operation_enable_query() chains a
VK_STRUCTURE_TYPE_VIDEO_PROFILE_INFO_KHR typed structure.
It also guards the "gstvkvideo-private.h" include.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8112>
This relation type defines relations between the components of two groups:
the first component of the first group relates to the first component of the
second group, the second component of the first group relates to the second
component of the second group, and so on. It's a denser way to express
relations in this context.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8087>
Behaviour differed depending on whether the proxy was configured through
properties or through the environment. For properties, libcurl would be
configured with "any" auth, but for the environment it would default to
basic. Now "any" auth is set for both configuration methods.
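The libcurl options involved, sketched (the element sets these from either
the property path or the environment path):

  curl_easy_setopt (handle, CURLOPT_PROXY, proxy_uri);
  curl_easy_setopt (handle, CURLOPT_PROXYUSERPWD, proxy_user_pwd);
  /* previously only the property path requested CURLAUTH_ANY; now both do */
  curl_easy_setopt (handle, CURLOPT_PROXYAUTH, CURLAUTH_ANY);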
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7935>
The Content-Length header would unconditionally be included when the
proxy property was set. This would result in requests with both a
Content-Length and a Transfer-Encoding header. Now we rely on the
use-content-length property in the proxy case as well. This also makes
sure that Content-Type is set correctly, since before it would be
skipped if a proxy was used.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7935>
We get loads of warnings when parsing videos from users:
gsth264parser.c:1115:gst_h264_parser_parse_user_data_unregistered: No more remaining payload data to store
gsth264parse.c:646:gst_h264_parse_process_sei:<h264parse0> failed to parse one or more SEI message
Those are raised because of unregistered SEI without user data.
The spec does not explicitly state that unregistered SEI needs to have
data and I suppose the UUID by itself can carry valuable information.
FFmpeg also parses and exposes such SEI, so there is no reason for us not
to do so as well.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7931>
This creates a new structure for passing the codec quality structure at _start(),
where it will be filled. The quality level can be set or changed according to
the encoder limits.
Later the quality level will be set at _update_session_parameters() and at each
frame encoding. That's why it has to be set at _start().
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007>
The algorithm for generating the current slot index is a simple round robin;
nonetheless, it is not assured that the next slot index is not still used by a
still-living encode picture.
This new approach holds an array with the still-living encode pictures, and the
next slot index is found by looking for a released index in that array.
Its downside is that a picture being deallocated needs to be removed from the
array, so the helper has to be passed to the uninit() function.
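An illustrative sketch of the slot selection (type and field names are made
up, not the actual helper API):

  static gint
  find_free_slot (GPtrArray * live_pictures, guint num_slots)
  {
    guint slot, i;

    for (slot = 0; slot < num_slots; slot++) {
      gboolean used = FALSE;

      for (i = 0; i < live_pictures->len; i++) {
        EncodePicture *pic = g_ptr_array_index (live_pictures, i);

        if (pic->slot_index == (gint) slot) {
          used = TRUE;
          break;
        }
      }

      if (!used)
        return slot;
    }

    return -1;                  /* every slot is still referenced */
  }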
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007>
In GStreamer that buffer information is decoupled, using other structures to
describe the stream: GstCaps. So, to keep the GStreamer design, this patch
removes this information from GstVulkanEncoderPicture and passes a pointer to
GstVideoInfo to gst_vulkan_encoder_encode().
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007>
Instead of holding all the headers in an external array and adding them into
the bitstream buffer before the encoding operation, which costs extra memory
and extra copy operations, the encoder picture should specify the offset at
which Vulkan will start to add the bitstream slices/frame, because the element
has already written the headers up to that offset.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007>
That's the number of references that gst_vulkan_encoder_encode() receives to
process, so it has to go as a parameter, because it's part of the reference
list, not of the picture.
This commit also modifies the unit tests accordingly.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007>
The structure already stored the generic video capabilities and the specific
codec capabilities, both for encoding and decoding. The generic decoder
capabilities weren't stored because they were only used internally in the
decoder helper object. Nonetheless, for the encoder, the elements will need the
generic encoder capabilities to configure the encoding. That's why they need to
be exposed as part of GstVulkanVideoCapabilities. The generic decoder
capabilities are included for the sake of symmetry.
While updating the API, the vkvideoencodeh265 test got some code-style fixes.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/8007>
First, remove validations, since they will fail if there isn't a write
operation; it's valid to pass data without write operations.
Finally, it should check hasOverride in the feedback info. Nonetheless, there's
an NVIDIA bug where hasOverride is always returned as FALSE, which is why we
currently force it to TRUE.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7993>
It can be used to discard closed captions from the input pad if the
matching video buffer already held closed captions.
It is useful in a scenario where captions are generated for an AV
stream, but the incoming stream already has embedded captions for
some intervals, and those original captions should be preferred.
It can also be used to make sure the input CC meta is always dropped;
the default behavior remains to append the aggregated CC to whatever
CC meta was already present on the input video buffer.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6072>
When the driver does not implement ENUM_FRAMESIZES for some specific
formats, the caps limiting the sizes may end up empty, which results in
assuming the driver can scale to any size.
Ensure that the original size is in the caps to prevent this assumption.
This happens with the Hantro driver, since it only replies to that call if the
format is postprocessed.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7849>
Instead of the libc ceil() and pow() machinery for double types, use simple
integer math, since the library only deals with unsigned integers: a ceiling
division and a left bit shift for integer powers of two.
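In other words, a sketch of the two integer replacements:

  /* ceil (a / b) for unsigned integers, without going through doubles */
  #define CEIL_DIV(a, b)  (((a) + (b) - 1) / (b))

  /* pow (2, n) for an integer n becomes a left shift */
  guint32 two_pow_n = ((guint32) 1) << n;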
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7869>
This is a custom mapping. Not much is needed apart from it to store VP9
in MPEG-TS since the bitstream is self-contained.
Since there is no official specification, we don't want people to be mistaken
into believing there is one. Therefore the mapping is only used in the muxer if
the (new) property `enable-custom-mappings` is set to TRUE.
* The MPEG-TS stream type is Private Data (0x06) with the registration
  descriptor set to `VP09`.
* The access units are VP9 frames stored in PES packets.
* As there is no emulation prevention byte in a VP9 elementary stream, PES
  start codes can be misdetected. To avoid this, the start of a PES packet must
  be signalled using the Payload Unit Start Indicator in the transport packet
  header.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7707>
When encoding an image to mpeg2 video, with something like:
gst-launch-1.0 encodebin name=e profile=mpegpsmux:video/mpeg,mpegversion=2,systemstream=false ! \
filesink location=sample.mpg filesrc num-buffers=1 blocksize=$(stat -c%s sample.png) \
location=sample/dts.png ! pngdec ! e.
the type of the only frame is set to an invalid value (0).
The consequence is that mpegvideoparse sets the delta unit flag on the buffer
because it is not an I-frame; decodebin3 then drops this only frame because the
delta unit flag is set, and the decoder receives EOS before it was able to
receive any encoded data.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7832>
A tensor can be row- or column-major, but it's also possible that the order in
which a tensor with more than two dimensions needs to be read has to be
described. The reserved field in GstTensorDim is there for this purpose. If we
need this, we can add GST_TENSOR_DIM_ORDER_INDEXED and have an index defining
the order of each dimension.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6000>
GstTensor contained two fields (data, dims) that were dynamically allocated.
data is a GstBuffer, for which we have pools for efficient memory management.
dims is a small array storing the dimensions of the tensor; it can be allocated
in place by moving it to the end of the structure. This allows better memory
management when GstTensor is stored in an analytics meta, which takes advantage
of the _clear interface for re-use.
- New API to allocate and free GstTensor
  To continue to support use-cases where GstTensor is not stored in an
  analytics meta, we provide gst_tensor_alloc, gst_tensor_alloc_n and
  gst_tensor_free to facilitate memory management.
- Make GstTensor a boxed type
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6000>
Only in LTC mode do we introduce additional latency, and it depends only on a
property and not on the framerate, so waiting for the framerate is not necessary.
In all other modes no latency is introduced at all and the latency query can
simply be proxied.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7831>
The gst_dep.get_variable('libexecdir') may fail in some scenarios
(e.g. building a module alone inside an uninstalled devenv) and
it shouldn't really be reached in the first place if docs are
disabled via options.
This also avoids confusing meson messages when cross-compiling or
doing a static build.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7818>
The calculated position was off. I'm not sure of the exact cause;
possibly because we're in AU-aligned byte-stream mode, which means
`transform` is true.
Replacing the math that calculates the NALU positions with code more
similar to what is already in use for `idr_pos` seems to have fixed it.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7318>
Add missing pixel format constants, and mappings for
P010, packed variants of 420 and RGBA layouts to GStreamer
buffer formats. This fixes problems where Android decoders that
announce support for these common pixel formats refuse to output
raw video frames and only allow the 'hardware surfaces output' path.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7743>
Some file format standards don't require mastering-display-info
and content-light-level values to be provided.
Decklink however requires the static HDR metadata for the PQ transfer
function, which we may not have.
CTA-861-G mentions that in this case 0 may be provided as an 'unknown'
value, which is what we use here.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7742>
* Consistently name parse functions according to their message type and
  deprecate the misnamed ones
* Add missing parse functions
* Check for the correct message type when parsing
* Use the correct field name for warning message details
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7754>
The difference between compared timestamps might be outside the gint range,
resulting in wrong sorting results. This patch corrects that by
comparing the timestamps and then returning -1, 0 or 1 depending on the
result.
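The fixed comparator boils down to something like this (a sketch; the
overflowing version effectively did "return (gint) (a - b);"):

  static gint
  compare_buffer_pts (GstClockTime a, GstClockTime b)
  {
    if (a < b)
      return -1;
    if (a > b)
      return 1;
    return 0;
  }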
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7726>
The wraparound handling code assumes that the PCR gets updated regularly in
order to be able to detect wraparounds. With ignore-pcr=true that was not the
case and it stayed initialized at 1h forever.
To avoid this problem, update the fake PCR whenever the PTS advances by more
than 5s, and also detect wraparounds in these fake PCRs.
The problem can be reproduced with
$ gst-launch-1.0 videotestsrc pattern=black ! video/x-raw,framerate=1/5 ! \
x264enc speed-preset=ultrafast tune=zerolatency ! mpegtsmux ! \
tsdemux ignore-pcr=true ! fakesink
which restarts timestamps at 0 after around 26h30m.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7588>
When the stream resolution changes, new pools need to be negotiated
and the caps need to be updated.
A resolution change can occur on a new sequence or on a new picture,
so move the resolution change detection code into a common function.
For memory allocation reasons, only allow a resolution change on a
non-keyframe if the driver supports the remove-buffers feature.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7684>
We must drain the pending output picture so that the subclass can renegotiate
the caps. Not doing so while renegotiating would mean that the subclass would
have to do an allocation query before pushing the caps. Pushing the caps now
without this would also not work, since these caps won't match the format of
the pending buffers.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7684>
Add helper functions to call the VIDIOC_REMOVE_BUFS ioctl.
If the driver supports this feature, buffers are removed from the queue when:
- the pool is detached from the decoder,
- the pool is released,
- allocation failed.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7684>
Use the VIDIOC_CREATE_BUFS ioctl to create buffers instead of VIDIOC_REQBUFS,
because it also allows creating buffers while streaming.
To prepare for the introduction of VIDIOC_REMOVE_BUFS, create the
buffers one by one instead of as a range. This way it can, in the
future, fill the holes.
gst_v4l2_decoder_request_buffers() is still used to remove all
the buffers of the queue.
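Sketch of a one-at-a-time allocation with VIDIOC_CREATE_BUFS (error handling
and the surrounding GStreamer helper omitted; variable names are illustrative):

  struct v4l2_create_buffers create = { 0 };

  create.count = 1;                     /* one buffer at a time */
  create.memory = V4L2_MEMORY_MMAP;
  create.format = queue_format;         /* format negotiated for this queue */

  if (ioctl (video_fd, VIDIOC_CREATE_BUFS, &create) < 0)
    return -1;

  /* create.index now holds the index of the newly created buffer, which
   * may later fill a hole left by VIDIOC_REMOVE_BUFS */
  return create.index;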
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7684>
When a datachannel within a session was removed after a proper close,
references to the error_ignore_bin elements of the datachannel's
appsrc/appsink were left in webrtcbin.
This caused the bin-objects to be left and not freed until the whole
webrtc session was terminated. Among other things that includes a thread
from the appsrc.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7675>
- Add mtd_meta_clear to allow specific analytics metas to handle a clear
  operation specific to their type.
- Clear attached mtd's when the analytics meta is freed. When the buffer the
  analytics meta is attached to is not from a buffer pool,
  gst_analytics_relation_meta_clear will not be called unless we explicitly
  call it in _free. This is important, as otherwise _mtd_clear is not called,
  which leads to a leak if an embedded mtd allocated memory.
- Un-ref in transform if it's a copy
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6026>
In my tests with the new GCC 14 compiler for Cerbero, I got the
following error:
> In file included from include/directxmath/DirectXMath.h:2275,
> from ../gst-libs/gst/d3d11/gstd3d11converter.cpp:46:
> include/directxmath/DirectXMathMatrix.inl: In function 'bool
> DirectX::XMMatrixDecompose(XMVECTOR*, XMVECTOR*, XMVECTOR*, FXMMATRIX)':
> include/directxmath/DirectXMathMatrix.inl:1161:16:
> error: variable 'aa' set but not used [-Werror=unused-but-set-variable]
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7658>
Check and generate remote reception statistics from the info stored on
internal sources, as they are stored there when running against a newer rtpbin
since MR !7424.
This fixes cases where statistics are incomplete when peers send RR reports
from a single remote ssrc, which GStreamer does when bundling is enabled and
which other RTP stacks may do too.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7425>
In some cases, decodebin3 will send us incomplete caps (not containing
codec_data), and then a GAP event, which will force a negotiation.
This segfaults due to a null pointer deref because self->input_state
is NULL.
The only possible fix is to avoid negotiating when we get incomplete
caps (to avoid re-negotiating immediately afterwards, which isn't
supported by some muxers), but also to set as much input state as
possible so that a renegotiation triggered by a GAP event can complete
successfully.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7634>
This new LCEVC encoder plugin is meant to implement all LCEVC encoder elements.
For now, it only implements the LCEVC H264 encoder (lcevch264enc) element. This
element essentially encodes raw video frames using a specific EIL plugin, and
outputs H264 frames with LCEVC data. Depending on the encoder properties, the
LCEVC data can be either part of the video stream as SEI NAL Units, or attached
to buffers as GstMeta.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7330>
This new element wraps both the base H264 decoder and lcevcdec elements into a
bin so that LCEVC decoding works with auto-plugging elements such as decodebin.
By default, the H264 decoder element with the highest rank is used as the base
decoder, but any particular H264 decoder can be used by manually setting the
base-decoder property.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7330>
This new LCEVC decoder plugin is meant to implement all LCEVC decoder elements.
For now, it only implements the LCEVC enhancement decoder (lcevcdec) element.
This element essentially enhances raw video frames into higher-resolution
frames using the LCEVC metadata attached to the input buffers. The element is
only meant to be used after a base decoder (e.g. avdec_h264).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7330>
There was an override to fake an IDR as soon as an SPS/PPS
is encountered, but that's not valid; at least an I-slice is needed.
Amend the visl result, as the output is now slightly more correct,
no longer duplicating frame_num.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7330>
This improves the h264parse element to attach LCEVC enhancement data to buffers
using the new GstLcevcMeta API. This metadata will eventually be used downstream
by LCEVC decoders to enhance the RAW video frame.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7330>
This new metadata API allows elements to attach LCEVC enhancement data to video
buffers. Usually, the video parser elements are in charge of parsing the LCEVC
enhancement data from SEI NAL units (Supplemental Enhancement Information).
However, other elements such as demuxers can also use this API if the LCEVC
enhancement data of the video is stored in a separate stream in the container.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7330>