This was already being used in handle_frame() for errors that happen when
queueing a frame for decoding; let's do the same when a frame is flagged with
an error in the output callback.
From quick testing, this makes seeking more reliable (previously, it would sometimes cause a decoding error
and shut the whole decoder down due to GST_FLOW_ERROR).
Also manually set the max error count so that processing actually stops if too many errors occur.
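As a sketch of the pattern (illustrative only, not the actual vtdec code),
GST_VIDEO_DECODER_ERROR only returns GST_FLOW_ERROR once the error count
exceeds the limit configured with gst_video_decoder_set_max_errors():

```
#include <gst/video/gstvideodecoder.h>

/* Sketch: count a per-frame decoding error instead of failing immediately.
 * GST_VIDEO_DECODER_ERROR only yields GST_FLOW_ERROR after more than
 * max-errors failures (e.g. gst_video_decoder_set_max_errors (dec, 10)). */
static GstFlowReturn
report_frame_error (GstVideoDecoder * dec, GstVideoCodecFrame * frame)
{
  GstFlowReturn ret = GST_FLOW_OK;

  GST_VIDEO_DECODER_ERROR (dec, 1, STREAM, DECODE,
      ("Failed to decode frame"), ("output callback flagged an error"), ret);
  gst_video_decoder_release_frame (dec, frame);

  return ret;
}
```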
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6446>
ReferenceMissingErr is not critical and the simplest solution is to just ignore it. The frame has
the FrameDropped flag set when it occurs, so we can just drop it as usual.
BadDataErr is also not immediately critical, but in that case let's set the ERROR flag,
so the output loop can use GST_VIDEO_DECODER_ERROR to count and error out if it happens too many times.
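Roughly the intended classification in the VTDecompressionSession output
callback (a sketch; the frame flag name is a placeholder, not the actual
vtdec identifier):

```
#include <VideoToolbox/VideoToolbox.h>
#include <glib.h>

#define FRAME_FLAG_ERROR (1 << 1)       /* placeholder flag name */

/* Sketch: map the callback status onto per-frame handling. */
static void
classify_decode_status (OSStatus status, VTDecodeInfoFlags info_flags,
    guint * frame_flags)
{
  if (status == kVTVideoDecoderReferenceMissingErr &&
      (info_flags & kVTDecodeInfo_FrameDropped)) {
    /* Not critical: the frame is already marked as dropped, so it goes
     * through the normal drop path. */
  } else if (status == kVTVideoDecoderBadDataErr) {
    /* Not immediately critical: flag the frame so the output loop can
     * count it with GST_VIDEO_DECODER_ERROR and bail out eventually. */
    *frame_flags |= FRAME_FLAG_ERROR;
  }
}
```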
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6446>
Adding an NVIDIA nvCOMP library based plugin for lossless raw video
compression/decompression. To build this plugin, the user should
install the nvCOMP SDK first and specify the SDK path via the
"nvcomp-sdk-path" build option or the NVCOMP_SDK_PATH environment
variable.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6912>
Musl does not implement GNU basename and has fixed a bug where the
prototype was leaked into string.h [1], which results in compile errors
with GCC-14 and Clang-17+:
| sys/uvcgadget/configfs.c:262:21: error: call to undeclared function 'basename'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
|   262 |         const char *v = basename (globbuf.gl_pathv[i]);
|       |                         ^
Using the glib function instead makes it portable across musl and glibc
on Linux.
[1] https://git.musl-libc.org/cgit/musl/commit/?id=725e17ed6dff4d0cd22487bb64470881e86a92e7a
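The portable replacement looks roughly like this (a sketch of the
pattern used in configfs.c):

```
#include <glib.h>
#include <glob.h>

static void
print_basenames (const glob_t * globbuf)
{
  for (gsize i = 0; i < globbuf->gl_pathc; i++) {
    /* g_path_get_basename() works on both musl and glibc; unlike GNU
     * basename() it returns a newly allocated string. */
    gchar *v = g_path_get_basename (globbuf->gl_pathv[i]);
    g_print ("%s\n", v);
    g_free (v);
  }
}
```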
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7006>
This can be seen as a workaround for the multi-channel transcoding case
(e.g. decoder output feeding two channels, one going to the encoder and
one to vpp). Normally the encoder sets a huge minimum pts to avoid a
negative dts, while vpp sets the pts without this additional huge value.
That is likely to make the input surface pts inconsistent with what the
encoder expects (since the encoder and vpp accept the same buffer from
the decoder, they both modify the timestamp of the same mfx surface).
So add this huge value to vpp as well, ensuring the encoder and vpp set
the same value on the input mfx surface while not breaking the encoder's
minimum pts used for dts protection.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6971>
Adding a property to control the error reporting behavior when the
output window is closed in PLAYING or PAUSED state. This can be useful
for apps that want to close the window even while a stream is playing,
and where the closed window is expected.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6939>
Not all GPUs support an arbitrary offset in
D3D12_PLACED_SUBRESOURCE_FOOTPRINT when copying GPU memory between a
texture and a buffer. Instead of calculating the size/offset per plane,
calculate the entire size and all offsets at once.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6967>
drm_fourcc.h should be picked up via the pkgconfig include, not the
system includedir directly.
Also consolidate the libdrm usage in va and msdk.
All this allows it to be picked up consistently (via the subproject,
for example).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6932>
This patch fixes this critical warning when registering MSDK:
_dma_fmt_to_dma_drm_fmts: assertion 'fmt != GST_VIDEO_FORMAT_UNKNOWN' failed
This was because the HEVC string listing the possible output formats
had an extra space and could not be parsed correctly.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6853>
When transforming from unknown alignment to frame or obu, the TU
timestamp was not properly transferred. Fix this by saving the TU DTS
as the first DTS seen within the TU data, and the PTS as the last PTS
seen in that TU data. Finally, reset the TU timestamp after each TU has
completed.
Fixes #1496
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6895>
Adds a separate vtenc_h265a element (with a _hw variant as usual) for the HEVCWithAlpha codec type.
Decided to go with a separate element to not break existing uses of the normal HEVC encoder.
The preserve_alpha property is still only used for ProRes, no need for it here because we explicitly say we want alpha
when using the new element.
For now, HEVCWithAlpha has an issue where it does not throttle the number of input frames queued internally.
I added a quick workaround where encode_frame() will block until enqueue_frame() callback notifies it that some space
has been freed up in the internal queue. The limit was set to 5, which should be enough I guess? Hopefully this is not
too prone to race conditions.
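The workaround is essentially a bounded queue guarded by a mutex/cond
pair; a minimal sketch of that pattern (names are illustrative, only the
limit of 5 comes from the description above):

```
#include <gst/gst.h>

#define MAX_QUEUED_FRAMES 5

typedef struct
{
  GMutex lock;                  /* g_mutex_init() if heap-allocated */
  GCond cond;                   /* g_cond_init() if heap-allocated */
  guint queued;
} FrameThrottle;

/* Called from encode_frame(): block while the internal queue is full. */
static void
throttle_wait (FrameThrottle * t)
{
  g_mutex_lock (&t->lock);
  while (t->queued >= MAX_QUEUED_FRAMES)
    g_cond_wait (&t->cond, &t->lock);
  t->queued++;
  g_mutex_unlock (&t->lock);
}

/* Called from the enqueue_frame() callback once a slot is freed. */
static void
throttle_release (FrameThrottle * t)
{
  g_mutex_lock (&t->lock);
  t->queued--;
  g_cond_signal (&t->cond);
  g_mutex_unlock (&t->lock);
}
```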
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6664>
`StdVideoDecodeH265PictureInfo.flags.IsReference` refers to section 3.132 of
the ITU-T H.265 specification:
reference picture: A picture that is a short-term reference picture or a
long-term reference picture.
`GstH265Picture.ref` doesn't reflect this, so we need to query the NAL type of
the processed slice instead.
This patch fixes the validation layer error
`VUID-vkCmdBeginVideoCodingKHR-slotIndex-07239` while using the NVIDIA driver.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6901>
Conceptually identical to the present signal of d3d11videosink.
This signal will be emitted with the current render target
(i.e., the swapchain backbuffer) and the command queue. A signal
handler can record GPU commands to draw an overlay image or to blend
an image onto the render target.
In addition to d3d12 resources, the videosink will pass d3d11 and d2d
resources depending on the "overlay-mode" property, so that the signal
handler can render using its preferred/required DirectX API.
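From the application side, usage would look roughly like the
d3d11videosink case; a sketch assuming the signal is likewise named
"present" (the callback parameter list below is an assumption, not the
element's documented signature):

```
#include <gst/gst.h>

/* Sketch: the parameter list is assumed; consult the element
 * documentation for the actual signature. */
static void
on_present (GstElement * sink, gpointer device, gpointer render_target,
    gpointer command_queue, gpointer user_data)
{
  /* Record GPU commands for an overlay here, against the current
   * swapchain backbuffer. */
}

static void
connect_present (GstElement * videosink)
{
  g_signal_connect (videosink, "present", G_CALLBACK (on_present), NULL);
}
```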
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6838>
The idea behind using a separate command queue per videosink was that
the swapchain is bound to a command queue and we need to flush that
queue when the window size changes. But the separate queue does not
seem to improve performance much.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6838>
The prime_fds for multiple planes may be the same. For example, on
Intel platforms an NV12 surface may use the same FD for plane0 and
plane1. DRM_IOCTL_GEM_CLOSE would then close the same handle twice and
fail with "Invalid argument" (22) the second time.
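The fix amounts to closing each unique GEM handle only once; a sketch of
the pattern (not the exact allocator code):

```
#include <glib.h>
#include <string.h>
#include <xf86drm.h>

/* Sketch: close each unique GEM handle once; planes may share a handle
 * (e.g. NV12 plane0/plane1 on Intel), and closing it twice fails with
 * EINVAL. */
static void
close_gem_handles (gint fd, const guint32 * handles, guint num_planes)
{
  for (guint i = 0; i < num_planes; i++) {
    gboolean seen = FALSE;

    for (guint j = 0; j < i; j++)
      seen |= (handles[j] == handles[i]);

    if (seen || handles[i] == 0)
      continue;

    struct drm_gem_close arg;
    memset (&arg, 0, sizeof (arg));
    arg.handle = handles[i];
    drmIoctl (fd, DRM_IOCTL_GEM_CLOSE, &arg);
  }
}
```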
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6914>
From the spec (chapter 34, v1.3.283):
```
UNORM: the components are unsigned normalized values in the range [0, 1]
SRGB: the R, G and B components are unsigned normalized value that represent
values using sRGB nonlinear encoding, while the A component (if one
exists) is a regular unsigned normalized value
```
The difference is the storage encoding: the first is aimed at image
transfers, while the second is for shaders, mostly at the swapchain
stage of the pipeline, where the conversion is done automatically if
needed [1].
As far as I have checked, other frameworks (FFmpeg, GTK+) use UNORM
formats exclusively when importing or exporting images from/to Vulkan,
while SRGB formats are ignored.
My conclusion is that Vulkan formats describe how bits are stored in
memory rather than their transfer functions (colorimetry).
This patch makes two interrelated changes:
1. It reorders certain color format maps so that, in both
gst_vulkan_format_from_video_info() and gst_vulkan_format_from_video_info_2(),
the UNORM formats are tried first when comparing their usage, and the
SRGB formats are only checked afterwards.
2. It removes the code that checks for colorimetry in
gst_vulkan_format_from_video_info_2(), since that is not storage related.
[1] https://community.khronos.org/t/noob-difference-between-unorm-and-srgb/106132/7
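Conceptually the lookup order becomes the following (a simplified sketch
of the idea; the real mapping tables and the usage check live in the
Vulkan library code):

```
#include <gst/gst.h>
#include <vulkan/vulkan.h>

/* Candidate Vulkan formats for an RGBA GstVideoFormat, UNORM first and
 * SRGB only as a fallback. */
static const VkFormat rgba_candidates[] = {
  VK_FORMAT_R8G8B8A8_UNORM,
  VK_FORMAT_R8G8B8A8_SRGB,
};

/* Hypothetical stand-in for the real per-format usage/feature query. */
typedef gboolean (*UsageSupportedFunc) (VkFormat format,
    VkImageUsageFlags usage);

static VkFormat
pick_vulkan_format (VkImageUsageFlags usage, UsageSupportedFunc supported)
{
  for (gsize i = 0; i < G_N_ELEMENTS (rgba_candidates); i++) {
    if (supported (rgba_candidates[i], usage))
      return rgba_candidates[i];
  }
  return VK_FORMAT_UNDEFINED;
}
```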
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6797>
This mirrors the behaviour in vp8enc / vp9enc and is generally more
useful than using any framerate from the caps, as it provides some
degree of accuracy if the stream's timestamps don't follow the
framerate exactly.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6891>
The encoder can specify a preferred_output_delay value to get better
throughput. A higher delay may give better HW performance, but it may
increase the encoder and pipeline latency.
When the output queue length is smaller than preferred_output_delay,
the encoder will not block waiting for encoded output. It will continue
to prepare and send more commands to the GPU, which may improve the
encoder's throughput.
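The queueing policy then looks roughly like this (a sketch; the queue
and field names are illustrative):

```
#include <gst/gst.h>

/* Sketch: only block for encoded output once enough frames are in
 * flight; otherwise keep preparing and submitting commands to the GPU. */
static gboolean
should_wait_for_output (GQueue * output_queue, guint preferred_output_delay)
{
  if (g_queue_get_length (output_queue) < preferred_output_delay)
    return FALSE;               /* keep feeding the GPU, don't block */

  return TRUE;                  /* queue is deep enough, wait for output */
}
```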
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4359>
If the muxer times out because of the latency deadline it can happen
that some pads have no caps yet. In that case skip creation of streams
for these pads and create updated section tables once the first buffer
arrives later.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6823>
This makes sure that for sparse streams (KLV, DVB subtitles, ...) the
muxer does not wait until the next buffer is available for them but
times out on the latency deadline and outputs data.
For non-live pipelines it will still be necessary for upstream to
correctly produce gap events for sparse streams.
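For reference, a sketch of what such an upstream gap event looks like
for a sparse stream:

```
#include <gst/gst.h>

/* Sketch: signal a period without KLV/subtitle data so downstream does
 * not have to wait for a real buffer. */
static gboolean
push_sparse_gap (GstPad * srcpad, GstClockTime position, GstClockTime duration)
{
  return gst_pad_push_event (srcpad, gst_event_new_gap (position, duration));
}
```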
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6823>
A frame with the GST_VIDEO_CODEC_FRAME_FLAG_FORCE_KEYFRAME bit set
should be a sync point, so we should make it an IDR frame that begins a
new GOP, rather than just promoting it to an I frame.
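In frame-type decision terms the change is roughly the following
(a sketch; the frame-type enum is illustrative):

```
#include <gst/video/gstvideoutils.h>

typedef enum
{
  FRAME_TYPE_IDR,
  FRAME_TYPE_I,
  FRAME_TYPE_OTHER
} FrameType;                    /* illustrative enum */

static FrameType
pick_frame_type (GstVideoCodecFrame * frame, FrameType scheduled)
{
  if (GST_VIDEO_CODEC_FRAME_IS_FORCE_KEYFRAME (frame)) {
    /* A forced keyframe must be a sync point: emit an IDR frame and
     * begin a new GOP instead of merely promoting it to an I frame. */
    return FRAME_TYPE_IDR;
  }
  return scheduled;
}
```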
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6619>
Set the P- and B-frame QP to the I-frame value to avoid generating a
delta QP between the different frame types. For the ICQ and QVBR modes
we can only set the qpi value, so the qpp and qpb values should be set
to the same value as qpi.
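In effect (a sketch, with illustrative variable names):

```
#include <glib.h>

/* Sketch: in ICQ/QVBR modes only the I-frame QP is configurable, so
 * keep the P/B QPs equal to it to avoid implicit delta QPs between
 * frame types. */
static void
align_frame_qps (gboolean is_icq_or_qvbr, guint qpi, guint * qpp, guint * qpb)
{
  if (is_icq_or_qvbr) {
    *qpp = qpi;
    *qpb = qpi;
  }
}
```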
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6841>
Address the following message reported by the SDK debug layer:
ID3D12Device::CheckFeatureSupport: Unsupported Decode Profile Specified.
Use ID3D12VideoDevice::CheckFeatureSupport with D3D12_FEATURE_VIDEO_DECODE_PROFILES
to retrieve a list of supported profiles
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6839>
Certain V4L2 fourccs don't (yet) have DRM counterparts, in which case
we can't create DMA_DRM caps for them. This is usually the case for
specific tilings, which are represented as modifiers for DMA formats.
While using these tilings is generally preferable - because of e.g.
lower memory usage - it can result in additional conversion steps when
interacting with DMA based APIs such as GL, Vulkan or KMS. In such cases
using a DMA compatible format usually ends up being the better option.
Before the addition of DMA_DRM caps, this was what playbin3 ended up
requesting in various cases - e.g. preferring NV12 over NV12_4L4 - but
the addition of DMA_DRM caps seems to confuse the selection logic.
As a simple and quite robust solution, assume that peers supporting
DMA_DRM caps always prefer these and reorder the caps accordingly.
In the future we plan to have a translation layer for cases where
there is a matching fourcc+modifier pair for a V4L2 fourcc, ensuring
optimal results.
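A sketch of that reordering in terms of the caps API (illustrative, not
the exact v4l2 code):

```
#include <gst/gst.h>
#include <gst/allocators/gstdmabuf.h>

/* Sketch: move structures carrying the DMA_DRM format with the DMABuf
 * feature to the front, assuming a DMA_DRM-capable peer prefers those. */
static GstCaps *
reorder_dma_drm_first (GstCaps * caps)
{
  GstCaps *first = gst_caps_new_empty ();
  GstCaps *rest = gst_caps_new_empty ();
  guint n = gst_caps_get_size (caps);

  for (guint i = 0; i < n; i++) {
    GstStructure *s = gst_caps_get_structure (caps, i);
    GstCapsFeatures *f = gst_caps_get_features (caps, i);
    const gchar *format = gst_structure_get_string (s, "format");
    gboolean is_dma_drm = format != NULL && g_str_equal (format, "DMA_DRM") &&
        f != NULL && gst_caps_features_contains (f, GST_CAPS_FEATURE_MEMORY_DMABUF);

    gst_caps_append_structure_full (is_dma_drm ? first : rest,
        gst_structure_copy (s), f ? gst_caps_features_copy (f) : NULL);
  }

  gst_caps_unref (caps);
  gst_caps_append (first, rest);
  return first;
}
```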
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6645>