webrtcsrc first creates recvonly transceivers with codec-preferences
and expects that after applying a remote description, the
previously created transceivers are used rather than having new
transceivers created.
When pairing webrtcsink + webrtcsrc, the offer sdp from webrtcsink has a media
section with sendonly direction. In !7156, which was implemented following
RFC9429 Section 5.10, we only reuse an unassociated transceiver when applying a
remote description if the media is sendrecv or recvonly, and that caused creation
of new transceivers when applying a remote offer in webrtcsrc, thus losing
information from codec preferences like the RTP extension headers in the
previously created transceivers.
Since the change in !7156 broke existing code from webrtcsrc, relax the condition
for reusing unassociated transceivers, and add a test to document this behavior,
which wasn't covered by any tests before.
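As a rough sketch (hypothetical helper, not the actual webrtcbin code), the relaxed reuse check amounts to something like:

```c
/* hedged sketch, not the actual webrtcbin code: an unassociated,
 * non-stopped transceiver is a reuse candidate regardless of the
 * remote media direction, so codec-preferences configured up front
 * (e.g. by webrtcsrc) survive applying a remote sendonly offer */
#include <gst/webrtc/webrtc.h>

static gboolean
can_reuse_unassociated_transceiver (GstWebRTCRTPTransceiver * trans)
{
  /* previously the remote direction also had to be sendrecv or
   * recvonly (RFC9429 Section 5.10); that restriction is relaxed */
  return trans->mid == NULL && !trans->stopped;
}
```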
Fixes #3753.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7417>
Before trying to retrieve a GMainContext from a provided
GstPlayerSignalDispatcher, check that it is actually a
GstPlayerGMainContextSignalDispatcher. If not, use the
default GMainContext for dispatching signals via the adapter.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7392>
Don't reuse the same stats state structure across multiple
get-stats calls. Make each callback take a copy of the
non-changing fields it needs and use a local working copy
to avoid crashing.
Fixes occasional crashes in the unit test introduced in MR !7338.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7387>
Since bins can set the context of their child elements, the set_context()
vmethod shouldn't call the bus message posting methods, since that locks the parent
object, the bin, which might already be locked, leading to a deadlock.
Fixes: #3706
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7378>
When multiple streams are bundled on the same transport,
the statistics would end up incorrectly generated,
as each pad would regenerate stats for every ssrc on the
transport, overwriting previous iterations and assigning
bogus media kind and other values to the wrong ssrc.
Fix by making sure each pad only loops and generates
statistics for the one ssrc that pad is receiving / sending.
Add a unit test checking that the codec kind field in RTP statistics
is now generated correctly.
Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/2555
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7338>
Adding a new videosink element for Windows composition API based
applications. Unlike d3d12videosink, this element will create only a
DXGI swapchain by using IDXGIFactory2::CreateSwapChainForComposition()
without an actual window handle, so that the video scene can be composed
via Windows native composition APIs, such as DirectComposition.
Note that this videosink does not support the GstVideoOverlay interface
because of this design.
The swapchain created by this element can be used with
* DirectComposition's IDCompositionVisual in Win32 app
* WinRT and WinUI3's UI.Composition in Win32/UWP app
* UWP and WinUI3 XAML's SwapChainPanel
See also the examples in this commit, which show usage of the videosink.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7287>
Prevent the default webrtc test machinery from attempting to
create and set an answer when we're just testing rollback
of the offers. Add some locking / waiting to ensure the test
is complete before exiting.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7365>
Use pattern matching against expected error strings that
might include internal element names, where the names
are assigned by default using incrementing integers. When running
with CK_FORK=no, there may have been previous tests that
ran in the same process and incremented the counters more
than when running in the default fork-per-test mode.
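A hypothetical example of such a pattern match (the message text and helper name are made up, not the real test code):

```c
/* hypothetical example: tolerate whatever counter got appended to the
 * element name when the test process is reused with CK_FORK=no */
#include <glib.h>

static gboolean
error_message_matches (const gchar * received)
{
  /* the '*' wildcards absorb "webrtcbin0", "webrtcbin7", ... */
  return g_pattern_match_simple ("*error from element webrtcbin*",
      received);
}
```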
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7365>
num_backward_references > 0 means we need to cache several frames
after the current frame. But the basetransform class does not
provide any _drain()-like function, so we do not have the chance
to push out our cached frames when an EOS or caps event comes.
Rather than losing the last several frames, we should just give up
on the backward references here.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7348>
The current code forgets to push the first several frames if the forward
reference count > 0. They are just cached in the history array and will never be
deinterlaced and pushed.
For the first several frames, even though there are not enough forward reference
frames, we still need to deinterlace them as normal and push them.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7348>
The "Adobe" tag in the APP14 marker is not a null-terminated string, so when
we use gst_byte_reader_get_string_utf8, more bytes are read until a null
byte is found, and "gst_byte_reader_get_uint8 (&reader, &transform)" will then
usually fail to read the transform.
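A minimal sketch of the fixed parsing, assuming the usual APP14 "Adobe" layout (5-byte tag, version, two flag words, transform byte); not the actual jpegparse code:

```c
/* sketch, not the actual jpegparse code: read the fixed 5-byte tag as
 * raw data instead of as a NUL-terminated string */
#include <gst/base/gstbytereader.h>
#include <string.h>

static gboolean
parse_app14_transform (GstByteReader * reader, guint8 * transform)
{
  const guint8 *tag;

  if (!gst_byte_reader_get_data (reader, 5, &tag) ||
      memcmp (tag, "Adobe", 5) != 0)
    return FALSE;

  /* skip version (2 bytes), flags0 (2 bytes), flags1 (2 bytes) */
  if (!gst_byte_reader_skip (reader, 6))
    return FALSE;

  return gst_byte_reader_get_uint8 (reader, transform);
}
```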
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7356>
Fix playback failure for files with length_size_minus_one == 2.
According to the spec, 2 is not a valid value, so such a stream has a
bad config record, but aborting decoding because of that is perhaps too strict,
and ffmpeg does not seem to check this either.
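A minimal sketch of the tolerant handling (hypothetical helper name, not the actual parser code):

```c
/* hypothetical helper: accept length_size_minus_one == 2 with a
 * warning instead of rejecting the stream */
#include <gst/gst.h>

static guint
nal_length_size_from_config (guint8 length_size_minus_one)
{
  if (length_size_minus_one == 2) {
    /* 3-byte NAL lengths are not a valid value per the spec, but
     * failing playback for this alone is too strict */
    GST_WARNING ("invalid length_size_minus_one (2) in config record, "
        "continuing anyway");
  }
  return length_size_minus_one + 1;
}
```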
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7213>
librtmp allows for attaching arbitrary AMF objects to the end of the
connect packet, and this is commonly used for authenticating with
servers.
Add a new property, extra-connect-args, that mimics librtmp's behavior.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7054>
While analyzing gst_vulkan_get_or_create_image_view_with_info() it
seems obvious that this function can return NULL, and that this should be
covered in the return annotations. However, closer inspection indicates
that this is only a precondition check when the incoming arguments are
incompatible with each other, and should not be considered as a function
that optionally returns a pointer.
Signify this by using precondition checks instead of an open-coded
if-return-NULL.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5736>
When checking for renegotiation against a local offer,
reverse the remote direction in the corresponding answer
to fix falsely not triggering on-negotiation-needed when
switching (for example) from local sendrecv -> recvonly
against a peer that answered 'recvonly'.
In the other direction, when the local was the answerer,
renegotiation might trigger when it didn't need to -
whenever the local transceiver direction differs from
the intersected direction we chose. Instead what we want
is to check if the intersected direction we would now
choose differs from what was previously chosen.
This makes the behaviour in both cases match the
behaviour described in
https://www.w3.org/TR/webrtc/#dfn-check-if-negotiation-is-needed
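A hypothetical sketch of the answerer-side check (not the webrtcbin code; the intersection is written out explicitly for illustration):

```c
/* hedged sketch: negotiation is needed only if the direction we would
 * intersect *now* differs from the direction previously chosen in the
 * answer, not if the raw local direction differs from that choice */
#include <gst/webrtc/webrtc.h>

static GstWebRTCRTPTransceiverDirection
intersect_dir (GstWebRTCRTPTransceiverDirection local,
    GstWebRTCRTPTransceiverDirection remote)
{
  gboolean local_send = local == GST_WEBRTC_RTP_TRANSCEIVER_DIRECTION_SENDRECV
      || local == GST_WEBRTC_RTP_TRANSCEIVER_DIRECTION_SENDONLY;
  gboolean local_recv = local == GST_WEBRTC_RTP_TRANSCEIVER_DIRECTION_SENDRECV
      || local == GST_WEBRTC_RTP_TRANSCEIVER_DIRECTION_RECVONLY;
  gboolean remote_send = remote == GST_WEBRTC_RTP_TRANSCEIVER_DIRECTION_SENDRECV
      || remote == GST_WEBRTC_RTP_TRANSCEIVER_DIRECTION_SENDONLY;
  gboolean remote_recv = remote == GST_WEBRTC_RTP_TRANSCEIVER_DIRECTION_SENDRECV
      || remote == GST_WEBRTC_RTP_TRANSCEIVER_DIRECTION_RECVONLY;
  /* we may send only if the peer wants to receive, and vice versa */
  gboolean send = local_send && remote_recv;
  gboolean recv = local_recv && remote_send;

  if (send && recv)
    return GST_WEBRTC_RTP_TRANSCEIVER_DIRECTION_SENDRECV;
  if (send)
    return GST_WEBRTC_RTP_TRANSCEIVER_DIRECTION_SENDONLY;
  if (recv)
    return GST_WEBRTC_RTP_TRANSCEIVER_DIRECTION_RECVONLY;
  return GST_WEBRTC_RTP_TRANSCEIVER_DIRECTION_INACTIVE;
}

static gboolean
answerer_needs_renegotiation (GstWebRTCRTPTransceiverDirection local_dir,
    GstWebRTCRTPTransceiverDirection remote_offer_dir,
    GstWebRTCRTPTransceiverDirection previously_chosen_dir)
{
  return intersect_dir (local_dir, remote_offer_dir) != previously_chosen_dir;
}
```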
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7303>
Fixes for basic rollback (from have-local-offer or have-remote-offer to
stable). Allow having no SDP attached to the webrtc session description
in that case, and avoid all the transceiver and ICE update logic
normally applied when entering the stable signalling state
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7304>
Fix an inverted condition when checking if sink pad caps match
the codec-preference of an unassociated transceiver, and
fix a condition check for transceiver media kind to
avoid matching sinkpad requests where caps aren't provided
against unassociated transceivers where the caps might
not match later.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7237>
With MR 7156, transceivers and transports are created earlier,
but for sendrecv media we could get `not-linked` errors due to
transportreceivebin not being connected to rtpbin yet when incoming
data arrives.
This condition wasn't being tested in elements_webrtcbin, but could be
reproduced in the webrtcbidirectional example. This commit now also
adds a test for this, so that this doesn't regress anymore.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7294>
A previous fix, a275e1e029, is correct but was too
permissive, since it treated all unmatched NAL units the same as AU delimiters
even though some other NAL unit types can be encountered in the processing loop.
The problem this can cause is that some hardware decoders experience bad
performance when handling FD units that precede the SPS.
This change restores the original behavior for FDs so that they're ignored until
the SPS is received, while preserving the codec conformance test gains that the
fix achieved.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7166>
According to https://w3c.github.io/webrtc-pc/#set-the-session-description
(steps in 4.6.10.), we should be creating and associating transceivers when
setting session descriptions.
Before this commit, webrtcbin deviated from the spec:
1. Transceivers from sink pads were created when the sink pad was
requested, but not associated after setting the local description, only
when signaling is STABLE.
2. Transceivers from remote offers were not created after applying the
remote description, only when the answer is created, and were then
only associated once signaling is STABLE.
This commit makes webrtcbin follow the spec more closely with regard to
the timing of transceiver creation and association.
A unit test is added, checking that the transceivers are created and
associated after every session description is set.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7156>
If a downstream buffer pool is offered, vulkanupload checks its allocation
parameters in order to honor them. Only the TRANSFER bits, which are
required to upload buffers, are added to the usage.
Also, fail if the buffer pool cannot be configured with the current parameters.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7219>
Now that the driver version is expected to be equal or superior to 1.3.275, the bug
in NVIDIA and RADV regarding usage is solved and we can revert commit b7ded81f7b.
Also, this patch sets the internal usage variable after all the validations are
run, so the state doesn't keep an invalid usage.
Finally, the now unused supported_usage variable is dropped.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7247>
The set_config() virtual method can be called several times, and if the number of
profiles counter isn't reset the pool will reach an error state.
The purpose of the number of profiles is to check the number of valid Vulkan video
profiles (two in the transcoding use case, for example), so it's local to the
set_config() virtual method.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7247>
Fixing warnings
GStreamer-CRITICAL **: 01:21:25.862: gst_value_set_int_range_step:
assertion 'start < end' failed
Even though the QSV runtime reports a codec as supported, the resolution query
sometimes fails, especially for the VP9 encoder on Windows.
Don't try to register an element if the resolution query returned an error.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7250>
A fence configured in GstD3D12Memory should be used only to wait for
write access to be completed. And because the d3d12 -> d3d11 copy path
is read access to the d3d12 resource, we should not set the fence on the
memory. Otherwise another read access to the d3d12 resource
will wait for the d3d11 device context's copy operation even though
simultaneous read access is allowed.
Use a background thread to keep the d3d12 resource alive and wait for the d3d11
device's copy operation instead.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7243>
When configured in constant bitrate mode, the muxer computes timing information
using the configured bitrate and the byte counter (now = bytes sent / byterate).
When an application changes the bitrate in CBR mode during playback, the
relationship between bytes sent and bitrate is no longer valid so new timing
values will be off by the ratio of the old bitrate to the new bitrate.
Furthermore, it will upset the way that padding is generated.
pad_stream() works by trying to fit the byte counter to now * byterate.
The result is that when decreasing bitrate, the muxer stalls, waiting until the
byte counter is in agreement with now * byterate. Also, when increasing
bitrate, the padding will spike in volume until the byte counter fits with
now * byterate.
If the byte counter is scaled by the ratio of new bitrate / old bitrate when
adjusting bitrate, then padding is generated in a way that applications would
more likely expect.
One detail this change doesn't yet address is whether the next PCR will match up
optimally with the previous PCR right after the byte counter is scaled. In that
case, some correction may be necessary. Also, perhaps the user should be
prevented from changing from bitrate=0 to bitrate=nonzero during playback since
it's not straightforward how to scale the byte counter in that case.
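A minimal sketch of the rescaling with hypothetical names; gst_util_uint64_scale() keeps the arithmetic in 64 bits:

```c
/* minimal sketch, hypothetical names: keep "now == bytes / byterate"
 * consistent across a CBR bitrate change by scaling the byte counter */
#include <gst/gst.h>

static guint64
rescale_byte_counter (guint64 bytes_sent, guint64 old_bitrate,
    guint64 new_bitrate)
{
  if (old_bitrate == 0)
    return bytes_sent;          /* previously VBR; nothing meaningful to scale */

  /* bytes_sent / old_byterate == scaled / new_byterate
   * => scaled = bytes_sent * new_bitrate / old_bitrate */
  return gst_util_uint64_scale (bytes_sent, new_bitrate, old_bitrate);
}
```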
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7158>
This is to avoid a regression in the validation layer (introduced by commit
916c4e70cd) when using vulkandownload:
VUID-VkImageMemoryBarrier2-srcAccessMask-03914 .. vkCmdPipelineBarrier2():
pDependencyInfo->pImageMemoryBarriers[1].srcAccessMask (VK_ACCESS_TRANSFER_READ_BIT)
is not supported by stage mask (VK_PIPELINE_STAGE_2_VIDEO_DECODE_BIT_KHR)
since vulkandownload sets the DPB memories' access mask to
VK_ACCESS_TRANSFER_READ_BIT while they are retained by the DPB queue, so when
they are used as DPB after being shown, this validation error is raised.
Most of the barrier values are set ignoring the previous state of the Vulkan
images.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7211>
None of the symbols in webrtc-audio-coding-1 are marked with
`__declspec(dllexport)`, rendering the library usable only if
it was built with GCC/Clang.
The only fix available (as the pulseaudio copy has not been updated
with Google's upstream) is to ensure the fallback builds statically.
Although this change will also affect webrtcdsp's dependency on
webrtc-audio-processing-1, it does not break its compilation.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6407>
The access flags are kept around the operations, but when the buffer is
released, the access flag should be reset to its original value, since queue
transfers can be done along the pipeline and, when reusing the buffer, the new
queue might not support the latest access flag.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7165>
Instead of dragging the last destination pipeline stage as current barrier
source pipeline stage (which isn't a valid semantic) this patch adds a parameter
to gst_vulkan_operation_add_frame_barrier() to set the source pipeline stage to
define the barrier.
The previous logic brought problems particularly with queue transfers, when the
new queue doesn't support the stage set during a previous operation in a
different queue.
Now the operation API is closer to Vulkan semantics.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7165>
Doing so resets the stride from the VideoMeta and it wasn't done before
the commit below. While at it, drop the plane size check as we can't
reliably predict the correct size when using DRM modifiers.
Fixes: 89b0a6fa23 ("va: refactor buffer import")
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7187>
Passing a null event NT handle to ID3D12Fence::SetEventOnCompletion()
already blocks the calling CPU thread, so there is no point in
creating an event NT handle just to immediately wait for the fence on the CPU side.
Note that passing a valid event NT handle to the fence API might be useful
when we need to wait for the fence value later (or timeout is required),
or want to wait for multiple fences at once via WaitForMultipleObjects().
But that's not a use case we need to consider for now.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7176>
GstD3D12Window.priv.input_info is referenced by the mouse event handler
in order to calculate the corresponding original position
if the scene is rotated/flipped by the videosink.
Fixes a regression introduced by the recent d3d12videosink refactoring.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7177>
The driver - AKA intel-vaapi-driver - has been unmaintained for four years
now and encoding appears to be broken in various cases. As it's unlikely
that the situation will improve, blocklist the driver for encoding.
Decoding appears to be stable enough to keep it enabled.
The driver can still be used by setting the `GST_VA_ALL_DRIVERS` env
variable.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7170>
The driver frame counters (processed, dropped, buffer level) are not
always correct apparently, and don't allow reliably assigning a frame
number to captured frames.
Instead of relying on them, count the number of frames directly here and
detect dropped frames based on the capture times of the frames: if more
than 1.75 frame durations are between two frames, then there must've
been a dropped frame.
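A minimal sketch of the heuristic (hypothetical helper, integer arithmetic to avoid floats):

```c
/* hedged sketch, hypothetical helper: a gap of more than 1.75 frame
 * durations between two captured frames implies a dropped frame */
#include <gst/gst.h>

static gboolean
frame_was_dropped (GstClockTime prev_capture_time,
    GstClockTime capture_time, GstClockTime frame_duration)
{
  GstClockTime gap = capture_time - prev_capture_time;

  /* gap > 1.75 * frame_duration, expressed as gap * 4 > duration * 7 */
  return gap * 4 > frame_duration * 7;
}
```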
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7163>
Define a default usage and use it instead of repeating the same bitwise
addition.
Then, when usage is defined as zero, the usage is derived from the
format's supported usage and the default usage, now without the storage
bit, but with the color and input attachment bits.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6798>
If a d3d12 memory holds a non-direct-queue fence but the fence was
created with the D3D12_FENCE_FLAG_SHARED flag, use the fence instead of
waiting for it on the CPU side. Note that the d3d12ipcsrc and
d3d12screencapture elements hold such sharable fences.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7139>
This needs significant work to use the new Metal→Vulkan integration
extension `VK_EXT_metal_objects`
```
MoltenVK/mvk_deprecated_api.h:132:1: note: 'vkGetMTLDeviceMVK' has been explicitly marked deprecated here
MVK_DEPRECATED_USE_MTL_OBJS
^
MoltenVK/mvk_deprecated_api.h:74:52: note: expanded from macro 'MVK_DEPRECATED_USE_MTL_OBJS'
#define MVK_DEPRECATED_USE_MTL_OBJS VKAPI_ATTR [[deprecated("Use the VK_EXT_metal_objects extension instead.")]]
^
../sys/applemedia/videotexturecache-vulkan.mm:303:20: error: 'vkSetMTLTextureMVK' is deprecated:
Use the VK_EXT_metal_objects extension instead.
VkResult err = vkSetMTLTextureMVK (memory->vulkan_mem.image, texture);
^
MoltenVK/mvk_deprecated_api.h:151:1: note: 'vkSetMTLTextureMVK' has been explicitly marked deprecated here
MVK_DEPRECATED_USE_MTL_OBJS
^
MoltenVK/mvk_deprecated_api.h:74:52: note: expanded from macro 'MVK_DEPRECATED_USE_MTL_OBJS'
#define MVK_DEPRECATED_USE_MTL_OBJS VKAPI_ATTR [[deprecated("Use the VK_EXT_metal_objects extension instead.")]]
^
2 errors generated.
```
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7091>
With clang on macOS:
```
error: format specifies type 'long' but the argument has type 'uint64_t' (aka 'unsigned long long')
...
error: format specifies type 'unsigned long' but the argument has type 'VkImageView' (aka 'unsigned long long')
```
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7091>
When building for iOS in Cerbero, as of MoltenVK SDK 1.3.283, we have
to statically link to libMoltenVK since it no longer ships a dylib.
This requires linking to libc++, so we find the dep with the objc++
compiler to ensure that meson uses the right linker.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7091>
Since the encoded output changes based on the version,
it does not make sense to check the output bitstream against a fixed
byte array, as the version on the target might vary. So stick
to checking the number of output buffers and the encoded frame size,
similar to the other tests.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7141>
The code is simplified by using GQuarks when looking for caps features, and by
removing inner loops.
Also, the pad template caps are used for comparison with the incoming caps, because
that is cheaper at the beginning of negotiation, where the pad template caps are used.
And, since the ANY caps were removed, there's no need to check for an initial
intersection.
Finally, the completion of caps features is done through a loop.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6698>
The ANY caps in the pad template caps seem to mess up the DMA negotiation.
The command:
GST_GL_API=opengl gst-launch-1.0 -vf videotestsrc ! video/x-raw,format=NV12 !
vapostproc ! "video/x-raw(memory:DMABuf)" ! glimagesink
fails to negotiate, but in fact vapostproc can convert the input NV12
format into the RGBA format for rendering.
The ANY may help passthrough mode, but we should make the negotiation correct
first.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6698>
After signal recovery the capture times for the next frames are simply
wrong. Experimentally this affected 2-3 frames and seemed to be related
to the buffer fill level after signal recovery, so drop at least 5
frames and up to fill level + 1 frames in this situation.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7106>
Since a buffer resource will occupy at least 64KB,
allocating an upload resource per decoding command might not be
an optimal approach. Instead, use sub-regions of an upload resource
for multiple decoding commands as long as the sub-regions do not overlap
each other.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7108>
- Align `glib_debug`, `glib_assert` and `glib_checks` options with GLib,
otherwise glib subproject won't inherit their value. Previous names
and values are preserved using Meson's deprecation mechanism.
- Add `extra-checks` and `benchmarks` options in the main project so it
can be inherited in GStreamer subprojects.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1165>
When parallel encoding is enabled, it is possible that an unshown frame
has not been output yet but is already marked as a repeated frame header.
So we need to use a dedicated buffer to hold the repeated frame header and
not mix it with the original frame data.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6867>
Schedule (semi-)static resource uploads at converter creation time,
and use a single resource for all vertex, index, and constant
buffers, since separate resources would waste GPU memory.
Note that the size and address of a committed resource are 64K aligned
even if the requested buffer size is small.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7081>
Such as NVIDIA Ampere. In this case each output buffer is also a DPB,
but uses a different view layer.
A validation layer issue is still pending:
VUID-VkVideoBeginCodingInfoKHR-flags-07244
Co-authored-by: Victor Jaquez <vjaquez@igalia.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6954>
query_result_status can be optional, so we should not create
the query pool if the queue does not support it;
e.g. AMD does not support VK_QUERY_TYPE_RESULT_STATUS_ONLY_KHR.
In other use cases, such as VK_QUERY_TYPE_VIDEO_ENCODE_FEEDBACK_KHR, the
query pool must be created.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7043>
As only queryResultStatusSupport can be optional,
the variable name should be more specific.
queryResultStatusSupport reports VK_TRUE if query type
VK_QUERY_TYPE_RESULT_STATUS_ONLY_KHR and use of
VK_QUERY_RESULT_WITH_STATUS_BIT_KHR are supported.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7043>
According to the SPEC:
The frame id numbers (represented in display_frame_id, current_frame_id,
and RefFrameId[ i ]) are not needed by the decoding process, but allow
decoders to spot when frames have been missed and take an appropriate action.
So we should just print a warning and not return an error from the parser on a
mismatch. The decoder itself is already robust enough to handle missing references.
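A minimal sketch of the relaxed behaviour (hypothetical helper, not the actual gstav1parser.c code):

```c
/* hedged sketch, hypothetical helper: a frame id mismatch is only
 * informational, the decoder copes with missing references */
#include <gst/gst.h>

static void
check_frame_id (guint32 ref_frame_id, guint32 expected_frame_id)
{
  if (ref_frame_id != expected_frame_id) {
    /* previously this returned a parser error and aborted decoding */
    GST_WARNING ("frame id mismatch: have %u, expected %u",
        ref_frame_id, expected_frame_id);
  }
}
```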
Fixes #3622
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7047>
Because a DXGI flip mode swapchain disallows GDI operations
on a HWND once the swapchain is configured, the videosink has been creating
a child window of the application's window. However, since window creation
takes a few milliseconds, it can cause performance issues such as
UI freezing. Adding a property so that the videosink can attach the
DXGI swapchain directly to the application's window in order to improve
performance.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7013>
A large refactoring commit for adding features and improving performance
* Reuse internal converter and overlay compositor:
The converter can be reused as long as the input and display formats are not
changed. Also, overlay compositor reconstruction is required only if
the display format changes.
* Don't wait for a full GPU flush on resize or close:
The D3D12 swapchain requires the GPU to be idle in order to resize the backbuffer.
Thus CPU-side waiting is required for swapchain related commands
to finish. However, we don't need to wait for a full GPU flush.
* Support multiple sinks on a single external window:
Keep the installed subclass window procedure even if there's no associated
internal HWND of ours. This makes window procedure hooking less racy.
The parent HWND's messages will then be forwarded to our internal HWNDs
if needed.
* Add support for window handle updates:
The application can change the target HWND even when the videosink is in playing or
paused state, so users can call gst_video_overlay_set_window_handle()
against d3d12videosink at any time. The videosink will update its
internal state and set up resources upon request.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7013>
Observed an Intel GPU driver crash when multiple decoders are
configured in a process. It might be because of frequent
command queue alloc/free or too many in-flight decoding commands.
In order to make the command queue persistent and limit the number of
in-flight command lists, hold a global decoding command queue.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7019>
This was already being used in handle_frame() for errors that happen when queueing a frame for decoding,
let's do the same when a frame is flagged with an error in the output callback.
From quick testing, this makes seeking more reliable (previously, it would sometimes cause a decoding error
and shut the whole decoder down due to GST_FLOW_ERROR).
Also manually sets the max error count to actually stop processing if too many errors occur.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6446>
ReferenceMissingErr is not critical and the simplest solution is to just ignore it. The frame has
the FrameDropped flag set when it occurs, so we can just drop it as usual.
BadDataErr is also not immediately critical, but in its case let's set the ERROR flag,
so the output loop can use GST_VIDEO_DECODER_ERROR to count and error out if it happens too many times.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6446>
Adding NVIDIA nvCOMP library based plugin for lossless raw video
compression/decompression. To build this plugin, user should
install nvCOMP SDK first and specify the SDK path via
"nvcomp-sdk-path" build option or NVCOMP_SDK_PATH env.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6912>
Musl does not implement GNU basename() and has fixed a bug where the
prototype was leaked into string.h [1], which results in compile errors
with GCC 14 and Clang 17+:
| sys/uvcgadget/configfs.c:262:21: error: call to undeclared function 'basename'
ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
| 262 | const char *v = basename (globbuf.gl_pathv[i]);
| | ^
Using the GLib function instead makes it portable across musl and glibc on
Linux.
[1] https://git.musl-libc.org/cgit/musl/commit/?id=725e17ed6dff4d0cd22487bb64470881e86a92e7a
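A hedged sketch mirroring the quoted call site; unlike GNU basename(), g_path_get_basename() returns a newly allocated string that the caller must g_free():

```c
/* hedged sketch mirroring the call site above (not the actual patch) */
#include <glib.h>
#include <glob.h>

static gchar *
entry_basename (const glob_t * globbuf, gsize i)
{
  /* returns a newly allocated string; caller must g_free() it,
   * unlike GNU basename() which returned a pointer into its input */
  return g_path_get_basename (globbuf->gl_pathv[i]);
}
```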
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7006>
It can be seen as a workaround for the multi-channel transcoding case (e.g. the
decoder output going to two channels, one for the encoder and one for vpp).
Normally, the encoder sets a huge minimum pts to avoid negative dts,
while vpp sets the pts without this additional huge value, which is likely to
cause the input surface pts to not fit with the encoder (since both the encoder
and vpp accept the same buffer from the decoder, meaning they modify the timestamp
of the same mfx surface). So we add this huge value to vpp as well, to ensure enc and
vpp set the same value on the input mfx surface, while not breaking the
encoder's minimum-pts setting for dts protection.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6971>
Adding a property to control the error reporting behavior when the output
window is closed in playing or paused state. This can be useful
for apps that want to close the window even while a stream is playing,
where the closed window is expected.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6939>
Not all GPUs can support arbitrary offsets in
D3D12_PLACED_SUBRESOURCE_FOOTPRINT when copying GPU memory between
texture and buffer. Instead of calculating size/offset per plane,
calculate the entire size and offsets at once.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6967>
drm_fourcc.h should be picked up via the pkgconfig include, not the
system includedir directly.
Also consolidate the libdrm usage in va and msdk.
All this allows it to be picked up consistently (via the subproject,
for example).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6932>
This patch fixes this critical warning when registering MSDK:
_dma_fmt_to_dma_drm_fmts: assertion 'fmt != GST_VIDEO_FORMAT_UNKNOWN' failed
It was because the HEVC string with possible output formats has an extra space
that could not be parsed correctly.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6853>
When transforming from unknown alignment to frame or obu, the TU timestamp
was not properly transferred. Fix this by saving the TU DTS as the first
DTS seen within the TU data, and the PTS as the last PTS seen in that
TU data. Finally, reset the TU timestamps after each TU has completed.
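A hypothetical sketch of the bookkeeping (not the actual av1parse code):

```c
/* hedged sketch: the TU keeps the first DTS and the last PTS seen in
 * its data; both are reset once the TU has been pushed */
#include <gst/gst.h>

typedef struct
{
  GstClockTime dts;             /* first DTS seen within the TU */
  GstClockTime pts;             /* last PTS seen within the TU */
} TuTimestamps;

static void
tu_collect_timestamps (TuTimestamps * tu, GstBuffer * buf)
{
  if (!GST_CLOCK_TIME_IS_VALID (tu->dts))
    tu->dts = GST_BUFFER_DTS (buf);
  if (GST_CLOCK_TIME_IS_VALID (GST_BUFFER_PTS (buf)))
    tu->pts = GST_BUFFER_PTS (buf);
}

static void
tu_reset_timestamps (TuTimestamps * tu)
{
  tu->dts = GST_CLOCK_TIME_NONE;
  tu->pts = GST_CLOCK_TIME_NONE;
}
```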
Fixes #1496
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6895>
Adds a separate vtenc_h265a element (with a _hw variant as usual) for the HEVCWithAlpha codec type.
Decided to go with a separate element to not break existing uses of the normal HEVC encoder.
The preserve_alpha property is still only used for ProRes, no need for it here because we explicitly say we want alpha
when using the new element.
For now, the HEVCWithAlpha has an issue where it does not throttle the number of input frames queued internally.
I added a quick workaround where encode_frame() will block until the enqueue_frame() callback notifies it that some space
has been freed up in the internal queue. The limit was set to 5, which should be enough I guess? Hopefully this is not
too prone to race conditions.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6664>
`StdVideoDecodeH265PictureInfo.flags.IsReference` refers to section 3.132 of the ITU-T
H.265 specification:
reference picture: A picture that is a short-term reference picture or a
long-term reference picture.
`GstH265Picture.ref` doesn't reflect this, but we need to query the NAL type of
the processed slice.
This patch fixes the validation layer error
`VUID-vkCmdBeginVideoCodingKHR-slotIndex-07239` while using the NVIDIA driver.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6901>
Conceptually identical to the present signal of d3d11videosink.
This signal will be emitted with the current render target
(i.e., the swapchain backbuffer) and command queue. The signal handler
can record GPU commands to draw an overlay image or to blend
an image onto the render target.
In addition to d3d12 resources, the videosink will send
d3d11 and d2d resources depending on the "overlay-mode"
property, so that the signal handler can render using the
preferred/required DirectX API.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6838>
The idea of using a separate command queue per videosink was that
the swapchain is bound to a command queue and we need to flush that
command queue when the window size changes. But the separate
queue does not seem to improve performance much.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6838>
The prime_fds for multiple planes may be the same. For example, on Intel's
platform, an NV12 surface may have the same FD for plane0 and
plane1. Then, DRM_IOCTL_GEM_CLOSE will close the same handle twice
and get an "Invalid argument 22" error the second time.
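A hedged sketch of the safeguard (hypothetical helper): close each GEM handle only the first time it is seen:

```c
/* hedged sketch, hypothetical helper: planes of one surface may share
 * a GEM handle, so only close a handle the first time it is seen */
#include <glib.h>
#include <xf86drm.h>

#define MAX_PLANES 4

static void
close_gem_handles (int drm_fd, const guint32 handles[MAX_PLANES], guint n_planes)
{
  guint i, j;

  for (i = 0; i < n_planes; i++) {
    gboolean already_closed = FALSE;
    struct drm_gem_close arg = { 0, };

    for (j = 0; j < i; j++) {
      if (handles[j] == handles[i]) {
        already_closed = TRUE;
        break;
      }
    }

    if (already_closed || handles[i] == 0)
      continue;

    arg.handle = handles[i];
    drmIoctl (drm_fd, DRM_IOCTL_GEM_CLOSE, &arg);
  }
}
```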
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6914>
From the spec (chapter 34, v1.3.283):
```
UNORM: the components are unsigned normalized values in the range [0, 1]
SRGB: the R, G and B components are unsigned normalized value that represent
values using sRGB nonlinear encoding, while the A component (if one
exists) is a regular unsigned normalized value
```
The difference is the storage encoding: the first is aimed at image
transfers, while the second is for shaders, mostly in the swapchain stage of the
pipeline, and the conversion is done automatically if needed [1].
As far as I have checked, other frameworks (FFmpeg, GTK+), when importing or
exporting images from/to Vulkan, use UNORM formats exclusively, while SRGB formats
are ignored.
My conclusion is that Vulkan formats relate to how bits are stored in
memory rather than to their transfer functions (colorimetry).
This patch makes two interrelated changes:
1. It reorders certain color format maps so that, in both
gst_vulkan_format_from_video_info() and gst_vulkan_format_from_video_info_2(),
the UNORM formats are tried first when comparing usage, and SRGB is checked later.
2. It removes the code that checks for colorimetry in
gst_vulkan_format_from_video_info_2(), since it is not storage related.
1. https://community.khronos.org/t/noob-difference-between-unorm-and-srgb/106132/7
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6797>
This mirrors the behaviour in vp8enc / vp9enc and is generally more
useful than using any framerate from the caps as it provides some degree
of accuracy if the stream doesn't have timestamps perfectly according to
the framerate.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6891>