The access flags are tracked across operations, but when the buffer is
released the access flag should be reset to its original value: queue
transfers can happen along the pipeline, and when the buffer is reused
the new queue might not support the latest access flag.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7165>
Instead of carrying over the last destination pipeline stage as the
current barrier's source pipeline stage (which is not valid semantics),
this patch adds a parameter to gst_vulkan_operation_add_frame_barrier()
to set the source pipeline stage that defines the barrier.
The previous logic caused problems particularly with queue transfers,
when the new queue doesn't support the stage set during a previous
operation in a different queue.
Now the operation API is closer to Vulkan semantics, where a barrier
always names both its source and destination stage, as sketched below.
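For reference, a minimal Vulkan sketch (not this patch's own code) of a
barrier that names its source stage explicitly, which is the semantic the
new parameter maps to:
```c
#include <vulkan/vulkan.h>

/* Minimal sketch: in Vulkan's synchronization2 model a barrier always
 * carries both a source and a destination stage mask. */
static void
record_image_barrier (VkCommandBuffer cmd, VkImage image)
{
  VkImageMemoryBarrier2 barrier = {
    .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER_2,
    .srcStageMask = VK_PIPELINE_STAGE_2_TRANSFER_BIT,        /* explicit source stage */
    .srcAccessMask = VK_ACCESS_2_TRANSFER_WRITE_BIT,
    .dstStageMask = VK_PIPELINE_STAGE_2_FRAGMENT_SHADER_BIT, /* destination stage */
    .dstAccessMask = VK_ACCESS_2_SHADER_SAMPLED_READ_BIT,
    .oldLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
    .newLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL,
    .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
    .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
    .image = image,
    .subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 },
  };
  VkDependencyInfo dep = {
    .sType = VK_STRUCTURE_TYPE_DEPENDENCY_INFO,
    .imageMemoryBarrierCount = 1,
    .pImageMemoryBarriers = &barrier,
  };

  vkCmdPipelineBarrier2 (cmd, &dep);
}
```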
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7165>
Doing so resets the stride from the VideoMeta, which wasn't done before
the commit below. While at it, drop the plane size check, as we can't
reliably predict the correct size when using DRM modifiers.
Fixes: 89b0a6fa23 ("va: refactor buffer import")
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7187>
Passing a null event NT handle to ID3D12Fence::SetEventOnCompletion()
already blocks the calling CPU thread, so there is no point in creating
an event NT handle just to wait for the fence immediately on the CPU
side.
Note that passing a valid event NT handle to the fence API can be useful
when we need to wait for the fence value later (or a timeout is
required), or when waiting for multiple fences at once via
WaitForMultipleObjects(). But that's not a use case we consider for now.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7176>
GstD3D12Window.priv.input_info is referenced by the mouse event handler
in order to calculate the corresponding original position when the scene
is rotated/flipped by the videosink.
This fixes a regression introduced by the recent d3d12videosink
refactoring.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7177>
The driver - AKA intel-vaapi-driver - has been unmaintained for four years
now and encoding appears to be broken in various cases. As it's unlikely
that the situation will improve, blocklist the driver for encoding.
Decoding appears to be stable enough to keep it enabled.
The driver can still be used by setting the `GST_VA_ALL_DRIVERS` env
variable.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7170>
GstFFMpegAudDec.context can be nullptr if the decoder was closed
without opening a new context. Note that we don't need to clear
AVCodecContext.extradata there, since avcodec_free_context() will clear
that data if needed.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7180>
The driver frame counters (processed, dropped, buffer level) are
apparently not always correct, and don't allow reliably assigning a
frame number to captured frames.
Instead of relying on them, count the number of frames directly here and
detect dropped frames based on the capture times of the frames: if more
than 1.75 frame durations lie between two frames, then there must have
been a dropped frame.
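A minimal sketch of that heuristic, using hypothetical names (the
element's actual code differs):
```c
#include <gst/gst.h>

/* Returns how many frames were dropped between two consecutive captures,
 * judged purely from their capture times. A gap larger than 1.75 frame
 * durations implies at least one dropped frame. */
static guint
count_dropped_frames (GstClockTime prev_capture_time,
    GstClockTime capture_time, GstClockTime frame_duration)
{
  GstClockTime gap;

  if (!GST_CLOCK_TIME_IS_VALID (prev_capture_time))
    return 0;

  gap = capture_time - prev_capture_time;
  /* gap > 1.75 * frame_duration, kept in integer arithmetic */
  if (gap * 4 > frame_duration * 7)
    return (guint) ((gap + frame_duration / 2) / frame_duration) - 1;

  return 0;
}
```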
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7163>
Define a default usage and use it instead of repeating the same bitwise
OR.
Now, when the usage is given as zero, it is derived from the format's
supported usage and the default usage, which no longer includes the
storage bit but does include the color and input attachment bits.
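A minimal sketch of one plausible reading of that logic, with a
hypothetical macro name; the patch's actual default may contain
additional bits:
```c
#include <vulkan/vulkan.h>

/* Hypothetical default: color and input attachment bits, no storage bit. */
#define DEFAULT_IMAGE_USAGE \
  (VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT | VK_IMAGE_USAGE_INPUT_ATTACHMENT_BIT)

static VkImageUsageFlags
decide_usage (VkImageUsageFlags requested, VkImageUsageFlags format_supported)
{
  /* When the caller passes zero, derive the usage from what the format
   * supports combined with the default set. */
  if (requested == 0)
    return format_supported & DEFAULT_IMAGE_USAGE;
  return requested;
}
```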
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6798>
Not doing so would mean that tags would be overridden by any tag events
sent by upstream. Also, only send a tag event directly if upstream never
sent one.
By default use GST_TAG_MERGE_REPLACE to override tags that exist in both
the upstream event and this element with the ones from this element, but
provide a new "merge-mode" property to adjust the behaviour.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7145>
avcodec_close() is deprecated and re-opening a codec is no longer
supported, so we only ever allocate the codec in set_format() now and
always free it after use.
As part of this, also fix various memory leaks in related code paths.
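A minimal sketch of the resulting lifecycle, assuming simplified helper
names:
```c
#include <libavcodec/avcodec.h>

/* Allocate and open a fresh context; called from set_format(). */
static AVCodecContext *
open_codec (const AVCodec * codec)
{
  AVCodecContext *ctx = avcodec_alloc_context3 (codec);

  if (ctx == NULL)
    return NULL;

  if (avcodec_open2 (ctx, codec, NULL) < 0) {
    avcodec_free_context (&ctx);
    return NULL;
  }
  return ctx;
}

/* No avcodec_close() and no re-opening: the context is simply freed
 * after use, which also releases extradata and other owned data. */
static void
close_codec (AVCodecContext ** ctx)
{
  avcodec_free_context (ctx);
}
```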
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6505>
Direct access to AVInputFormat::read_probe() is not possible anymore
with ffmpeg 7.0, and the usefulness of this typefinder seems limited
anyway. An alternative implementation around av_probe_input_format3() or
similar would be possible but it would be going over all possible ffmpeg
probes at once.
Having a typefinder here means that basically every application will
load the gst-libav plugin when typefinding is necessary, which has
unnecessary performance impacts. If a typefinder from here was indeed
missing from typefindfunctions in gst-plugins-base then it would be
better to add it there directly.
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3378
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6505>
If a d3d12 memory holds a non-direct-queue fence but the fence was
created with the D3D12_FENCE_FLAG_SHARED flag, use the fence instead of
waiting for it on the CPU side. Note that the d3d12ipcsrc and
d3d12screencapture elements hold such shareable fences.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7139>
This needs significant work to use the new Metal→Vulkan integration
extension `VK_EXT_metal_objects`:
```
MoltenVK/mvk_deprecated_api.h:132:1: note: 'vkGetMTLDeviceMVK' has been explicitly marked deprecated here
MVK_DEPRECATED_USE_MTL_OBJS
^
MoltenVK/mvk_deprecated_api.h:74:52: note: expanded from macro 'MVK_DEPRECATED_USE_MTL_OBJS'
#define MVK_DEPRECATED_USE_MTL_OBJS VKAPI_ATTR [[deprecated("Use the VK_EXT_metal_objects extension instead.")]]
^
../sys/applemedia/videotexturecache-vulkan.mm:303:20: error: 'vkSetMTLTextureMVK' is deprecated:
Use the VK_EXT_metal_objects extension instead.
VkResult err = vkSetMTLTextureMVK (memory->vulkan_mem.image, texture);
^
MoltenVK/mvk_deprecated_api.h:151:1: note: 'vkSetMTLTextureMVK' has been explicitly marked deprecated here
MVK_DEPRECATED_USE_MTL_OBJS
^
MoltenVK/mvk_deprecated_api.h:74:52: note: expanded from macro 'MVK_DEPRECATED_USE_MTL_OBJS'
#define MVK_DEPRECATED_USE_MTL_OBJS VKAPI_ATTR [[deprecated("Use the VK_EXT_metal_objects extension instead.")]]
^
2 errors generated.
```
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7091>
With clang on macOS:
```
error: format specifies type 'long' but the argument has type 'uint64_t' (aka 'unsigned long long')
...
error: format specifies type 'unsigned long' but the argument has type 'VkImageView' (aka 'unsigned long long')
```
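A minimal sketch of the usual portable fix, printing 64-bit values via
GLib's format macro instead of %lu/%ld:
```c
#include <glib.h>
#include <stdint.h>

static void
print_values (uint64_t value, uint64_t image_view_handle)
{
  /* G_GUINT64_FORMAT expands to the right conversion on every platform */
  g_print ("value: %" G_GUINT64_FORMAT "\n", value);
  g_print ("view:  %" G_GUINT64_FORMAT "\n", (guint64) image_view_handle);
}
```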
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7091>
When building for iOS in Cerbero, as of MoltenVK SDK 1.3.283, we have
to statically link to libMoltenVK since it no longer ships a dylib.
This requires linking to libc++, so we find the dep with the objc++
compiler to ensure that meson uses the right linker.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7091>
${libdir}/gstreamer-1.0/include is only valid after installation, but
extra_cflags are added unconditionally, so we can't use that for
include flags. This is particularly bad when consuming GStreamer via
CMake, which barfs on non-existent include paths.
Instead, let's add the include flag via variables, which are different
for installed and uninstalled pc files.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7130>
Since the encoded output changes depending on the version, it does not
make sense to check the output bitstream against a fixed byte array, as
the version on the target might vary. So stick to checking the number of
output buffers and the encoded frame size, similar to the other tests.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7141>
The code is simplified by using GQuarks to look up caps features and by
removing inner loops.
Also, the pad template caps are now used for comparison with the
incoming caps, because that is cheaper at the beginning of negotiation,
where the pad template caps are used anyway.
And, since the ANY caps were removed, there's no need to check for an
initial intersection.
Finally, the completion of caps features is done through a loop.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6698>
The video meta API is now a mandatory request for DMA-kind negotiation.
In the current code, the raw method always adds that video meta API to
the query in propose_allocation(), so we don't need to duplicate the
work here. Just add a comment stating that.
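For reference, a minimal sketch of what the raw path already does in
propose_allocation() (illustrative, not the element's exact code):
```c
#include <gst/video/video.h>

static void
advertise_video_meta (GstQuery * query)
{
  /* GstVideoMeta is mandatory for DMA-kind negotiation; the raw method
   * already adds it to the allocation query, so it isn't repeated for
   * the DMA path. */
  gst_query_add_allocation_meta (query, GST_VIDEO_META_API_TYPE, NULL);
}
```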
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6698>
The ANY caps in the pad template caps seem to mess up DMA negotiation.
The command
GST_GL_API=opengl gst-launch-1.0 -vf videotestsrc ! video/x-raw,format=NV12 !
vapostproc ! "video/x-raw(memory:DMABuf)" ! glimagesink
fails to negotiate, but in fact vapostproc can convert the input NV12
format into the RGBA format for rendering.
The ANY caps may help passthrough mode, but we should make negotiation
correct first.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6698>
gst_element_send_event(FLUSH_START / FLUSH_STOP) returns FALSE in cases
where any of the most downstream elements have unlinked pads, even if
the pipeline is successfully flushed.
Currently this is considered expected behavior in GStreamer. This patch
updates gst-validate to treat it as such and therefore not fail the test
for a "failing" flush.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7064>
After signal recovery the capture times for the next frames are simply
wrong. Experimentally this affected 2-3 frames and seemed to be related
to the buffer fill level after signal recovery, so drop at least 5
frames and up to fill level + 1 frames in this situation.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7106>
Since a buffer resource will occupy at least 64 KB, allocating an
upload resource per decoding command might not be optimal. Instead, use
sub-regions of a single upload resource for multiple decoding commands,
as long as the sub-regions don't overlap each other.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7108>
- Align `glib_debug`, `glib_assert` and `glib_checks` options with GLib,
  otherwise the glib subproject won't inherit their values. Previous
  names and values are preserved using Meson's deprecation mechanism.
- Add `extra-checks` and `benchmarks` options to the main project so
  they can be inherited by GStreamer subprojects.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1165>
When parallel encoding is enabled, it is possible that an unshown frame
is not output yet but is already marked as a repeated frame header.
So we need to use a dedicated buffer to hold the repeated frame header
rather than mixing it with the original frame data.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6867>
Schedule (semi-)static resource uploads at converter creation time, and
use a single resource for all vertex, index, and constant buffers, since
separate resources would waste GPU memory.
Note that the size and address of a committed resource are 64 KB aligned
even if the requested buffer size is small.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7081>
If the first buffers have no timestamp then the sink position would be
initialized to 0. The source pad might output this buffer, which would then
initialize the source position to 0 too.
Afterwards two buffers with a valid but huge timestamp might arrive before any
of them are output on the source pad. The first one would set the sink position
to a huge value, the second one would notice that the difference between the
huge value and 0 is certainly larger than max-size-time and consider the queue
as full.
Instead, simply don't update the times from buffers without timestamps and
assume whatever was set before is still valid, i.e. the buffer has the same
timestamp as the previous one.
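A minimal sketch of the new behaviour, with hypothetical names:
```c
#include <gst/gst.h>

/* Only update the tracked position when the buffer carries a valid
 * timestamp; otherwise keep the previous value, i.e. treat the buffer
 * as having the same timestamp as its predecessor. */
static void
update_position (GstClockTime * position, GstBuffer * buffer)
{
  GstClockTime ts = GST_BUFFER_DTS_OR_PTS (buffer);

  if (GST_CLOCK_TIME_IS_VALID (ts))
    *position = ts;
}
```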
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7071>
We were storing the probe id in a different structure
(DecodebinOutputStream) than the pad it is targeting (which is in
MultiQueueSlot).
The problem is that when re-targeting outputs (to a different slot) we
would end up with an invalid probe id, or without a reference to an
existing one.
Instead, store the probe id in the same structure as the pad it's
targeting.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7069>
As in NVIDIA Amperium. In this case each output buffer is also a DPB,
but using a different view layer.
A validation layer issue is still pending:
VUID-VkVideoBeginCodingInfoKHR-flags-07244
Co-authored-by: Victor Jaquez <vjaquez@igalia.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6954>
query_result_status can be optional, so we should not create the query
pool if the queue does not support it; e.g. AMD does not support
VK_QUERY_TYPE_RESULT_STATUS_ONLY_KHR.
For other use cases such as VK_QUERY_TYPE_VIDEO_ENCODE_FEEDBACK_KHR, the
query pool must still be created.
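A minimal Vulkan sketch of the conditional pool creation (hypothetical
helper, not the element's exact code):
```c
#include <vulkan/vulkan.h>

static VkResult
maybe_create_status_query_pool (VkDevice device,
    VkBool32 status_query_support, const VkVideoProfileInfoKHR * profile,
    VkQueryPool * pool)
{
  VkQueryPoolCreateInfo info = {
    .sType = VK_STRUCTURE_TYPE_QUERY_POOL_CREATE_INFO,
    .pNext = profile,           /* video query pools need the profile chained */
    .queryType = VK_QUERY_TYPE_RESULT_STATUS_ONLY_KHR,
    .queryCount = 1,
  };

  *pool = VK_NULL_HANDLE;
  /* e.g. AMD: the queue doesn't support result-status queries, skip it */
  if (!status_query_support)
    return VK_SUCCESS;

  return vkCreateQueryPool (device, &info, NULL, pool);
}
```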
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7043>
As only queryResultStatusSupport can be optional,
the variable name should be more specific.
queryResultStatusSupport reports VK_TRUE if query type
VK_QUERY_TYPE_RESULT_STATUS_ONLY_KHR and use of
VK_QUERY_RESULT_WITH_STATUS_BIT_KHR are supported.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7043>
According to the SPEC:
The frame id numbers (represented in display_frame_id, current_frame_id,
and RefFrameId[ i ]) are not needed by the decoding process, but allow
decoders to spot when frames have been missed and take an appropriate action.
So we should just print a warning and not return an error from the
parser on a mismatch. The decoder itself is already robust enough to
handle missing references.
Fixes #3622
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7047>
Specify "layout" field in src template to make sure it's
set and gets fixated properly if the downstream element
supports both interleaved and non-interleaved caps.
Fixes
gst_pad_set_caps: assertion 'caps != NULL && gst_caps_is_fixed (caps)' failed
critical with e.g.
gst-launch-1.0 rtpdtmfsrc ! rtpdtmfdepay ! audioconvert ! fakesink
Not that the layout really matters in our case since we always
output mono anyway, but non-interleaved requires adding AudioMeta,
so this is the easiest fix.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7036>
Because a DXGI flip-mode swapchain disallows GDI operations on an HWND
once the swapchain is configured, the videosink has been creating a
child window of the application's window. However, since window creation
can take a few milliseconds, it can cause performance issues such as UI
freezing. Add a property so that the videosink can attach the DXGI
swapchain directly to the application's window in order to improve
performance.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7013>
A large refactoring commit that adds features and improves performance
* Reuse internal converter and overlay compositor:
  The converter can be reused as long as input and display formats do
  not change. Overlay compositor reconstruction is required only if the
  display format changes.
* Don't wait for a full GPU flush on resize or close:
  The D3D12 swapchain requires the GPU to be idle in order to resize the
  backbuffer, so CPU-side waiting is required for swapchain-related
  commands to finish. However, there's no need to wait for a full GPU
  flush.
* Support multiple sinks on a single external window:
  Keep the installed subclass window procedure even if there's no
  associated internal HWND of ours. This makes window procedure hooking
  less racy. The parent HWND's messages are then forwarded to our
  internal HWNDs if needed.
* Add support for window handle updates:
  The application can change the target HWND even while the videosink is
  in playing or paused state, so users can call
  gst_video_overlay_set_window_handle() against d3d12videosink at any
  time. The videosink will update its internal state and set up
  resources upon request.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7013>
An Intel GPU driver crash was observed when multiple decoders are
configured in a single process. It might be caused by frequent command
queue alloc/free or by too many in-flight decoding commands.
In order to make the command queue persistent and limit the number of
in-flight command lists, hold a global decoding command queue.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7019>
This was already being used in handle_frame() for errors that happen
when queueing a frame for decoding; let's do the same when a frame is
flagged with an error in the output callback.
From quick testing, this makes seeking more reliable (previously, it
would sometimes cause a decoding error and shut the whole decoder down
due to GST_FLOW_ERROR).
Also manually set the max error count to actually stop processing if too
many errors occur.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6446>
ReferenceMissingErr is not critical and the simplest solution is to just ignore it. The frame has
the FrameDropped flag set when it occurs, so we can just drop it as usual.
BadDataErr is also not immediately critical, but in its case let's set the ERROR flag,
so the output loop can use GST_VIDEO_DECODER_ERROR to count and error out if it happens too many times.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6446>
ensure_input_parsebin() has a top comment saying it must be called with
INPUT_LOCK taken, but 2 out of 3 usages of the function call it without
taking that mutex.
This patch adds locking in these two remaining usages.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5279>
Upon fatal errors the loop function first posts an error message, then
pushes out an EOS event.
An application may react immediately to the error message by setting the
pipeline state to NULL, meaning that by the time we push out the EOS
event, PAUSED_TO_READY may have reset the seek seqnum to -1.
While this is harmless, the assertion when setting an invalid seqnum
isn't tidy. Fix this by simply not resetting to INVALID, as it serves no
practical purpose and the next READY_TO_PAUSED will select a new seqnum
anyway.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7032>
Add an NVIDIA nvCOMP library based plugin for lossless raw video
compression/decompression. To build this plugin, users should install
the nvCOMP SDK first and specify the SDK path via the "nvcomp-sdk-path"
build option or the NVCOMP_SDK_PATH environment variable.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6912>
Musl does not implement GNU basename and has fixed a bug where the
prototype was leaked into string.h [1], which results in compile errors
with GCC 14 and Clang 17+:
| sys/uvcgadget/configfs.c:262:21: error: call to undeclared function 'basename'
ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
| 262 | const char *v = basename (globbuf.gl_pathv[i]);
| | ^
Use the GLib function instead, which makes it portable across musl and
glibc on Linux, as sketched below.
[1] https://git.musl-libc.org/cgit/musl/commit/?id=725e17ed6dff4d0cd22487bb64470881e86a92e7a
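A minimal sketch of the portable GLib replacement
(g_path_get_basename() returns a newly allocated string):
```c
#include <glib.h>

static void
print_basename (const gchar * path)
{
  gchar *base = g_path_get_basename (path);

  g_print ("%s\n", base);
  g_free (base);
}
```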
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7006>
Certain V4L2 drivers can report that a video receiver is seeing
some signal, but that it is unable to synchronize to it. IOW: the driver
can sometimes report V4L2_IN_ST_NO_SYNC and not report V4L2_IN_ST_NO_SIGNAL.
In particular, I've seen the tc358743 (HDMI-to-CSI2 converter) driver
sometimes report this when deployed to a fleet of embedded Raspberry Pis.
The relevant kernel code is in [1]. The video output is not practically
usable when V4L2_IN_ST_NO_SYNC is reported (only visually corrupted frames,
sometimes with random "snow", are received). I assume that this happens when
either the HDMI cable is poorly plugged in or damaged or when a CSI2 FFC
cable is used and is damaged.
The change in this commit is useful for detecting this working-but-not-really
condition in application code. Applications already listening for the "Signal lost"
message will gain the ability to handle this condition.
There seem to be more V4L2 error flags like this, see [2]. However, I do not
have practical experience with them and adding only V4L2_IN_ST_NO_SYNC seems
like a safer option.
[1]: https://github.com/raspberrypi/linux/blob/be8498ee21aa/drivers/media/i2c/tc358743.c#L1534
[2]: https://www.kernel.org/doc/html/v6.6/userspace-api/media/v4l/vidioc-enuminput.html
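A minimal sketch of how the input status flags can be inspected
(illustrative; the element's actual code differs):
```c
#include <glib.h>
#include <linux/videodev2.h>
#include <sys/ioctl.h>

static gboolean
input_has_usable_signal (int fd, guint32 input_index)
{
  struct v4l2_input input = { .index = input_index };

  if (ioctl (fd, VIDIOC_ENUMINPUT, &input) < 0)
    return FALSE;

  /* No signal at all, or a signal we cannot synchronize to: both are
   * treated as "signal lost". */
  return (input.status & (V4L2_IN_ST_NO_SIGNAL | V4L2_IN_ST_NO_SYNC)) == 0;
}
```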
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7021>
The initial goal was to support the case where we are paused watching a live
stream, and when we resume we can no longer resume from the previously
downloaded position. In that case we internally do a flushing seek back to the
"current live head position". This was also extended since to be able to
handle (utterly broken) servers when we can't really figure out where we are
anymore and therefore trigger that lost sync so we can try to get back on our
feet.
This does fix the issue... but results in spurious FLUSH_{START|STOP} events
being sent downstream. While that's fine for regular playback scenarios, it's a
bit of a wild scenario since a lot of pipelines/applications don't expect such
events when it wasn't triggered by downstream/application.
Fixes#3605
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7005>
This fixes a regression introduced by 6c4f52ea20
There are cases where the input stream will be push-based, with a time
segment, and have neither a collection nor caps. This means the
event-based checks are not sufficient to decide when/where to plug in an
identity or parsebin to process the input.
For those corner cases we set up a buffer probe to ensure we always end
up with at least a parsebin.
Fixes #3609
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7010>
This can be seen as a workaround for multi-channel transcoding (e.g.
decoder output going to two channels, one for the encoder and one for
the VPP).
Normally the encoder sets a huge minimum pts to avoid negative dts,
while the VPP sets the pts without this additional huge value, which can
make the input surface pts not match what the encoder expects (since
both the encoder and the VPP accept the same buffer from the decoder,
meaning they modify the timestamp of one and the same mfx surface). So
add this huge value to the VPP as well, to ensure the encoder and the
VPP set the same value on the input mfx surface, while not breaking the
encoder's minimum-pts protection against negative dts.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6971>
Add a property to control error reporting behavior when the output
window is closed in playing or paused state. This can be useful for apps
that want to close the window even while a stream is playing and where
the closed window is expected.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6939>
The typefind code was rejecting content smaller than 128 bytes, making
it impossible to play very small SRT files.
But those can actually be properly detected, so fix the typefinder to
allow smaller content and do its best with it.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6937>
When measuring video latency, one mechanism involves taking a photo
with a camera of two screens showing the test video overlayed with
timeoverlay or clockoverlay. In these cases, if the display's pixel
response time is crappy, you will see ghosting due to which it can be
quite difficult to discern what the current timestamp being shown is.
This commit adds a property that *also* shows the timestamp in
a different (sequentially predictable) location every frame, which
makes it easy to tell what the latest rendered timestamp is.
For bonus points, you can also use the fade-time of the previous frame
to measure with sub-framerate accuracy when the photo was taken, not
just clamped to the framerate, giving you a higher precision latency
value.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6935>
This implements the gtkglsink element on Windows using WGL. There are
some extra gotchas along the way, since we need to juggle with libepoxy,
meaning that we need a recent GTK+ 3.24.x (i.e. the upcoming
GTK+ 3.24.43) for this to work properly.
Since we are essentially using an overlay compositor only during
rendering, move its initialization and destruction into the
gtk_gst_gl_widget_render() function, so that things are safer when
working across threads between GStreamer (gst-gl) and GTK, as GL
operations, as noted above, have more gotchas on Windows.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4289>
When dealing with push-based inputs, we now delay the creation of
parsebin/identity until we get all pre-buffer events.
We can therefore simplify the handling of new pads being linked and only
have to check whether upstream can handle pull-based scheduling or not.
This avoids creating a parsebin for parsed upstream data altogether.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6953>
A simple gst_gl_sync_meta_wait() is not sufficient to ensure GL
commands are executed before dma-buf devices get to see the buffer.
This is the first step that should make the code behave correctly for
everybody, although there may be a performance penalty. In the future we
should introduce a more general sync meta that would allow moving the
waiting from gldownload (the producer) to the sink elements (the
consumers).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6968>
Not all GPUs support an arbitrary offset of
D3D12_PLACED_SUBRESOURCE_FOOTPRINT when copying GPU memory between
texture and buffer. Instead of calculating size/offset per plane,
calculate the entire size and the offsets at once.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6967>
drm_fourcc.h should be picked up via the pkgconfig include, not the
system includedir directly.
Also consolidate the libdrm usage in va and msdk.
All this allows it to be picked up consistently (via the subproject,
for example).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6932>