In an early not-linked scenario, this caused a flood of criticals about the queue array,
because the output callback would still fire for leftover frames that VT was still processing
when the output loop stopped. This makes sure those frames are flushed correctly as well.
Also renames gst_vtdec_loop to gst_vtdec_output_loop for consistency with related functions.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6397>
Sometimes a call to negotiate (and thus drain) can happen from the output loop
(via finish_frame()), which tells VT to output all of its internal frames. That won't succeed
if we happen to be waiting for the queue to empty, because the output loop is waiting for
draining to finish and will not make space in the queue. This commit adds an override of the
queue size limit while we're draining/flushing.
This bug could happen with any format, but was especially obvious with ProRes, which has a dpb_size of 0.
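A minimal sketch of the idea (field and variable names assumed, not the exact vtdec code):
```c
/* Output loop: wait until there is room in the reorder queue, but skip the
 * wait while draining or flushing, since the drain triggered from this very
 * loop is what would otherwise empty the queue. */
while (gst_queue_array_get_length (self->reorder_queue) >= self->dpb_size
    && !self->is_draining && !self->is_flushing)
  g_cond_wait (&self->queue_cond, &self->queue_mutex);
```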
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6397>
Because ID3D12Device objects are singletons per adapter,
GstD3D12Device followed the same design, that is, it kept track
of global GstD3D12Device objects and reused them.
That means the ID3D12Device object could be released
when the corresponding GstD3D12Device was destroyed.
But external APIs such as NVENC do not seem to be happy
with a released ID3D12Device, although that could be a driver bug.
Let's hold already opened ID3D12Device objects permanently, without
releasing them, for now.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6395>
In order to simplify caps negotiations for clients and, notably, be more
compatible with va* decoders.
Crucially this allows clients to know ahead of time whether buffers will
actually be DMABufs.
Similar to GstVaBaseDec we only announce system memory caps if the peer
has ANY caps. Furthermore, and again like the va decoders, we fail in
`decide_allocation()` if DMA_DRM caps are used without VideoMeta.
Apart from buggy peers this can happen e.g. when a peer with ANY caps
is used in combination with caps filters.
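A minimal sketch of such a `decide_allocation()` check, following the behaviour described above (object and variable names assumed):
```c
/* Reject DMA_DRM caps when downstream did not advertise GstVideoMeta
 * support in the allocation query. */
if (gst_video_is_dma_drm_caps (caps) &&
    !gst_query_find_allocation_meta (query, GST_VIDEO_META_API_TYPE, NULL)) {
  GST_ERROR_OBJECT (self,
      "DMA_DRM caps negotiated without VideoMeta support");
  return FALSE;
}
```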
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5890>
Most importantly, rely on the video info helpers instead of manually parsing
the caps, which will allow us to use additional helpers in the future.
While at it, tighten the check for supported formats - failing it
indicates a bug in caps negotiation - and make some style changes.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5890>
This ensures we don't create filter caps that are not supported by the
individual codec implementations, as well as that the resulting caps
have the required fields so they can be turned into a GstVideoFormat.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5890>
Do not chain up to the parent's GstBufferPool::start(), which would do
preallocation. We don't want buffers to be preallocated,
since there are various cases where the negotiated downstream buffer pool is
not used at all (e.g., zero-copy decoding, IPC elements).
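A minimal sketch of the pattern (class and function names hypothetical):
```c
/* Override GstBufferPool::start() and deliberately do not chain up, so the
 * default start() does not preallocate min_buffers; buffers are then only
 * allocated lazily through acquire_buffer(). */
static gboolean
gst_my_pool_start (GstBufferPool * pool)
{
  return TRUE;
}

static void
gst_my_pool_class_init (GstMyPoolClass * klass)
{
  GstBufferPoolClass *pool_class = GST_BUFFER_POOL_CLASS (klass);

  pool_class->start = gst_my_pool_start;
}
```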
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6326>
This fixes a crash in `gst_va_h264_enc_class_init` and `gst_va_h265_enc_class_init`
(and probably also in gst_va_av1_enc_class_init) when calling
`g_object_class_install_properties (object_class, n_props, properties);`
When rate_control_type is 0, the following code is executed:
```
} else {
n_props--;
properties[PROP_RATE_CONTROL] = NULL;
}
```
n_props initially has the value N_PROPERTIES, but PROP_RATE_CONTROL
is not the last element in the array, so this leaves a NULL hole in the
middle of the array and makes g_object_class_install_properties fail to
iterate over the properties array.
This applies the same fix to gstvah264enc.c, gstvah265enc.c and
gstvaav1enc.c.
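For illustration, one way to keep the array dense (not necessarily the exact change applied here; property names other than PROP_RATE_CONTROL are placeholders):
```c
/* g_object_class_install_properties() treats index 0 as unused and expects
 * every entry from 1 to n_props - 1 to be a valid GParamSpec, so an optional
 * property is easiest to drop when it sits at the end of the array. */
enum
{
  PROP_0,
  PROP_BITRATE,
  /* ... other always-available properties ... */
  PROP_RATE_CONTROL,            /* optional, deliberately kept last */
  N_PROPERTIES
};

if (rate_control_type == 0)
  n_props--;                    /* drop only the trailing entry */

g_object_class_install_properties (object_class, n_props, properties);
```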
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6319>
osxaudio has a few helper methods potentially useful in atdec (or a future atenc), like GStreamer -> CoreAudio
channel mapping. It doesn't make sense to duplicate them in applemedia, and atdec is the only audio-oriented
element there anyway.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6223>
Provide a clock from the source that is a monotonic system clock with
the rate corrected based on the measured and ideal capture rate of the
frames.
If this clock is selected as pipeline clock, then provide perfect
timestamps to downstream.
Otherwise, if the pipeline clock is the monotonic system clock, use the
internal clock for converting back to the monotonic system clock.
Otherwise, use the monotonic system clock time calculated in the above
case and convert that to the pipeline clock.
In all cases this will give a smoother time than the previous code,
which simply took the difference between the driver provided capture
time and the current real-time clock time, and applied that to the
current pipeline clock time.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6208>
Otherwise there's a small window between querying the state and doing
the transfer in which a frame could be dropped, and we would then output
the frame right after the dropped one as if it was the dropped frame.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6208>
In low delay B mode, a P frame is converted into a B frame with only forward
references. For example, one P frame may refer to P-1, P-2 and P-3 in
list0 and to P-3, P-2 and P-1 in list1.
So the number of entries in list0 and list1 does not reflect forward_num and
backward_num. VA-API does not expose the number of forward or backward references
so far, so we just conservatively consider backward_num to be 1.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6249>
In b_pyramid mode, B frames can be references, and prevPicOrderCntLsb can
be the POC of a B frame, which is smaller than that of the P frame. This can make
the POC diff bigger than MaxPicOrderCntLsb/2 and generate a wrong POC value.
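For reference, this is the standard POC MSB derivation (H.264 clause 8.3.1, paraphrased with illustrative variable names) that misbehaves once the LSB difference exceeds MaxPicOrderCntLsb / 2:
```c
/* A difference larger than max_poc_lsb / 2 is interpreted as an LSB
 * wrap-around and flips the MSB, yielding a wrong POC. */
if (poc_lsb < prev_poc_lsb && (prev_poc_lsb - poc_lsb) >= max_poc_lsb / 2)
  poc_msb = prev_poc_msb + max_poc_lsb;
else if (poc_lsb > prev_poc_lsb && (poc_lsb - prev_poc_lsb) > max_poc_lsb / 2)
  poc_msb = prev_poc_msb - max_poc_lsb;
else
  poc_msb = prev_poc_msb;
```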
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6249>
Return memory being released back to the queue even if the allocator is flushing,
in order to keep counting the number of outstanding memory objects.
Also, clear the queue if there is no outstanding memory object and the
allocator is flushing.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6240>
The unprepare method posts the WM_GST_D3D11_DESTROY_INTERNAL_WINDOW
command to the window queue and, from that moment on, considers
internal_hwnd to be released, so it sets it to null.
The problem is that right at that moment the window thread
might already be processing some other command, or another
command might already be in the queue.
In practice we hit a crash when WM_PAINT got processed in between
(unprepare had already finished but WM_GST_D3D11_DESTROY_INTERNAL_WINDOW
had not been handled yet).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6187>
When the conversion only changes the caps feature from memory:VAMemory to system
memory, it's possible to optimize by doing a pseudo pass-through, since the same
VA-backed buffers can be used as system memory buffers.
This change also mitigates #2940
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6174>
If the allocation query received from downstream doesn't handle GstVideoMeta but
requests the memory:DMABuf caps feature, it's incomplete, so we reject the
negotiation instead.
This applies to the base decoder, base transform and compositor.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6155>
This is a simplification of the venerable
gst_va_base_dec_get_preferred_format_and_caps_features() function, which
dates back to gstreamer-vaapi. It's used to select the format and the
caps feature to use when setting the output state. It was complex and hard to
follow. This refactor simplifies the algorithm a lot.
The first step is to remove _downstream_has_video_meta(), since most of the time
it will be called before caps negotiation, and allocation queries only make sense
after caps negotiation. It might work during renegotiation, but in that
case a caps feature change is uncommon. Better a simple and common approach.
Also, for performance, GQuarks are used instead of dealing with caps features as
strings.
The refactor works like this:
1. If the peer pad returns ANY caps, the chosen caps feature is system memory and
   a proper format is looked for in the allowed caps.
2. The allowed caps are traversed at most 3 times: once per valid caps
   feature. First VAMemory, later DMABuf, and last system memory. The first
   feature that matches the allowed caps is picked, and the first format matching
   the chroma is picked too.
Notice that, right now, when using playbin, videoconvert never returns ANY.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6154>
This reverts the questionable commit 009bc15f33,
which looks completely wrong.
The GstWasapi2RingBuffer:buffer_size variable is used to
calculate the available buffer size we can write
(i.e., available size = buffer_size - padding_size),
but that commit made the size exactly the same as the buffer period.
That can make this element believe the endpoint buffer is full at the
I/O event callback (when the padding size equals the buffer period),
even though it is not.
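A purely illustrative calculation of the failure mode (numbers made up):
```c
/* With buffer_size shrunk to one period, a padding report of one full
 * period makes the writable size zero, so the endpoint buffer looks full
 * even though it is not. */
guint32 buffer_size = 480;                  /* == one buffer period */
guint32 padding = 480;                      /* frames already queued */
guint32 can_write = buffer_size - padding;  /* 0 -> nothing is written */
```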
Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/2870
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6132>
The global semaphore was never closed/unlinked, causing permission
denied issue if the device is later used by another user. Properly
removing the semaphore when stopping the pipeline would still leave it
open in case of a crash.
With a GStreamer-specific name, it was also not preventing other apps from
accessing the device concurrently.
Finally, if the system has multiple cards, the lock should be per card
and not global (to be confirmed).
Fixes: #3283.
Sponsored-by: Netflix Inc.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6117>
According to a recommendation from MS, IDXGIOutputDuplication::ReleaseFrame()
should be called just before IDXGIOutputDuplication::AcquireNextFrame()
for performance reasons, so that the driver can accumulate dirty rects
and update the texture at once. But it seems to cause choppy output.
Release the acquired frame immediately once processing is done,
like the d3d11 implementation does.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6092>
Fence data could hold a GstD3D12Device directly or indirectly.
Then, if it holds the last refcount, the device object will
be released from the device object's internal thread,
which would then try to join its own thread.
Delegate the release to another global background thread to avoid
joining the thread from itself.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6042>
The output of the VP9 and AV1 encoders is a little different from the H264
and H265 encoders: it may contain repeated frames, so the number of output
frames may be bigger than the number of input frames. We need to call
finish_subframe() when some frame will be repeated later, so we need to
extend the current prepare_output() virtual function.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3015>
Explicitly call gst_vtenc_pause_output_loop when going PAUSED->READY to make sure GST_PAD_STREAM_LOCK is not taken.
Before this change, a deadlock would occur if the pipeline got stopped right after one output buffer was generated by vtenc.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5933>
Modify fix_output_format() in vpp to generate the caps directly from the
negotiated src caps, so that the correct DMA caps negotiation happens in
fix_output_format(). Thus we can remove the redundant negotiation done via
pad_accept_memory() in vpp.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5845>
The pool currently defaults to performing a layout transition to
VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, with some special exceptions for
video usages. This may not be a legal transition depending on the usage.
Provide an API to explicitly control the initial image layout.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5881>
When implementing NDK media support, it would be useful to also have the JNI
implementation in the same binary, as NDK media compatibility is lower.
As such, implement a rudimentary vtable system for gstamc-codec and
gstamc-format, and allow choosing the implementation at static_init()
time.
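A hypothetical sketch of what such a vtable could look like (the struct and its members are illustrative, not the actual gstamc API):
```c
/* static_init() fills this once with either the JNI- or the NDK-backed
 * callbacks; the rest of the code only goes through the vtable. */
typedef struct
{
  gboolean (*create) (GstAmcCodec * codec, const gchar * name, GError ** err);
  gboolean (*start)  (GstAmcCodec * codec, GError ** err);
  gboolean (*stop)   (GstAmcCodec * codec, GError ** err);
  void     (*free)   (GstAmcCodec * codec);
} GstAmcCodecVTable;

static const GstAmcCodecVTable *codec_vtable = NULL;
```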
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4115>
This allows the implementations to do custom logic under the hood. For
example, when the NDK implementation is added, the entrypoint can choose to
statically initialize either the NDK implementation or the JNI one.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4115>
With this patch, the caps are registered in the order of memory features:
VAMemory, DMABuf, then raw caps on the Linux path, and D3D11Memory then
raw caps on the Windows path. This helps to prioritize video memory for all
msdk elements when doing negotiation.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5898>
Fix the debug layer report below:
ID3D12Device::CreateCommittedResource: Ignoring InitialState D3D12_RESOURCE_STATE_COPY_DEST.
Buffers are effectively created in state D3D12_RESOURCE_STATE_COMMON.
Buffer resource will be automatically promoted to D3D12_RESOURCE_STATE_COPY_DEST
at the very first COPY operation time.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5895>
Since the DXGI desktop duplication API does not work with a Direct3D12 device,
this element will use a Direct3D11 device to acquire frames.
Other rendering operations (e.g., texture copy, render pipeline) will then
happen using the Direct3D12 API.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5883>
In case of a tier 1 decoder, always use reference-only pictures to avoid
the fixed-size pool limitation and output decoded pictures without a
copy even for negative rates. Also do not use the copy queue for GPU to GPU
copies: the copy queue is specialized for upload/download and may occupy
PCIe bandwidth. Use the direct queue as recommended by vendors.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5877>
On a flush event, the baseclass will discard all pictures from the DPB,
but there can still be in-flight commands that have not finished yet.
Use our command queue, allocator and fence data helper objects
to keep resources available during command execution.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5877>
Shader-visible descriptors occupy GPU resources and there are hardware
limits. Thus, in order to minimize the number of shader-visible heaps,
only a non-shader-visible (staging) descriptor heap will be held by d3d12memory.
The converter will then copy the staging descriptors to a shader-visible
descriptor heap per draw.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5875>
Zero initialization has overhead and it's not required in most cases,
except for textures. Use the CREATE_NOT_ZEROED flag
for buffer resources, or when a texture will be rendered without any
prior read operation.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5875>
Conversion happens when the constructed command list is executed,
not by the converter element itself. Thus this object should not map the output
buffer with the write flag, which would result in an error if multiple threads
are building commands for the same output target frame.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5875>
Add a macro which converts a picture frame number to a suitable timestamp in
nanoseconds for use in the V4L2 VB2 buffer lookup. Since multiple codecs do
the same operation and almost all got it wrong, do it in one place so it
can be fixed in one place again, if needed.
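A minimal sketch of such a macro (the name is hypothetical; the key point is the 64-bit arithmetic):
```c
/* Convert a picture frame number into the nanosecond timestamp used as the
 * V4L2 VB2 lookup cookie, in 64 bits so large frame numbers do not wrap. */
#define GST_V4L2_FRAME_NUM_TO_TS(frame_num) \
  ((guint64) (frame_num) * 1000ULL)
```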
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5791>
The GstMpeg2Picture system_frame_number is guint32 and the constant 1000 is
guint32, while the GstV4l2CodecMpeg2Dec *_ref_ts multiplication result is u64.
```
u64 result = (u32)((u32)system_frame_number * (u32)1000);
```
behaves the same as
```
u64 result = (u32)(((u32)system_frame_number * (u32)1000) & 0xffffffff);
```
so in case `system_frame_number > 4294967295 / 1000`, the `result` will
wrap around. Since the `result` is really used as a cookie to look
up V4L2 buffers related to the currently decoded frame, this wraparound
leads to visible corruption during MPEG2 decoding. At 30 FPS this occurs
after circa 40 hours of playback.
Fix this by changing the 1000 from u32 to u64, i.e.:
```
u64 result = (u64)((u32)system_frame_number * (u64)1000ULL);
```
This way, the wraparound is prevented and the correct cookie is used.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5791>
The GstAV1Picture system_frame_number is guint32 and the constant 1000 is
guint32, while the GstV4l2CodecAV1Dec v4l2_av1_frame.*_frame_ts multiplication
result is u64.
```
u64 result = (u32)((u32)system_frame_number * (u32)1000);
```
behaves the same as
```
u64 result = (u32)(((u32)system_frame_number * (u32)1000) & 0xffffffff);
```
so in case `system_frame_number > 4294967295 / 1000`, the `result` will
wrap around. Since the `result` is really used as a cookie to look
up V4L2 buffers related to the currently decoded frame, this wraparound
leads to visible corruption during AV1 decoding. At 30 FPS this occurs
after circa 40 hours of playback.
Fix this by changing the 1000 from u32 to u64, i.e.:
```
u64 result = (u64)((u32)system_frame_number * (u64)1000ULL);
```
This way, the wraparound is prevented and the correct cookie is used.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5791>
The GstVp9Picture system_frame_number is guint32 and the constant 1000 is
guint32, while the GstV4l2CodecVp9Dec v4l2_vp9_frame.*_frame_ts multiplication
result is u64.
```
u64 result = (u32)((u32)system_frame_number * (u32)1000);
```
behaves the same as
```
u64 result = (u32)(((u32)system_frame_number * (u32)1000) & 0xffffffff);
```
so in case `system_frame_number > 4294967295 / 1000`, the `result` will
wrap around. Since the `result` is really used as a cookie to look
up V4L2 buffers related to the currently decoded frame, this wraparound
leads to visible corruption during VP9 decoding. At 30 FPS this occurs
after circa 40 hours of playback.
Fix this by changing the 1000 from u32 to u64, i.e.:
```
u64 result = (u64)((u32)system_frame_number * (u64)1000ULL);
```
This way, the wraparound is prevented and the correct cookie is used.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5791>
The GstVp8Picture system_frame_number is guint32 and the constant 1000 is
guint32, while the GstV4l2CodecVp8Dec v4l2_vp8_frame.*_frame_ts multiplication
result is u64.
```
u64 result = (u32)((u32)system_frame_number * (u32)1000);
```
behaves the same as
```
u64 result = (u32)(((u32)system_frame_number * (u32)1000) & 0xffffffff);
```
so in case `system_frame_number > 4294967295 / 1000`, the `result` will
wrap around. Since the `result` is really used as a cookie to look
up V4L2 buffers related to the currently decoded frame, this wraparound
leads to visible corruption during VP8 decoding. At 30 FPS this occurs
after circa 40 hours of playback.
Fix this by changing the 1000 from u32 to u64, i.e.:
```
u64 result = (u64)((u32)system_frame_number * (u64)1000ULL);
```
This way, the wraparound is prevented and the correct cookie is used.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5791>
The input of the vacompositor may be DMA buffers. In this case, the input
caps have format=DMA_DRM, which cannot be recognized by the base video
aggregator class' find_best_format() function. So we need to override the
update_caps() virtual function.
Also consider DMA kind caps in negotiated_src_caps() for the output.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5160>
To achieve maximum throughput, waiting on the command commit thread
is not ideal, and render-delay would introduce unwanted latency.
Best is to use a separate dedicated output thread that waits for
finished decoding jobs.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5812>
When creating a new VA pool, set the config size to zero because it's not used.
Also, given the potentially different sizes of software buffer pools and VA
buffer pools, this patch handles those potentially different values.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5805>
When creating a new VA pool, set the config size to zero because it's not used.
Also, given the potentially different sizes of software buffer pools and VA
buffer pools, this patch handles those potentially different values.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5805>
When creating a new VA pool, set the config size to zero because it's not used.
Also, given the potentially different sizes of software buffer pools and VA
buffer pools, this patch handles those potentially different values.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5805>
VA drivers allocate surfaces given their properties, so there's no need to
provide a buffer size to the VA pool.
Instead, the buffer size is provided by the driver, or the canonical size
is used for single-plane surfaces.
This patch removes the need to provide a size to the function
gst_va_pool_new_with_config() and adds a helper method to retrieve the surface
size, gst_va_pool_get_buffer_size(). Also change the callers accordingly.
Changes for custom VA pool creation will be addressed in the following commits.
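A sketch of the intended usage (the exact argument lists are assumed here, not taken from the actual API):
```c
/* The pool config no longer carries a meaningful size; the driver-chosen
 * surface size is queried back once the pool has been configured. */
guint size = 0;

config = gst_buffer_pool_get_config (pool);
gst_buffer_pool_config_set_params (config, caps, 0 /* size unused */,
    min_buffers, max_buffers);
gst_buffer_pool_set_config (pool, config);

if (!gst_va_pool_get_buffer_size (pool, &size))
  GST_WARNING ("could not retrieve the VA surface size");
```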
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5805>
In a multi-card scenario, the user can set the GST_MSDK_DRM_DEVICE env variable to
choose the device. This patch aligns vpl's queried results with the
user's choice by passing the deviceID when creating the mfx implementation.
Co-authored-by: Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5697>
Direct3D feature level 10 supported GPUs were released
more than 15 years ago, around the time when Windows
Vista / 7 were released. Also, our d3d11 plugin/library
does not support feature level 9.x very well anyway.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5709>
We were previously always querying index 0, and while the number of planes per
buffer will never change, it seems more proper to query the right buffer rather
than always the first one.
This was found while reading strace logs and wondering why the
V4L2_BUF_FLAG_MAPPED flag was present on all non-zero indices even though that
happened before VIDIOC_EXPBUF.
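A sketch of querying the actual buffer index (plain V4L2, variable names assumed):
```c
/* Query the plane count of the buffer actually being exported, instead of
 * hard-coding index 0. */
struct v4l2_buffer buf = { 0 };
struct v4l2_plane planes[VIDEO_MAX_PLANES] = { { 0 } };

buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
buf.memory = V4L2_MEMORY_MMAP;
buf.index = index;              /* was always 0 before */
buf.length = VIDEO_MAX_PLANES;
buf.m.planes = planes;

if (ioctl (fd, VIDIOC_QUERYBUF, &buf) < 0)
  return FALSE;

n_planes = buf.length;          /* number of planes reported by the driver */
```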
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5647>
Even if IDXGIOutput6 says the current display colorspace is HDR,
the texture captured via IDXGIOutputDuplication::AcquireNextFrame()
is a frame already converted by the OS, unless we use IDXGIOutput5::DuplicateOutput1()
with the DXGI_FORMAT_R16G16B16A16_FLOAT format so that the captured
frame is in scRGB color space. The application should then perform a
tonemap operation based on the reported display white level, color primaries, etc.
Since we don't have any tonemapping implementation, ignore the colorimetry
reported by IDXGIOutput6.
Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3128
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5671>
The d2d runtime seems to execute the pending GPU command list
when the DXGI ID2D1RenderTarget is being released, and that will invoke
d3d11 immediate context APIs. We should protect all rendering operations
and DXGI resources with a lock.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5659>
In the case of encoders and filters, when importing a DMABuf, use
GstVideoInfoDmaDrm to get the DRM fourcc and modifier.
In both cases, instead of keeping the original GstVideoInfoDmaDrm from the caps, the
GstVideoInfo part of the structure is converted into the canonical one, given the
format from the fourcc. It's kept this way to handle V4L2 linear DMABufs and
to avoid too many changes in the current code.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5264>
Instead of guessing the DRM format and modifier, pass a DRM video info to
gst_va_dmabuf_memories_setup().
It still checks for the DRM parameters in the DRM info; if they are not available,
as in the case of V4L2 buffers, the plain video info part is used.
This is an API break, but since the plugin is still in an unstable stage, it's
still allowed.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5264>
Should fix auto-generated follow-up sections like "Hierarchy" or
"Factory details" so they are listed under the element name in the
table of contents of the document, instead of under a stand-alone
"Duplex-Mode" section.
Also clean up some spurious colon suffixes after section names.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5625>
Now that nvidia-vaapi-driver has appeared and isn't yet supported by GstVA, we have
to add an allow-list of supported drivers.
This patch implements it, adding an environment variable to disable this driver
check: GST_VA_ALL_DRIVERS
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5616>
When decoding stream using hardware V4L2 decoder element, in any of the
currently supported formats, the decoding will fail once frame number
1000000 is reached. The reported error clearly indicates a wrap-around
occurred: instead of receiving decoded frame 1000000, frame 0 is received
from the hardware V4L2 decoder driver.
The problem is actually not in the driver itself, but rather in GStreamer,
which uses `struct v4l2_buffer` member `.timestamp` in a special way. The
timestamp of buffers with encoded data added to the SINK (input) queue of
the driver is copied by the driver into matching buffers with decoded data
added to the SOURCE (output) queue of the driver. In fact, the timestamp
is not a timestamp at all, but rather in this special case, only part of
it is used as an incrementing frame counter.
The `.timestamp` is of type `struct timeval`, which is defined in
`sys/time.h` [1]. Only the `tv_usec` member of this structure is used
for the incrementing frame counter. However, suseconds_t tv_usec [2]
may be limited to range [-1, 1000000]:
"
[XSI] The type suseconds_t shall be a signed integer type capable of
storing values at least in the range [-1, 1000000].
"
Therefore, once frame 1000000 is reached, a rollover occurs and decoding
fails.
Fix this by using both `struct timeval` members, `.tv_sec` and `.tv_usec`
with matching modular arithmetic, this way the failure would occur again
just short of 2^84 frames, which should be plenty.
[1] https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/sys_time.h.html
[2] https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/sys_types.h.html
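A sketch of the conversion described above (macro names hypothetical):
```c
/* Spread the frame counter over both timeval fields with matching modular
 * arithmetic; wrap-around then only happens near 2^84 frames instead of 10^6. */
#define GST_V4L2_FRAME_NUM_TO_TIMEVAL(frame_num, tv)  \
  G_STMT_START {                                      \
    (tv)->tv_sec  = (frame_num) / 1000000;            \
    (tv)->tv_usec = (frame_num) % 1000000;            \
  } G_STMT_END

#define GST_V4L2_TIMEVAL_TO_FRAME_NUM(tv) \
  ((guint64) (tv)->tv_sec * 1000000 + (guint64) (tv)->tv_usec)
```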
Below is a test case using a stateless hardware h264 decoder; the WARN/ERROR output
in the GStreamer log indicates a failure occurred. With this change, that
error no longer occurs and the WARN/ERROR messages are not present:
```
pc$ gst-launch-1.0 videotestsrc num-buffers=1001001 pattern=6 ! \
video/x-raw,width=16,height=16,format=I420 ! \
x264enc ! filesink location=/tmp/test.h264
dut$ GST_DEBUG="*:3" gst-launch-1.0 filesrc location=/tmp/test.h264 ! \
h264parse ! v4l2slh264dec ! fakesink
...
0:03:51.393677606 12111 0x370df400 WARN \
v4l2codecs-decoder gstv4l2decoder.c:1157:gst_v4l2_request_set_done:<v4l2decoder2> \
Requested frame 1000000, but driver returned frame 0.
0:03:51.394140597 12111 0x370df400 WARN \
v4l2codecs-decoder gstv4l2decoder.c:1157:gst_v4l2_request_set_done:<v4l2decoder2> \
Requested frame 1000001, but driver returned frame 1.
0:03:51.394425216 12111 0x370df400 WARN \
v4l2codecs-decoder gstv4l2decoder.c:1157:gst_v4l2_request_set_done:<v4l2decoder2> \
Requested frame 1000002, but driver returned frame 2.
0:03:51.394665211 12111 0x370df400 WARN \
v4l2codecs-decoder gstv4l2decoder.c:1157:gst_v4l2_request_set_done:<v4l2decoder2> \
Requested frame 1000003, but driver returned frame 3.
0:03:51.394785833 12111 0x370df400 WARN \
v4l2codecs-h264dec gstv4l2codech264dec.c:1059:gst_v4l2_codec_h264_dec_output_picture:<v4l2slh264dec0> \
error: Failed to decode frame 1000000
ERROR: from element /GstPipeline:pipeline0/v4l2slh264dec:v4l2slh264dec0: Failed to decode frame 1000000
```
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5598>
If users update geometry-related properties very frequently
for a stream to be animated, redrawing on every update
can make rendering choppy or become a performance bottleneck.
To address the issue, add a property to control whether the
scene is redrawn when geometry-related properties are updated.
Also, do not resize the swapchain on such property updates, since
re-allocating the backbuffer and the multi-sampled render target is
unnecessary in that case.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5575>
Other Windows applications allow window switching even when
an application window is in fullscreen mode. Also fix a
regression introduced in 15248d8b84,
which made the restored window always end up topmost
since we no longer call SetWindowPos() when restoring.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5574>
ISimpleAudioVolume controls the volume of the corresponding audio session,
and there would be only a single input/output audio session
in case of share mode, which means that it controls the audio volume of the
whole process. Instead, use the IAudioStreamVolume interface, which controls
the volume of the stream.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5549>
Ignore the alpha component of the source (mouse cursor texture)
when blending the alpha channel, otherwise the background area of the source
(which holds zeros) will be written to the render target. That would result
in a black rectangle if the output texture is converted to a premultiplied-alpha
texture.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5566>