The output of the VP9 and AV1 encoders is a little different from that
of the H264 and H265 encoders: it may contain repeat frames, so the
number of output frames may exceed the number of input frames. We need
to call finish_subframe() when a frame will be repeated later, and so
we need to extend the current prepare_output() virtual function.
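A minimal sketch of the idea (not the actual gstvabaseenc code; the
"frame_complete" flag stands in for whatever the extended
prepare_output() vfunc reports): when more output for the same input
frame will follow, the data is handed off with
gst_video_encoder_finish_subframe() instead of
gst_video_encoder_finish_frame().
```c
#include <gst/video/gstvideoencoder.h>

/* Hypothetical helper: "frame_complete" would be reported by the
 * extended prepare_output() vfunc. */
static GstFlowReturn
push_prepared_output (GstVideoEncoder * venc, GstVideoCodecFrame * frame,
    gboolean frame_complete)
{
  if (!frame_complete) {
    /* More output (e.g. a repeat frame) for this input will follow,
     * so don't complete the frame yet. */
    return gst_video_encoder_finish_subframe (venc, frame);
  }

  return gst_video_encoder_finish_frame (venc, frame);
}
```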
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3015>
A client may map dmabufs without intending to either read from or write
to the memory. One example is clients that want to use the
`gst_video_frame_map()` helper function.
Thus, in order to make buffers from `GstVaDmabufAllocator` conveniently
usable, ignore the modifier check if the client specified neither
`GST_MAP_READ` nor `GST_MAP_WRITE`.
Also skip the `va_sync_surface()` call in that case, as it's likely only
needed for CPU reads/writes.
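For illustration, a minimal sketch of the client-side pattern this
enables: mapping with no read/write flags just to inspect the layout
(the buffer and video info are assumed to come from the caller).
```c
#include <gst/video/video.h>

static gboolean
inspect_layout (GstBuffer * buffer, GstVideoInfo * info)
{
  GstVideoFrame frame;

  /* Neither GST_MAP_READ nor GST_MAP_WRITE: we only want the layout */
  if (!gst_video_frame_map (&frame, info, buffer, (GstMapFlags) 0))
    return FALSE;

  GST_INFO ("plane 0 stride: %d", GST_VIDEO_FRAME_PLANE_STRIDE (&frame, 0));

  gst_video_frame_unmap (&frame);
  return TRUE;
}
```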
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5965>
clang does not like a designated array initializer without the `=` sign
in it. This is a GNU extension, and adding the sign makes the
initializer standard C. This fixes the following two warnings:
```
../subprojects/gst-plugins-bad/gst-libs/gst/vulkan/gstvkvideo-private.c:32:40:
warning: use of GNU 'missing =' extension in designator [-Wgnu-designator]
[GST_VK_VIDEO_EXTENSION_DECODE_H264] {
^
=
../subprojects/gst-plugins-bad/gst-libs/gst/vulkan/gstvkvideo-private.c:36:40:
warning: use of GNU 'missing =' extension in designator [-Wgnu-designator]
[GST_VK_VIDEO_EXTENSION_DECODE_H265] {
^
=
```
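For reference, a generic example of the two designator spellings (not
the actual gstvkvideo-private.c table):
```c
/* Generic example of the designator forms */
enum { DECODE_H264, DECODE_H265, DECODE_LAST };

typedef struct { const char *name; } VideoExt;

/* GNU extension clang warns about: no '=' after the designator
 *   [DECODE_H264] { "h264" },
 * Standard C99 designated initializer: */
static const VideoExt ext_table[DECODE_LAST] = {
  [DECODE_H264] = { "h264" },
  [DECODE_H265] = { "h265" },
};
```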
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5996>
Remove the optional sprop-stereo and sprop-maxcapturerate fields from
the Opus remote offer caps before intersecting with the local codec
preferences. According to
https://datatracker.ietf.org/doc/html/rfc7587#section-7.1 those fields
are declarative, sender-only information and don't affect
interoperability.
Fixes cases where the webrtc media will end up receive-only if the
local side wants to send stereo but the remote is sending mono, or
vice versa.
There may be other fields in other codecs, so the implementation
anticipates needing to add further fields and codecs in the future.
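A minimal sketch of the filtering step (field names per RFC 7587; the
caps are assumed to be writable):
```c
#include <gst/gst.h>

static void
strip_declarative_opus_fields (GstCaps * remote_caps)
{
  guint i;

  for (i = 0; i < gst_caps_get_size (remote_caps); i++) {
    GstStructure *s = gst_caps_get_structure (remote_caps, i);

    /* Declarative, sender-only parameters: safe to drop before the
     * intersection with the local codec preferences */
    gst_structure_remove_fields (s, "sprop-stereo", "sprop-maxcapturerate",
        NULL);
  }
}
```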
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5993>
Post a bus message explaining that input buffers must
have timestamps and return GST_FLOW_ERROR, instead of
a confusing NOT-NEGOTIATED error.
Also remove an errant buffer unref in the error handling
that would lead to crashes afterwards.
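A minimal sketch of the resulting error path (assuming "self" is the
element handling the buffer in its chain function):
```c
#include <gst/gst.h>

static GstFlowReturn
check_input_timestamp (GstElement * self, GstBuffer * buffer)
{
  if (!GST_BUFFER_PTS_IS_VALID (buffer)) {
    /* Posts an ERROR message on the bus with a human-readable
     * explanation instead of a confusing NOT-NEGOTIATED return */
    GST_ELEMENT_ERROR (self, STREAM, FORMAT,
        ("Input buffers must have valid timestamps"), (NULL));
    return GST_FLOW_ERROR;
  }

  return GST_FLOW_OK;
}
```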
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5935>
Add a finalize method and release locks and other resources there,
instead of in the dispose method. Dispose may be called multiple times,
at any time, and should only safely release references to other
objects that might reference it back.
In this case, timecodestamper would later crash in the element's
dispose method, when gst_timecodestamper_release_pad() tried to take
the already-freed mutex.
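A generic sketch of the dispose/finalize split (illustrative, not the
actual timecodestamper code; "MyElement" is a made-up type):
```c
#include <gst/gst.h>

typedef struct
{
  GstElement parent;
  GMutex mutex;
  GstClock *clock;              /* example reference to another object */
} MyElement;

static void
my_element_dispose (GObject * object)
{
  MyElement *self = (MyElement *) object;

  /* May run multiple times; only drop object references here */
  gst_clear_object (&self->clock);

  /* ... chain up to the parent dispose ... */
}

static void
my_element_finalize (GObject * object)
{
  MyElement *self = (MyElement *) object;

  /* Runs exactly once, when nothing can touch the object anymore */
  g_mutex_clear (&self->mutex);

  /* ... chain up to the parent finalize ... */
}
```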
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5935>
This protocol does what it says on the box, avoiding the need for a 1x1
wl_shm buffer.
A wayland-protocols wrap has been added for users who do not have v1.26
available.
This commit was partly authored by Robert Mader.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2662>
Explicitly call gst_vtenc_pause_output_loop when going PAUSED->READY to make sure GST_PAD_STREAM_LOCK is not taken.
Before this change, a deadlock would occur if the pipeline got stopped right after one output buffer was generated by vtenc.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5933>
Modify fix_output_format() in vpp to generate the output caps directly
from the negotiated src caps, so that the DMA caps negotiation is done
correctly inside fix_output_format(). This lets us remove the redundant
negotiation done via pad_accept_memory() in vpp.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5845>
The pool currently defaults to performing a layout transition to
VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, with some special exceptions for
video usages. This may not be a legal transition depending on the usage.
Provide an API to explicitly control the initial image layout.
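A rough sketch of how a caller might request a specific initial layout
when configuring the pool; the setter name and its parameter list below
are assumptions, not necessarily the exact API added here:
```c
#include <gst/vulkan/vulkan.h>

/* Assumed setter: the actual function name/signature may differ */
static void
configure_pool (GstBufferPool * pool, GstCaps * caps, guint size)
{
  GstStructure *config = gst_buffer_pool_get_config (pool);

  gst_buffer_pool_config_set_params (config, caps, size, 2, 0);
  /* Keep images in VK_IMAGE_LAYOUT_UNDEFINED instead of the default
   * transfer-dst transition (assumed parameters: usage, memory
   * properties, initial layout, initial access) */
  gst_vulkan_image_buffer_pool_config_set_allocation_params (config,
      VK_IMAGE_USAGE_SAMPLED_BIT, VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT,
      VK_IMAGE_LAYOUT_UNDEFINED, 0);
  gst_buffer_pool_set_config (pool, config);
}
```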
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5881>
When implementing NDK media support, it would be useful to also have a
JNI implementation in the same binary, as NDK media compatibility is
lower. As such, implement a rudimentary vtable system for gstamc-codec
and gstamc-format, and allow choosing the implementation at
static_init() time.
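A generic sketch of the vtable idea (illustrative only; the real
gstamc-codec interface has more entry points and different names):
```c
#include <glib.h>

/* Illustrative vtable: each backend (JNI or NDK) fills in its own
 * function pointers */
typedef struct
{
  gpointer (*create) (const gchar * name, GError ** err);
  void (*release) (gpointer codec);
} GstAmcCodecVTable;

/* Hypothetical backend tables defined by the JNI and NDK code */
extern const GstAmcCodecVTable jni_codec_vtable;
extern const GstAmcCodecVTable ndk_codec_vtable;

static const GstAmcCodecVTable *codec_vtable;

/* Pick the implementation once, at static_init() time */
static gboolean
gst_amc_codec_static_init (gboolean use_ndk)
{
  codec_vtable = use_ndk ? &ndk_codec_vtable : &jni_codec_vtable;
  return codec_vtable != NULL;
}
```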
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4115>
This allows the implementations to do custom logic under the hood. For
example, when the NDK implementation is added, the entry point can
choose to statically initialize either the NDK implementation or the
JNI one.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4115>
With this patch, the caps are registered in the order of memory
features: VAMemory, DMABuf, then raw caps on the Linux path, and
D3D11Memory then raw caps on the Windows path. This helps prioritize
video memory for all msdk elements during negotiation.
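For illustration, the resulting feature ordering on the Linux path
expressed as a caps string (the first structure is preferred during
negotiation):
```c
#include <gst/gst.h>

static GstCaps *
make_ordered_src_caps (void)
{
  /* VAMemory first, then DMABuf, then system memory */
  return gst_caps_from_string (
      "video/x-raw(memory:VAMemory); "
      "video/x-raw(memory:DMABuf); "
      "video/x-raw");
}
```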
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5898>
Fix the following debug layer report:
ID3D12Device::CreateCommittedResource: Ignoring InitialState D3D12_RESOURCE_STATE_COPY_DEST.
Buffers are effectively created in state D3D12_RESOURCE_STATE_COMMON.
Buffer resource will be automatically promoted to D3D12_RESOURCE_STATE_COPY_DEST
at the very first COPY operation time.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5895>
Since the DXGI desktop duplication API does not work with a Direct3D12
device, this element will use a Direct3D11 device to acquire frames.
Other rendering operations (e.g., texture copy, render pipeline) will
then happen using the Direct3D12 API.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5883>
- Fix skipsize on _update_backlog() failure.
- Make AU completion detection more robust by using the AUD when present. If
  we've received an AUD, we overwrite the first-VCL-NAL detection when its
  result was negative: the VCL NAL following an AUD is the first VCL of the
  next AU.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5862>
In the case of a tier 1 decoder, always use reference-only pictures to
avoid the fixed-size pool limitation, and output decoded pictures
without a copy even for negative rates. Also, do not use the copy queue
for GPU-to-GPU copies: the copy queue is specialized for
upload/download and may occupy PCIe bandwidth. Use the direct queue as
recommended by vendors.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5877>
On a flush event, the base class will discard all pictures from the
DPB, but there can still be in-flight commands that have not finished
yet. Use our command queue, allocator and fence data helper objects
to keep resources available during command execution.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5877>
Instead of only supporting writing SPU data directly to YUV frames,
render the SPU data to an intermediate AYUV overlay buffer. The overlay
data is then blended onto the video frame.
For the PGS format, the overlay buffer size is set to the size of the
Composition Window, and its position in the overlay composition is set
to the window position. The objects to render are now cropped when the
cropping flag is set.
For the Vobsub format, the overlay buffer size is set to the size of the
Display Area.
Once rendered, the overlay composition rectangle is now moved and scaled
to fit the video output size, to avoid clipping.
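A minimal sketch of the overlay path (illustrative, not the actual
dvdspu code): the rendered AYUV pixels are wrapped in an overlay
rectangle placed at the window position and blended onto the output
frame.
```c
#include <gst/video/video.h>

static gboolean
blend_spu_overlay (GstVideoFrame * out_frame, GstBuffer * ayuv_pixels,
    gint win_x, gint win_y, guint win_w, guint win_h)
{
  GstVideoOverlayRectangle *rect;
  GstVideoOverlayComposition *comp;
  gboolean ret;

  /* The pixel buffer needs a GstVideoMeta describing its AYUV layout */
  gst_buffer_add_video_meta (ayuv_pixels, GST_VIDEO_FRAME_FLAG_NONE,
      GST_VIDEO_FORMAT_AYUV, win_w, win_h);

  rect = gst_video_overlay_rectangle_new_raw (ayuv_pixels,
      win_x, win_y, win_w, win_h, GST_VIDEO_OVERLAY_FORMAT_FLAG_NONE);
  comp = gst_video_overlay_composition_new (rect);
  gst_video_overlay_rectangle_unref (rect);

  ret = gst_video_overlay_composition_blend (comp, out_frame);
  gst_video_overlay_composition_unref (comp);

  return ret;
}
```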
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5827>
Shader-visible descriptors occupy GPU resources and there are hardware
limits. Thus, in order to minimize the number of shader-visible heaps,
only a non-shader-visible (staging) descriptor heap will be held by
d3d12memory. The converter will then copy the staging descriptors to a
shader-visible descriptor heap per draw.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5875>
Zero initialization has overhead and is not required in most cases,
except for textures. Use the CREATE_NOT_ZEROED flag in the case of
buffer resources, or if a texture will be rendered to without any
prior read operation.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5875>
The conversion will happen when the constructed command list is
executed, not by the converter element itself. Thus this object should
not map the output buffer with the write flag, which would result in an
error if multiple threads are building commands for the same output
target frame.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5875>