Renamed the first member of GstVaMemory from parent to mem in order to
avoid confusion with GstMemory's parent.
When freeing the structure, the memory's parent is checked in order to
decide whether the surface has to be destroyed or not, since only the
parent has to destroy it.
Removed GST_MEMORY_FLAG_NO_SHARE in memory initialization, since it is
deprecated.
Implemented the allocator's share virtual method, which creates a new
shallow GstVaMemory structure based on the passed one, which becomes
its parent.
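A minimal sketch of what that share vmethod (GstAllocator::mem_share)
could look like, assuming a simplified GstVaMemory layout; field and
helper names here are illustrative rather than the exact upstream code:

  #include <gst/gst.h>
  #include <va/va.h>

  /* assumed minimal layout; the real structure carries more state */
  typedef struct
  {
    GstMemory mem;              /* renamed from "parent" to avoid confusion */
    VASurfaceID surface;
  } GstVaMemory;

  static GstMemory *
  _va_share (GstMemory * mem, gssize offset, gssize size)
  {
    GstVaMemory *vamem = (GstVaMemory *) mem;
    GstVaMemory *sub;
    GstMemory *parent;

    /* find the root parent so the surface is destroyed only once */
    if ((parent = vamem->mem.parent) == NULL)
      parent = mem;

    if (size == -1)
      size = mem->maxsize - offset;

    /* shallow copy: the new memory shares the same VASurfaceID and points
     * to its parent, so freeing it won't destroy the surface */
    sub = g_slice_new0 (GstVaMemory);
    gst_memory_init (GST_MEMORY_CAST (sub),
        GST_MINI_OBJECT_FLAGS (parent) | GST_MINI_OBJECT_FLAG_LOCK_READONLY,
        mem->allocator, parent, mem->maxsize, mem->align,
        mem->offset + offset, size);
    sub->surface = vamem->surface;

    return GST_MEMORY_CAST (sub);
  }

The function would then be set as the allocator's mem_share function
pointer when the allocator is initialized.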
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1626>
A staging texture is used for memory transfer between system and GPU
memory. Apart from the d3d11{upload,download} elements, however, such
transfers should happen very rarely.
Before this commit, d3d11bufferpool allocated at least one staging
texture in order to calculate the CPU-accessible memory size, and kept
it unconditionally for later use, which increases system memory usage.
Although the GstD3D11Memory object supports CPU access, most memory
transfers happen in the d3d11{upload,download} elements.
With this commit, the initial staging texture is freed immediately
once the CPU-accessible memory size has been calculated.
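A rough sketch of the idea, using a temporary staging texture only to
measure the CPU-accessible size and releasing it right away (a
hypothetical helper around the D3D11 C API, not the exact gstd3d11
code; planar formats would need the extra chroma rows added):

  #define COBJMACROS
  #include <d3d11.h>
  #include <glib.h>

  static gsize
  calculate_staging_size (ID3D11Device * device,
      ID3D11DeviceContext * context, const D3D11_TEXTURE2D_DESC * ref)
  {
    D3D11_TEXTURE2D_DESC desc = *ref;
    ID3D11Texture2D *staging = NULL;
    D3D11_MAPPED_SUBRESOURCE map;
    gsize size = 0;

    desc.Usage = D3D11_USAGE_STAGING;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ | D3D11_CPU_ACCESS_WRITE;
    desc.BindFlags = 0;
    desc.MiscFlags = 0;

    if (FAILED (ID3D11Device_CreateTexture2D (device, &desc, NULL, &staging)))
      return 0;

    if (SUCCEEDED (ID3D11DeviceContext_Map (context,
            (ID3D11Resource *) staging, 0, D3D11_MAP_READ, 0, &map))) {
      /* the mapped RowPitch is what defines the CPU-visible size */
      size = (gsize) map.RowPitch * desc.Height;
      ID3D11DeviceContext_Unmap (context, (ID3D11Resource *) staging, 0);
    }

    /* free the temporary staging texture immediately; it can be re-created
     * on demand when CPU access is actually requested */
    ID3D11Texture2D_Release (staging);

    return size;
  }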
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1627>
These functions were repeated in the different implemented elements.
This patch centralizes them.
The side effect is that the dmabuf memory type is no longer checked
against the current VAContext. But assuming that dmabuf is a
consequence of caps negotiation from dynamically generated caps
templates, where the context's memory types are already validated,
there's no need to validate them twice.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1644>
An oddity of the WASAPI loopback feature is that the capture client
will not produce any data if no sound is being output to the
corresponding render client. In other words, if there's no sound to
render, the capture task will stall. One option to solve this is to
add a timeout so the capture thread wakes up if no data arrives within
a given interval, but that seems to be glitch-prone.
Another approach is to keep pushing silence into the render client so
that the capture client can keep capturing data (even if it's just
silence).
This patch chooses the latter because it's more straightforward and
more likely to produce glitch-free sound than the former approach.
A bonus of this approach is that loopback capture on Windows 7/8 will
also work with this patch. Note that there's an OS bug prior to
Windows 10 when a loopback capture client is running in event-driven
mode; to work around it, event signalling has to be handled manually
so the read thread wakes up.
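A hedged sketch of the silence feed, written against the plain WASAPI
C interface (simplified; the real element drives this from its own
thread loop and with proper error recovery):

  #define COBJMACROS
  #include <windows.h>
  #include <audioclient.h>

  static HRESULT
  push_silence (IAudioClient * client, IAudioRenderClient * render)
  {
    UINT32 buffer_size, padding, can_write;
    BYTE *data;
    HRESULT hr;

    hr = IAudioClient_GetBufferSize (client, &buffer_size);
    if (FAILED (hr))
      return hr;

    hr = IAudioClient_GetCurrentPadding (client, &padding);
    if (FAILED (hr))
      return hr;

    can_write = buffer_size - padding;
    if (can_write == 0)
      return S_OK;

    hr = IAudioRenderClient_GetBuffer (render, can_write, &data);
    if (FAILED (hr))
      return hr;

    /* AUDCLNT_BUFFERFLAGS_SILENT tells WASAPI to render silence, so the
     * buffer doesn't even need to be cleared manually */
    return IAudioRenderClient_ReleaseBuffer (render, can_write,
        AUDCLNT_BUFFERFLAGS_SILENT);
  }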
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1588>
Add a global mutex for exclusive access to shared stream buffers,
such as DMABufs or VASurfaces after a tee:
LIBVA_DRIVER_NAME=iHD \
gst-launch-1.0 v4l2src ! tee name=t t. ! queue ! \
vapostproc skin-tone=9 ! xvimagesink \
t. ! queue ! vapostproc ! xvimagesink
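A minimal sketch of the locking, assuming one process-wide lock
serializing work on surfaces that may be shared across tee branches
(names are illustrative):

  #include <glib.h>

  G_LOCK_DEFINE_STATIC (shared_surface);

  static void
  process_shared_surface (gpointer va_display, guint surface)
  {
    G_LOCK (shared_surface);
    /* the vaBeginPicture()/vaRenderPicture()/vaEndPicture() sequence on
     * the shared VASurfaceID (or DMABuf) goes here */
    G_UNLOCK (shared_surface);
  }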
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1529>
This function takes an array of DMABuf GstMemory and an array of fds,
and creates a VASurfaceID with those fds. Later that VASurfaceID is
attached to each DMABuf through GstVaBufferSurface.
In order to free the surface, GstVaBufferSurface now has a
GstVaDisplay member, and _buffer_surface_unref() was added.
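Roughly, the shared holder could look like this (hedged sketch;
GstVaDisplay, the display accessor and the quark name are assumed from
the plugin's own headers, and names are illustrative):

  #include <gst/gst.h>
  #include <va/va.h>

  typedef struct
  {
    GstVaDisplay *display;      /* keeps the display alive for destruction */
    VASurfaceID surface;
    gint ref_count;
  } BufferSurface;

  static void
  buffer_surface_unref (gpointer data)
  {
    BufferSurface *buf = data;

    if (g_atomic_int_dec_and_test (&buf->ref_count)) {
      /* last DMABuf holding the surface: now it can be destroyed */
      vaDestroySurfaces (gst_va_display_get_va_dpy (buf->display),
          &buf->surface, 1);
      gst_object_unref (buf->display);
      g_slice_free (BufferSurface, buf);
    }
  }

  /* attach the same holder to every GstMemory wrapping a DMABuf fd */
  static void
  attach_surface (GstMemory ** mems, guint n_mems, BufferSurface * buf)
  {
    guint i;

    for (i = 0; i < n_mems; i++) {
      g_atomic_int_inc (&buf->ref_count);
      gst_mini_object_set_qdata (GST_MINI_OBJECT_CAST (mems[i]),
          g_quark_from_static_string ("GstVaBufferSurface"), buf,
          buffer_surface_unref);
    }
  }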
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1529>
In VPP there are surfaces that don't support the 4:2:2 fourccs but do
support the chroma. So this patch gives the driver that opportunity.
This patch also simplifies
gst_va_video_surface_format_from_image_format() to just an iterator
over the surfaces' available formats.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1529>
Add a new parameter to _create_surfaces(): a pointer to
VASurfaceAttribExternalBuffers.
If it's defined, the memory type is changed to DRM_PRIME and a new
VASurfaceAttribExternalBufferDescriptor item is added to the
VASurfaceAttrib array.
Also, the VASurfaceAttrib for the pixel format is not mandatory
anymore: if the fourcc parameter is 0, it is not added to the array,
relying on the chroma instead. This is useful when creating surfaces
for uploading or downloading images.
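A sketch of how that VASurfaceAttrib array can be assembled under
those rules (illustrative helper, not the exact upstream
_create_surfaces()):

  #include <va/va.h>
  #include <va/va_drmcommon.h>
  #include <glib.h>

  static guint
  fill_surface_attribs (VASurfaceAttrib attrs[3], guint fourcc,
      VASurfaceAttribExternalBuffers * ext_buf)
  {
    guint num = 0;

    /* pixel format is optional: with fourcc == 0 rely on the chroma only */
    if (fourcc != 0) {
      attrs[num].type = VASurfaceAttribPixelFormat;
      attrs[num].flags = VA_SURFACE_ATTRIB_SETTABLE;
      attrs[num].value.type = VAGenericValueTypeInteger;
      attrs[num].value.value.i = fourcc;
      num++;
    }

    attrs[num].type = VASurfaceAttribMemoryType;
    attrs[num].flags = VA_SURFACE_ATTRIB_SETTABLE;
    attrs[num].value.type = VAGenericValueTypeInteger;
    attrs[num].value.value.i = ext_buf ?
        VA_SURFACE_ATTRIB_MEM_TYPE_DRM_PRIME : VA_SURFACE_ATTRIB_MEM_TYPE_VA;
    num++;

    /* external buffer descriptor only when importing DMABufs */
    if (ext_buf) {
      attrs[num].type = VASurfaceAttribExternalBufferDescriptor;
      attrs[num].flags = VA_SURFACE_ATTRIB_SETTABLE;
      attrs[num].value.type = VAGenericValueTypePointer;
      attrs[num].value.value.p = ext_buf;
      num++;
    }

    return num;                 /* number of attributes actually filled */
  }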
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1529>
The width/height from the video meta can be the padded width and
height. But when sourcing from a padded buffer, we only want to use
the valid pixels. This rectangle comes from the crop meta, otherwise
it is deduced from the caps. The width and height from the caps are
saved in the parent class; use these instead of the GstVideoInfo when
setting the src rectangle.
This fixes an issue with 1080p video displaying repeated or green
content in the padded bottom 8 lines (seen with v4l2codecs).
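Schematically, the selection of the source rectangle looks like this
(hedged sketch with illustrative names):

  #include <gst/video/video.h>

  static void
  set_src_rectangle (GstBuffer * buffer, gint caps_width, gint caps_height,
      gint * x, gint * y, gint * w, gint * h)
  {
    GstVideoCropMeta *crop = gst_buffer_get_video_crop_meta (buffer);

    if (crop) {
      /* padded buffer: only the cropped region holds valid pixels */
      *x = crop->x;
      *y = crop->y;
      *w = crop->width;
      *h = crop->height;
    } else {
      /* no crop meta: fall back to the negotiated caps, not the possibly
       * padded GstVideoInfo from the video meta */
      *x = 0;
      *y = 0;
      *w = caps_width;
      *h = caps_height;
    }
  }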
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1580>
Note that the newly added formats (YUY2, UYVY, and VYUY) are not
supported render-target-view formats, so they can only be inputs of
d3d11convert or d3d11videosink. Another note is that YUY2 is a very
common format for hardware encoders/decoders on Windows.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1581>
Add custom IMFMediaBuffer and IMF2DBuffer implementations in order to
keep track of the lifecycle of the Media Foundation memory object.
With this new implementation, we can pass the raw memory of the
upstream buffer to Media Foundation without a copy.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1518>
The min latency is calculated from the maximum number of frames that
can precede any frame, if available and lower than the maximum number
of frames in the DPB.
The max latency is calculated from the maximum number of frames in the
DPB.
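A hedged sketch of how those frame counts translate into the latency
report, assuming the framerate from the negotiated caps (names
illustrative):

  #include <gst/video/video.h>

  static void
  report_latency (GstVideoDecoder * decoder, guint reorder_frames,
      guint max_dpb_size, gint fps_n, gint fps_d)
  {
    GstClockTime min_latency, max_latency;

    if (fps_n <= 0 || fps_d <= 0)
      return;                   /* unknown framerate, skip reporting */

    /* frames that may precede any frame in output order, capped by the DPB */
    reorder_frames = MIN (reorder_frames, max_dpb_size);

    min_latency = gst_util_uint64_scale (reorder_frames * GST_SECOND,
        fps_d, fps_n);
    max_latency = gst_util_uint64_scale (max_dpb_size * GST_SECOND,
        fps_d, fps_n);

    gst_video_decoder_set_latency (decoder, min_latency, max_latency);
  }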
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1500>
For the allocator to create surfaces with the correct chroma and
fourcc, it should use a surface format, not necessarily the negotiated
format.
Instead of the previous arbitrary list of extra formats, the decoder
extracts the valid pixel formats from the given VA config, and passes
that list to the allocator, which stores it (full transfer).
Then, when the allocator allocates a new surface, it looks for a
surface color format that is chroma-compatible with the negotiated
image color format.
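A sketch of that lookup; gst_va_chroma_from_video_format() stands for
an assumed helper mapping a format to its VA chroma (RT format), and
the code is illustrative rather than the exact upstream allocator:

  #include <gst/video/video.h>

  /* assumed helper: maps a GstVideoFormat to its VA RT format (chroma) */
  guint gst_va_chroma_from_video_format (GstVideoFormat format);

  static GstVideoFormat
  find_surface_format (GArray * surface_formats, GstVideoFormat image_format)
  {
    guint image_chroma = gst_va_chroma_from_video_format (image_format);
    guint i;

    /* first pass: prefer the exact negotiated format if it's in the list */
    for (i = 0; i < surface_formats->len; i++) {
      if (g_array_index (surface_formats, GstVideoFormat, i) == image_format)
        return image_format;
    }

    /* second pass: any surface format with the same chroma will do */
    for (i = 0; i < surface_formats->len; i++) {
      GstVideoFormat fmt = g_array_index (surface_formats, GstVideoFormat, i);

      if (gst_va_chroma_from_video_format (fmt) == image_chroma)
        return fmt;
    }

    return GST_VIDEO_FORMAT_UNKNOWN;
  }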
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1483>
Instead of adding an ad-hoc list of formats for raw caps (I420 and
YV12), the display queries the available image formats, and we assume
that the driver can download frames in any available format with the
same chroma as the config's surfaces.
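The query itself is plain VA-API; a hedged sketch of fetching the
driver's image formats once for the display:

  #include <va/va.h>
  #include <glib.h>

  static VAImageFormat *
  query_image_formats (VADisplay dpy, int *num_formats)
  {
    VAImageFormat *formats;
    int max = vaMaxNumImageFormats (dpy);

    formats = g_new0 (VAImageFormat, max);
    if (vaQueryImageFormats (dpy, formats, num_formats) != VA_STATUS_SUCCESS) {
      g_free (formats);
      *num_formats = 0;
      return NULL;
    }

    /* caller keeps this list and matches entries by chroma later */
    return formats;
  }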
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1483>
Some devices (e.g., Surface Book 2, Surface Pro X) expose both the
MediaStreamType_VideoPreview and MediaStreamType_VideoRecord types for
a logical device, and for some reason MediaStreamType_VideoPreview
seems to be selected between them while initializing the device.
But I cannot find any documentation for the decision rule.
To be safe, we select the formats common to both.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1478>
Microsoft defines two I420 formats: one is I420 and the other is IYUV
(both are the same, just with different names).
Since both are converted to GST_VIDEO_FORMAT_I420, we should check
both the I420 and IYUV subtypes when converting from GstVideoFormat to
Microsoft's format.
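A minimal sketch of that check (hedged; the real mapping table covers
many more formats):

  #include <mfapi.h>
  #include <glib.h>

  static gboolean
  subtype_is_i420 (const GUID * subtype)
  {
    /* MFVideoFormat_I420 and MFVideoFormat_IYUV both map to
     * GST_VIDEO_FORMAT_I420 */
    return IsEqualGUID (subtype, &MFVideoFormat_I420) ||
        IsEqualGUID (subtype, &MFVideoFormat_IYUV);
  }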
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1478>
In the UWP case, the documentation from MS says that the
ActivateAudioInterfaceAsync() method should be called from a UI
thread, and the resulting callback might not happen until user
interaction has taken place.
So we cannot wait for the activation result in the constructed()
method, and therefore gst_wasapi2_client_new() should return
immediately without waiting for the result when wasapi2 elements are
running in a UWP application.
In addition to the async operation fix, this commit fixes a COM object
reference counting issue around the ActivateAudioInterfaceAsync()
call.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1466>
Since commit c29c71ae9d, the device activation method is called from
an internal thread. The problem is that CoreApplication::GetCurrentView()
returns nullptr if it is called from a non-UI thread, and as a result
the currently implemented method for accessing the ICoreDispatcher
will not work in any case. There seems to be no robust way to access
the ICoreDispatcher other than having the user set it.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1466>