Some decoding APIs support delayed output, in which the command to decode
a frame doesn't need to be immediately followed by the corresponding
command to retrieve the decoded frame. For instance, a subclass might be
able to request decoding of multiple frames and then retrieve only the
oldest decoded frame.
If a specific decoding API supports this, delayed output can improve
throughput.
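A minimal sketch of the pattern in C, with hypothetical names
(submit_decode_command() and get_decoded_frame() are illustrative
stand-ins for a backend-specific decoding API, not real functions):

  #include <gst/video/video.h>

  /* illustrative stubs; the real calls are decoding-API specific */
  static void submit_decode_command (GstVideoCodecFrame * frame);
  static void get_decoded_frame (GstVideoCodecFrame * frame);

  #define MAX_IN_FLIGHT 4

  static GQueue pending = G_QUEUE_INIT;

  static void
  handle_frame (GstVideoCodecFrame * frame)
  {
    /* queue the decode command without waiting for its output */
    submit_decode_command (frame);
    g_queue_push_tail (&pending, frame);

    /* drain only the oldest decoded frame once enough work is in flight */
    if (g_queue_get_length (&pending) >= MAX_IN_FLIGHT)
      get_decoded_frame (g_queue_pop_head (&pending));
  }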
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1925>
* Don't warn about live objects, since ID3D11Debug itself seems to be
holding a refcount on the ID3D11Device at the moment we call
ID3D11Debug::ReportLiveDeviceObjects(), so a live object would always
be reported.
* A device might not support some formats (e.g., P010),
especially in the case of a WARP device. We don't need to warn about that.
* gst_d3d11_device_new() can be used for device enumeration, so don't warn
even if we cannot create a D3D11 device with the given adapter index.
* Don't warn about HLSL compiler warnings. They are just noise and
not critical at all.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1986>
Add two examples to demonstrate "draw-on-shared-texture" use cases.
d3d11videosink will draw on the application's own texture without
copying by:
- enabling the "draw-on-shared-texture" property
- making use of the "begin-draw" and "draw" signals
The application will then render the shared texture to the
swapchain's backbuffer using either
1) Direct3D11 APIs, or
2) Direct3D9Ex + interop APIs
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1873>
Add a way to support drawing on the application's texture instead of
the usual window handle.
To make use of this new feature, the application should follow the
steps below (a sketch follows the list):
1) Enable this feature via the "draw-on-shared-texture" property
2) Watch for the "begin-draw" signal
3) In the "begin-draw" signal handler, request drawing via the
   "draw" action signal. Note that the "draw" action signal must be
   emitted before the "begin-draw" handler returns
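A minimal sketch of the application side in C, hedged: shared_handle,
misc_flags, acquire_key and release_key stand for the application's own
shared-texture state, and the exact "draw" argument list should be
checked against the bundled examples:

  #include <gst/gst.h>

  static void
  on_begin_draw (GstElement * sink, gpointer user_data)
  {
    gboolean ret = FALSE;

    /* step 3: must be emitted before this handler returns;
     * shared_handle etc. are the application's shared-texture state */
    g_signal_emit_by_name (sink, "draw", shared_handle, misc_flags,
        acquire_key, release_key, &ret);
  }

  static void
  setup_sink (GstElement * sink)
  {
    /* steps 1 and 2 */
    g_object_set (sink, "draw-on-shared-texture", TRUE, NULL);
    g_signal_connect (sink, "begin-draw", G_CALLBACK (on_begin_draw), NULL);
  }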
NOTE 1) For texture sharing, creating the texture with the
D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX flag is strongly recommended
if possible, because we cannot ensure synchronization of a texture
created with D3D11_RESOURCE_MISC_SHARED, which would cause glitches
in the ID3D11VideoProcessor use case.
NOTE 2) Direct3D9Ex doesn't support sharing textures created with
D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX. In other words,
D3D11_RESOURCE_MISC_SHARED is the only option for Direct3D11/Direct3D9Ex
interop.
NOTE 3) Because of the missing synchronization around
ID3D11VideoProcessor, if the shared texture was created with
D3D11_RESOURCE_MISC_SHARED, d3d11videosink might use a fallback texture
to convert the DXVA texture into a normal Direct3D texture, and then
copy the converted texture into the user-provided shared texture.
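For reference, a hedged sketch of creating a shareable texture with the
flag recommended in NOTE 1 (C with COBJMACROS; size and format are
placeholders):

  #define COBJMACROS
  #include <d3d11.h>

  D3D11_TEXTURE2D_DESC desc = { 0, };
  ID3D11Texture2D *texture = NULL;
  HRESULT hr;

  desc.Width = 1280;   /* placeholder */
  desc.Height = 720;   /* placeholder */
  desc.MipLevels = 1;
  desc.ArraySize = 1;
  desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
  desc.SampleDesc.Count = 1;
  desc.Usage = D3D11_USAGE_DEFAULT;
  desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
  /* the keyed mutex is what provides cross-device synchronization */
  desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX;

  hr = ID3D11Device_CreateTexture2D (device, &desc, NULL, &texture);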
* Why not use the generic appsink approach?
For an application to be able to store video data produced by
GStreamer in its own texture, there are two possible approaches:
one is copying our texture into the application's own texture,
and the other is drawing on the application's own texture directly.
The former (the appsink way) cannot be zero-copy by nature.
In order to support zero-copy processing, we need to draw on the
application's own texture directly.
For example, assume that the application wants an RGBA texture.
Then we can imagine the following case:
"d3d11h264dec ! d3d11convert ! video/x-raw(memory:D3D11Memory),format=RGBA ! appsink"
                     ^
                     |_ allocates new Direct3D texture(s) for the RGBA format
In the above case, d3d11convert will allocate new texture(s) for the
RGBA format and the application will then copy our RGBA texture into
its own texture again. One texture allocation plus a per-frame GPU copy
therefore happens in that case.
Moreover, for the application to be able to access our texture,
we would need to allocate the texture with additional flags so that the
application's Direct3D11 device can read the texture data.
That would be another implementation burden on our side.
But with this MR, we can configure the pipeline simply as
"d3d11h264dec ! d3d11videosink".
That way, we save at least one texture allocation and a per-frame
texture copy, since d3d11videosink will convert the incoming texture
into the application's texture format directly, without a copy.
* What if we expose the texture without conversion and the application
does the conversion by itself?
As mentioned above, for the application to be able to access our texture
from its Direct3D11 device, we need to allocate the texture in a special
form, but in some cases that might not be possible.
Also, if a texture belongs to a decoder DPB, exposing such a texture
to the application is unsafe, and a usual Direct3D11 shader cannot
handle such a texture. The ID3D11VideoProcessor API would need to be
used for format conversion, but that would be an implementation burden
for the application.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1873>
1. Set the default output alignment to frame, rather than the current
alignment of obu. This makes the behaviour the same as h264parse/
h265parse, which align to AU by default.
2. Set the default input alignment to byte. It can handle the "not
enough data" error while OBU alignment cannot. This also makes it
conform to the comments.
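For illustration, OBU-aligned output can still be requested explicitly
with a capsfilter; a hedged usage sketch (the file name is a
placeholder):

  gst-launch-1.0 filesrc location=video.av1 ! av1parse ! \
      video/x-av1,alignment=obu ! fakesink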
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1979>
The current behaviour for OBU-aligned output is not very precise:
several OBUs are output together within one gst buffer. We should
output each gst buffer containing just one OBU, the same way
h264parse/h265parse do when NAL aligned.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1979>
The current optimization when the input alignment and the output
alignment are the same is not quite correct. We simply copy the data
from the input buffer to the output buffer, but fail to consider the
dropping of OBUs. When we need to drop some OBUs (such as filtering out
the OBUs of some temporal ID), we cannot do a simple copy. So we need
to always copy the input OBUs into a cache.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1979>
The problem is that unreffing the EGLImage/SHM buffer while holding the
images_mutex lock may deadlock when a new buffer is advertised and
an attempt is made to lock images_mutex there.
The advertisement of the new image/buffer is performed on the
WPEContextThread, and the blocking dispatch when unreffing wants to run
something on the WPEContextThread, but images_mutex has already been
locked by the destructor.
Delay unreffing images/buffers until outside of images_mutex and instead
just clear the relevant fields within the lock.
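The resulting pattern, roughly (a sketch with illustrative field names;
only the clearing happens under the lock):

  GstEGLImage *image = NULL;

  g_mutex_lock (&self->images_mutex);
  /* under the lock: just steal the pointer and clear the field */
  image = self->pending_image;
  self->pending_image = NULL;
  g_mutex_unlock (&self->images_mutex);

  /* outside the lock: this unref may block on the WPEContextThread,
   * which could itself be waiting for images_mutex */
  if (image)
    gst_egl_image_unref (image);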
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1843>
1. Add mono_chrome to identify the 4:0:0 chroma format.
2. Correct the mapping between subsampling_x/y and the chroma format.
There is no 4:4:0 format definition in AV1, and 4:4:4 requires
both subsampling_x and subsampling_y to be 0.
3. Send the chroma-format when the color space is not RGB.
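The corrected mapping, as a rough C sketch (the color_config field
names are illustrative of the parsed sequence header):

  const gchar *chroma_format = NULL;

  if (color_config->mono_chrome)
    chroma_format = "4:0:0";
  else if (color_config->subsampling_x == 1 && color_config->subsampling_y == 1)
    chroma_format = "4:2:0";
  else if (color_config->subsampling_x == 1 && color_config->subsampling_y == 0)
    chroma_format = "4:2:2";
  else if (color_config->subsampling_x == 0 && color_config->subsampling_y == 0)
    chroma_format = "4:4:4";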
Fixes: #1502
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1974>
This AV1 parser implements conversion between the obu, tu and frame
alignments, and conversion between the obu-stream and annexb
stream-formats.
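For instance, converting a raw OBU stream to temporal-unit-aligned
annexb output could look like this (hedged; the file name is a
placeholder):

  gst-launch-1.0 filesrc location=video.av1 ! av1parse ! \
      video/x-av1,stream-format=annexb,alignment=tu ! fakesink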
TODO:
1. May need an operating_point property to filter the OBUs
2. May add a property to disable deep parsing.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1614>
gstd3d11window_corewindow.cpp(408): warning C4189:
'storage': local variable is initialized but not referenced
gstd3d11window_corewindow.cpp(490): warning C4189:
'self': local variable is initialized but not referenced
gstd3d11window_swapchainpanel.cpp(481): warning C4189:
'self': local variable is initialized but not referenced
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1962>
Some GPUs (especially NVIDIA) complain that the GPU is still busy
even after we retried 50 times with a 1ms sleep per failure.
Because DXVA/D3D11 doesn't provide a "GPU is ready to decode" kind of
signal, there still seems to be no better solution than sleeping.
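The retry loop in question looks roughly like this (a sketch; the busy
HRESULT and the exact retry bound raised by this change are
driver/implementation specific):

  #define COBJMACROS
  #include <d3d11.h>

  /* video_context, decoder and output_view are set up elsewhere */
  HRESULT hr;
  guint retry = 0;
  const guint retry_limit = 50;  /* illustrative; this change raises it */

  do {
    hr = ID3D11VideoContext_DecoderBeginFrame (video_context, decoder,
        output_view, 0, NULL);
    if (SUCCEEDED (hr))
      break;

    /* the driver reports a busy GPU; nothing to do but sleep and retry */
    g_usleep (1000);  /* 1ms */
  } while (++retry < retry_limit);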
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1913>
vabasedec's display and decoder are created/destroyed between the
gst_va_base_dec_open/close pair. All the data and event handling
functions run between this pair, so accessing these pointers there is
safe. But the query function can be called at any time, so we need to:
1. Make the operations on these pointers in open/close and query atomic.
2. Hold an extra ref during the query function so the objects cannot
be destroyed while in use.
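The query side then looks roughly like this (a hedged sketch using
internal plugin types; gst_object_replace() provides the atomic
read-and-ref):

  static gboolean
  gst_va_base_dec_src_query (GstVideoDecoder * decoder, GstQuery * query)
  {
    GstVaBaseDec *base = GST_VA_BASE_DEC (decoder);
    GstVaDisplay *display = NULL;
    gboolean ret = FALSE;

    /* atomic read plus extra ref: close() can no longer destroy the
     * display underneath us */
    gst_object_replace ((GstObject **) &display,
        (GstObject *) base->display);

    if (display) {
      /* ... answer the query using the display ... */
      gst_object_unref (display);
    }

    return ret;
  }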
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1957>
Initial support for d3d11 textures so that the encoder can copy an
upstream d3d11 texture into the encoder's own texture pool without
downloading the memory.
This implementation requires the MFTEnum2() API for creating the
MFT (Media Foundation Transform) object for a specific GPU, but
that API is Windows 10 desktop only, so UWP is not a target
of this change.
See also https://docs.microsoft.com/en-us/windows/win32/api/mfapi/nf-mfapi-mftenum2
Note that, for the MF plugin to be able to support old OS versions
without breakage, this commit loads the MFTEnum2() symbol
using g_module_open(), as sketched below.
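A minimal sketch of that runtime lookup (load_mft_enum2() and the
variable names are illustrative; the MFTEnum2 prototype follows the
documentation linked above):

  #include <gmodule.h>
  #include <mfapi.h>

  typedef HRESULT (WINAPI * MFTEnum2Func) (GUID category, UINT32 flags,
      const MFT_REGISTER_TYPE_INFO * input_type,
      const MFT_REGISTER_TYPE_INFO * output_type,
      IMFAttributes * attributes, IMFActivate *** activate,
      UINT32 * num_activate);

  static MFTEnum2Func mft_enum2_func = NULL;

  static gboolean
  load_mft_enum2 (void)
  {
    /* mfplat.dll only exports MFTEnum2 on Windows 10; resolving it at
     * runtime keeps the plugin loadable on older OS versions */
    GModule *module = g_module_open ("mfplat.dll", G_MODULE_BIND_LAZY);

    if (!module)
      return FALSE;

    if (!g_module_symbol (module, "MFTEnum2", (gpointer *) &mft_enum2_func)) {
      g_module_close (module);
      return FALSE;
    }

    return TRUE;
  }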
Summary of the required system environment:
- Needs Windows 10 (probably at least the RS1 update)
- The GPU should support the ExtendedNV12SharedTextureSupported feature
- Desktop applications only (UWP is not supported yet)
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1903>
As advised in !1366#note_629558, the nice transport should be
accessed through:
> transceiver->sender/receiver->transport/rtcp_transport->icetransport
All the objects on the path can be accessed through properties
except sender/receiver->transport. This patch addresses that.
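With the new property in place, the whole path can be walked with
g_object_get(); a hedged sketch (property names follow the path above;
each returned object is a transferred ref the caller must unref):

  #include <gst/webrtc/webrtc.h>

  static GstWebRTCICETransport *
  get_ice_transport (GstWebRTCRTPTransceiver * transceiver)
  {
    GstWebRTCRTPSender *sender = NULL;
    GstWebRTCDTLSTransport *transport = NULL;
    GstWebRTCICETransport *ice = NULL;

    g_object_get (transceiver, "sender", &sender, NULL);
    g_object_get (sender, "transport", &transport, NULL);
    g_object_get (transport, "transport", &ice, NULL);

    gst_object_unref (sender);
    gst_object_unref (transport);

    return ice;
  }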
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1952>
Move the d3d11 device, memory, buffer pool and minimal methods
to gst-libs so that other plugins can access d3d11 resources.
Since Direct3D is the primary graphics API on Windows, we need
this infrastructure so that various plugins can share GPU resources
without downloading GPU memory.
Note that this implementation is public only within the -bad scope
for now.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/464>
Using the object lock is problematic for anything that can dispatch to
another thread, which is what createWPEView() does inside
gst_wpe_src_start(). Using the object lock there can cause a deadlock.
One example of such a deadlock is when createWPEView() is called while
another (or the same) wpesrc is on the WPEContextThread and e.g. posts a
bus message. The message propagation takes and releases the object
lock of numerous elements in quick succession to determine various
information about the elements in the bin. If the object lock is
already held, then the message propagation will block and stall bin
processing (state changes, other messages) and WPE servicing any events.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/1490
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1934>
On renegotiation, or when the user has specified a mid for
a transceiver, we need to avoid picking a duplicate mid for
a transceiver that doesn't yet have one.
Also assign the mid we created to the transceiver; that doesn't
fix a specific bug but seems to make sense to me.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1902>