...for color space conversion if available
ID3D11VideoProcessor is the equivalent of the DXVA-HD video processor,
which might use specialized hardware blocks for video processing
instead of general GPU resources. Besides that benefit,
we need this API for color space conversion of DXVA2 decoder
output memory, because d3d11 texture arrays that were
created with D3D11_BIND_DECODER cannot be used as shader resources.
This is prework for d3d11decoder zero-copy rendering and also
for conditional HDR tone-mapping support.
Note that some Intel platforms are known to support tone-mapping
at the driver level via this API on Windows 10.
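As an illustration only, a minimal C sketch of driving ID3D11VideoProcessor for such a conversion; all names (video_device, in_texture, backbuffer, etc.) are placeholders, color space setup is skipped, and error handling/COM releases are mostly omitted.

/* Sketch only: blit one slice of a DXVA decoder texture array to the
 * backbuffer with ID3D11VideoProcessor. Variable names are placeholders. */
#define COBJMACROS
#include <d3d11.h>

static HRESULT
convert_with_video_processor (ID3D11VideoDevice * video_device,
    ID3D11VideoContext * video_context, ID3D11Texture2D * in_texture,
    UINT in_array_index, ID3D11Texture2D * backbuffer, UINT width, UINT height)
{
  D3D11_VIDEO_PROCESSOR_CONTENT_DESC desc = { 0, };
  ID3D11VideoProcessorEnumerator *enumerator = NULL;
  ID3D11VideoProcessor *processor = NULL;
  ID3D11VideoProcessorInputView *in_view = NULL;
  ID3D11VideoProcessorOutputView *out_view = NULL;
  D3D11_VIDEO_PROCESSOR_INPUT_VIEW_DESC in_desc = { 0, };
  D3D11_VIDEO_PROCESSOR_OUTPUT_VIEW_DESC out_desc = { 0, };
  D3D11_VIDEO_PROCESSOR_STREAM stream = { 0, };
  HRESULT hr;

  desc.InputFrameFormat = D3D11_VIDEO_FRAME_FORMAT_PROGRESSIVE;
  desc.InputFrameRate.Numerator = 30;
  desc.InputFrameRate.Denominator = 1;
  desc.OutputFrameRate = desc.InputFrameRate;
  desc.InputWidth = desc.OutputWidth = width;
  desc.InputHeight = desc.OutputHeight = height;
  desc.Usage = D3D11_VIDEO_USAGE_PLAYBACK_NORMAL;

  hr = ID3D11VideoDevice_CreateVideoProcessorEnumerator (video_device,
      &desc, &enumerator);
  if (FAILED (hr))
    return hr;

  hr = ID3D11VideoDevice_CreateVideoProcessor (video_device, enumerator,
      0, &processor);

  /* Input view points at one slice of the decoder's texture array */
  in_desc.ViewDimension = D3D11_VPIV_DIMENSION_TEXTURE2D;
  in_desc.Texture2D.ArraySlice = in_array_index;
  hr = ID3D11VideoDevice_CreateVideoProcessorInputView (video_device,
      (ID3D11Resource *) in_texture, enumerator, &in_desc, &in_view);

  out_desc.ViewDimension = D3D11_VPOV_DIMENSION_TEXTURE2D;
  hr = ID3D11VideoDevice_CreateVideoProcessorOutputView (video_device,
      (ID3D11Resource *) backbuffer, enumerator, &out_desc, &out_view);

  stream.Enable = TRUE;
  stream.pInputSurface = in_view;
  hr = ID3D11VideoContext_VideoProcessorBlt (video_context, processor,
      out_view, 0, 1, &stream);

  /* Real code would release all COM objects here */
  return hr;
}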
Don't specify the resolution of the backbuffer; DXGI will then use the
actual client area. When the upstream resolution changes, updating the
backbuffer size without taking the client size into account would cause
a mismatch between the two.
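For reference, a minimal sketch of the resize call under this model; the helper name resize_backbuffer is a placeholder. Passing 0 for width and height lets DXGI pick up the window's current client area.

/* Sketch: resize the backbuffer to whatever the client area currently is. */
#define COBJMACROS
#include <dxgi.h>

static HRESULT
resize_backbuffer (IDXGISwapChain * swapchain)
{
  DXGI_SWAP_CHAIN_DESC desc;
  HRESULT hr;

  hr = IDXGISwapChain_GetDesc (swapchain, &desc);
  if (FAILED (hr))
    return hr;

  /* 0 x 0: let DXGI query the client area of the output window */
  return IDXGISwapChain_ResizeBuffers (swapchain, 0, 0, 0,
      desc.BufferDesc.Format, desc.Flags);
}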
Use a consistent memory layout between DXVA and other shader use cases.
For example, use a DXGI_FORMAT_NV12 texture instead of
two textures with DXGI_FORMAT_R8_UNORM and DXGI_FORMAT_R8G8_UNORM.
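As an illustrative sketch (assumed names, minimal error handling): a single NV12 texture can still be sampled per plane by creating one SRV with DXGI_FORMAT_R8_UNORM for luma and one with DXGI_FORMAT_R8G8_UNORM for chroma.

/* Sketch: create per-plane shader resource views on one NV12 texture */
#define COBJMACROS
#include <d3d11.h>

static HRESULT
create_nv12_views (ID3D11Device * device, ID3D11Texture2D * nv12_texture,
    ID3D11ShaderResourceView ** luma_srv,
    ID3D11ShaderResourceView ** chroma_srv)
{
  D3D11_SHADER_RESOURCE_VIEW_DESC desc = { 0, };
  HRESULT hr;

  desc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
  desc.Texture2D.MipLevels = 1;

  /* Y plane is viewed as R8_UNORM */
  desc.Format = DXGI_FORMAT_R8_UNORM;
  hr = ID3D11Device_CreateShaderResourceView (device,
      (ID3D11Resource *) nv12_texture, &desc, luma_srv);
  if (FAILED (hr))
    return hr;

  /* Interleaved UV plane is viewed as R8G8_UNORM */
  desc.Format = DXGI_FORMAT_R8G8_UNORM;
  return ID3D11Device_CreateShaderResourceView (device,
      (ID3D11Resource *) nv12_texture, &desc, chroma_srv);
}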
This reverts commit ddd13fc7c0
Dynamic usage can reduce the number of copies per frame but makes
things complicated, and the benefit does not seem significant.
Also, since we don't provide a _map() method for dynamic usage,
applications cannot read such buffers, which makes the "last-sample"
property unusable in the case of d3d11videosink.
Although the D3D11 decoding API targets both desktop and UWP apps,
the DXVA header is guarded by "WINAPI_FAMILY_PARTITION(WINAPI_PARTITION_DESKTOP)",
which means it is only available to desktop apps.
To work around this inconsistency, we need to define WINAPI_PARTITION_DESKTOP
regardless of the target WinAPI partition.
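A rough sketch of the workaround; the exact guard placement in the tree may differ.

/* Sketch: force the desktop partition on before pulling in dxva.h,
 * so the DXVA definitions are visible even for non-desktop targets. */
#include <winapifamily.h>

#if !WINAPI_FAMILY_PARTITION(WINAPI_PARTITION_DESKTOP)
#undef WINAPI_PARTITION_DESKTOP
#define WINAPI_PARTITION_DESKTOP 1
#endif

#include <dxva.h>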
An ID3D11Texture2D can consist of multiple planes and array slices.
For array-typed memory, GstD3D11Allocator will allocate a new GstD3D11Memory
that increases the reference count of the ID3D11Texture2D but uses a different array index.
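Conceptually (hypothetical names, not the actual GstD3D11Allocator code), each per-slice memory shares the same texture and only differs in its array index.

/* Conceptual sketch only: one memory object per array slice, all sharing
 * the same ID3D11Texture2D. */
#define COBJMACROS
#include <d3d11.h>
#include <glib.h>

typedef struct
{
  ID3D11Texture2D *texture;     /* shared across all slices */
  guint array_index;            /* which slice this memory represents */
} SliceMemory;

static SliceMemory *
slice_memory_new (ID3D11Texture2D * texture, guint array_index)
{
  SliceMemory *mem = g_new0 (SliceMemory, 1);

  /* Each per-slice memory holds its own reference to the texture */
  ID3D11Texture2D_AddRef (texture);
  mem->texture = texture;
  mem->array_index = array_index;

  return mem;
}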
Even if one downstream d3d11 element supports dynamic-usage memory,
another one might not. Also, to support dynamic usage,
the upstream and downstream d3d11 devices must be the same object.
If a d3d11colorconvert element is configured, do color space conversion
regardless of whether the device is S/W emulation or real H/W.
Since d3d11colorconvert is no longer a child of d3d11videosinkbin,
we don't need this behavior. Note that the previous code was added to
avoid color space conversion in d3d11videosink when no hardware
device is available (S/W emulation of d3d11 is too slow).
d3d11upload should be able to support upstream d3d11 memory, not only system memory.
Fix for the following pipeline:
d3d11upload ! "video/x-raw(memory:D3D11Memory)" ! d3d11videosink
Add support for borderless, top-most-style full screen mode.
Fullscreen toggling is disabled by default. To enable it,
use the "fullscreen-toggle-mode" property to allow fullscreen mode changes
by user input and/or property.
In some cases, rendering and DXGI (e.g., swapchain) APIs should be
called from the window message pump thread, but the current design (a dedicated d3d11 thread)
makes that impossible. To solve this, change the concurrency model from the
single-thread model to a locking-based one.
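A rough sketch of the lock-based model; names are placeholders and the real code uses GStreamer's own locking helpers.

/* Sketch: instead of marshalling every call to a dedicated d3d11 thread,
 * protect device/swapchain access with a lock so that both the streaming
 * thread and the window message pump thread can call into D3D11/DXGI. */
#include <glib.h>

typedef struct
{
  GRecMutex lock;
  /* ID3D11DeviceContext, IDXGISwapChain, render state, ... */
} WindowContext;

/* Called from the streaming thread */
static void
window_context_render (WindowContext * ctx)
{
  g_rec_mutex_lock (&ctx->lock);
  /* draw and IDXGISwapChain::Present() here */
  g_rec_mutex_unlock (&ctx->lock);
}

/* Called from the Win32 message pump thread, e.g. on WM_SIZE */
static void
window_context_on_resize (WindowContext * ctx)
{
  g_rec_mutex_lock (&ctx->lock);
  /* IDXGISwapChain::ResizeBuffers() here */
  g_rec_mutex_unlock (&ctx->lock);
}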
In the earlier implementation of d3d11videosink, where no shader was implemented,
the aspect ratio and render size were adjusted by manipulating the backbuffer size
with an unintuitive formula. Now that we do color conversion and resizing with
shaders, we can remove that hack.
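For illustration, the kind of destination-rectangle computation the shader-based resize path enables (generic math, not the exact code in the tree).

/* Sketch: compute an aspect-ratio-preserving destination rectangle inside
 * the window instead of touching the backbuffer size. */
#include <glib.h>

typedef struct { gint x, y, width, height; } Rect;

static Rect
compute_dest_rect (gint video_width, gint video_height,
    gint window_width, gint window_height)
{
  Rect dst;

  /* Compare aspect ratios without floating point */
  if ((gint64) video_width * window_height >
      (gint64) video_height * window_width) {
    /* Video is wider than the window: bars on top and bottom */
    dst.width = window_width;
    dst.height = window_width * video_height / video_width;
  } else {
    /* Video is taller than the window: bars on left and right */
    dst.height = window_height;
    dst.width = window_height * video_width / video_height;
  }

  dst.x = (window_width - dst.width) / 2;
  dst.y = (window_height - dst.height) / 2;

  return dst;
}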
... and use the SetParent() WIN32 API when an external window is used.
Depending on the DXGI swap effect, the external window might not be
reusable by another backend. To preserve the external window's properties
and settings, drawing to an internal window seems to be the safer way.
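A minimal sketch of that approach with hypothetical names: make the internal window a child of the application-provided one and render into the internal window.

/* Sketch: render into an internal window that is re-parented into the
 * external (application provided) window, instead of attaching a
 * swapchain to the external HWND directly. Names are placeholders. */
#include <windows.h>

static void
attach_internal_window (HWND internal_hwnd, HWND external_hwnd)
{
  RECT rect;

  /* Make the internal window a child of the external one
   * (real code also adjusts the WS_CHILD window style) */
  SetParent (internal_hwnd, external_hwnd);

  /* Fill the external window's client area with the internal window */
  GetClientRect (external_hwnd, &rect);
  SetWindowPos (internal_hwnd, HWND_TOP, 0, 0,
      rect.right - rect.left, rect.bottom - rect.top, SWP_SHOWWINDOW);
}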
If d3d11window does not convert the format internally, a shader resource view
is not required. Note that the shader resource view is used for
color conversion with a shader, but when conversion is not required,
we just copy the input texture to the backbuffer.
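A sketch of the two paths with assumed names; the conversion path is elided.

/* Sketch: when no format conversion is needed, skip the shader resource
 * view entirely and just copy the input texture into the backbuffer. */
#define COBJMACROS
#include <d3d11.h>
#include <glib.h>

static void
present_frame (ID3D11DeviceContext * context, ID3D11Texture2D * input,
    ID3D11Texture2D * backbuffer, gboolean need_conversion)
{
  if (!need_conversion) {
    /* Same format and size: plain GPU copy, no SRV required */
    ID3D11DeviceContext_CopyResource (context,
        (ID3D11Resource *) backbuffer, (ID3D11Resource *) input);
    return;
  }

  /* Otherwise bind an SRV on the input texture and run the color
   * conversion shader into the backbuffer's render target view */
}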