Msdkdec should use its own pool when the allocation from the downstream
query is not any msdk allocator (i.e. msdk_video_allocator,
msdk_dmabuf_allocator or msdk_system_allocator). Otherwise, when using a
pipeline such as "msdkh264dec ! vah264enc !" to transcode a stream whose
resolution is not 16-aligned (e.g. 1920x1080), the transcoding will fail
due to a size mismatch between the decoder pool and the encoder pool.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2451>
This is a workaround for PTS handling, because oneVPL cannot handle the
PTS correctly when there are B-frames. We first cache the input frame
PTS in a queue, then retrieve the smallest one for the output encoded
frame, since we always output a coded frame once it becomes displayable.
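A minimal sketch of the caching scheme (hypothetical helper names, not
the actual msdkenc code), using a GArray of GstClockTime kept sorted so
the smallest cached PTS can be popped for every coded frame:

```c
/* Hypothetical sketch, not the actual msdkenc code: cache every input
 * PTS and hand out the smallest cached value for each coded frame. */
#include <gst/gst.h>

static gint
compare_pts (gconstpointer a, gconstpointer b)
{
  GstClockTime pa = *(const GstClockTime *) a;
  GstClockTime pb = *(const GstClockTime *) b;

  return (pa > pb) - (pa < pb);
}

static void
cache_input_pts (GArray * pts_queue, GstClockTime pts)
{
  g_array_append_val (pts_queue, pts);
  g_array_sort (pts_queue, compare_pts);   /* keep ascending order */
}

static GstClockTime
pop_smallest_pts (GArray * pts_queue)
{
  GstClockTime pts = GST_CLOCK_TIME_NONE;

  if (pts_queue->len > 0) {
    pts = g_array_index (pts_queue, GstClockTime, 0);
    g_array_remove_index (pts_queue, 0);
  }

  return pts;
}
```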
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2089>
gst_amf_encoder_try_output() pushes at most one output buffer downstream
although more may be ready. As a consequence, output samples will keep
queueing up in AMFComponent whenever QueryOutput() returns AMF_REPEAT
(and do_wait is FALSE). This has a negative impact on latency when the
video being encoded is a live stream.
To avoid this, always retrieve and push all the samples available in
AMFComponent's output queue at once.
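A rough sketch of the drain loop under that change; query_output_sample()
and push_output_sample() are hypothetical stand-ins for the call into
AMFComponent::QueryOutput() and the element's push path:

```c
#include <gst/gst.h>

/* Hypothetical stand-ins: query_output_sample() returns FALSE when the
 * component reports nothing ready (AMF_REPEAT). */
static gboolean query_output_sample (gpointer enc, gpointer * sample);
static GstFlowReturn push_output_sample (gpointer enc, gpointer sample);

static GstFlowReturn
push_all_available_outputs (gpointer enc)
{
  GstFlowReturn ret = GST_FLOW_OK;

  /* Keep pulling until the output queue is empty instead of stopping
   * after the first sample. */
  while (ret == GST_FLOW_OK) {
    gpointer sample = NULL;

    if (!query_output_sample (enc, &sample))
      break;

    ret = push_output_sample (enc, sample);
  }

  return ret;
}
```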
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2536>
The Intel DXVA driver sometimes crashes (from the GPU thread) if the
ID3D11VideoDecoder is released while there are outstanding view objects.
To keep the object life cycle correct, hold an ID3D11VideoDecoder
refcount in the GstD3D11Memory object.
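A minimal sketch of the idea in C-style COM; the struct and helper names
are illustrative, not the actual GstD3D11Memory layout:

```c
/* Illustrative sketch: keep the decoder alive for as long as the memory
 * that wraps its output view exists. */
#define COBJMACROS
#include <d3d11.h>

typedef struct
{
  ID3D11VideoDecoder *decoder;  /* ref held while the memory is alive */
  /* ... output view, texture, ... */
} ViewMemory;

static void
view_memory_init (ViewMemory * mem, ID3D11VideoDecoder * decoder)
{
  mem->decoder = decoder;
  ID3D11VideoDecoder_AddRef (decoder);
}

static void
view_memory_free (ViewMemory * mem)
{
  /* Released only once the memory (and its view) is gone, so the decoder
   * can never be destroyed while views are outstanding */
  ID3D11VideoDecoder_Release (mem->decoder);
}
```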
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2504>
We should not reset input/output_frame_count when some configuration
changes. For example, if the resolution changes, the current code just
resets the frame counts and makes the PTS of the output buffers restart
from the original PTS of the first frame. That causes a lot of QoS
events and drops all the new frames.
We should only reset them in the encoder's start().
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2489>
d3d11screencapture can miss a cursor shape to draw, or draw an outdated
cursor shape:
- AcquireNextFrame only provides the cursor shape when there has been an
  update
- the current d3d11screencapture skips the cursor shape when the mouse
  is not drawn
So, if a GStreamer application uses d3d11screencapture with the cursor
initially not drawn ("show-cursor"=false) and then switches this
property to true, the cursor will not actually be drawn until
AcquireNextFrame provides a new cursor shape.
This commit makes d3d11screencapture always update the cursor shape
information, even if the mouse is not drawn, so that d3d11screencapture
always has the latest cursor shape when requested to draw it.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2485>
NVIDIA GPUs have an undocumented limitation on the minimum resolution,
which can be queried via an NVDEC API. However, since we don't want to
bring the CUDA/NVDEC API into D3D11, use hardcoded values for now until
we find a nicer way to do the capability check.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2406>
If the stream chroma doesn't match any video format in the source caps
template (generated from the VA config surface formats), return the
first available format in the template instead of returning unknown,
assuming that the driver is capable of doing the color conversion.
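A rough sketch of the fallback, assuming the template caps expose the
formats in the usual "format" field; the helper itself is hypothetical:

```c
/* Hypothetical sketch: if nothing matches the stream chroma, fall back
 * to the first format advertised in the source caps template. */
#include <gst/video/video.h>

static GstVideoFormat
first_format_from_template (GstCaps * templ)
{
  const GstStructure *s = gst_caps_get_structure (templ, 0);
  const GValue *formats = gst_structure_get_value (s, "format");
  const GValue *fmt;

  if (!formats)
    return GST_VIDEO_FORMAT_UNKNOWN;

  /* "format" may be a list or a single string */
  if (GST_VALUE_HOLDS_LIST (formats))
    fmt = gst_value_list_get_value (formats, 0);
  else
    fmt = formats;

  return gst_video_format_from_string (g_value_get_string (fmt));
}
```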
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2404>
Instead of using a hard-coded list of preferred formats according to the
chroma type, if no caps are pre-negotiated, choose the first format with
the same chroma type from the template caps. If caps are pre-negotiated,
choose the first format with the same chroma type from the first caps
structure.
Also, all the decoders now check whether GST_VIDEO_FORMAT_UNKNOWN is
returned, failing the negotiation in that case.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2351>
The V4L2 spec now requires the decode_params flags to be set in
accordance with the frame's type. In particular, this is required by the
H.264 decoder of the NVIDIA Tegra SoC to operate properly. Set the flags
based on the type of the parsed slices.
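A hedged sketch of what that could look like, using the
V4L2_H264_DECODE_PARAM_FLAG_* names from linux/v4l2-controls.h; the
frame-type booleans are simplified stand-ins for what slice parsing
provides:

```c
/* Hypothetical sketch: pick decode_params flags from the parsed frame
 * type. The booleans are stand-ins for the slice parser's results. */
#include <linux/v4l2-controls.h>

static void
set_decode_params_flags (struct v4l2_ctrl_h264_decode_params *params,
    int is_idr, int has_p_slices, int has_b_slices)
{
  if (is_idr)
    params->flags |= V4L2_H264_DECODE_PARAM_FLAG_IDR_PIC;

  if (has_b_slices)
    params->flags |= V4L2_H264_DECODE_PARAM_FLAG_BFRAME;
  else if (has_p_slices)
    params->flags |= V4L2_H264_DECODE_PARAM_FLAG_PFRAME;
}
```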
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1757>
GstD3D11ScreenCapture is a pipeline-independent global object that can
be shared by multiple src elements, in order to overcome a limitation of
the DXGI Desktop Duplication API: the API allows only a single capture
session per monitor in a process.
Therefore the GstD3D11ScreenCapture object must be able to handle the
case where a src element holds a different GstD3D11Device object, which
can happen when the GstD3D11Device context is not shared by pipelines.
What's changed:
* Allocate the capture texture with D3D11_RESOURCE_MISC_SHARED so that
  it can be copied into another device's texture (see the sketch below)
* Hold additional shader objects per src element and use them when
  drawing the mouse cursor
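A minimal sketch of the shared-texture allocation; the format and bind
flags are illustrative, only the MiscFlags bit is the point here:

```c
/* Hypothetical sketch of allocating the capture texture so that another
 * device can open and copy it. */
#define COBJMACROS
#include <d3d11.h>

static HRESULT
alloc_shared_capture_texture (ID3D11Device * device, UINT width,
    UINT height, ID3D11Texture2D ** texture)
{
  D3D11_TEXTURE2D_DESC desc = { 0, };

  desc.Width = width;
  desc.Height = height;
  desc.MipLevels = 1;
  desc.ArraySize = 1;
  desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
  desc.SampleDesc.Count = 1;
  desc.Usage = D3D11_USAGE_DEFAULT;
  desc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
  /* Shared, so a texture created on one GstD3D11Device can be opened
   * and copied from another device */
  desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED;

  return ID3D11Device_CreateTexture2D (device, &desc, NULL, texture);
}
```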
Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/1197
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2366>
If there weren't any moved/dirty regions in the captured frame, the
viewport of the ID3D11DeviceContext would be left at whatever previous
value it had, which could lead to the cursor being drawn in the wrong
position and/or at an incorrect size.
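A minimal sketch of the idea behind the fix, assuming the viewport is
(re)applied before every draw; the helper name is illustrative:

```c
/* Hypothetical sketch: re-apply the viewport every time before drawing,
 * so a frame without moved/dirty rects cannot leave a stale one behind. */
#define COBJMACROS
#include <d3d11.h>

static void
setup_viewport (ID3D11DeviceContext * context, FLOAT width, FLOAT height)
{
  D3D11_VIEWPORT viewport = { 0, };

  viewport.TopLeftX = 0.0f;
  viewport.TopLeftY = 0.0f;
  viewport.Width = width;
  viewport.Height = height;
  viewport.MinDepth = 0.0f;
  viewport.MaxDepth = 1.0f;

  ID3D11DeviceContext_RSSetViewports (context, 1, &viewport);
}
```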
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2362>
Make all codecs consistent so that a subclass can know the additional
DPB size requirement depending on the render-delay configuration,
regardless of codec. Note that the render-delay feature is not
implemented for AV1 yet, but it's planned.
Also, consider new_sequence() a mandatory requirement, not an optional
one.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2343>
When we fix up the src caps, the current way of handling the HDR fields
is not correct.
1. We trim the HDR fields only when the input caps are not a subset of
the fixed-up src caps. But in fact, input caps with HDR fields such as
"mastering-display-info" can be a subset of the fixed-up src caps, if
all the other fields are the same.
2. We always copy the colorimetry from the input caps to the src caps if
it is absent. But when hdr-tone-mapping is enabled, the HDR->SDR
conversion changes the colorimetry. We should use downstream's setting,
or just use the default SDR colorimetry.
We change this to:
1. If hdr-tone-mapping is enabled, trim all HDR fields and add a correct
colorimetry.
2. Copy the colorimetry from the input if it is still absent.
3. Consider the subset replacement.
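A minimal sketch of the trimming step; the field names are the common
HDR caps fields, the helper itself is hypothetical:

```c
/* Hypothetical sketch: drop HDR metadata fields from the fixed-up src
 * caps when hdr-tone-mapping is enabled. */
#include <gst/gst.h>

static void
trim_hdr_fields (GstCaps * src_caps)
{
  guint i;

  for (i = 0; i < gst_caps_get_size (src_caps); i++) {
    GstStructure *s = gst_caps_get_structure (src_caps, i);

    gst_structure_remove_fields (s, "mastering-display-info",
        "content-light-level", NULL);
  }
}
```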
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2244>
GLib guarantees libintl is always present, using proxy-libintl as a last
resort. There is no need to mock the gettext API any more.
This fixes the static build on Windows, because G_INTL_STATIC_COMPILATION
must be defined before including libintl.h, and GLib does that for us as
part of including glib.h.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2028>
Our decoder implementation does not use the downstream d3d11 pool for
decoding because of the special requirements of D3D11/DXVA. So
preallocation using the downstream buffer pool would waste GPU memory in
most cases.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2211>
The current scheme names the level by the number of B-frames it
contains: the fewer it contains, the higher the level. So the
non-reference B-frames are in the lowest layer and the B-frames in the
highest level refer to I/P frames.
But the widely used convention is just the opposite: the reference
B-frames are in the lower levels and the non-reference B-frames are at
the highest level.
This is just a terminology change and does not have any effect on the
compression result or quality.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2149>
timeapi.h is missing in our MinGW toolchain. Include the mmsystem.h
header instead, which defines the structs and APIs in the case of our
MinGW toolchain. Note that with the native Windows 10 SDK (MSVC build),
mmsystem.h will include timeapi.h.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2153>
They are part of gst_dep already and we have to make sure to always have
gst_dep. The order of dependencies matters, because it is also the order
in which Meson will set the -I args. We want gstreamer's config.h to
take precedence over glib's private config.h when it's a subproject.
While at it, remove the useless fallback args for the gmodule/gio
dependencies; only GStreamer core needs them.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2031>
Added the GstVaFeature enum type and a new parameter for the VA
allocator's set_format() and get_format(). Also added a new parameter to
the VA pool's gst_va_pool_new_with_config() and to
gst_buffer_pool_config_set_va_allocation_params().
This new parameter defines whether derived images will be used for
buffer mapping.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2057>
By default, the classification is
"Converter/Filter/Colorspace/Scaler/Video/Hardware", but if the VA
post-processor driver supports either color balance, skin tone
enhancement, sharpening or noise reduction, "Effect" is added.
Thus, if vapostproc's rank is raised, it can be chosen by
autovideosink.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2066>
In order for other plugins to use the gstva objects, such as allocators
and buffer pools, this merge request moves them from the va plugin to
the gstva library. These objects are not exposed in <gst/va/gstva.h>
since they are not expected to be used by users, only by plugin
implementors.
Because the surface copy design, which is used to implement the
allocator's mem_copy() virtual function, depends on the vafilter, which
is kept inside the plugin, memory copy through VAPostproc is disabled
and removed temporarily.
Also added some missing parameter validation.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2048>
Untabify the header file.
The logging category was moved from the generic plugin category to the
display category; one can argue that the video format hacks are display
dependent.
Added validation of input parameters.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2048>
... and add more encoding options.
The QSV API supports dynamic bitrate changes without IDR insertion.
That's a more efficient way of updating encoding options at runtime than
starting a new sequence with an IDR for every bitrate change.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2039>
Check that `self` and `self->callback` are defined. `self` can be set to
`NULL` in `remove_listener`, and `self->callback` can be set to `NULL`
inside `gst_amc_surface_texture_jni_set_on_frame_available_callback`.
This can cause a segfault, since the Java object can outlive the C
object and call the callback after `remove_listener` has been called.
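A minimal sketch of the defensive check; the struct is an illustrative
stand-in for the real surface-texture wrapper, not its actual layout:

```c
/* Illustrative stand-in for the surface-texture wrapper. */
typedef struct
{
  void (*callback) (void *user_data);
  void *user_data;
} Listener;

static void
on_frame_available (Listener * self)
{
  /* The Java object can outlive the C object: bail out if the listener
   * was removed (self is NULL) or the callback was cleared meanwhile. */
  if (!self || !self->callback)
    return;

  self->callback (self->user_data);
}
```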
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2024>
Produce an error if we try to use the feature of holding the capture
buffer but it is not supported by the driver. Ignoring this can lead to
stalls, as the driver will run out of capture buffers to decode into.
This affects slice decoders but also split-field interlaced decoding.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2009>
Some problematic H.265 streams may be missing reference frames in the
DPB and produce messages like: "No short term reference picture for
xxx". So there may be empty entries in ref_pic_list0/1 when they are
passed to the subclass's decode_slice() function. We need to check for
NULL pointers.
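A rough sketch of the kind of check a subclass needs; the function and
list representation are illustrative, not the exact decode_slice()
signature:

```c
/* Illustrative only: entries of ref_pic_list0/1 may be NULL when the
 * stream referenced pictures that never made it into the DPB. */
typedef struct _GstH265Picture GstH265Picture;

static void
program_reference_list (GstH265Picture ** ref_pic_list0, unsigned int len)
{
  unsigned int i;

  for (i = 0; i < len; i++) {
    if (ref_pic_list0[i] == NULL)   /* missing reference: skip, don't deref */
      continue;

    /* ... fill the hardware reference entry from ref_pic_list0[i] ... */
  }
}
```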
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2018>
Adding new encoder elements nvd3d11{h264,h265}enc for Direct3D11
input support, and rewritten nvcuda{h264,h265}enc elements.
The newly written elements have some differences compared with the old
nv{h264,h265}enc, including non-backward-compatible changes.
* RGBA is not a supported input format any more:
The new elements will support only YUV formats, to avoid the implicit
conversion done by the hardware. Ideally the conversion should be done
by an upstream element in order to have more control over it. Moreover,
RGBA support can cause redundant RGBA -> YUV conversions if multiple
encoders are used for the same RGBA input
* Subsampled planar format support is dropped:
I420 and YV12 are not supported formats for Direct3D11.
Although they are supported in CUDA mode, they are not a
hardware-friendly memory layout and would waste GPU memory, since the
UV planes will have large padding due to the memory layout requirements
of NVENC.
* GL support is dropped: Similar to the RGBA case,
GL support in the encoder would be suboptimal if the GL input is
used by multiple encoders, because each encoder would copy the GL
memory into CUDA memory.
The upstream cudaupload element can be used for GL <-> CUDA
interop instead.
* No more pre-allocation of encoder input surfaces. The new
implementation will use input CUDA memory without a copy (zero-copy),
or will copy into an NVENC input buffer struct in the case of system
memory input.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1997>
Since the strings for GST_MSDK_CAPS_MAKE_WITH_DMABUF_FEATURE and
GST_MSDK_CAPS_MAKE_WITH_VA_FEATURE are empty, executing
gst-inspect-1.0.exe msdkh265enc produces a static caps conversion error
because of the extra semicolon between the two empty strings. New macro
definitions are added to avoid this issue.
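A generic illustration of the failure mode (hypothetical macros, not the
real msdk ones): when both feature macros expand to empty strings, a
dangling semicolon is left in the static caps string and it fails to
convert:

```c
/* Illustration only: hypothetical macros, not the real msdk ones. */
#include <gst/gst.h>

#define CAPS_WITH_DMABUF_FEATURE ""     /* empty in this configuration */
#define CAPS_WITH_VA_FEATURE     ""     /* empty in this configuration */

/* Expands to "video/x-raw; ; ": the semicolons between the empty strings
 * leave empty structures behind and the static caps cannot be parsed. */
static GstStaticCaps problem_caps = GST_STATIC_CAPS ("video/x-raw; "
    CAPS_WITH_DMABUF_FEATURE "; " CAPS_WITH_VA_FEATURE);
```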
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2004>
Pass the current frame to the duplicate_picture callback. This makes it
easier to set the frame's output_buffer if we already have one
available. Also document that, unlike for VP9, implementing this is not
optional, as the picture will populate the DPB if it is a keyframe. To
ensure this, remove the default implementation.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1992>
1. Always set the corresponding GstVaH264EncFrame pointer when the
GstVideoCodecFrame pointer is assigned, which makes the logic safer.
2. Fix the forgotten change in _sort_by_frame_num: its input pointer is
now of GstVideoCodecFrame type.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1935>
On GstVideoDecoder::{drain,flush}, we send a null packet with the
CUVID_PKT_ENDOFSTREAM flag to drain out the decoder, which will reset
the CUVID parser as well.
To continue decoding after the drain, the next input buffer should
include the sequence headers, otherwise the CUVID parser will not
report any decodable frame.
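A minimal sketch of the drain call using the nvcuvid API (error handling
omitted):

```c
/* Sketch: flush the CUVID parser with an end-of-stream packet. */
#include <nvcuvid.h>

static void
drain_parser (CUvideoparser parser)
{
  CUVIDSOURCEDATAPACKET packet = { 0, };

  /* A null packet flagged as end-of-stream flushes all pending frames,
   * but it also resets the parser: the next input must carry the
   * sequence headers again, otherwise no decodable frame is reported. */
  packet.payload = NULL;
  packet.payload_size = 0;
  packet.flags = CUVID_PKT_ENDOFSTREAM;

  cuvidParseVideoData (parser, &packet);
}
```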
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1911>
Instead of using a GstMiniObject to hold the H.264 frame, it now uses a
plain structure. Besides, instead of holding a reference to the
GstVideoCodecFrame, the H.264 frame structure is set as
GstVideoCodecFrame user data.
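A minimal sketch of the user-data approach; the frame struct and its
fields are illustrative, not the actual encoder's structure:

```c
/* Hypothetical sketch: attach a plain per-frame struct to the codec
 * frame instead of refcounting a GstMiniObject. */
#include <gst/video/gstvideoutils.h>

typedef struct
{
  gint poc;
  /* ... other per-frame encoding state ... */
} H264Frame;

static void
attach_h264_frame (GstVideoCodecFrame * frame)
{
  H264Frame *h264_frame = g_new0 (H264Frame, 1);

  /* Freed automatically together with the GstVideoCodecFrame */
  gst_video_codec_frame_set_user_data (frame, h264_frame,
      (GDestroyNotify) g_free);
}

static H264Frame *
lookup_h264_frame (GstVideoCodecFrame * frame)
{
  return gst_video_codec_frame_get_user_data (frame);
}
```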
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1856>
When there is vpp scaling downstream, we need to make sure SFC is not
triggered, because vpp may fall into passthrough mode, which causes the
decoder negotiation to create src caps with the vpp-scaled width/height.
This patch includes the bitstream's original size in the first query
with downstream in gst_msdkdec_src_caps, the same as what we do for the
color format in this query. This ensures SFC scaling only starts to work
when downstream directly asks for a different size, instead of through
vpp.
Note that SFC scaling here follows the same behavior as msdkvpp: if the
user only changes the width or the height, e.g. dec !
video/x-raw,width=xx !, the height will be modified to the value which
fits the original DAR.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1838>
* Hide GstCudaMemory member variables
* Make GstCudaAllocator object GstCudaContext independent
* Set offset/stride of memory correctly via video meta
* Drop GST_BUFFER_POOL_OPTION_VIDEO_ALIGNMENT support.
This implementation actually does not support custom alignment
because we allocate device memory via cuMemAllocPitch,
whose alignment is almost uncontrollable
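A short sketch of why the alignment cannot be honored: cuMemAllocPitch()
chooses the pitch itself and only reports it back (error handling
omitted):

```c
/* Sketch: the driver picks a pitch satisfying its own alignment rules;
 * the caller only learns the value afterwards, so a pool option like
 * GST_BUFFER_POOL_OPTION_VIDEO_ALIGNMENT cannot be honored. */
#include <cuda.h>

static CUdeviceptr
alloc_video_plane (size_t width_in_bytes, size_t height, size_t * pitch)
{
  CUdeviceptr ptr = 0;

  cuMemAllocPitch (&ptr, pitch, width_in_bytes, height,
      16 /* element size in bytes: 4, 8 or 16 */);

  return ptr;
}
```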
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1834>
* cudaupload/download
- Specify only the formats we can actually handle with the nvcodec
elements, not all video formats
- Support CUDA output for download and CUDA input for upload, in order
to make passthrough possible, like other upload/download elements.
* cudabasetransform
- Reset the conversion element if the upstream CUDA memory holds a
different CUDA context and the element can accept it.
This is the same behavior as the corresponding d3d11 filter elements.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1834>
Currently, to copy the coded buffer into a GStreamer buffer, the coded
buffer is mapped twice: once to get the size, and later to do the actual
copy. We can avoid this by doing the copy directly in the element rather
than in the generic encoder object.
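A rough sketch of a single-map copy, assuming the output buffer has
already been allocated large enough; error handling is simplified:

```c
/* Hypothetical sketch: walk the VACodedBufferSegment list once and copy
 * the data into the output GstBuffer. */
#include <va/va.h>
#include <gst/gst.h>

static gboolean
copy_coded_buffer (VADisplay dpy, VABufferID coded_buf, GstBuffer * outbuf)
{
  VACodedBufferSegment *segment = NULL;
  gsize offset = 0;

  if (vaMapBuffer (dpy, coded_buf, (void **) &segment) != VA_STATUS_SUCCESS)
    return FALSE;

  /* One mapping is enough: each segment already knows its own size */
  for (; segment != NULL; segment = (VACodedBufferSegment *) segment->next) {
    gst_buffer_fill (outbuf, offset, segment->buf, segment->size);
    offset += segment->size;
  }

  vaUnmapBuffer (dpy, coded_buf);
  return TRUE;
}
```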
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1845>