DecodedOrder was deprecated in the msdk-2017 version, but some
customers still use it for low-latency streaming of non-B-frame
encoded streams, which needs the frame to be output immediately.
Asking decklink to render audio data seems to be based entirely on
sample counts, completely disregarding the timestamps we pass to
decklink. As a result, we need to explicitly check for late buffers
and drop them ourselves.
Do not restrict the allowed maximum resolution depending on the
initial resolution. If the new resolution is larger than the previous
one, just re-init the encode session.
Due to the uncleared last flow return, decoding after a seek was never
possible (last_ret == GST_FLOW_FLUSHING).
nvdec does not need to keep track of the previous flow return; what
actually matters is the data/event flow of the current handle_frame().
Implement the ::negotiate() method to support runtime output format
changes. If downstream was reconfigured, the base class will invoke
the ::negotiate() method, and nvdec should update the output memory
type depending on the downstream caps.
The input stream might be silently changed without a ::set_format()
call. Since nvdec has an internal parser, the nvdec element can figure
out the format change by itself.
Register the OpenGL resource only once per memory. Also, if upstream
provides the registered information, reuse it instead of registering
again. This can improve performance dramatically depending on the
system, since resource registration might cause high overhead.
Introduce the GstCudaGraphicsResource structure to represent
registered CUDA graphics resources and to enable sharing the
information between nvdec and nvenc. This structure can reduce the
number of resource registrations, which cause high overhead.
For OpenGL interoperability, nvdec uses the cuGraphicsGLRegisterImage
API, which registers an OpenGL texture image.
Meanwhile, nvenc uses the cuGraphicsGLRegisterBuffer API to register
an OpenGL buffer object.
That means two kinds of graphics resources are registered per memory
when nvdec/nvenc are configured at the same time.
The graphics resource registration brings possibly high overhead, so
from an optimization point of view the registration should be
performed only once per resource.
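A minimal sketch of the caching idea described above (the names
DemoGLResourceCache and demo_register_texture_once are illustrative,
not the actual GstCudaGraphicsResource API):

#include <glib.h>
#include <GL/gl.h>
#include <cuda.h>
#include <cudaGL.h>

typedef struct {
  CUgraphicsResource resource;  /* cached registration handle */
  gboolean registered;
} DemoGLResourceCache;

static CUresult
demo_register_texture_once (DemoGLResourceCache * cache, GLuint texture)
{
  CUresult ret = CUDA_SUCCESS;

  /* registration is expensive, so do it only once per memory/texture */
  if (!cache->registered) {
    ret = cuGraphicsGLRegisterImage (&cache->resource, texture,
        GL_TEXTURE_2D, CU_GRAPHICS_REGISTER_FLAGS_NONE);
    if (ret == CUDA_SUCCESS)
      cache->registered = TRUE;
  }

  return ret;
}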
Both g_list_delete_link and g_list_remove remove an element and free
it, so l->next is invalid (caught by valgrind) after calling
g_list_delete_link or g_list_remove.
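A minimal sketch of the safe pattern (remove_matching is an
illustrative helper): save l->next before the link is freed instead
of dereferencing it afterwards.

#include <glib.h>

static GList *
remove_matching (GList * list, gpointer needle)
{
  GList *l = list;

  while (l) {
    GList *next = l->next;      /* take next before l is freed */

    if (l->data == needle)
      list = g_list_delete_link (list, l);

    l = next;
  }

  return list;
}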
Returning MFX_WRN_INCOMPATIBLE_VIDEO_PARAM means MSDK detected some
incompatible parameters but resolved them, so we should not regard
MFX_WRN_INCOMPATIBLE_VIDEO_PARAM as a fatal error. In this fix,
GST_FLOW_OK is returned but with a warning message so that a pipeline
may run to the end.
Fixes werror build:
In file included from ../sys/androidmedia/gstahcsrc.c:70:
../gst-libs/gst/interfaces/photography.h:27:2: error: "The GstPhotography interface is unstable API and may change in future." [-Werror,-W#warnings]
#warning "The GstPhotography interface is unstable API and may change in future."
^
../gst-libs/gst/interfaces/photography.h:28:2: error: "You can define GST_USE_UNSTABLE_API to avoid this warning." [-Werror,-W#warnings]
#warning "You can define GST_USE_UNSTABLE_API to avoid this warning."
^
video-direction is a common property in GStreamer. In addition, both
the mirroring and rotation properties are marked as deprecated;
video-direction will override mirroring and rotation when they are
set explicitly.
Fix https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/1058
gst_msdkdec_finish_task() may release all frames in the
GstVideoDecoder object. In this case, allocate_output_buffer()
cannot get the oldest frame to allocate a buffer for.
So gst_msdkdec_handle_frame() should return GST_FLOW_OK to let
gst_video_decoder_decode_frame() send a new frame for decoding.
Fixes #664.
Fixes #665.
When vpp rotation is 90 or 270, the output frame
should be rotated, too.
Example:
gst-launch-1.0 -vf videotestsrc \
! video/x-raw,width=720,height=480 \
! msdkvpp rotation=90 ! vaapisink
There is no NdkMediaCodecList API yet, but it is still better to
isolate the JNI code. This will facilitate porting to a native API if
Google ever releases one.
gst_query_get_n_allocation_pools() > 0 does not guarantee that the
Nth entry of the internal array has a GstBufferPool object. So users
should check the GstBufferPool object returned from
gst_query_parse_nth_allocation_pool().
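A minimal sketch of the check (pick_first_valid_pool is an
illustrative helper, not an existing API):

#include <gst/gst.h>

static GstBufferPool *
pick_first_valid_pool (GstQuery * query, guint * size, guint * min, guint * max)
{
  guint i, n = gst_query_get_n_allocation_pools (query);

  for (i = 0; i < n; i++) {
    GstBufferPool *pool = NULL;

    gst_query_parse_nth_allocation_pool (query, i, &pool, size, min, max);
    if (pool)                   /* a NULL pool is a valid answer, skip it */
      return pool;
  }

  return NULL;
}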
An async CUDA operation with the default stream (NULL CUstream) is
not much more beneficial than a blocking operation, since all CUDA
operations which belong to the CUDA context will be synchronized with
the default stream's operations.
Note that a CUDA stream shares all resources of the corresponding
CUDA context, but it enables parallel operation, similar to the
relation between a thread and a process.
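A minimal sketch, assuming a dedicated stream is created per element,
of how a non-default stream keeps an async download from being
serialized against the rest of the context (demo_async_download is
illustrative):

#include <stddef.h>
#include <cuda.h>

static CUresult
demo_async_download (CUdeviceptr src, void *dst, size_t size)
{
  CUstream stream;
  CUresult ret;

  /* a dedicated stream instead of the NULL (default) stream */
  ret = cuStreamCreate (&stream, CU_STREAM_DEFAULT);
  if (ret != CUDA_SUCCESS)
    return ret;

  ret = cuMemcpyDtoHAsync (dst, src, size, stream);
  if (ret == CUDA_SUCCESS)
    ret = cuStreamSynchronize (stream);

  cuStreamDestroy (stream);
  return ret;
}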
The internal decoding state must be GST_NVDEC_STATE_PARSE before
calling CuvidParseVideoData(). Otherwise, nvdec will be confused in
the decode callback as if the frame were a decode-only frame, and the
input timestamp of the corresponding frame will be ignored.
Eventually one decoded frame will end up with a non-increasing PTS.
The destroy callback can be called just before the finalization of
the GstMiniObject, so the nvdec object might already be destroyed.
Instead, store the GstCudaContext with an increased ref to safely
unregister the CUDA resource.
Fix unexpected cropping with non-1:1 pixel aspect-ratio.
The actual buffer width/height should be passed to
gst_d3d11_window_render() instead of the calculated resolution. The
width/height values are parameters for copying d3d11 video memory.
Also, the aspect-ratio should be considered in the resize callback
to decide the render rectangle size.
The YV12 format is supported by NVIDIA NVENC without manual
conversion, so nvenc exposes YV12 in its sinkpad template, but some
handling was missing when uploading the memory to the GPU.
Currently h264parser produces a field or a frame for
alignment=au with interlaced streams, but the flag
MFX_BITSTREAM_COMPLETE_FRAME needs a complete frame
or a complementary field pair of data; this results in
broken images being output.
Some patches have been sent out to fix h264parser,
but they are pending on some unfinished work. In
order to make gstreamer-msdk decoding work properly
for interlaced streams before h264parser is fixed,
this flag will be removed temporarily and will be
added back once h264parser is fixed.
Related to:
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/merge_requests/399
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/merge_requests/228
Instead of using the information we stored ourselves for the video
frame itself, which was also the wrong one: it was the mode from the
property, not the autodetected one.
This fixes vanc extraction with mode=auto
The gst_cuda_result macro function is more helpful for debugging than
the previous cuda_OK because gst_cuda_result prints the function name
and line number. If the CUDA API return was not CUDA_SUCCESS,
gst_cuda_result will print a WARNING level debug message with the
error name and error text strings.
... and drop CUvideoctxlock usage. The CUvideoctxlock basically has
the same role as CUDA context push/pop, but in an nvdec-specific way.
Since we can share the CUDA context among encoders and decoders, use
the CUDA context directly for accessing GPU APIs.
... and add support for CUDA context sharing, similar to glcontext
sharing. Multiple CUDA contexts per GPU is not the best practice. The
context sharing method is very similar to that of glcontext. The
difference is that there can be multiple context objects in a
pipeline, since a CUDA context is created per GPU id. For example, if
a pipeline has nvh264dec (uses GPU #0) and nvh264device0dec (uses GPU
#1), then two CUDA contexts will be propagated across the whole
pipeline.
New object and helper functions can remove duplicated code
from nvenc/nvdec. Also this is prework for CUDA device context sharing
among nvdec(s)/nvenc(s).
We don't support negotiation with downstream but simply set caps based
on the buffers we receive. This prevents renegotiation to other formats,
and negotiation to NTSC in mode=auto in the beginning until the first
buffer is received.
As a side-effect of this, also remove various other caps handling code
that was working around the behaviour of the default
BaseSrc::negotiate().
During GstVideoInfo conversion from GstCaps, the interlace-mode is
inferred as progressive, so an unspecified interlace-mode should not
cause any negotiation issue. Simply set the
GST_PAD_FLAG_ACCEPT_INTERSECT flag on the sinkpad to fix the issue.
The encoded bitstream might not have a valid framerate. If upstream
provided a non-variable framerate (i.e., fps_n > 0 and fps_d > 0),
use the upstream framerate instead of the parsed one.
The encoding thread is terminated without any notification, so the
upstream streaming thread gets locked because there is nothing to pop
from the GAsyncQueue. If downstream returns an error, we need to push
SHUTDOWN_COOKIE to the GAsyncQueue so that the chain function can
wake up.
By adding system memory support for nvdec, both the encoder and
decoder in the nvcodec plugin become usable without the OpenGL
dependency. Besides, the direct use of system memory might have less
overhead than OpenGL memory depending on the use case
(e.g., transcoding using a S/W encoder).
False warning from MSVC; it does not understand that
g_assert_not_reached() does not return.
...\gst-plugins-bad-1.0-1.17.0.1\sys\decklink\gstdecklink.cpp(1647) : warning C4715: 'gst_decklink_configure_duplex_mode': not all control paths return a value
Any plugin which returns FALSE from plugin_init will be blacklisted,
so the plugin will be unusable even if the user installs the required
runtime dependency later. That's the reason why nvcodec always
returns TRUE. This commit removes code that could be misread.
Since we build the nvcodec plugin without an external CUDA dependency,
failure to load CUDA or the en/decoder library can be normal behavior.
Emit an error only when the module was opened but required symbols are missing.
This commit includes h265 main-10 profile support if the device can
decode it.
Note that since h264 10-bit decoding is not supported by NVIDIA GPUs
for now, the additional code path for the h264 high-10 profile is a
preparation for a future NVIDIA enhancement.
GstVideoDecoder::drain/flush can be called at a very early state,
with the stream-start and flush-stop events, respectively.
Draining with a NULL CUvideoparser seems to be unsafe and eventually
fails.
It is possible that the output region size (e.g. 192x144) is different
from the coded picture size (e.g. 192x256). We may adjust the alignment
parameters so that the padding is respected in GstVideoInfo and use
GstVideoInfo to calculate mfx frame width and height
This fixes the error below when decoding a stream which has different
output region size and coded picture size
0:00:00.057726900 28634 0x55df6c3220a0 ERROR msdkdec
gstmsdkdec.c:1065:gst_msdkdec_handle_frame:<msdkh265dec0>
DecodeFrameAsync failed (failed to allocate memory)
Sample pipeline:
gst-launch-1.0 filesrc location=output.h265 ! h265parse ! msdkh265dec !
glimagesink
... and add our stub cuda header.
The newly introduced stub cuda.h file defines the minimal types needed
to build the nvcodec plugin without a system-installed CUDA toolkit
dependency. This will make cross-compilation possible.
* By this commit, if there is more than one device, an nvenc
element factory will be created per device, like
nvh264device{device-id}enc and nvh265device{device-id}enc,
in addition to nvh264enc and nvh265enc, so that the element factory
can expose the exact capability of the device for the codec.
* Each element factory will have a fixed cuda-device-id
which is determined during plugin initialization
depending on the capability of the corresponding device
(e.g., when only the second device can encode h265 among two GPUs,
then nvh265enc will choose "1" (zero-based numbering)
as its target cuda-device-id). As we have an element factory
per GPU device, the "cuda-device-id" property is changed to read-only.
* nvh265enc gains the ability to encode
4:4:4 8-bit and 4:2:0 10-bit formats and up to 8K resolution,
depending on device capability.
Additionally, I420 GLMemory input is supported by nvenc.
Only the default device has been used by NVDEC so far.
This commit makes it possible to use a registered device id.
To simplify device id selection, the GstNvDecCudaContext usage is removed.
By this commit, each codec has its own element factory, so the nvdec
element factory is removed. Also, if there is more than one device,
an additional nvdec element factory will be created per device, like
nvh264device{device-id}dec, so that the element factory can expose
the exact capability of the device for the codec.
The callbacks of CUvideoparser are called on the streaming thread,
so the use of an async queue has no benefit.
Make the control flow straightforward instead of a long while/switch loop.
Previously we would've reported that there is a signal unless we knew
for sure that we don't have a signal. For example, a signal would've
been reported before the device was even opened.
Now keep track of whether the signal state is unknown or not, and
report no signal if we don't know yet. As before, only send an INFO
message about signal recovery if we actually had a signal loss before.
... and put them into the new nvcodec plugin.
* nvcodec plugin
Now the nvenc and nvdec elements are moved to be part of the nvcodec
plugin for better interoperability.
Additionally, the cuda runtime API header dependencies
(i.e., cuda_runtime_api.h and cuda_gl_interop.h) are removed.
Note that cuda runtime APIs have the prefix "cuda". Since the 1.16
release with Windows support, only "cuda.h" and "cudaGL.h" dependent
symbols have been used, except for some defined types. However, those
types could be replaced with other types defined by "cuda.h".
* dynamic library loading
The CUDA library will be opened with g_module_open() instead of
build-time linking.
On Windows, nvcuda.dll is installed to the system path by the CUDA
Toolkit installer, and on *nix, the user should ensure that
libcuda.so.1 is loadable (i.e., via LD_LIBRARY_PATH or the default
dlopen path).
Therefore, the NVIDIA_VIDEO_CODEC_SDK_PATH build-time dependency for
Windows is removed.
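A minimal sketch of the run-time loading approach (demo_load_cuda is
illustrative; cuda.h here is assumed to provide CUresult, which the
stub header mentioned elsewhere in this series could supply):

#include <gmodule.h>
#include <cuda.h>

typedef CUresult (*CuInitFunc) (unsigned int flags);

static gboolean
demo_load_cuda (void)
{
  GModule *module;
  CuInitFunc cu_init = NULL;
#ifdef G_OS_WIN32
  const gchar *filename = "nvcuda.dll";
#else
  const gchar *filename = "libcuda.so.1";
#endif

  /* a missing driver library is not a build error, just report failure */
  module = g_module_open (filename, G_MODULE_BIND_LAZY);
  if (!module)
    return FALSE;

  /* only treat it as an error if the module opened but symbols are missing */
  if (!g_module_symbol (module, "cuInit", (gpointer *) & cu_init)) {
    g_module_close (module);
    return FALSE;
  }

  return cu_init (0) == CUDA_SUCCESS;
}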
Direct3D11 was shipped as part of Windows 7 and is obviously the
primary graphics API on Windows.
This plugin includes HDR10 rendering if the following requirements
are satisfied:
* IDXGISwapChain4::SetHDRMetaData is available (declared in dxgi1_5.h)
* The display can support the DXGI_COLOR_SPACE_RGB_FULL_G2084_NONE_P2020 color space
* Upstream provides a 10-bit-depth format with smpte-st 2084 static metadata
The MFX_FOURCC_VP9_SEGMAP surface in MSDK is an internal surface,
however MSDK still calls the external allocator for this surface, so
this plugin has to return UNSUPPORTED and force MSDK to allocate the
surface using the internal allocator.
See https://github.com/Intel-Media-SDK/MediaSDK/issues/762 for details
The call of MFXVideoENCODE_EncodeFrameAsync may not generate output,
and the function returns MFX_ERR_MORE_DATA with a NULL sync point;
the input frame is cached in this case. So it is possible that all
allocated frames go into the surfaces_used list after calling
MFXVideoENCODE_EncodeFrameAsync a few times, and then the encoder
will fail to get an available surface before releasing used frames.
This patch adds a new num_extra_frames field to GstMsdkEnc and allows
encoder elements to require extra frames; the default value is 0.
This patch is a preparation for the msdkvp9enc element.
msdkenc supports CSC implicitly, so it is possible that two VPP
processes are required when a pipeline contains both msdkvpp and
msdkenc. Before this fix, msdkvpp and msdkenc might share the same
context, hence the same mfx session, which results in
MFX_ERR_UNDEFINED_BEHAVIOR in MSDK because an mfx session has at most
one VPP process.
This fixes the broken pipelines below:
gst-launch-1.0 videotestsrc ! video/x-raw,format=I420 ! msdkh264enc ! \
msdkh264dec ! msdkvpp ! video/x-raw,format=YUY2 ! fakesink
gst-launch-1.0 videotestsrc ! msdkvpp ! video/x-raw,format=YUY2 ! \
msdkh264enc ! fakesink
MSDK supports JPEG YUY2 (422 chroma) output color
format. The color format of input bitstream is
described by JPEGChromaFormat and JPEGColorFormat
fields in the mfxInfoMFX structure which is filled
in by the MFXVideoDECODE_DecodeHeader function.
To obtain lossless decoded output from 422 encoded
JPEGs, we must set the output color format in the
FourCC and ChromaFormat fields in the mfxFrameInfo
structure to the appropriate values at post_configure
so that they are propagated through to the srcpad
caps accordingly.
A post_configure virtual method is added to allow
codec subclasses to adjust the initialized parameters
after MFXVideoDECODE_DecodeHeader is called from the
gstmsdkdec::gst_msdkdec_handle_frame function.
This is useful if codecs want to adjust the output
parameters based on the codec-specific decoding
options that are present in the mfxInfoMFX structure
after MFXVideoDECODE_DecodeHeader initializes them.
We were not setting self->segment, and we use it when notifying
downstream that we handled a REQUEST_KEY_UNIT event, leading to all
sorts of criticals.
It's been replaced by NVENC/NVDEC and even NVIDIA doesn't
support VDPAU any longer and hasn't for quite some time.
The plugin has been unmaintained and unsupported for a very
long time, and given the track record over the last 10 years
it seems highly unlikely anyone is going to make it work well,
not to mention adding plumbing for proper zero-copy or
gst-gl integration.
Closes https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/828
All DRM ioctls use errno to report the error and simply return -1
when an error occurs. This patch fixes all usages that checked the
return value instead of errno to trace the error type, and moves to
g_strerror instead of string.h's strerror in order to be consistent
with the rest of GStreamer.
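A minimal sketch of the intended error handling
(demo_create_dumb_buffer is illustrative): the ioctl only signals
failure with -1, and the reason lives in errno, printed with
g_strerror().

#include <errno.h>
#include <glib.h>
#include <xf86drm.h>

static gboolean
demo_create_dumb_buffer (int fd, guint width, guint height)
{
  struct drm_mode_create_dumb create = { 0, };

  create.width = width;
  create.height = height;
  create.bpp = 32;

  if (drmIoctl (fd, DRM_IOCTL_MODE_CREATE_DUMB, &create) == -1) {
    /* the return value only says "failed"; errno tells us why */
    g_warning ("DRM_IOCTL_MODE_CREATE_DUMB failed: %s", g_strerror (errno));
    return FALSE;
  }

  return TRUE;
}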
separateColourPlaneFlag is mapped to separate_colour_plane_flag,
which means the Y, U and V planes are processed separately as
monochrome sampled pictures. So the encoder shouldn't set that flag
for normal 4:4:4 encoding.
Also, for 4:4:4 encoding, the NV_ENC_H264_PROFILE_HIGH_444_GUID
profile must be explicitly set.
The workaround for https://github.com/Intel-Media-SDK/MediaSDK/issues/1139
is required for vp8 only, so move this workaround to the corresponding
postinit_decoder function
The pipeline below works with this change
gst-launch-1.0 filesrc location=SA10104.vc1 ! \
'video/x-wmv,profile=(string)advanced',width=720,height=480,framerate=14/1 ! \
msdkvc1dec ! fakesink
MFXVideoDECODE_DecodeHeader only parses the sequence layer for VC1,
so the structure is unknown for a stream with the interlace flag set
in the sequence layer. If this plugin forces the struct to
progressive, MediaSDK will fail to decode such streams.
It is possible that MFXVideoDECODE_DecodeFrameAsync returns
MFX_ERR_INCOMPATIBLE_VIDEO_PARAM and that this error can't be
recovered by retrying MFXVideoDECODE_DecodeFrameAsync in some cases,
so we need to limit the number of retries to avoid an infinite loop.
This fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/909
To drain all queued encoding items, the encoder should gracefully
wait for the encoding thread without stealing queued items.
Otherwise, some input frames can be dropped.
../subprojects/gst-plugins-bad/sys/msdk/msdk.c(61): error C2065: 'MFX_FOURCC_RGB565'
The minimum required version for the format seems to be MFX_VERSION >= 1028
Returning MFX_ERR_INCOMPATIBLE_VIDEO_PARAM from
MFXVideoDECODE_DecodeFrameAsync means the allocated mfx surface is
not suitable for the current frame; we need a new mfx surface and
have to try MFXVideoDECODE_DecodeFrameAsync again.
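A rough sketch of that retry, assuming a caller-provided callback
hands out fresh work surfaces and the retry count is capped to avoid
looping forever (all names here are illustrative):

#include <mfxvideo.h>

typedef mfxFrameSurface1 *(*DemoGetSurfaceFunc) (void *user_data);

static mfxStatus
demo_decode_with_retry (mfxSession session, mfxBitstream * bs,
    DemoGetSurfaceFunc get_surface, void *user_data)
{
  mfxFrameSurface1 *work, *out = NULL;
  mfxSyncPoint sync = NULL;
  mfxStatus status;
  int retries = 0;

  do {
    /* fetch a fresh work surface for every attempt */
    work = get_surface (user_data);
    if (!work)
      return MFX_ERR_MEMORY_ALLOC;

    status = MFXVideoDECODE_DecodeFrameAsync (session, bs, work, &out, &sync);
  } while (status == MFX_ERR_INCOMPATIBLE_VIDEO_PARAM && ++retries < 8);

  return status;
}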
When MFXVideoDECODE_DecodeFrameAsync () returns MFX_WRN_DEVICE_BUSY with
an output surface, a new input surface is required when retrying
MFXVideoDECODE_DecodeFrameAsync ().
This fixes the out-of-surface issue mentioned in
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/890
This gets rid of annoying message in the log, e.g. run the pipeline
below:
gst-launch-1.0 videotestsrc num-buffers=100 ! \
video/x-raw,format=NV12,width=352,height=288 ! msdkh264enc ! filesink \
location=test.h264
[LIBVA]:CRITICAL - DdiMedia_DestroyImage:4357: Invalid image
This makes the pipeline below work:
gst-launch-1.0 videotestsrc num-buffers=1 ! msdkvpp ! \
video/x-raw,format=UYVY ! filesink location=a.yuv
Once https://github.com/intel/media-driver/pull/526 in the media-driver
is merged, the pipeline below also works:
gst-launch-1.0 videotestsrc num-buffers=1 ! msdkvpp ! \
video/x-raw\(memory:DMABuf\),format=UYVY ! filesink location=a.yuv
The VCD source was ported in 2014 (commit 89eb1e9), but the necessary
"cdxaparse" plugin, which is used to "Parse a .dat file (VCD) into
raw mpeg1" was never ported.
This means that the probable main user for the feature, totem, hasn't
actually been able to play back VCDs since 2012, when it switched to
using GStreamer 1.0.
Note that even if cdxaparse was finally ported, a lot of work would
still be necessary before it is considered usable. Notably, it is
missing disc image support [1] and some VCDs just cannot be opened for
reading [2].
[1]: https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/898
[2]: https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/899
gstladspa.c:360:5: error: zero-length ms_printf format string [-Werror=format-zero-length]
vad_private.c:108:3: error: this decimal constant is unsigned only in ISO C90 [-Werror]
gstdecklinkvideosink.cpp:478:32: error: comparison between 'BMDTimecodeFormat {aka enum _BMDTimecodeFormat}' and 'enum GstDecklinkTimecodeFormat' [-Werror=enum-compare]
win/DeckLinkAPI_i.c:72:8: error: extra tokens at end of #endif directive [-Werror]
win/DeckLinkAPIDispatch.cpp:35:10: error: unused variable 'res' [-Werror=unused-variable]
gstwasapiutil.c:733:3: error: format '%x' expects argument of type 'unsigned int', but argument 8 has type 'DWORD' [-Werror=format]
gstwasapiutil.c:733:3: error: format '%x' expects argument of type 'unsigned int', but argument 9 has type 'guint64' [-Werror=format]
kshelpers.c:446:3: error: missing braces around initializer [-Werror=missing-braces]
kshelpers.c:446:3: error: (near initialization for 'known_property_sets[0].guid.Data4') [-Werror=missing-braces]
An output surface is returned but without a sync point when
MFXVideoDECODE_DecodeFrameAsync () returns MFX_ERR_MORE_DATA. This
surface should be released too, otherwise the surface stays occupied
and it is easy to exhaust all pre-allocated mfx surfaces.
Example pipeline (input_vp8.webm contains lots of frames with
show_frame set to 0):
gst-launch-1.0 filesrc location=input_vp8.webm ! matroskademux !
msdkvp8dec ! msdkvpp ! fakesink
0:00:05.995959693 19866 0x563f30f14590 ERROR default
gstmsdkvideomemory.c:77:gst_msdk_video_allocator_get_surface: failed to
get surface available
ERROR: from element
/GstPipeline:pipeline0/GstMatroskaDemux:matroskademux0: Internal data
stream error.
The input buffer is released in gst_msdkdec_finish_task () when
decoding some special clips, but this buffer is still in use, so ref
the input buffer before gst_msdkdec_finish_task () and unref it at
the end of gst_msdkdec_handle_frame ().
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/862
The picture structure in the output parameters from
MFXVideoDECODE_Query is set to MFX_PICSTRUCT_UNKNOWN for some codecs,
so the structure of the corresponding mfx surfaces created for
decoding is unknown. The pipeline will be broken when these surfaces
are used as the input for msdkvpp.
Example pipeline:
gst-launch-1.0 filesrc location=input_vp8.webm ! matroskademux !
msdkvp8dec ! msdkvpp ! fakesink
Error message:
0:00:00.031568911 14259 0x55b79dc684a0 ERROR msdkvpp
gstmsdkvpp.c:728:gst_msdkvpp_transform:<msdkvpp0> MSDK Failed to do VPP
ERROR: from element
/GstPipeline:pipeline0/GstMatroskaDemux:matroskademux0: Internal data
stream error.
This is a workaround for the above issue
The memory type was used as a bitwise enum, but the enum was not
defined in that way.
Nonetheless, most usages of the memory type were as mutually
exclusive options, rather than option composition.
This patch refactors how the memory type is defined, so that the
mutual exclusion among options is kept.
The 'channel-mask' field should not be put in caps if the channel
mask is 0x0.
Mapping the WASAPI channel mask to the GST equivalent was going only
over the first nChannels elements of the wasapi_to_gst_pos array,
translating, for example, WASAPI's 0x63f to GST's 0x3f instead of
0xc3f.
When 'channel-mask' is specified as NULL, it signifies that a downmix
or upmix is needed, and it makes caps negotiation with the
audioconvert element impossible. Just omit it.
Signed-off-by: Nirbheek Chauhan <nirbheek@centricular.com>
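A minimal sketch of the fixed mapping loop, walking every bit of the
WASAPI mask instead of only the first nChannels table entries
(wasapi_to_gst_pos is passed in here to keep the sketch
self-contained; its contents are an assumption):

#include <glib.h>

static guint64
demo_wasapi_mask_to_gst (guint32 channel_mask, guint n_channels,
    const guint * wasapi_to_gst_pos, guint table_len)
{
  guint64 gst_mask = 0;
  guint i, mapped = 0;

  for (i = 0; i < table_len && mapped < n_channels; i++) {
    if (!(channel_mask & (1u << i)))
      continue;                 /* this WASAPI position is not present */

    gst_mask |= G_GUINT64_CONSTANT (1) << wasapi_to_gst_pos[i];
    mapped++;
  }

  return gst_mask;
}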
When building the msdk plugin, even if libmfx is found, we should not
error out if the msdk dependencies are not found, unless the plugin
is explicitly enabled.
Also give an error message when we don't build the plugin on Windows
because we're not building with MSVC.
When the audio device goes away during playback or capture, we were
going into an infinite loop of AUDCLNT_E_DEVICE_INVALIDATED. Return -1
and post an error message so the ringbuffer thread exits with an error.
BRCParamMultiplier in mfxInfoMFX is a parameter which specifies a
multiplier for bitrate control parameters [1], it impacts TargetKbps,
MaxKbps, BufferSizeInKB and InitialDelayInKB.
[1]: https://software.intel.com/en-us/node/628473
According to the MSDK documentation [1], MFXCloneSession is a
light-weight equivalent of MFXJoinSession after MFXInit, so the
MFXJoinSession call isn't needed in the msdk plugin; otherwise the
cloned session is joined to the parent session twice, and we will get
an MFX error when closing the parent session.
Example pipeline:
gst-launch-1.0 videotestsrc num-buffers=100 ! \
video/x-raw,format=NV12,width=352,height=288 ! msdkh264enc ! msdkh264dec ! \
msdkh264enc ! fakesink
Error message:
0:00:00.211948518 21733 0x5586ee741c60 ERROR msdk
msdk.c:148:msdk_close_session: Close failed (undefined behavior)
[1]: https://software.intel.com/en-us/node/628429#MFXCloneSession
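A minimal sketch of the change (demo_clone_session is illustrative):
MFXCloneSession already joins the clone to the parent, so no
MFXJoinSession call is made afterwards.

#include <mfxvideo.h>

static mfxStatus
demo_clone_session (mfxSession parent, mfxSession * clone)
{
  mfxStatus status = MFXCloneSession (parent, clone);

  if (status != MFX_ERR_NONE)
    return status;

  /* No MFXJoinSession (parent, *clone) here: that would join the
   * session twice and make MFXClose (parent) fail later. */
  return MFX_ERR_NONE;
}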
In gst-msdk, an mfx session may be shared between different gst
elements, and each element tries to set the frame allocator. However,
per the MSDK documentation [1], the behavior is undefined if the
frame allocator is reset while the previous allocator is in use.
Fortunately all elements use the same frame allocator, so we can
avoid calling MFXVideoCORE_SetFrameAllocator again.
[1]: https://software.intel.com/en-us/node/628430#MFXVideoCORE3
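A minimal sketch of the guard (demo_set_allocator_once is
illustrative; the "done" flag stands in for state kept in the shared
context):

#include <glib.h>
#include <mfxvideo.h>

static mfxStatus
demo_set_allocator_once (mfxSession session, mfxFrameAllocator * allocator,
    gboolean * done)
{
  mfxStatus status = MFX_ERR_NONE;

  /* only the first element sets the allocator on the shared session */
  if (!*done) {
    status = MFXVideoCORE_SetFrameAllocator (session, allocator);
    if (status == MFX_ERR_NONE)
      *done = TRUE;
  }

  return status;
}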
If so, BGRA is the preferred output format, and hence BGRA will be
selected as the input format by default, e.g. in the pipeline below
BGRA instead of NV12 is selected without renegotiation, so we can
avoid the NV12 issue (see commit 3f2314a) by default.
gst-launch-1.0 videotestsrc ! msdkvpp ! glimagesink
Otherwise MFXVideoVPP_Init will fail because it is called twice without
a close.
Example pipeline:
gst-launch-1.0 videotestsrc ! msdkvpp ! glimagesink
Sometimes glimagesink emits a GST_EVENT_RECONFIGURE event, which
results in MFXVideoVPP_Init being called twice; then we get the
negotiation failure below:
0:00:00.093715518 21218 0x558ef56231e0 ERROR msdkvpp
gstmsdkvpp.c:995:gst_msdkvpp_initialize:<msdkvpp0> Init failed
(undefined behavior)
WARNING: from element /GstPipeline:pipeline0/GstMsdkVPP:msdkvpp0: not
negotiated
After applying this commit, the pipeline above may run without a
negotiation failure; however, the NV12 layout in dmabuf mode is
selected in renegotiation and the displayed image is corrupted due to
the NV12 issue mentioned in commit 3f2314a. Some other fixes are
needed to avoid renegotiation by default.
In general, we should assume any unhandled error is
non-recoverable.
In the flush frames loop, some error states can cause us
to never increment the task and therefore we get stuck
in an infinite loop and generate GST_ELEMENT_ERROR
over and over again. This eventually consumes all
system memory and triggers OOM. Thus, assume the worst
and break out of the loop upon the first "unhandled" error.
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/859
When either the source or sink goes from PLAYING -> NULL -> PLAYING,
we call _reset() which sets client_needs_restart, and then we call
prepare() which calls IAudioClient_Start(), so we don't need to call
it again in src_read() or sink_write(). Unlike when we're just going
PLAYING -> PAUSED -> PLAYING.
configure_mode_setting() keeps a ref on tmp_kmsmem which is released
in gst_kms_sink_show_frame().
But if for some reason configure_mode_setting() is re-called before
showing a frame, or if none is ever shown, this memory was leaked.
ACM is an ancient legacy API, and there's no point in
keeping it around for a licensed mp3 decoder now that
mp3 patents have expired and we have a decoder in -good.
We didn't ship this in cerbero anyway. If there's a good
case for the AAC encoder (which is LC only anyway) someone
should write a new plugin based on current APIs, that can
actually be built out of the box.
Fixes #850
As a side-effect we can now actually store the line offset in the
line21dec element, and have to perform fewer transformations in the
decklink elements (which were also buggy as they assumed a single byte
triplet per meta).
We will add more profiles in the sink caps of msdkh265enc, so let
msdkh265enc re-add the sink pad template. Note this change doesn't
impact any capability
Fixes the time calculations when dealing with a slaved clock (as
will occur with more than one decklink video sink), and when
performing flushing seeks that cause stalls in the output timeline,
or pausing.
Tighten up the calculations by relying solely on the internal time
(from the internal clock) for determining when to schedule display
frames, instead of attempting to track pause lengths from the
external clock and converting to internal time. This results in a
much easier offset calculation for choosing the output time and
ensures that the clock is always advancing when we need it to.
This is a fixup to the 'monotonically increasing output timestamps'
goal in: bf849e9a69
CodecProfile will be set in MFXVideoDECODE_DecodeHeader() to match
the input stream. Setting a hard-coded profile here would mislead
users into thinking that msdkh265dec supports a special profile only.
Previously, alloc_info was initialized only when both
thiz->initialized and thiz->allocation_caps were true, but only
thiz->initialized is checked where alloc_info is used.
Instead of relying on buffers after a state change to PLAYING to always start
from 0, track the amount of time we have spent outside playing but not changed
state to PAUSED.
Note that, since NVIDIA does not provide an nvEncodeAPI.lib file,
find_library() couldn't be used for the build on Windows.
This patch changes the code to load nvEncodeAPI(64).dll or
libnvidia-encode.so at runtime.
dynlink_* was introduced in CUDA Toolkit 9.x but is deprecated since
10.0. Instead of using #ifdef hacks, shipping the NVIDIA headers of
the NVIDIA CODEC SDK keeps the build/code simple.
MediaSDK has been released as open source [1], but the directories
where it installs its files are different from the binary-only
distribution.
This patch adds the /lib directory to the libraries path. Also, a
define is set in meson if the include directory has the mfx/ prefix,
something that is already handled in autotools.
1. https://github.com/Intel-Media-SDK/MediaSDK
"required" keyword is not a valid argument for has_header()
WARNING: Passed invalid keyword argument "required".
WARNING: This will become a hard error in the future.
These functions are not used at all, and using them together with the
transport-volume property from avdtpsrc may end up in a binding loop,
so we better remove them.
If properties are proxied through GBinding, this can work only if the
proxied property keeps its own value. The previous implementation
would read the original value when the proxied property signals a
change, and thus nothing would happen.
Right now this only works for video. An attempt was made at adding
monitoring following the example of winks, but it seems the only
devices that can be easily detected are KS sources, which winks
already handles.
The previous behaviour had issues when setting one of the device properties
after _get_caps had been called. The device shouldn't be locked in until after
_start has been called.
The 2018.3.1 Intel SDK release places libraries into /lib64 instead
of /lib/lin_x64 or /lib/x64. This commit adds /lib64 to the libdir
locations list.
Fixes #815
Before this patch, if mode=auto and video-format!=auto, video-format would
always be ignored, and get set to 8bit-yuv, or if detected to be RGB444, then
it would be set to 8bit-argb. This change respects video-format if it is set
to 10bit-yuv (v210) or 8bit-bgra, even when mode=auto.
Closes #772
There's a race condition when is-live is set to true and the shmsrc
element releases the pipe in the transition from PLAYING to PAUSED.
To avoid it, this change ensures that the _create method takes the
pipe and increases the use_count in one operation protected by the
object lock. Also perform the appropriate protections when releasing
the pipe.
https://bugzilla.gnome.org/show_bug.cgi?id=797203
../sys/decklink/gstdecklinkvideosink.cpp:1006:11: error: ‘GstDecklinkVideoSink {aka struct _GstDecklinkVideoSink}’ has no member named ‘scheduled_stop_time’
self->scheduled_stop_time = start_time;
^
Decklink sometimes does not notify us through the callback that it
has stopped scheduled playback, either because it was uncleanly shut
down without an explicit stop or for other unknown reasons.
Wait on the cond for a short amount of time before checking if
scheduled playback has stopped without notification.
https://bugzilla.gnome.org/show_bug.cgi?id=797130
This is part of a much larger goal to keep the frames we schedule to
decklink always increasing. This also allows us to avoid using both
the sync and async frame display functions, which aren't recommended
to be used together.
If the output timestamps are not always increasing, decklink seems to
hold onto the latest frame and may cause a flash in the output if the
played sequence has a framerate less than the video output.
Scenario is: play for N seconds, pause, flushing seek to some other
position, play again. Each of the play sequences would normally start
at 0 with the decklink time. As a result, the latest frame from the
previous sequence is kept alive waiting for its timestamp to pass,
before either being dropped (if a subsequent frame in the new
sequence overrides it) or displayed, causing the out-of-place frame
to be displayed.
This is also supported by the debug logs from the decklink video sink
element, where a ScheduledFrameCompleted() callback would not occur
for the frame until the above had happened.
Whether the frame was displayed was timing-related, based on the
decklink refresh cycle (which seems to be 16ms here), when the frame
was scheduled by the sink, and the difference between the 'time since
vblank' of the two play requests (and thus the start times of
scheduled playback).
https://bugzilla.gnome.org/show_bug.cgi?id=797130
This is now handled directly in gstaudiosrc/sink, and we were setting
it in the wrong thread anyway. prepare() is not the same thread as
sink_write() or src_read().
This adds "restore-crtc" property using which one
can restore previous crtc mode.
By default it is enabled, if CRTC was already
active with a valid mode and kmssink set a new mode
on CRTC using force-modesetting.
This helps user restore previous crtc mode and get
the previous session back after running a kmssink
pipeline involving a force-modesetting.
For e.g. When running a kmssink pipeline on rpi
using force-modesetting on tty console, it was giving
a blank screen after pipeline, and now with help of restore-crtc
functionality, CRTC is set with previous crtc mode
previously active on tty console.
Edited-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
https://bugzilla.gnome.org/show_bug.cgi?id=797025
This allows setting properties that contain spaces. The spaces are
replaced with '-'. As an example, one can set the connector property
"scaling mode" with the following:
... ! kmssink connector-properties="s,scaling-mode=1"
https://bugzilla.gnome.org/show_bug.cgi?id=797027
Can be used to pass custom connector properties to DRM. Properties can
be enumerated using modetest tool. These properties can then be applied
with the following gst-launch-1.0 syntax. Note that the name of the
structure is ignored.
... ! kmssink connector-properties="s,props1=value,props2=value"
https://bugzilla.gnome.org/show_bug.cgi?id=797027
drmModeGetFB returns -EINVAL for multi-planar framebuffers. Instead of
depending on the framebuffer dimensions to select the mode, use width
and height from GstVideoInfo, which was used to create the framebuffer
in the first place. This enables kmssink to display multi-planar
formats such as I420 or NV12 with modesetting enabled.
https://bugzilla.gnome.org/show_bug.cgi?id=796985
Since both audio and video capture devices declare the KSCATEGORY_CAPTURE interface,
plugging a camera that supports both could result in an audio device being mistaken
for a video one.
https://bugzilla.gnome.org/show_bug.cgi?id=796958
With the Windows 8.1 SDK, the v1 of the AUDCLNT_STREAMOPTIONS enum is
defined which only has NONE and RAW, so it's not only defined when
AudioClient3 is available.
Add a meson check for the symbol. This is not needed for Autotools
because there we build against the MinGW audioclient.h which is still
at v1 of the AudioClient interface.
The mxsfb-drm driver has been added to the kernel long ago and will now
be the default display driver for NXP i.MX28, i.MX6SX and i.MX7D
processors so now is a good time to add it to kmssink.
Also, this is used in the upcoming i.MX8MQ and i.MX8MM processors.
https://bugzilla.gnome.org/show_bug.cgi?id=796873
Otherwise decklink seems to hold onto the latest frame and may cause a
flash in the output if the played sequence has a framerate less than the
video output.
Scenario is: play for N seconds, pause, flushing seek to some other
position, play again. Each of the play sequences would normally start
at 0 with the decklink time. As a result, the latest frame from the
previous sequence is kept alive waiting for its timestamp to pass,
before either being dropped (if a subsequent frame in the new
sequence overrides it) or displayed, causing the out-of-place frame
to be displayed.
This is also supported by the debug logs from the decklink video sink
element, where a ScheduledFrameCompleted() callback would not occur
for the frame until the above had happened.
Whether the frame was displayed was timing-related, based on the
decklink refresh cycle (which seems to be 16ms here), when the frame
was scheduled by the sink, and the difference between the 'time since
vblank' of the two play requests (and thus the start times of
scheduled playback).
Use async_depth for the latency calculation instead of the length of
the Tasks array, which could be NULL since we don't do the msdk
decoder init in set_format().
According to the MediaSDK specification, Width must be a multiple of
16, and Height must be a multiple of 16 for progressive frame
sequences and a multiple of 32 otherwise.
This patch sets the default alignment to 16 for width and 32 for
height.
https://bugzilla.gnome.org/show_bug.cgi?id=796566
In cases where we do a hard reset, the current code destroys the
frame which has the new resolution a bit early, and this causes
buffer_unmap warnings. Keep an extra ref to the frame internally to
avoid this.
The gst-msdk decoders only support packetized formats for all codecs
except VC1. For VC1, it supports codec_data for advanced profiles,
and this codec_data wasn't being submitted to MSDK's DecodeHeader
APIs.
Make sure the subclass decoders are correctly configured so that the
codec_data buffers are in place in the internal adapter for
MediaSDK's DecodeHeader usage.
Currently we use gst_video_decoder_get_oldest_frame() to get the
oldest pending frame to output. But this is not correct if PTS
re-ordering is required. This patch uses a custom get_old_frame()
which accounts for the PTS too, similar to the v4l2 decoder.
https://bugzilla.gnome.org/show_bug.cgi?id=796699
This patch adds a series of changes to support dynamic resolution
change and efficient utilization of resources.
Major changes:
-- Use MSDK's APIs to retrieve the headers instead of only relying on
upstream notification. For example, the avc decoder requires SEI
header information for the dpb count calculation, which we don't get
from caps.
-- For all codecs other than VP9, we force a reset of the decoder if
the resolution changes, to fit with the gstreamer flow. VP9 enforces
the hard reset only if the new resolution is bigger.
-- Delay the src caps setting till the msdk APIs' invocation in
handle_frame, to avoid caching multiple configuration values.
-- Ensure pool negotiation is based on the decoder's allocation_caps.
-- On dynamic resolution change, use an explicit allocation query to
reclaim the buffers before closing the decoder (thanks to v4l2dec).
-- In case we don't get an upstream notification of the resolution
change (e.g. this can happen for vp9 frames with an ivf header where
ivfparse is not able to notify the dynamic changes), we handle the
case based on MFX_ERR_INCOMPATIBLE_VIDEO_PARAM, which is the return
value of MFXVideoDECODE_DecodeFrameAsync.
-- Calculate the minimum number of surfaces to be preallocated based
on the msdk suggestion, downstream requirements, async depth and
scratch surface count for smooth display.
https://bugzilla.gnome.org/show_bug.cgi?id=796566
The transform from mediacodec applies to the texture coords, but the
GStreamer affine meta applies to the video geometry, which is the
opposite, so invert it to get the display correct for decoders that
require transforming.
According to the msdk spec, there are two ways to enable filters:
1: Filters can be enabled by adding a filter ID to mfxExtVPPDoUse. In
this case, default filter parameters are used.
2: Add filter configuration structures directly to mfxVideoParam.
Using 1 together with 2 is optional but legal. Unfortunately it won't
work with some specific use cases like Detail/EdgeEnhancement. Let's
stick with option 2, which works fine for all VPP operations.
https://bugzilla.gnome.org/show_bug.cgi?id=796468
Since we do the MSDK initialization in set_caps(), a FALSE return may
still cause set_caps() to be invoked again, and this will lead to
buffer allocation and other mess-ups. So make sure msdk is
initialized correctly before trying to do any buffer allocation.
https://bugzilla.gnome.org/show_bug.cgi?id=796465
Make sure all the enabled filter structures are added to the
mfxVideoParam before doing the VPPQuery, so that msdk can validate
the input parameters.
https://bugzilla.gnome.org/show_bug.cgi?id=796465
Update the license blurb in camconditionalaccess.[hc] from GPL to LGPL.
The plugin is LGPL and the GPL header in those two files was just a
copy/paste mistake.
Using the NV12 layout in dmabuf mode gives mis-aligned VPP output
with the media-driver. Keep the NV12 support (so that we can file the
bug against msdk or the media-driver), but lower the ordering so that
BGRA is picked as default.
The NV12 issue can be reproduced with an explicit capsfilter:
videotestsrc ! msdkvpp ! video/x-raw\(memory:DMABuf\),format=NV12 ! glimagesink
Added a utility method to replace the MemID (internal VASurfaceID)
associated with the mfxFrameSurface. This is useful for
dmabuf-import, where we need to replace the MemID dynamically.
https://bugzilla.gnome.org/show_bug.cgi?id=794817
Exporting a DRM_PRIME fd to a VASurface requires direct invocation
of the VA API vaCreateSurfaces with
VASurfaceAttribExternalBufferDescriptor and other necessary surface
attributes.
https://bugzilla.gnome.org/show_bug.cgi?id=794817
In case the wasapi buffer levels got low in shared mode, we would
still wait until more buffer space is available before writing
something into it, which means we could never catch up and recover.
Instead, only wait for a new buffer in case the existing one is full,
and always write what we can. Also don't loop until all data is
written, since the base class can handle that for us and under normal
circumstances this doesn't happen anyway.
This only works in shared mode, as in exclusive mode we have to
exactly fill the buffer and always have to wait first.
This fixes noisy (buffer underrun) playback with the wasapisink under load.
https://bugzilla.gnome.org/show_bug.cgi?id=796354
The calculation of the frame count in the non-aligned case resulted
in a buffer frame count that was one too low.
This resulted in:
1) Exclusive mode not working, as the frame count has to match
exactly there.
2) Buffer underruns in shared mode, as the current write() code
doesn't handle catching up to low buffer levels (fixed in the next
commit).
To fix it, just use the wasapi API to get the buffer size, which will
always be correct.
https://bugzilla.gnome.org/show_bug.cgi?id=796354
S_FALSE is a valid return value which does not indicate an error.
For example IAudioClient_Stop() returns S_FALSE when it is already stopped.
Use the FAILED macro instead, which just checks whether an error occurred or not.
This fixes spurious warnings when using the wasapisink element.
https://bugzilla.gnome.org/show_bug.cgi?id=796280
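A minimal sketch of the check being fixed (demo_stop_client is
illustrative): S_FALSE passes FAILED(), real errors do not.

#define COBJMACROS
#include <windows.h>
#include <audioclient.h>
#include <glib.h>

static gboolean
demo_stop_client (IAudioClient * client)
{
  /* returns S_FALSE (a success code) when the client is already stopped */
  HRESULT hr = IAudioClient_Stop (client);

  return !FAILED (hr);
}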
This is a new warning introduced by gcc 8
We already check just before that we have enough space, just do a regular
memcpy with the full string size.
camswclient.c:87:3: error: ‘strncpy’ specified bound depends on the length of the source argument [-Werror=stringop-overflow=]
'cuDeviceComputeCapability' was deprecated as of CUDA 5.0
gstnvenc.c: In function ‘gst_nvenc_create_cuda_context’:
gstnvenc.c:290:9: error: ‘cuDeviceComputeCapability’ is deprecated [-Werror=deprecated-declarations]
&& cuDeviceComputeCapability (&maj, &min, cdev) == CUDA_SUCCESS) {
^
https://bugzilla.gnome.org/show_bug.cgi?id=796203
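One possible replacement (an assumption, not necessarily the exact
change made here): query the compute capability with
cuDeviceGetAttribute() instead of the deprecated call.

#include <cuda.h>

static CUresult
demo_get_compute_capability (CUdevice dev, int *major, int *minor)
{
  /* cuDeviceComputeCapability() is deprecated since CUDA 5.0 */
  CUresult ret = cuDeviceGetAttribute (major,
      CU_DEVICE_ATTRIBUTE_COMPUTE_CAPABILITY_MAJOR, dev);

  if (ret != CUDA_SUCCESS)
    return ret;

  return cuDeviceGetAttribute (minor,
      CU_DEVICE_ATTRIBUTE_COMPUTE_CAPABILITY_MINOR, dev);
}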
The new property "output-order" can be set to either "display" order
which is the default where frames will be outputting in display order,
or "decoded-order" which will be outputting the frames in decoded order.
The "decoded order" output is generally useful for debugging. But there
are few
customers who use it for low-latency streaming. For eg if the customer
already knows that the stream doesn't have b-frames (which means no
algorithm requires for display order calculation), then they can use
"decoded-order"
output to skip some of the DPB logic to avoid the frame accumulation at
start-up.
The root cause of the above issue is a bit of unclarity in h264 spec +
lazy implementation of many H264 encoders; This is well handled in
gstreamer-vaapi using "low-latency" property:
https://bugzilla.gnome.org/show_bug.cgi?id=762509https://bugzilla.gnome.org/show_bug.cgi?id=795783
For packetized input, inform the msdk that the buffer has
a complete frame or complementary field pairs. For decoding,
this means that the decoder can proceed with this buffer without
waiting for the start of the next frame, which effectively reduces
decoding latency.
https://bugzilla.gnome.org/show_bug.cgi?id=795783
Currently we use an async depth of 4 as default (based on
recommendations in msdk apps), which indicates how many asynchronous
operations an application performs before the application explicitly
synchronizes the result. As a result, we queue four frames in the
decoder, which might not be a good approach for live streaming.
This patch resets the default async-depth to 1 so that we sync for
each frame we decode without queuing. Customers can play with the
already exposed "async-depth" property for other use cases.
https://bugzilla.gnome.org/show_bug.cgi?id=795783
So far, msdk-produced dmabuf fds are non-mappable.
If the user wants to download the content of the underlying surfaces,
a pipeline negotiated with the dmabuf caps feature will fail. So if
the input surface is dmabuf and downstream doesn't have support for
the dmabuf caps feature, we do the vpp (no passthrough) and produce
mappable video-memory buffers.
https://bugzilla.gnome.org/show_bug.cgi?id=794946
propose_allocation:
-- always instantiate a pool for upstream
-- use async_depth + 1 as the minimum buffer count
decide_allocation:
-- always create a new bufferpool for the source pad.
Each msdk element has to create its own mfx surface pool, which is an
msdk constraint. For example, each msdk component (vpp, dec and enc)
will invoke the external frame allocator for video-memory usage.
So sharing the pool between gst-msdk elements might not be a good idea.
https://bugzilla.gnome.org/show_bug.cgi?id=793705
If the "output-cc" property is set to TRUE and there is CC present
in the VBI Ancillary Data, they will be extracted and set on the
outgoing buffer as GstVideoCaptionMeta.
Only CDP packets are supported.
https://bugzilla.gnome.org/show_bug.cgi?id=773863
This adds an entry for the new DRM driver from Xilinx called "xlnx",
which supports atomic modesetting.
We have kept the entry for the older DRM driver "xilinx_drm" for
backward compatibility, with a note describing its deprecation.
Signed-off-by: Devarsh Thakkar <devarsht@xilinx.com>
https://bugzilla.gnome.org/show_bug.cgi?id=795228
The clock seems to have a lot of drift (or we're using it incorrectly)
which causes buffers to be late on the sink and get dropped.
Disable till someone can investigate whether our usage of the API is
incorrect (it looked correct to me) or if something is wrong.
Since cuda-tools 9.0, nvcuvid.h is replaced by dynlink_nvcuvid.h.
This patch changes nvdec to use run-time dynamic linking if
cuda-tools version >= 9.
nvenc does not require any change since its necessary headers are
still available.
https://bugzilla.gnome.org/show_bug.cgi?id=791724
Using the default value (InterleavedDec == MFX_SCANTYPE_UNKNOWN)
causes issues with non-interleaved sample decode. Ideally the usage
of MFXVideoDECODE_DecodeHeader should fix this type of issue, but it
seems not to. Hardcoding InterleavedDec to
MFX_SCANTYPE_NONINTERLEAVED fixes the problem, and fortunately msdk
seems to be taking care of interleaved samples too. So let's hardcode
it for now.
https://bugzilla.gnome.org/show_bug.cgi?id=793787
We can just return the template caps till the device is opened when
going from READY -> PAUSED. This fixes a CRITICAL when calling
ELEMENT_ERROR before the ringbuffer is allocated.
Also fixes a couple of leaks in error conditions.
https://bugzilla.gnome.org/show_bug.cgi?id=794611
Now, when you set loopback=true on wasapisrc, the `device` property
should refer to a sink (render) device for loopback recording.
If the `device` property is not set, the default sink device is used.
This patch includes:
1\ Implements MsdkDmaBufAllocator and allocation of msdk dmabuf memory.
2\ Each msdk dmabuf memory includes its own msdk surface kept by GQuark.
3\ Adds a new option GST_BUFFER_POOL_OPTION_MSDK_USE_DMABUF
https://bugzilla.gnome.org/show_bug.cgi?id=793707
The parameter needs to be generalized from GstMsdkVideoMemory to
GstMemory, so that we can call these functions when using DMABuf
memory.
https://bugzilla.gnome.org/show_bug.cgi?id=793707
For example, if a framerate of 0/1 is provided from upstream, the
driver fails to configure and complains about it. We can let it go
and let the driver assume the framerate itself.
https://bugzilla.gnome.org/show_bug.cgi?id=789752
There is no log from gst_decklink_com_thread (), which initializes
COM. The initialization part is not active because of #ifdef MSC_VER,
while the Windows binaries are built with gcc. As with other code,
this is avoided by checking G_OS_WIN32 instead of MSC_VER.
https://bugzilla.gnome.org/show_bug.cgi?id=794652
The end of the encoding sequence was not handled in the encoder.
This patch drains any remaining internal streams, as the decoder
already does.
The documentation says:
"To mark the end of the encoding sequence, call this function with a
NULL surface pointer. Repeat the call to drain any remaining
internally cached bitstreams—one frame at a time—until
MFX_ERR_MORE_DATA is returned."
https://bugzilla.gnome.org/show_bug.cgi?id=793236
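A minimal sketch of the drain described by the documentation quote
above (demo_drain_encoder is illustrative):

#include <mfxvideo.h>

static void
demo_drain_encoder (mfxSession session, mfxBitstream * bs)
{
  mfxStatus status;
  mfxSyncPoint sync;

  do {
    sync = NULL;
    /* a NULL surface marks the end of the encoding sequence */
    status = MFXVideoENCODE_EncodeFrameAsync (session, NULL, NULL, bs, &sync);

    /* each drained frame comes with a sync point to wait on */
    if (sync)
      MFXVideoCORE_SyncOperation (session, sync, MFX_INFINITE);
  } while (status >= MFX_ERR_NONE);  /* MFX_ERR_MORE_DATA (or an error) ends the drain */
}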
Sometimes the parent context is released before its children are
released; in this case MFXClose on the parent session fails.
To make sure that child sessions are closed before closing a parent
session, the parent context needs to manage its child sessions and
close them first when it's released.
https://bugzilla.gnome.org/show_bug.cgi?id=793412
Currently a gst buffer has one mfxFrameSurface when it's allocated,
and this can't be changed. This is based on the assumption that the
lifetimes of the gst buffer and the mfxFrameSurface are the same. But
that's not true: sometimes, even if a gst buffer of a frame is
finished downstream, the mfxFrameSurface coupled with the gst buffer
is still locked, which means it's still being used in the driver.
So this patch does the following: every time a gst buffer is acquired
from the pool, it confirms whether the surface coupled with the
buffer is unlocked. If not, it replaces it with a new unlocked one.
In this way, the user (decoder or encoder) doesn't need to manage gst
buffers that include locked surfaces.
To do that, this patch includes the following:
1. GstMsdkContext
- Manages MSDK surfaces available, used and locked respectively, as
follows:
1\ surfaces_avail : surfaces which are free and unused anywhere
2\ surfaces_used : surfaces coupled with a gst buffer and being used
now.
3\ surfaces_locked : surfaces still locked even after the gst buffer
is released.
- Provides an API to get an available MSDK surface.
- Provides an API to release an MSDK surface.
2. GstMsdkVideoMemory
- Gets an available surface when it's allocated.
- Provides an API to replace the surface with a new unlocked one.
- Provides an API to release the surface in the msdk video memory.
3. GstMsdkBufferPool
- In acquire_buffer, every time a gst buffer is acquired, it gets a
new available surface from the list.
- In release_buffer, it confirms whether the buffer's surface is
unlocked or not.
- If unlocked, it is put on the available list.
- If still locked, it is put on the locked list.
This also fixes bug #793525.
https://bugzilla.gnome.org/show_bug.cgi?id=793413
https://bugzilla.gnome.org/show_bug.cgi?id=793525
Directsoundsrc/sink have multiple issues, most of which cannot be
fixed at all because the API is deprecated and is implemented as a
compatibility wrapper around WASAPI since Vista.
Users and developers should now use the wasapisrc/sink elements, and
future development efforts should go towards that.
The low-latency property is *always* safe to enable, so applications
that do realtime communication should set it, and the elements will
automatically configure WASAPI to use the lowest possible device
period, and the audioringbuffer in audiobasesink will also be
configured accordingly.
Applications can also use exclusive mode during capture and playback
for the lowest possible latency if they know that the device will not
be used by any other application.
In this mode, the latency-time and buffer-time properties will be
completely ignored.
The AudioClient3 API is only available on Windows 10, and we will
automatically detect when it is available and use it.
However, using it for capturing audio with low latency and without
glitches seems to require setting the realtime priority of the entire
pipeline to "critical", which we cannot do from inside the element.
Hence, we can only enable that by default for wasapisink since
apps should be able to safely set the low-latency property to TRUE if
they need low-latency capture or playback.
This allows us to request ultra-low-latency device periods even in
shared mode. However, this requires good drivers and Windows 10, so
we only enable this when we detect that we are running on Windows 10
at runtime.
You can forcibly disable this feature on Windows 10 by setting
GST_WASAPI_DISABLE_AUDIOCLIENT3=1 in the environment.
Since there is already an "adaptive-B" option, just use a boolean
property for B-pyramid enabling.
Fixme: Not sure whether this can be supported in vp8 and vp9. It
could be possible through GPB (B without backward reference) but I
can't verify it currently. We can move this to a common property once
it is verified with vp8 and vp9, without breaking any backward
compatibility.
https://bugzilla.gnome.org/show_bug.cgi?id=791637
Add a new property "trellis" to enable trellis quantization.
Keeping trellis as a flag value (which is boolean for gst x264 enc element)
since it is possible to enable/disable this seperately for
I,P and B frames through MediaSDK ext option headers.
The subclass implementations always need to inform base-encoder
if it requires the inclusion of Extend Header buffers (mfxExtCodingOption2
and mfxExtCodingOption3).
https://bugzilla.gnome.org/show_bug.cgi?id=791637
This option controls downsampling in the look-ahead bitrate control
mode. According to the spec it is only supported in AVC.
Fixme: Probably HEVC also has support for this in recent MSDK
versions. We could move the enumeration types to a common header
usable by multiple codecs.
https://bugzilla.gnome.org/show_bug.cgi?id=791637
MediaSDK has support for a number of rate control algorithms. Add
all possible options to the rate-control property.
Fixme1: In case of failure, we currently don't have a proper method
to show which rate-control has failed. It could be better to add some
extensive validation of the EncQuery output in case of error.
Unfortunately, not all rate-control methods are supported by every
codec, and we don't have dynamic detection of the supported
rate-control methods yet.
https://bugzilla.gnome.org/show_bug.cgi?id=791637
We have the property "i-frames" to set the IDR interval in a
gop. Unfortunately MSDK HEVC encoder behaves bit differently
for IdrInterval field, IdrInteval == 1 indicate every
I-frame should be an IDR (which is IdrInterval == 0 for other codecs),
IdrInteval == 2 means every other I-frame is an IDR
(which is IdrInterval == 1 for other codecs) etc.
So we generalize the behaviour of property "i-frames" by
incrementing the value by one in each case (only for HEVC).
https://bugzilla.gnome.org/show_bug.cgi?id=791637
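A minimal sketch of that generalization (demo_map_idr_interval is
illustrative; i_frames stands for the element's "i-frames" property
value):

#include <glib.h>
#include <mfxvideo.h>

static mfxU16
demo_map_idr_interval (mfxU32 codec_id, guint i_frames)
{
  /* HEVC interprets IdrInterval with an offset of one compared to others */
  if (codec_id == MFX_CODEC_HEVC)
    return (mfxU16) (i_frames + 1);

  return (mfxU16) i_frames;
}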
The base encoder's common properties are not valid for the mjpeg
encoder, where there is no motion compensation or rate control.
Delay the property installation on the base gobject until the
subclass class_init gets invoked.
https://bugzilla.gnome.org/show_bug.cgi?id=791637
The gst-msdk decoders prefer packetized streams as input, and in
this case we can avoid an unnecessary copy of the input bitstream to
mfxBitstream. This works fine for codecs like h264 where we only
support byte-stream with au alignment. Other format conversions
should be done through parsers. But this won't work for codecs like
vc1 where we don't have an autoplugged parser. Even the parser is not
capable of doing the format conversions.
Packetizing through the base decoder's parse() routine would bring a
lot of unnecessary complexity and a codecparsers library dependency.
So we just use an internal GstAdapter to keep track of the bitstream
which is not consumed by msdk during asynchronous decoding.
This adapter will only get used if the subclass implementation sets
"is_packetized" to FALSE for the msdk base decoder.
https://bugzilla.gnome.org/show_bug.cgi?id=792589
Adding Simple and Main profile decode support.
Currently msdkvc1dec is not capable of handling codec_data; only
in-stream headers are supported. Also, the msdk vc1 decoder expects
the input stream to have a sequence header as per SMPTE 421M Annex L.
Most decodebin/playbin pipelines won't work with the above
constraints because vc1parse is still not an autoplugged element.
The only way to make msdkvc1dec work is to connect a vc1parse as an
upstream element.
https://bugzilla.gnome.org/show_bug.cgi?id=792589
Use a DRM render node as the first choice of device node file, and
fall back to the DRM primary node (/dev/dri/card[0-9]) if there is no
render node available.
The basic logic is inherited from gstreamer-vaapi, but uses the gudev
API rather than libudev directly. Added the gudev library as a
dependency for msdk.
https://bugzilla.gnome.org/show_bug.cgi?id=791599
A decoder decides to use video memory if:
1\ downstream's pool is an MSDK bufferpool,
2\ there's a shared GstMsdkContext in the pipeline.
This policy should be improved to handle more cases.
https://bugzilla.gnome.org/show_bug.cgi?id=790752
In case the pipeline is like ".. ! decoder ! encoder ! ..." and uses
video memory, the decoder needs to know the async depth of the
following msdk element so that it can allocate the correct number of
video memory buffers. Otherwise, the decoder's memory is exhausted
while processing.
https://bugzilla.gnome.org/show_bug.cgi?id=790752
How to share/create GstMsdkContext is the following:
- Search for a GstMsdkContext in the pipeline.
- If found, check whether it's a decoder, encoder or vpp by job type.
- If it's the same job type, create another instance of
GstMsdkContext with a joined session.
- Otherwise, just use the shared GstMsdkContext.
- If not found, just create a new instance of GstMsdkContext.
https://bugzilla.gnome.org/show_bug.cgi?id=790752
According to the driver's instructions, if there are two or more
encoders or decoders in a process, the sessions should be joined by
MFXJoinSession.
To achieve this successfully with GstContext, this patch adds a job
type specifying whether it's an encoder, decoder or vpp.
If an msdk element learns from the shared context that joining the
session is needed, it should create another instance of GstContext
with a joined session, which is not shared.
https://bugzilla.gnome.org/show_bug.cgi?id=790752
1\ In decide_allocation, it makes its own msdk bufferpool.
- If downstream supports video meta, it just replaces the downstream
pool with the msdk bufferpool.
- If not, it uses the msdk bufferpool as a side pool which will be
decoded into, and copies it to downstream's bufferpool.
2\ Decide whether to use video memory or system memory.
- This is not completed in this patch.
- It might be decided in update_src_caps.
- But it is tested for both the system memory and video memory cases.
https://bugzilla.gnome.org/show_bug.cgi?id=790752
1\ Proposes an msdk bufferpool to upstream.
- If upstream has accepted the proposed msdk bufferpool, the encoder
can get the msdk surface from the buffer directly.
- If not, the encoder gets an msdk surface from its own msdk
bufferpool and copies the upstream frame to the surface.
2\ Replaces the arrays of surfaces with the msdk bufferpool.
3\ In case of using VPP, there should be another msdk bufferpool with
NV12 info so that it can convert first and then encode.
Calls gst_msdk_set_frame_allocator and uses video memory only on
Linux; uses system memory on Windows until a d3d allocator is
implemented.
https://bugzilla.gnome.org/show_bug.cgi?id=790752
Implements 2 memory allocators:
1\ GstMsdkSystemAllocator: allocates system memory.
2\ GstMsdkVideoAllocator: allocates device memory depending on the
platform (e.g. VASurface).
Currently GstMsdkBufferPool uses the video allocator by default only
on Linux. On Windows, we should use system memory until a d3d
allocator is implemented.
https://bugzilla.gnome.org/show_bug.cgi?id=790752
Implements the msdk frame allocator which is required by the driver.
Also makes these functions global so that GstMsdkAllocator can use
the allocated video memory later and couple it with GstMsdkMemory.
GstMsdkContext keeps allocation information such as
mfxFrameAllocRequest and mfxFrameAllocResponse after allocation.
https://bugzilla.gnome.org/show_bug.cgi?id=790752