GstVideoDecoder::drain/flush can be called in the very initial state,
on a stream-start or flush-stop event respectively.
Draining with a NULL CUvideoparser is unsafe and eventually fails.
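A minimal sketch of the guard this implies (function and field names are assumptions, not the exact nvdec code):

static GstFlowReturn
gst_nvdec_drain (GstVideoDecoder * decoder)
{
  GstNvDec *nvdec = GST_NVDEC (decoder);

  /* drain/flush may arrive before any data was parsed, e.g. right after
   * stream-start or on flush-stop, in which case no parser exists yet */
  if (!nvdec->parser)
    return GST_FLOW_OK;

  /* ... otherwise feed an end-of-stream packet to the parser and finish
   * the pending frames ... */
  return GST_FLOW_OK;
}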
It is possible that the output region size (e.g. 192x144) is different
from the coded picture size (e.g. 192x256). We may adjust the alignment
parameters so that the padding is respected in GstVideoInfo, and use
GstVideoInfo to calculate the mfx frame width and height.
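Roughly, the idea is the following (illustrative sketch, not the exact msdkdec code; variable names are assumptions):

GstVideoAlignment align;

gst_video_alignment_reset (&align);
align.padding_right = coded_width - GST_VIDEO_INFO_WIDTH (&info);
align.padding_bottom = coded_height - GST_VIDEO_INFO_HEIGHT (&info);
gst_video_info_align (&info, &align);

/* the mfx frame size then covers the full padded picture, not just the
 * output region */
frame_info.Width = GST_ROUND_UP_16 (GST_VIDEO_INFO_WIDTH (&info) + align.padding_right);
frame_info.Height = GST_ROUND_UP_32 (GST_VIDEO_INFO_HEIGHT (&info) + align.padding_bottom);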
This fixes the error below when decoding a stream whose output region
size and coded picture size differ:
0:00:00.057726900 28634 0x55df6c3220a0 ERROR msdkdec
gstmsdkdec.c:1065:gst_msdkdec_handle_frame:<msdkh265dec0>
DecodeFrameAsync failed (failed to allocate memory)
Sample pipeline:
gst-launch-1.0 filesrc location=output.h265 ! h265parse ! msdkh265dec !
glimagesink
... and add our stub cuda header.
The newly introduced stub cuda.h file defines the minimal types needed to
build the nvcodec plugin without a system-installed CUDA toolkit
dependency. This makes cross-compiling possible.
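A rough idea of what such a stub looks like (sketch, not the exact contents of the new header):

/* stub cuda.h: declare only the handle types and enums the plugin
 * references at build time; the library itself is loaded at runtime */
typedef int CUdevice;
typedef void *CUcontext;
typedef void *CUstream;
typedef void *CUgraphicsResource;
typedef enum
{
  CUDA_SUCCESS = 0
} CUresult;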
* With this commit, if there is more than one device, an nvenc element
factory will be created per device, named like nvh264device{device-id}enc
and nvh265device{device-id}enc, in addition to nvh264enc and nvh265enc,
so that each element factory can expose the exact capabilities of its
device for the codec (see the sketch below).
* Each element factory has a fixed cuda-device-id, determined during
plugin initialization depending on the capability of the corresponding
device (e.g., when only the second of two GPUs can encode h265, then
nvh265enc will choose "1" (zero-based numbering) as its target
cuda-device-id). As there is now an element factory per GPU device, the
"cuda-device-id" property is changed to read-only.
* nvh265enc gains the ability to encode 4:4:4 8-bit and 4:2:0 10-bit
formats and up to 8K resolution, depending on device capability.
Additionally, I420 GLMemory input is supported by nvenc.
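As a sketch, the per-device registration and the property change amount to something like this (names, ranks and helper variables are illustrative):

gchar *name = g_strdup_printf ("nvh264device%denc", device_id);
gst_element_register (plugin, name, GST_RANK_PRIMARY, subclass_type);
g_free (name);

/* cuda-device-id is fixed per factory and exposed read-only */
g_object_class_install_property (gobject_class, PROP_CUDA_DEVICE_ID,
    g_param_spec_uint ("cuda-device-id", "CUDA device id",
        "Assigned CUDA device id", 0, G_MAXINT, device_id,
        G_PARAM_READABLE | G_PARAM_STATIC_STRINGS));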
Only the default device has been used by NVDEC so far. This commit makes
it possible to use the registered device id. To simplify device id
selection, the GstNvDecCudaContext usage is removed.
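In terms of the CUDA driver API, the device selection boils down to something like this (sketch; the plugin goes through its own loader, and the field name is an assumption):

CUdevice device;
CUcontext context;

/* use the device id registered for this element instead of always
 * using the default device */
cuDeviceGet (&device, nvdec->cuda_device_id);
cuCtxCreate (&context, 0, device);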
With this commit, each codec has its own element factory, so the generic
nvdec element factory is removed. Also, if there is more than one device,
an additional element factory will be created per device, named like
nvh264device{device-id}dec, so that each element factory can expose the
exact capabilities of its device for the codec.
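For example, a pipeline that previously used the generic nvdec element now picks a codec-specific factory instead (sample pipeline, file name arbitrary):

gst-launch-1.0 filesrc location=video.h264 ! h264parse ! nvh264dec !
glimagesink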
This patch just enforces bounds for the access to the standard_deviation
array (64 floats). Such a case can occur with a corrupted stream, where
there's no hope of obtaining a valid decoded frame anyway.
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/1002
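The added check is of this general shape (illustrative only; variable names are assumptions):

if (G_UNLIKELY (idx >= 64)) {
  /* corrupted stream: never read outside the 64-entry table */
  return FALSE;
}
value = standard_deviation[idx];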
The CUvideoparser callbacks are invoked on the streaming thread, so using
an async queue has no benefit. Make the control flow straightforward
instead of a long while/switch loop.
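Because the sequence/decode/display callbacks fire synchronously from inside the parse call, on the same streaming thread that submitted the data, the results can be handled in place (sketch; the plugin actually goes through its CUDA loader, and variable names are assumptions):

CUVIDSOURCEDATAPACKET packet = { 0, };

packet.payload = map_info.data;
packet.payload_size = map_info.size;
packet.flags = CUVID_PKT_TIMESTAMP;
packet.timestamp = frame->pts;

/* parser callbacks run here, on this thread */
cuvidParseVideoData (nvdec->parser, &packet);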
The D bit is meant to be set whenever there is a discontinuity
in transmission, and directly maps to the DISCONT flag.
The E bit is not meant to be set on every buffer preceding a
discontinuity, but only on the last buffer of a contiguous section
of recording. This has to be signaled through the unfortunately-named
"discont" field of the custom NtpOffset event.
There's no reason for it to inherit from GstObject apart from
locking, which is easily replaced, and inheriting from
GInitiallyUnowned made introspection awkward and needlessly
complicated.
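Replacing the locking amounts to a plain GMutex in the instance struct (sketch with a placeholder type name):

struct _GstFoo
{
  GObject parent;               /* no longer a GstObject */
  GMutex lock;                  /* replaces GST_OBJECT_LOCK () */
};

/* before: GST_OBJECT_LOCK (foo); ... GST_OBJECT_UNLOCK (foo); */
g_mutex_lock (&foo->lock);
/* ... access protected state ... */
g_mutex_unlock (&foo->lock);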
Previously we would've reported that there is a signal unless we knew for
sure that we didn't have a signal. For example, a signal would've been
reported before the device was even opened.
Now keep track of whether the signal state is unknown or not, and report
no signal if we don't know yet. As before, only send an INFO message about
signal recovery if we actually had a signal loss before.
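Conceptually the tracking becomes a tri-state (sketch; names do not match the actual decklink code):

typedef enum
{
  SIGNAL_STATE_UNKNOWN,
  SIGNAL_STATE_LOST,
  SIGNAL_STATE_AVAILABLE
} SignalState;

/* report "no signal" while UNKNOWN or LOST; only post the INFO message
 * about signal recovery on a LOST -> AVAILABLE transition */
gboolean have_signal = (self->signal_state == SIGNAL_STATE_AVAILABLE);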
We pass the video through as-is, only putting a GstMeta on it from the
caption sinkpad.
This fixes negotiation problems caused by not passing through caps
queries in both directions.
Also handle CAPS/ACCEPT_CAPS queries directly for the caption pad
instead of proxying.
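A sketch of the sink-pad query handling this describes (assumed names, not the exact cccombiner code):

static gboolean
gst_cc_combiner_sink_query (GstAggregator * agg, GstAggregatorPad * pad,
    GstQuery * query)
{
  GstCCCombiner *self = GST_CCCOMBINER (agg);

  switch (GST_QUERY_TYPE (query)) {
    case GST_QUERY_CAPS:
    case GST_QUERY_ACCEPT_CAPS:
      if (GST_PAD (pad) == self->caption_pad) {
        /* answer directly for the caption pad ... */
        return TRUE;
      }
      /* proxy video caps queries through to the source pad's peer
       * (the other direction is proxied likewise) */
      return gst_pad_peer_query (GST_AGGREGATOR_SRC_PAD (agg), query);
    default:
      return GST_AGGREGATOR_CLASS (parent_class)->sink_query (agg, pad, query);
  }
}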
Make the declaration and definition of the function consistent.
Note that GstBaseTransform::set_caps should return a gboolean.
Compiling C object subprojects/gst-plugins-bad/ext/vulkan/f3f9d6b@@gstvulkan@sha/vkviewconvert.c.obj.
../subprojects/gst-plugins-bad/ext/vulkan/vkviewconvert.c(644):
warning C4133: '=': incompatible types - from 'GstFlowReturn (__cdecl *)(GstBaseTransform *,GstCaps *,GstCaps *)'
to 'gboolean (__cdecl *)(GstBaseTransform *,GstCaps *,GstCaps *)'
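For reference, the vfunc is declared in GstBaseTransformClass as:

gboolean (*set_caps) (GstBaseTransform * trans, GstCaps * incaps,
    GstCaps * outcaps);

so the implementation in vkviewconvert.c has to return a gboolean rather than a GstFlowReturn to silence this warning.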
Based on a patch by
Georg Lippitsch <glippitsch@toolsonair.com>
Vivia Nikolaidou <vivia@toolsonair.com>
Using libltc from https://github.com/x42/libltc
We now have a single property to select the timecode source that should
be applied, and for each timecode source the timecode is updated at
every frame. Then, based on a set mode, the timecode is either added to
the frame only if none exists already, or all existing timecodes are
removed and the new timecode is added.
In addition, the real-time clock is now considered a proper timecode
source, instead of only allowing it to initialize the timecode once at
the beginning; and instead of just taking the current time, we now take
the current time at the clock time of the video frame.
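For example (property and value names here are assumptions based on the description above):

gst-launch-1.0 videotestsrc ! timecodestamper source=rtc set=always !
timeoverlay time-mode=time-code ! autovideosink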