Don't fixate the profile caps, which would simply pick the first profile
from the list. Instead, store all profiles allowed by the peer and try
them until x265 accepts one of them.
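
A minimal sketch of the resulting loop, assuming an illustrative
allowed_profiles list built from the peer caps (x265_param_apply_profile()
is the actual x265 API call):

    /* Try every profile the peer allows instead of fixating to the first
     * one; the list and encoder->x265param are illustrative names. */
    const gchar *allowed_profiles[] = { "main444-8", "main10", "main", NULL };
    const gchar *selected = NULL;
    gint i;

    for (i = 0; allowed_profiles[i] != NULL; i++) {
      /* x265_param_apply_profile() returns 0 when x265 accepts the profile */
      if (x265_param_apply_profile (encoder->x265param,
              allowed_profiles[i]) == 0) {
        selected = allowed_profiles[i];
        break;
      }
    }
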
The YV12 format is supported by Nvidia NVENC without manual conversion,
so nvenc exposes YV12 in its sinkpad template, but part of the path that
uploads such memory to the GPU is still missing.
Currently h264parser produces a field or a frame for
alignment=au with interlaced streams, but the
MFX_BITSTREAM_COMPLETE_FRAME flag requires a complete frame
or a complementary field pair of data; this results in
broken images being output.
Some patches have been sent out to fix h264parser,
but they are pending on some unfinished work. In
order to make gstreamer-msdk decoding work properly
for interlaced streams before h264parser is fixed,
this flag is removed temporarily and will be
added back once h264parser is fixed.
Related to:
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/merge_requests/399
https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/merge_requests/228
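
For reference, a rough sketch of where the flag sits on the Media SDK
side (mfxBitstream and MFX_BITSTREAM_COMPLETE_FRAME are the real Media
SDK names; the mapped buffer variables are illustrative):

    mfxBitstream bs;

    memset (&bs, 0, sizeof (bs));
    bs.Data = map.data;          /* illustrative: mapped GstBuffer data */
    bs.DataLength = map.size;
    bs.MaxLength = map.size;
    /* MFX_BITSTREAM_COMPLETE_FRAME asserts that Data holds a whole frame
     * or a complementary field pair; since h264parser currently hands us
     * single fields, leave the flag unset for now. */
    /* bs.DataFlag = MFX_BITSTREAM_COMPLETE_FRAME; */
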
Use the mode from the video frame itself instead of the information we
stored ourselves, which was also the wrong one: it was the mode from the
property, not the autodetected one.
This fixes VANC extraction with mode=auto.
Remove some custom and incomplete seek calculation
logic in favour of gst_segment_do_seek(), and
short-circuit any actual seeking or recalculation
if the position didn't change, just sending an updated
segment directly. This moves the element onto
standard core seek handling.
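
A minimal sketch of handing the calculation to core, with illustrative
surrounding variables (gst_segment_do_seek() and gst_event_parse_seek()
are the real core API):

    gdouble rate;
    GstFormat format;
    GstSeekFlags flags;
    GstSeekType start_type, stop_type;
    gint64 start, stop;
    gboolean update;

    gst_event_parse_seek (event, &rate, &format, &flags,
        &start_type, &start, &stop_type, &stop);

    /* Let core recalculate the segment instead of custom logic */
    if (!gst_segment_do_seek (&segment, rate, format, flags,
            start_type, start, stop_type, stop, &update))
      return FALSE;

    /* Position didn't change: skip the actual seek and just push the
     * updated segment downstream */
    if (!update)
      gst_pad_push_event (srcpad, gst_event_new_segment (&segment));
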
The gst_cuda_result macro is more helpful for debugging
than the previous cuda_OK because gst_cuda_result prints the function
name and line number. If the return of a CUDA API call was not
CUDA_SUCCESS, gst_cuda_result prints a WARNING level debug message
with the error name and error text strings.
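
Roughly the shape of such a macro (a simplified sketch, not the exact
implementation; cuGetErrorName()/cuGetErrorString() are the real CUDA
driver API calls):

    static inline gboolean
    _gst_cuda_debug (CUresult result, const gchar * file,
        const gchar * function, gint line)
    {
      if (result != CUDA_SUCCESS) {
        const gchar *name = NULL, *desc = NULL;

        cuGetErrorName (result, &name);
        cuGetErrorString (result, &desc);
        /* WARNING level message with error name and error text */
        GST_WARNING ("CUDA call failed in %s:%d %s, %s: %s",
            file, line, function, name, desc);
        return FALSE;
      }
      return TRUE;
    }

    #define gst_cuda_result(result) \
        _gst_cuda_debug (result, __FILE__, GST_FUNCTION, __LINE__)
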
... and drop CUvideoctxlock usage. CUvideoctxlock basically
plays the same role as CUDA context push/pop, but in an nvdec-specific
way. Since we can share the CUDA context among encoders and decoders,
use the CUDA context directly for accessing GPU APIs.
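
The replacement is the generic driver API pattern
(cuCtxPushCurrent()/cuCtxPopCurrent() are the real CUDA driver calls;
cuda_ctx is an illustrative shared context):

    CUcontext cuda_ctx;   /* shared context, created once per GPU */

    /* Instead of taking the nvdec-specific CUvideoctxlock: */
    cuCtxPushCurrent (cuda_ctx);
    /* ... CUDA / NVDEC / NVENC API calls ... */
    cuCtxPopCurrent (NULL);
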
... and add support for CUDA context sharing, similar to glcontext
sharing. Multiple CUDA contexts per GPU is not best practice. The context
sharing method is very similar to that of glcontext. The difference
is that there can be multiple context objects in a pipeline, since
a CUDA context is created per GPU id. For example, if a pipeline
has nvh264dec (which uses GPU #0) and nvh264device0dec (which uses GPU #1),
then two CUDA contexts will be propagated through the whole pipeline.
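
A minimal sketch of the GstContext query pattern this follows, assuming
a "gst.cuda.context" context type string and an illustrative cuda_context
object:

    static gboolean
    pad_query (GstPad * pad, GstObject * parent, GstQuery * query)
    {
      if (GST_QUERY_TYPE (query) == GST_QUERY_CONTEXT) {
        const gchar *context_type;

        gst_query_parse_context_type (query, &context_type);
        if (g_strcmp0 (context_type, "gst.cuda.context") == 0) {
          GstContext *context = gst_context_new ("gst.cuda.context", TRUE);

          /* cuda_context: illustrative per-GPU context object */
          gst_structure_set (gst_context_writable_structure (context),
              "context", GST_TYPE_OBJECT, cuda_context, NULL);
          gst_query_set_context (query, context);
          gst_context_unref (context);
          return TRUE;
        }
      }
      return gst_pad_query_default (pad, parent, query);
    }
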
The new object and helper functions remove duplicated code
from nvenc/nvdec. This is also prework for CUDA device context sharing
among nvdec(s)/nvenc(s).
The default behaviour of rtponviftimestamp is to drop buffers
outside the segment. This creates obvious problems for reverse
playback.
The ONVIF specification unfortunately doesn't describe how to handle
that specific use case, but we can expose a property to let the
user disable the dropping behaviour, and forward these buffers with
a G_MAXUINT64 ONVIF timestamp.
Also modify rtponvifparse to handle such timestamps appropriately.
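
Hypothetical usage, assuming the new property ends up being named
"drop-out-of-segment" (the exact property name here is illustrative):

    /* Keep out-of-segment buffers so reverse playback works; they are
     * forwarded with a G_MAXUINT64 ONVIF timestamp instead of dropped. */
    g_object_set (onviftimestamp, "drop-out-of-segment", FALSE, NULL);
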
We don't support negotiation with downstream but simply set caps based
on the buffers we receive. This prevents renegotiation to other formats,
and also prevents negotiating to NTSC at the beginning in mode=auto,
before the first buffer is received.
As a side effect of this, also remove various other caps handling code
that was working around the behaviour of the default
BaseSrc::negotiate().
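
A minimal sketch of the resulting behaviour, with illustrative names
(gst_base_src_set_caps() is the real base class API):

    /* Don't negotiate with downstream at all; negotiate() succeeding
     * without doing anything leaves caps unset until the first buffer. */
    static gboolean
    my_src_negotiate (GstBaseSrc * src)
    {
      return TRUE;
    }

    /* Later, once the first captured frame tells us the actual mode: */
    gst_base_src_set_caps (GST_BASE_SRC (self), caps_from_buffer);
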
We reject caps with other framerates as it's impossible to generate
timecodes unless we actually know a constant framerate. Reflect this
also in the pad template caps.
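
Illustratively, the template caps can exclude variable framerate (0/1)
by requiring a strictly positive fraction range:

    static GstStaticPadTemplate sink_template = GST_STATIC_PAD_TEMPLATE ("sink",
        GST_PAD_SINK, GST_PAD_ALWAYS,
        GST_STATIC_CAPS ("video/x-raw, "
            "framerate = (fraction) [ 1/2147483647, 2147483647/1 ]"));
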
During GstVideoInfo conversion from GstCaps, interlace-mode is
inferred to be progressive, so an unspecified interlace-mode should not
cause any negotiation issue. Simply set the GST_PAD_FLAG_ACCEPT_INTERSECT
flag on the sinkpad to fix the issue.
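
The fix amounts to one flag on the sink pad
(GST_PAD_SET_ACCEPT_INTERSECT() is the real core macro):

    /* Accept caps that merely intersect with the template instead of
     * requiring a subset, so an unspecified interlace-mode (inferred to
     * progressive) still passes the accept-caps check. */
    GST_PAD_SET_ACCEPT_INTERSECT (sinkpad);
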
The encoded bitstream might not have a valid framerate. If upstream
provided a non-variable framerate (i.e., fps_n > 0 and fps_d > 0),
use the upstream framerate instead of the parsed one.
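
A minimal sketch of the override, with illustrative variable names:

    /* Prefer upstream's fixed framerate over the one parsed from the
     * bitstream, which may be absent or bogus. */
    if (upstream_fps_n > 0 && upstream_fps_d > 0) {
      fps_n = upstream_fps_n;
      fps_d = upstream_fps_d;
    }
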