Make the declaration and definition of a function consistent.
Note that GstBaseTransform::set_caps should return gboolean
Compiling C object subprojects/gst-plugins-bad/ext/vulkan/f3f9d6b@@gstvulkan@sha/vkviewconvert.c.obj.
../subprojects/gst-plugins-bad/ext/vulkan/vkviewconvert.c(644):
warning C4133: '=': incompatible types - from 'GstFlowReturn (__cdecl *)(GstBaseTransform *,GstCaps *,GstCaps *)'
to 'gboolean (__cdecl *)(GstBaseTransform *,GstCaps *,GstCaps *)'
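For reference, the fix is to give the handler the gboolean return type
that the GstBaseTransform::set_caps vfunc expects; a minimal sketch
(the handler name here is illustrative, not the exact symbol from
vkviewconvert.c):

    #include <gst/base/gstbasetransform.h>

    /* GstBaseTransform::set_caps returns gboolean, not GstFlowReturn,
     * so the vfunc assignment no longer triggers warning C4133 */
    static gboolean
    gst_vulkan_view_convert_set_caps (GstBaseTransform * bt,
        GstCaps * in_caps, GstCaps * out_caps)
    {
      /* ... caps negotiation ... */
      return TRUE;
    }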
Based on a patch by
Georg Lippitsch <glippitsch@toolsonair.com>
Vivia Nikolaidou <vivia@toolsonair.com>
Using libltc from https://github.com/x42/libltc
We now have a single property to select the timecode source that should
be applied, and for each timecode source the timecode is updated at
every frame. Then, based on the configured mode, the timecode is either
added to the frame only if none exists already, or all existing
timecodes are removed and the new timecode is added.
In addition, the real-time clock is now considered a proper timecode
source instead of only being used to initialize the timecode once at the
beginning, and instead of just taking the current time we now take the
current time at the clock time of the video frame.
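As a rough usage sketch (property names and enum nicks are assumptions;
check gst-inspect-1.0 timecodestamper for the real ones):

    #include <gst/gst.h>

    /* Stamp every frame with an LTC-derived timecode and replace any
     * timecode metas that already exist on the frame. */
    static void
    configure_stamper (GstElement * stamper)
    {
      gst_util_set_object_arg (G_OBJECT (stamper), "source", "ltc");
      gst_util_set_object_arg (G_OBJECT (stamper), "set", "always");
    }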
... and put them into a new nvcodec plugin.
* nvcodec plugin
Each nvenc and nvdec element is now part of the nvcodec plugin
for better interoperability.
Additionally, the CUDA runtime API header dependencies
(i.e., cuda_runtime_api.h and cuda_gl_interop.h) are removed.
Note that CUDA runtime APIs have the prefix "cuda". Since the 1.16
release with Windows support, only symbols from "cuda.h" and "cudaGL.h"
have been used, except for some defined types. However, those types
could be replaced with equivalent types defined by "cuda.h".
* dynamic library loading
The CUDA library is now opened with g_module_open() instead of being
linked at build time. On Windows, nvcuda.dll is installed into the
system path by the CUDA Toolkit installer, and on *nix the user must
ensure that libcuda.so.1 is loadable (i.e., via LD_LIBRARY_PATH or the
default dlopen path). Therefore, the NVIDIA_VIDEO_CODEC_SDK_PATH
build-time dependency for Windows is removed.
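The loading scheme can be sketched as follows; a minimal example of
resolving one driver API symbol at runtime (the wrapper name and the
CUresult stand-in are illustrative):

    #include <gmodule.h>

    /* stand-in for the real typedef from cuda.h */
    typedef int CUresult;
    typedef CUresult (*CuInitFunc) (unsigned int flags);

    /* Open the CUDA driver library at runtime instead of linking to it
     * at build time. */
    static CuInitFunc
    load_cu_init (void)
    {
      const gchar *filename;
      GModule *module;
      gpointer sym = NULL;

    #ifdef G_OS_WIN32
      filename = "nvcuda.dll";   /* installed by the CUDA Toolkit installer */
    #else
      filename = "libcuda.so.1"; /* must be resolvable by dlopen */
    #endif

      module = g_module_open (filename, G_MODULE_BIND_LAZY);
      if (module == NULL)
        return NULL;

      if (!g_module_symbol (module, "cuInit", &sym))
        return NULL;

      return (CuInitFunc) sym;
    }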
Direct3D11 has shipped as part of Windows since Windows 7 and is
obviously the primary graphics API on Windows.
This plugin includes HDR10 rendering if the following requirements are
satisfied (a rough sketch follows the list):
* IDXGISwapChain4::SetHDRMetaData is available (declared in dxgi1_5.h)
* The display supports the DXGI_COLOR_SPACE_RGB_FULL_G2084_NONE_P2020 color space
* Upstream provides a 10-bit format with SMPTE ST 2084 static metadata
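How the metadata hand-off looks in rough C (values are placeholders; the
real ones come from the caps and upstream SEI):

    #define COBJMACROS
    #include <dxgi1_5.h>

    /* Pass SMPTE ST 2084 mastering metadata to the swapchain once the
     * PQ color space has been accepted. */
    static HRESULT
    set_hdr10_metadata (IDXGISwapChain4 * swapchain)
    {
      DXGI_HDR_METADATA_HDR10 hdr10 = { 0 };

      hdr10.MaxMasteringLuminance = 10000;
      hdr10.MinMasteringLuminance = 1;
      hdr10.MaxContentLightLevel = 1000;
      hdr10.MaxFrameAverageLightLevel = 400;
      /* ... RedPrimary/GreenPrimary/BluePrimary/WhitePoint from caps ... */

      return IDXGISwapChain4_SetHDRMetaData (swapchain,
          DXGI_HDR_METADATA_TYPE_HDR10, sizeof (hdr10), &hdr10);
    }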
The MFX_FOURCC_VP9_SEGMAP surface in MSDK is an internal surface;
however, MSDK still calls the external allocator for it, so this plugin
has to return UNSUPPORTED and force MSDK to allocate the surface using
the internal allocator.
See https://github.com/Intel-Media-SDK/MediaSDK/issues/762 for details
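In rough terms, the external allocator's Alloc callback now behaves like
this (the callback name is illustrative):

    #include <mfxvideo.h>

    /* Refuse the VP9 segmentation-map surface so MSDK falls back to its
     * internal allocator for it. */
    static mfxStatus
    gst_msdk_frame_alloc (mfxHDL pthis, mfxFrameAllocRequest * req,
        mfxFrameAllocResponse * resp)
    {
      if (req->Info.FourCC == MFX_FOURCC_VP9_SEGMAP)
        return MFX_ERR_UNSUPPORTED;

      /* ... allocate surfaces for every other request as before ... */
      return MFX_ERR_NONE;
    }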
A call to MFXVideoENCODE_EncodeFrameAsync may not generate output; in
that case the function returns MFX_ERR_MORE_DATA with a NULL sync point
and the input frame is cached. It is therefore possible that all
allocated frames end up in the surfaces_used list after calling
MFXVideoENCODE_EncodeFrameAsync a few times, and the encoder then fails
to get an available surface before the used frames are released.
This patch adds a new num_extra_frames field to GstMsdkEnc and allows an
encoder element to require extra frames; the default value is 0.
This patch is the preparation for msdkvp9enc element.
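The caching behavior in question, roughly (helper name illustrative):

    #include <mfxvideo.h>

    /* MFX_ERR_MORE_DATA with a NULL sync point means the input surface
     * was cached and no bitstream is produced yet, so the surface stays
     * "in flight" until the encoder releases it. */
    static mfxStatus
    encode_one_frame (mfxSession session, mfxFrameSurface1 * surface,
        mfxBitstream * bs)
    {
      mfxSyncPoint syncp = NULL;
      mfxStatus status;

      status =
          MFXVideoENCODE_EncodeFrameAsync (session, NULL, surface, bs, &syncp);
      if (status == MFX_ERR_MORE_DATA && syncp == NULL)
        return status;          /* input cached; the pool can run dry here */

      if (status == MFX_ERR_NONE && syncp != NULL)
        status = MFXVideoCORE_SyncOperation (session, syncp, MFX_INFINITE);

      return status;
    }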
The SPS parsing functions take a parse_vui_param flag
to skip VUI parsing, but there's no indication in the output
SPS struct that the VUI was skipped.
The only caller that ever passed FALSE seems to be the important
gst_h264_parser_parse_nal() function, so the cached SPS were always
silently invalid. That caller needs changing anyway, after which no one
ever passes FALSE.
I don't see any use for saving a few microseconds in
order to silently produce garbage, and since this is still
unstable API, let's remove the parse_vui_param.
The spec calls for the pic_timing SEI to be absent unless either
CpbDpbDelaysPresentFlag or pic_struct_present_flag is set in the SPS
VUI data. If both flags are missing, warn.
If parsing an SEI errors out, it might not consume
all bits, leaving extra unparsed data in the reader
that the outer loop then tries to parse as a new
appended SEI.
Skip any leftover bits to avoid 'finding' extra garbage SEIs while
parsing.
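The fix boils down to something like this (GstBitReader stands in for
the parser's internal NAL reader):

    #include <gst/base/gstbitreader.h>

    /* After an SEI payload fails to parse, drop whatever bits it left
     * behind so the outer loop does not misread the remainder as another
     * appended SEI. */
    static void
    skip_broken_sei_payload (GstBitReader * br, guint payload_end_pos)
    {
      guint pos = gst_bit_reader_get_pos (br);

      if (pos < payload_end_pos)
        gst_bit_reader_skip (br, payload_end_pos - pos);
    }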
When parsing SEI that require an SPS, return
GST_H264_PARSER_BROKEN_LINK instead of a generic
parsing error to let callers distinguish
bitstream errors from (expected) missing packets
when resuming decode.
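The check amounts to something like this (sketch; helper name
illustrative):

    #include <gst/codecparsers/gsth264parser.h>

    /* SEI messages such as pic_timing need an active SPS; if none has
     * been seen (e.g. we tuned in mid-stream), report a broken link so
     * callers can tell missing packets from real bitstream corruption. */
    static GstH264ParserResult
    check_sps_for_sei (GstH264NalParser * parser)
    {
      if (parser->last_sps == NULL || !parser->last_sps->valid)
        return GST_H264_PARSER_BROKEN_LINK;

      return GST_H264_PARSER_OK;
    }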
This patch adds the infrastructure to test AVTP plugin elements. It also
adds a test case to check the avtpaafpay element's basic functionality.
The test consists of setting the element's sink caps and properties, and
verifying that the output buffer is set as expected.
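The shape of such a test, roughly (caps string and property values are
illustrative):

    #include <gst/check/gstcheck.h>
    #include <gst/check/gstharness.h>

    GST_START_TEST (test_aaf_payload_basic)
    {
      GstHarness *h = gst_harness_new ("avtpaafpay");
      GstBuffer *in, *out;

      g_object_set (h->element, "streamid",
          G_GUINT64_CONSTANT (0xAABBCCDDEEFF0001), NULL);
      gst_harness_set_src_caps_str (h,
          "audio/x-raw,format=S16BE,rate=48000,channels=2");

      in = gst_harness_create_buffer (h, 4);
      fail_unless (gst_harness_push (h, in) == GST_FLOW_OK);

      out = gst_harness_pull (h);
      fail_unless (out != NULL);
      /* ... verify the AVTPDU header fields in `out` ... */

      gst_buffer_unref (out);
      gst_harness_teardown (h);
    }
    GST_END_TEST;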
This patch adds to the CVF depayloader the capability to regroup H.264
fragmented FU-A packets.
After all fragments are regrouped, the result is added to the "stash" of
H.264 NAL units, which will be sent as soon as an AVTP packet with the M
bit set is found (usually, the last fragment).
Unrecognized fragments (such as the first fragment seen having no Start
bit set) are discarded, and any NAL units in the "stash" are sent
downstream, as if a SEQNUM discontinuity had happened.
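The regrouping follows the standard FU-A layout (as in RFC 6184); a
sketch of the per-fragment handling:

    #include <glib.h>

    /* The first two payload bytes are the FU indicator and FU header;
     * the original NAL header is rebuilt from both on the Start bit. */
    static void
    handle_fu_a (const guint8 * payload, gsize len)
    {
      guint8 fu_indicator, fu_header;

      if (len < 2)
        return;                 /* not a valid fragment */

      fu_indicator = payload[0];
      fu_header = payload[1];

      if (fu_header & 0x80) {   /* Start bit */
        /* NAL header = FU indicator's F/NRI bits + FU header's type bits */
        guint8 nal_header = (fu_indicator & 0xe0) | (fu_header & 0x1f);
        /* ... start a new NAL unit with nal_header, append payload + 2 ... */
      } else {
        /* ... append payload + 2 to the NAL in progress, or discard and
         * flush the "stash" if no fragment sequence is in progress ... */
      }

      if (fu_header & 0x40) {   /* End bit */
        /* ... move the completed NAL unit into the "stash" ... */
      }
    }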
This patch introduces the AVTP Compressed Video Format (CVF) depayloader
specified in IEEE 1722-2016 section 8. Currently, this depayloader only
supports H.264 encapsulation described in section 8.5.
It is also worth noting that only single NAL units are handled:
aggregated and fragmented payloads are not.
As stated in the AVTP CVF payloader patch, the AVTP timestamp is used to
define the outgoing buffer DTS, while the H264_TIMESTAMP defines the
outgoing buffer PTS.
When an AVTP packet is received, the extracted H.264 NAL unit is added
to a "stash" (the out_buffer) of H.264 NAL units. This "stash" is pushed
downstream as a single buffer (with NAL units aggregated according to
the format used in GStreamer, based on ISO/IEC 14496-15) as soon as we
get the AVTP packet with the M bit set.
This patch groups NAL units using a fixed NAL size length, sent
downstream in the `codec_data` capability.
The "stash" of NAL units can be prematurely sent downstream if a
discontinuity (a missing SEQNUM) happens.
This patch reuses the infra provided by gstavtpbasedepayload.c.
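A sketch of how each NAL unit is appended to the "stash" (helper name is
illustrative; a 4-byte length field is assumed here):

    #include <gst/gst.h>

    /* Prefix the NAL unit with its size, ISO/IEC 14496-15 style, using
     * the fixed length advertised in codec_data. */
    static void
    stash_nal (GstBuffer ** out_buffer, GstBuffer * nal)
    {
      GstBuffer *prefix = gst_buffer_new_allocate (NULL, 4, NULL);
      GstMapInfo map;

      gst_buffer_map (prefix, &map, GST_MAP_WRITE);
      GST_WRITE_UINT32_BE (map.data, gst_buffer_get_size (nal));
      gst_buffer_unmap (prefix, &map);

      nal = gst_buffer_append (prefix, nal);
      *out_buffer = *out_buffer ? gst_buffer_append (*out_buffer, nal) : nal;
    }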
Based on the `mtu` property, the CVF payloader is now capable of
properly fragmenting H.264 NAL units that are bigger than the MTU into
several AVTP packets.
The AVTP spec defines two methods for fragmenting H.264 packets, but
this patch only generates non-interleaved FU-A fragments.
Usually, only the last NAL unit from a group of NAL units in a single
buffer will be big enough to be fragmented. Nevertheless, only the last
AVTP packet sent for a group of NAL units will have the M bit set (this
means that the AVTP packet for the last fragment will only have the M
bit set if there are no more NAL units in the group).
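The fragmentation loop, roughly (FU-A layout per RFC 6184; names are
illustrative):

    #include <glib.h>

    /* Split one NAL unit into MTU-sized FU-A fragments, each carrying an
     * FU indicator and FU header in place of the original NAL header. */
    static void
    fragment_nal (const guint8 * nal, gsize size, gsize max_payload)
    {
      guint8 nal_header = nal[0];
      guint8 fu_indicator = (nal_header & 0xe0) | 28;   /* FU-A NAL type */
      gsize offset = 1;         /* the NAL header itself is not sent as-is */

      while (offset < size) {
        gsize chunk = MIN (max_payload, size - offset);
        gboolean start = (offset == 1);
        gboolean end = (offset + chunk == size);
        guint8 fu_header = (start ? 0x80 : 0) | (end ? 0x40 : 0)
            | (nal_header & 0x1f);

        /* ... emit an AVTPDU with fu_indicator, fu_header and the chunk;
         * the M bit is only set on the last fragment of the last NAL ... */
        offset += chunk;
      }
    }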
This patch introduces the AVTP Compressed Video Format (CVF) payloader
specified in IEEE 1722-2016 section 8. Currently, this payloader only
supports the H.264 encapsulation described in section 8.5.
It is also worth noting that only single NAL units are encapsulated: no
aggregation or fragmentation is performed by the payloader.
An interesting characteristic of CVF H.264 spec is that it defines an
H264_TIMESTAMP, in addition to the AVTP timestamp. The latter is
translated to GST_BUFFER_DTS while the former is translated to
GST_BUFFER_PTS. From the AVTP CVF H.264 spec, it is clear that the AVTP
timestamp is related to the decoding order, while the H264_TIMESTAMP is
ancillary information for the H.264 decoder.
Upon receiving a buffer containing a group of NAL units, the avtpcvfpay
element will extract each NAL unit and payload it into an individual
AVTP packet. The last AVTP packet generated for a group of NAL units
will have the M bit set, so the depayloader is able to properly regroup
them.
The exact format of the buffer of NAL units is described in the
'codec_data' capability, which is parsed by avtpcvfpay in the same way
as done in rtph264pay.
This patch reuses the infra provided by gstavtpbasepayload.c.
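The codec_data parsing mirrors rtph264pay; roughly (avcC layout, helper
name illustrative):

    #include <gst/gst.h>

    /* The NAL length field size is the low two bits of the fifth byte
     * of the avcC-style codec_data, plus one. */
    static guint
    parse_nal_length_size (GstBuffer * codec_data)
    {
      GstMapInfo map;
      guint nal_length_size = 4;

      if (gst_buffer_map (codec_data, &map, GST_MAP_READ)) {
        if (map.size >= 5)
          nal_length_size = (map.data[4] & 0x03) + 1;
        gst_buffer_unmap (codec_data, &map);
      }

      return nal_length_size;
    }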
This patch introduces the avtpsrc element, which implements a typical
network source. The avtpsrc element receives AVTPDUs encapsulated in
Ethernet frames and pushes them downstream in the GStreamer pipeline.
The implementation is pretty straightforward since the heavy lifting is
done by the GstPushSrc class.
Like the avtpsink element, applications that utilize this element must
have the CAP_NET_RAW capability, since it is required by Linux to open
sockets in the AF_PACKET domain.
This patch introduces the avtpsink element, which implements a typical
network sink. The implementation is pretty straightforward since the
heavy lifting is done by the GstBaseSink class.
The avtpsink element defines three new properties: 1) the network
interface from which AVTPDUs should be transmitted, 2) the destination
MAC address (usually a multicast address), and 3) the socket priority
(SO_PRIORITY).
Socket setup and teardown are done in the start/stop virtual methods,
while AVTPDU transmission is carried out by render(). AVTPDUs are
encapsulated into Ethernet frames and transmitted to the network via the
AF_PACKET socket domain. Linux requires the CAP_NET_RAW capability in
order to open an AF_PACKET socket, so applications that utilize this
element must have it. For further info about the AF_PACKET socket
domain see packet(7).
Finally, AVTPDUs are expected to be transmitted at specific times -
according to the GstBuffer presentation timestamp - so the 'sync'
property from GstBaseSink is set to TRUE by default.
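The socket setup in start() looks roughly like this (sketch; error
handling trimmed, helper name illustrative; see packet(7)):

    #include <arpa/inet.h>
    #include <linux/if_ether.h>
    #include <linux/if_packet.h>
    #include <net/if.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Open an AF_PACKET socket bound to the configured interface and
     * apply the requested SO_PRIORITY. Requires CAP_NET_RAW.
     * ETH_P_TSN (0x22F0) is the AVTP ethertype. */
    static int
    open_avtp_socket (const char *ifname, int priority)
    {
      struct sockaddr_ll addr = { 0 };
      int fd;

      fd = socket (AF_PACKET, SOCK_DGRAM, htons (ETH_P_TSN));
      if (fd < 0)
        return -1;

      setsockopt (fd, SOL_SOCKET, SO_PRIORITY, &priority, sizeof (priority));

      addr.sll_family = AF_PACKET;
      addr.sll_protocol = htons (ETH_P_TSN);
      addr.sll_ifindex = if_nametoindex (ifname);
      if (bind (fd, (struct sockaddr *) &addr, sizeof (addr)) < 0) {
        close (fd);
        return -1;
      }

      return fd;
    }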