A call to MFXVideoENCODE_EncodeFrameAsync may not generate output; in
that case the function returns MFX_ERR_MORE_DATA with a NULL sync point
and the input frame is cached. It is therefore possible that all
allocated frames go into the surfaces_used list after calling
MFXVideoENCODE_EncodeFrameAsync a few times, and the encoder will then
fail to get an available surface before the used frames are released.
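For illustration, a minimal sketch of that behaviour (session, surface and
bitstream setup are assumed to happen elsewhere, as gstmsdkenc does):

    #include <mfxvideo.h>

    /* Sketch only: session, surface and bitstream are set up elsewhere. */
    static void
    feed_one_frame (mfxSession session, mfxFrameSurface1 * surface,
        mfxBitstream * bitstream)
    {
      mfxSyncPoint sync = NULL;
      mfxStatus status;

      status = MFXVideoENCODE_EncodeFrameAsync (session, NULL, surface,
          bitstream, &sync);

      if (status == MFX_ERR_MORE_DATA && sync == NULL) {
        /* No output yet: the input surface is cached inside the encoder and
         * stays in use, so it cannot go back to the free pool. After a few
         * such calls every allocated surface may sit in surfaces_used. */
      }
    }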
This patch adds a new num_extra_frames field to GstMsdkEnc and allows an
encode element to request extra frames; the default value is 0.
This patch is preparation for the msdkvp9enc element.
The SPS parsing functions take a parse_vui_param flag
to skip VUI parsing, but there's no indication in the output
SPS struct that the VUI was skipped.
The only caller that ever passed FALSE seems to be the
internal gst_h264_parser_parse_nal() function, which caches the
parsed SPS - so the cached SPS were always silently invalid.
That needs changing anyway, meaning noone ever passes FALSE.
I don't see any use for saving a few microseconds in
order to silently produce garbage, and since this is still
unstable API, let's remove the parse_vui_param.
The spec calls for pic_timing SEI to be absent unless
there's either a CpbDpbDelaysPresentFlag or
pic_struct_present_flag in the SPS VUI data. If
both those flags are missing, warn.
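A rough sketch of the check, using the GstH264SPS/GstH264VUIParams fields
from gsth264parser.h (CpbDpbDelaysPresentFlag is implied by either HRD
presence flag):

    #include <gst/gst.h>
    #include <gst/codecparsers/gsth264parser.h>

    /* Sketch: warn when a pic_timing SEI shows up although the SPS VUI
     * does not enable it. */
    static void
    check_pic_timing_allowed (const GstH264SPS * sps)
    {
      const GstH264VUIParams *vui = &sps->vui_parameters;
      gboolean cpb_dpb_delays_present =
          vui->nal_hrd_parameters_present_flag ||
          vui->vcl_hrd_parameters_present_flag;

      if (!sps->vui_parameters_present_flag ||
          (!cpb_dpb_delays_present && !vui->pic_struct_present_flag))
        GST_WARNING ("pic_timing SEI present, but SPS VUI does not allow it");
    }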
If parsing an SEI errors out, it might not consume
all bits, leaving extra unparsed data in the reader
that the outer loop then tries to parse as a new
appended SEI.
Skip all the bits if any are left over to avoid
'finding' extra garbage SEI in the parsing.
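Conceptually it boils down to the following (a sketch using the public
GstBitReader API rather than the parser's internal reader):

    #include <gst/base/gstbitreader.h>

    /* Sketch: after a failed SEI parse, drop whatever payload bits were not
     * consumed so the outer loop does not misread them as another SEI. */
    static void
    skip_leftover_sei_bits (GstBitReader * reader)
    {
      guint remaining = gst_bit_reader_get_remaining (reader);

      if (remaining > 0)
        gst_bit_reader_skip (reader, remaining);
    }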
When parsing SEI that require an SPS, return
GST_H264_PARSER_BROKEN_LINK instead of a generic
parsing error to let callers distinguish
bitstream errors from (expected) missing packets
when resuming decode.
This patch adds the infrastructure to test AVTP plugin elements. It also
adds a test case to check avtpaafpay element basic functionality. The
test consists of setting the element sink caps and properties, and
verifying that the output buffer is set as expected.
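A minimal sketch of such a test with GstHarness (the caps string and the
checks are illustrative, not the exact ones used by the real test):

    #include <gst/check/gstcheck.h>
    #include <gst/check/gstharness.h>

    GST_START_TEST (test_avtpaafpay_basic)
    {
      GstHarness *h = gst_harness_new ("avtpaafpay");
      GstBuffer *in, *out;

      gst_harness_set_src_caps_str (h,
          "audio/x-raw,format=S16BE,rate=48000,channels=2,layout=interleaved");

      in = gst_harness_create_buffer (h, 64);
      gst_harness_push (h, in);

      out = gst_harness_pull (h);
      fail_unless (out != NULL);
      /* ... compare the AVTPDU header and payload with the expectation ... */

      gst_buffer_unref (out);
      gst_harness_teardown (h);
    }
    GST_END_TEST;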
This patch adds to the CVF depayloader the capability to regroup H.264
fragmented FU-A packets.
After all packets are regrouped, they are added to the "stash" of H.264
NAL units that will be sent as soon as an AVTP packet with M bit set is
found (usually, the last fragment).
Unrecognized fragments (such as the first fragment seen having no Start
bit set) are discarded - and any NAL units in the "stash" are sent
downstream, as if a SEQNUM discontinuity had happened.
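For reference, a sketch of how the two-byte FU-A indicator/header at the
start of each fragment is interpreted (layout as in RFC 6184, which the
AVTP CVF H.264 fragmentation reuses; struct and function names are
illustrative):

    #include <glib.h>

    typedef struct
    {
      guint8 nal_type;          /* original NAL unit type */
      gboolean start;           /* first fragment of the NAL unit */
      gboolean end;             /* last fragment of the NAL unit */
    } FuAInfo;

    static gboolean
    parse_fu_a (const guint8 * payload, gsize size, FuAInfo * info)
    {
      if (size < 2 || (payload[0] & 0x1f) != 28)        /* 28 == FU-A */
        return FALSE;

      info->start = (payload[1] & 0x80) != 0;
      info->end = (payload[1] & 0x40) != 0;
      info->nal_type = payload[1] & 0x1f;
      return TRUE;
    }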
This patch introduces the AVTP Compressed Video Format (CVF) depayloader
specified in IEEE 1722-2016 section 8. Currently, this depayloader only
supports H.264 encapsulation described in section 8.5.
It is also worth noting that only single NAL units are handled: aggregated
and fragmented payloads are not.
As stated in the AVTP CVF payloader patch, the AVTP timestamp is used to
define the outgoing buffer DTS, while the H264_TIMESTAMP defines the
outgoing buffer PTS.
When an AVTP packet is received, the extracted H.264 NAL unit is added to
a "stash" (the out_buffer) of H.264 NAL units. This "stash" is pushed
downstream as a single buffer (with NAL units aggregated according to the
format used in GStreamer, based on ISO/IEC 14496-15) as soon as we get the AVTP
packet with M bit set.
This patch groups NAL units using a fixed NAL size length, which is sent
downstream in the `codec_data` capability.
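A sketch of how that length-prefixed aggregation can be done, assuming the
NAL size length has already been extracted from codec_data (helper name and
signature are hypothetical):

    #include <gst/gst.h>

    /* Sketch: prepend each NAL unit with a big-endian size field of
     * nal_length_size bytes (as in ISO/IEC 14496-15) and append it to the
     * stash buffer. */
    static GstBuffer *
    stash_nal_unit (GstBuffer * stash, GstBuffer * nal, guint nal_length_size)
    {
      gsize nal_size = gst_buffer_get_size (nal);
      GstBuffer *prefix = gst_buffer_new_allocate (NULL, nal_length_size, NULL);
      GstMapInfo map;
      guint i;

      gst_buffer_map (prefix, &map, GST_MAP_WRITE);
      for (i = 0; i < nal_length_size; i++)
        map.data[i] = (nal_size >> (8 * (nal_length_size - 1 - i))) & 0xff;
      gst_buffer_unmap (prefix, &map);

      /* gst_buffer_append takes ownership of both buffers. */
      stash = gst_buffer_append (stash, prefix);
      return gst_buffer_append (stash, nal);
    }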
The "stash" of NAL units can be prematurely sent downstream if a
discontinuity (a missing SEQNUM) happens.
This patch reuses the infra provided by gstavtpbasedepayload.c.
Based on the `mtu` property, the CVF payloader is now capable of properly
fragmenting H.264 NAL units that are bigger than the MTU into several AVTP
packets.
The AVTP spec defines two methods for fragmenting H.264 packets, but this
patch only generates non-interleaved FU-A fragments.
Usually, only the last NAL unit from a group of NAL units in a single
buffer will be big enough to be fragmented. Nevertheless, only the last
AVTP packet sent for a group of NAL units will have the M bit set (this
means that the AVTP packet for the last fragment will only have the M
bit set if there are no more NAL units in the group).
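A rough sketch of the fragmentation arithmetic, under the assumption that
the AVTP header size is known up front (names are hypothetical):

    #include <glib.h>

    /* Sketch: how many FU-A fragments does a NAL unit need for a given MTU?
     * Every packet carries the AVTP header plus the 2-byte FU-A
     * indicator/header in addition to the fragment payload. */
    static guint
    count_fragments (gsize nal_size, gsize mtu, gsize avtp_header_size)
    {
      gsize max_payload = mtu - avtp_header_size - 2;

      if (nal_size + avtp_header_size <= mtu)
        return 1;               /* fits in a single AVTP packet, no FU-A */

      /* The original NAL header byte is not repeated in the fragments, so
       * one byte less has to be split. */
      return ((nal_size - 1) + max_payload - 1) / max_payload;
    }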
This patch introduces the AVTP Compressed Video Format (CVF) payloader
specified in IEEE 1722-2016 section 8. Currently, this payloader only
supports H.264 encapsulation described in section 8.5.
It is also worth noting that only single NAL units are encapsulated: no
aggregation or fragmentation is performed by the payloader.
An interesting characteristic of CVF H.264 spec is that it defines an
H264_TIMESTAMP, in addition to the AVTP timestamp. The latter is
translated to GST_BUFFER_DTS while the former is translated to
GST_BUFFER_PTS. From the AVTP CVF H.264 spec, it is clear that the AVTP
timestamp is related to the decoding order, while the H264_TIMESTAMP is
ancillary information for the H.264 decoder.
Upon receiving a buffer containing a group of NAL units, the avtpcvfpay
element will extract each NAL unit and payload it into an individual AVTP
packet. The last AVTP packet generated for a group of NAL units will
have the M bit set, so the depayloader is able to properly regroup them.
The exact format of the buffer of NAL units is described in the
'codec_data' capability, which is parsed by avtpcvfpay in the same
way rtph264pay does.
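For illustration, a sketch of pulling the NAL length size out of the
codec_data (AVCDecoderConfigurationRecord), similar to what rtph264pay
does; the helper name is hypothetical:

    #include <gst/gst.h>

    /* Sketch: lengthSizeMinusOne lives in the low two bits of the fifth
     * byte of the AVCDecoderConfigurationRecord carried in codec_data. */
    static gboolean
    get_nal_length_size (GstBuffer * codec_data, guint * nal_length_size)
    {
      GstMapInfo map;
      gboolean ret = FALSE;

      if (!gst_buffer_map (codec_data, &map, GST_MAP_READ))
        return FALSE;

      if (map.size >= 5) {
        *nal_length_size = (map.data[4] & 0x03) + 1;
        ret = TRUE;
      }

      gst_buffer_unmap (codec_data, &map);
      return ret;
    }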
This patch reuses the infra provided by gstavtpbasepayload.c.
This patch introduces the avtpsrc element which implements a typical
network source. The avtpsrc element receives AVTPDUs encapsulated into
Ethernet frames and pushes them downstream in the GStreamer pipeline.
The implementation is pretty straightforward since most of the burden is
handled by the GstPushSrc class.
As with the avtpsink element, applications that utilize this element
must have the CAP_NET_RAW capability since Linux requires it to open
sockets from the AF_PACKET domain.
This patch introduces the avtpsink element, which implements a typical
network sink. The implementation is pretty straightforward since most of
the burden is handled by the GstBaseSink class.
The avtpsink element defines three new properties: 1) the network
interface from which AVTPDUs should be transmitted, 2) the destination
MAC address (usually a multicast address), and 3) the socket priority
(SO_PRIORITY).
Socket setup and teardown are done in start/stop virtual methods while
AVTPDU transmission is carried out by render(). AVTPDUs are encapsulated
into Ethernet frames and transmitted to the network via the AF_PACKET
socket domain. Linux requires the CAP_NET_RAW capability in order to open
an AF_PACKET socket, so applications that utilize this element must have
it. For further info about the AF_PACKET socket domain see packet(7).
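A minimal sketch of how such a socket could be opened and bound to the
configured interface (error handling trimmed, not the element's actual
code; ETH_P_TSN is the IEEE 1722 ethertype):

    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <linux/if_ether.h>
    #include <linux/if_packet.h>
    #include <net/if.h>

    /* Sketch: open an AF_PACKET socket for AVTPDUs; needs CAP_NET_RAW. */
    static int
    open_avtp_socket (const char *ifname, int priority)
    {
      int fd = socket (AF_PACKET, SOCK_DGRAM, htons (ETH_P_TSN));
      struct sockaddr_ll addr;

      if (fd < 0)
        return -1;

      setsockopt (fd, SOL_SOCKET, SO_PRIORITY, &priority, sizeof (priority));

      memset (&addr, 0, sizeof (addr));
      addr.sll_family = AF_PACKET;
      addr.sll_protocol = htons (ETH_P_TSN);
      addr.sll_ifindex = if_nametoindex (ifname);

      if (bind (fd, (struct sockaddr *) &addr, sizeof (addr)) < 0) {
        close (fd);
        return -1;
      }

      return fd;
    }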
Finally, AVTPDUs are expected to be transmitted at specific times -
according to the GstBuffer presentation timestamp - so the 'sync'
property from GstBaseSink is set to TRUE by default.
This patch introduces the AAF depayloader element, the counterpart of
the AAF payloader. As expected, this element inputs AVTPDUs and outputs
raw audio data, and it supports AAF PCM encapsulation only.
The AAF depayloader srcpad produces a fixed format, which is encoded
within the AVTPDU. Once the first AVTPDU is received by the element, the
audio features, e.g. sample format, rate and number of channels, are
decoded and the srcpad caps are set accordingly. Also, at this point, the
element pushes a SEGMENT event downstream defining the segment according
to the AVTP presentation time.
All AVTP depayloaders will share some common code. For that reason, this
patch introduces the GstAvtpBaseDepayload abstract class that implements
common depayloader functionalities. AAF-specific functionalities are
implemented in the derived class GstAvtpAafDepay.
This patch introduces the AVTP Audio Format (AAF) payloader element from
the AVTP plugin. The element inputs audio raw data and outputs AVTP
packets (aka AVTPDUs), implementing a typical protocol payloader element
from GStreamer.
AAF is one of the available formats to transport audio data in an AVTP
system. AAF is specified in IEEE 1722-2016 section 7 and provides two
encapsulation modes: PCM and AES3. This patch implements the PCM
encapsulation mode only.
The AAF payloader working mechanism consists of building the AAF header,
prepending it to the GstBuffer received on the sink pad, and pushing the
buffer downstream. Payloader parameters such as stream ID, maximum
transit time, time uncertainty, and timestamping mode are passed via
element properties. AAF doesn't support all possible sample format and
sampling rate values, so the sink pad caps template of the payloader is
a subset of audio/x-raw. Additionally, this patch implements only the
"normal" timestamping mode from AAF. "Sparse" mode should be implemented
in the future.
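A rough sketch of the prepend-and-push idea, assuming libavtp's AAF PDU
helpers; the field values shown are examples for S16BE/48kHz/2ch, while the
real element derives them from the negotiated caps and element properties:

    #include <avtp.h>
    #include <avtp_aaf.h>
    #include <gst/gst.h>

    /* Sketch: build an AAF PCM header and prepend it to the audio buffer
     * received on the sink pad. */
    static GstBuffer *
    prepend_aaf_header (GstBuffer * audio, guint64 stream_id, guint8 seq_num,
        guint32 avtp_time)
    {
      gsize data_len = gst_buffer_get_size (audio);
      GstBuffer *header = gst_buffer_new_allocate (NULL,
          sizeof (struct avtp_stream_pdu), NULL);
      GstMapInfo map;
      struct avtp_stream_pdu *pdu;

      gst_buffer_map (header, &map, GST_MAP_WRITE);
      pdu = (struct avtp_stream_pdu *) map.data;

      avtp_aaf_pdu_init (pdu);
      avtp_aaf_pdu_set (pdu, AVTP_AAF_FIELD_STREAM_ID, stream_id);
      avtp_aaf_pdu_set (pdu, AVTP_AAF_FIELD_SEQ_NUM, seq_num);
      avtp_aaf_pdu_set (pdu, AVTP_AAF_FIELD_FORMAT, AVTP_AAF_FORMAT_INT_16BIT);
      avtp_aaf_pdu_set (pdu, AVTP_AAF_FIELD_NSR, AVTP_AAF_PCM_NSR_48KHZ);
      avtp_aaf_pdu_set (pdu, AVTP_AAF_FIELD_CHAN_PER_FRAME, 2);
      avtp_aaf_pdu_set (pdu, AVTP_AAF_FIELD_BIT_DEPTH, 16);
      avtp_aaf_pdu_set (pdu, AVTP_AAF_FIELD_STREAM_DATA_LEN, data_len);
      avtp_aaf_pdu_set (pdu, AVTP_AAF_FIELD_TIMESTAMP, avtp_time);
      avtp_aaf_pdu_set (pdu, AVTP_AAF_FIELD_TV, 1);

      gst_buffer_unmap (header, &map);

      /* gst_buffer_append takes ownership of both buffers. */
      return gst_buffer_append (header, audio);
    }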
Upcoming patches will introduce other AVTP payloader elements that will
have some common code. For that reason, this patch introduces the
GstAvtpBasePayload abstract class that implements common payloader
functionalities, and the GstAvtpAafPay class that extends the
GstAvtpBasePayload class, implementing AAF-specific functionalities.
The AAF payloader element is most likely to be used with the AVTP sink
element (to be introduced by a later patch) but it could also be used
with UDP sink element to implement AVTP over UDP as described in IEEE
1722-2016 Annex J.
This element was inspired by RTP payloader elements.
This patch introduces the bootstrap code from the AVTP plugin (plugin
definition and init) as well as the build system files. Upcoming patches
will introduce payloaders, source and sink elements provided by the AVTP
plugin. These elements can be utilized by a GStreamer pipeline to
implement TSN audio/video applications.
Regarding the plugin build system files, both autotools and meson files
are introduced. The AVTP plugin lands in ext/ since it has an
external dependency on libavtp, an open source AVTP packetization
library. For further information about libavtp check [1].
[1] https://github.com/AVnu/libavtp
Fix a recently introduced segfault. Don't de-reference a NULL
SPS pointer when attempting to update source caps before SPS
has been seen in the stream.
Interleaved frames can be fragmented across
incoming frames. Thus, we can have multiple
frames within a single input frame, as well as
an incomplete frame. The parser now preserves its
state and handles both situations.
Fixes #991
msdkenc supports CSC implicitly, so it is possible that two VPP
processes are required when a pipeline contains msdkvpp and msdkenc.
Before this fix, msdkvpp and msdkenc may share the same context, hence
the same mfx session, which results in MFX_ERR_UNDEFINED_BEHAVIOR
in MSDK because an mfx session has at most one VPP process.
This fixes the broken pipelines below:
gst-launch-1.0 videotestsrc ! video/x-raw,format=I420 ! msdkh264enc ! \
msdkh264dec ! msdkvpp ! video/x-raw,format=YUY2 ! fakesink
gst-launch-1.0 videotestsrc ! msdkvpp ! video/x-raw,format=YUY2 ! \
msdkh264enc ! fakesink
MSDK supports JPEG YUY2 (422 chroma) output color
format. The color format of the input bitstream is
described by JPEGChromaFormat and JPEGColorFormat
fields in the mfxInfoMFX structure which is filled
in by the MFXVideoDECODE_DecodeHeader function.
To obtain lossless decoded output from 422 encoded
JPEGs, we must set the output color format in the
FourCC and ChromaFormat fields in the mfxFrameInfo
structure to the appropriate values at post_configure
so that they are propagated through to the srcpad
caps accordingly.
A post_configure virtual method is added to allow
codec subclasses to adjust the initialized parameters
after MFXVideoDECODE_DecodeHeader is called from the
gstmsdkdec::gst_msdkdec_handle_frame function.
This is useful if codecs want to adjust the output
parameters based on the codec-specific decoding
options that are present in the mfxInfoMFX structure
after MFXVideoDECODE_DecodeHeader initializes them.
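A hypothetical sketch of what a subclass override could look like; the
GstMsdkDec field names are assumptions for illustration only:

    #include "gstmsdkdec.h"

    /* Sketch: adjust the output FourCC/ChromaFormat for 422 JPEG after
     * MFXVideoDECODE_DecodeHeader has filled in mfxInfoMFX. */
    static gboolean
    gst_msdkmjpegdec_post_configure (GstMsdkDec * decoder)
    {
      mfxInfoMFX *mfx = &decoder->param.mfx;    /* assumed field name */

      if (mfx->JPEGChromaFormat == MFX_CHROMAFORMAT_YUV422) {
        mfx->FrameInfo.FourCC = MFX_FOURCC_YUY2;
        mfx->FrameInfo.ChromaFormat = MFX_CHROMAFORMAT_YUV422;
      }

      return TRUE;
    }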