Optimize the process of detecting new pictures or the start of new access
units by checking whether the previous NAL unit was the end of a picture
or the end of the previous access unit.
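A minimal sketch of that fast path (the state fields are hypothetical):

    #include <glib.h>

    /* Hypothetical parser state: flags recording how the previous NAL ended. */
    typedef struct {
      gboolean prev_nal_ends_picture;      /* previous NAL closed a picture */
      gboolean prev_nal_ends_access_unit;  /* previous NAL closed the access unit */
    } ParserState;

    static gboolean
    starts_new_picture_or_au (const ParserState *ps)
    {
      /* Fast path: if the previous NAL already terminated a picture or the
       * previous access unit, the next NAL necessarily starts a new one and
       * the slice-header field comparisons can be skipped. */
      return ps->prev_nal_ends_picture || ps->prev_nal_ends_access_unit;
    }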
Add support for MVC streams with multiple SPS and subset SPS headers
emitted regularly, e.g. around every I-frame. Track the maximum number
of views in ensure_context() and always reset the DPB size to the
expected value, even if it decreased. dpb_reset() only takes care of
ensuring the DPB allocation.
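A rough sketch of the intent, with hypothetical decoder fields:

    #include <glib.h>

    /* Hypothetical private decoder data. */
    typedef struct {
      guint max_views;   /* maximum number of views seen so far */
      guint dpb_size;    /* current DPB size, in frames */
    } DecoderPrivate;

    static void
    ensure_context (DecoderPrivate *priv, guint num_views, guint new_dpb_size)
    {
      if (num_views > priv->max_views)
        priv->max_views = num_views;

      /* Always apply the expected DPB size, even when it decreased;
       * dpb_reset() only takes care of the actual DPB allocation. */
      priv->dpb_size = new_dpb_size;
    }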
Fix the compaction process when the DPB is cleared for a specific
view, i.e. fix the process of filling in the holes resulting from
removing frame buffers matching the current picture.
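The hole-filling step can be pictured with a hypothetical compaction helper:

    #include <glib.h>

    /* Remove all entries matching `victim' and shift the remaining ones down
     * so the array stays contiguous; return the new number of valid entries. */
    static guint
    dpb_compact (gpointer *entries, guint num_entries, gconstpointer victim)
    {
      guint i, n = 0;

      for (i = 0; i < num_entries; i++) {
        if (entries[i] == victim)
          continue;                 /* drop frame buffers for the current view */
        entries[n++] = entries[i];  /* fill the hole left by removed entries */
      }
      for (i = n; i < num_entries; i++)
        entries[i] = NULL;          /* clear the now-unused tail */
      return n;
    }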
It is not necessary to periodically send SPS or subset SPS headers. It
is up to the upper layer (e.g. the transport layer) to decide if and how
to periodically submit those. For now, only generate new SPS or subset
SPS headers when the codec config changed.
Note: the upper layer could readily determine the config headers
(SPS/PPS) through the gst_vaapi_encoder_h264_get_codec_data() function.
https://bugzilla.gnome.org/show_bug.cgi?id=732083
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Report the sample aspect ratio (SAR) as present, and make it match what
we have obtained from the user as the pixel-aspect-ratio (PAR), i.e. the
VUI parameter aspect_ratio_info_present_flag now defaults to TRUE.
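A minimal sketch, assuming hypothetical VUI fields that mirror the bitstream
syntax elements (aspect_ratio_idc 255 is Extended_SAR per the spec):

    #include <glib.h>

    /* Hypothetical VUI representation. */
    typedef struct {
      gboolean aspect_ratio_info_present_flag;
      guint8  aspect_ratio_idc;
      guint16 sar_width, sar_height;
    } VuiParams;

    static void
    fill_vui_aspect_ratio (VuiParams *vui, guint par_n, guint par_d)
    {
      vui->aspect_ratio_info_present_flag = TRUE;  /* now defaults to TRUE */
      vui->aspect_ratio_idc = 255;                 /* Extended_SAR */
      vui->sar_width  = par_n;                     /* SAR matches the caps PAR */
      vui->sar_height = par_d;
    }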
Set the value of num_anchor_refs_l0, num_anchor_refs_l1, num_non_anchor_refs_l0,
and num_non_anchor_refs_l1 to zero, since inter-view prediction is not
yet supported.
When the seq_parameter_set_data() syntax structure is present in a subset
sequence parameter set and vui_parameters_present_flag is equal to 1, then
timing_info_present_flag shall be equal to 0 (H.7.4.2.1.1).
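A combined sketch of the two constraints above, with hypothetical field names
mirroring the syntax elements:

    #include <glib.h>

    /* Hypothetical subset SPS representation. */
    typedef struct {
      guint num_anchor_refs_l0, num_anchor_refs_l1;
      guint num_non_anchor_refs_l0, num_non_anchor_refs_l1;
      gboolean vui_parameters_present_flag;
      gboolean timing_info_present_flag;
    } SubsetSps;

    static void
    fill_subset_sps (SubsetSps *sps)
    {
      /* Inter-view prediction is not supported yet. */
      sps->num_anchor_refs_l0 = 0;
      sps->num_anchor_refs_l1 = 0;
      sps->num_non_anchor_refs_l0 = 0;
      sps->num_non_anchor_refs_l1 = 0;

      /* H.7.4.2.1.1: with vui_parameters_present_flag == 1 in a subset SPS,
       * timing_info_present_flag shall be 0. */
      if (sps->vui_parameters_present_flag)
        sps->timing_info_present_flag = FALSE;
    }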
The gst_h264_parse_collect_nal() function is a misnomer. In reality, it
is used to determine access unit boundaries, i.e. it is the key function
for generating output in alignment=au format.
Always use a GstAdapter when collecting access units (alignment="au")
in either byte-stream or avcC format. This is required to properly
preserve config headers like SPS and PPS when invalid or broken NAL
units are subsequently parsed.
More precisely, this fixes scenarios like:
<SPS> <PPS> <invalid-NAL> <slice>
where we used to reset the output frame buffer whenever an invalid or
broken NAL was parsed, i.e. the SPS and PPS NAL units were lost, thus
preventing the next slice unit from being decoded even when it held
valid data.
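A minimal sketch of the collection scheme, using the public GstAdapter API
(function names are illustrative):

    #include <gst/base/gstadapter.h>

    /* Every parsed NAL unit, including SPS/PPS, is pushed into a GstAdapter,
     * and the access unit is only taken out as one buffer when its boundary
     * is reached. A broken NAL in the middle no longer wipes the config
     * headers that were already collected. */
    static void
    collect_nal (GstAdapter *adapter, GstBuffer *nal_buf)
    {
      gst_adapter_push (adapter, nal_buf);  /* keep SPS/PPS/slice data queued */
    }

    static GstBuffer *
    finish_access_unit (GstAdapter *adapter)
    {
      gsize size = gst_adapter_available (adapter);

      if (size == 0)
        return NULL;
      /* Hand out the whole access unit (alignment="au") in one buffer. */
      return gst_adapter_take_buffer (adapter, size);
    }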
https://bugzilla.gnome.org/show_bug.cgi?id=732203
Carefully track the cases where skipping broken or invalid NAL units is
necessary. In particular, always allow NAL units to be processed and let
the gst_h264_parse_process_nal() function decide whether the current NAL
needs to be dropped or not.
This fixes parsing of streams with an SEI NAL buffering_period() message
inserted between SPS and PPS, or an SPS-Ext NAL following a traditional
SPS NAL unit, among other cases.
Practical examples from the H.264 AVC conformance suite include
alphaconformanceG, CVSE2_Sony_B, CVSE3_Sony_H, CVSEFDFT3_Sony_E
when parsing in stream-format=byte-stream,alignment=au mode.
https://bugzilla.gnome.org/show_bug.cgi?id=732203
Improve parser state tracking by introducing new flags reflecting
it: "got-sps", "got-pps" and "got-slice". This is an addition for
robustness purposes.
The older have_sps and have_pps variables are kept because they have a
different meaning: they are used to decide whether updated caps should
be submitted, and rather mean "are there new SPS/PPS to be submitted?"
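An illustrative sketch of the new state tracking (flag values and struct
layout are hypothetical):

    #include <glib.h>

    /* "got-*" reflects what has been seen in the stream, for robustness
     * checks; have_sps/have_pps keep their old meaning of "new SPS/PPS
     * pending to be submitted in caps". */
    enum {
      GOT_SPS   = 1 << 0,
      GOT_PPS   = 1 << 1,
      GOT_SLICE = 1 << 2
    };

    typedef struct {
      guint state;                     /* GOT_* flags */
      gboolean have_sps, have_pps;     /* new headers to submit in caps? */
    } ParseState;

    static gboolean
    can_parse_slice (const ParseState *ps)
    {
      /* A slice only makes sense once both an SPS and a PPS were seen. */
      return (ps->state & GOT_SPS) && (ps->state & GOT_PPS);
    }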
Always default to stream-format=byte-stream,alignment=nalu if avcC
format was not detected. This is the natural stream format specified
in the standard (Annex B): a series of NAL units prefixed with the
usual start code.
https://bugzilla.gnome.org/show_bug.cgi?id=732167
Use gst_h264_parser_identify_nalu_unchecked() to identify the next
NAL unit. We don't want to parse the full NAL unit, but only the
header bytes and possibly the first RBSP byte for identifying the
first_mb_in_slice syntax element.
Also fix the check for failure when returning from that function. The
only success condition is GST_H264_PARSER_OK, so check for it explicitly.
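A sketch of the identification step, using the public codecparsers API:

    #include <gst/codecparsers/gsth264parser.h>

    /* Identify the next NAL without parsing its full payload; only the
     * header bytes (and the first RBSP byte, for first_mb_in_slice) are
     * needed. GST_H264_PARSER_OK is the only success condition. */
    static gboolean
    identify_next_nalu (GstH264NalParser *parser, const guint8 *data,
        gsize size, GstH264NalUnit *nalu)
    {
      GstH264ParserResult res;

      res = gst_h264_parser_identify_nalu_unchecked (parser, data, 0, size, nalu);
      if (res != GST_H264_PARSER_OK)
        return FALSE;
      return TRUE;
    }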
https://bugzilla.gnome.org/show_bug.cgi?id=732154
Submit Prefix NAL headers (nal_unit_type = 14) before every packed
slice header (nal_unit_type = 1 or 5) only for the base view. In
non-base views, a Coded Slice Extension NAL header (nal_unit_type = 20)
is required, with an appropriate nal_unit_header_mvc_extension() in
the NAL header bytes.
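An illustrative sketch of the per-view decision; the submit_*() helpers are
hypothetical stand-ins for the real packed-header submission:

    #include <glib.h>

    static void submit_prefix_nal (void)          { /* nal_unit_type = 14 + MVC ext */ }
    static void submit_slice_nal (guint nal_type) { (void) nal_type; /* 1 or 5 */ }
    static void submit_slice_ext_nal (void)       { /* nal_unit_type = 20 + MVC ext */ }

    static void
    submit_packed_slice (guint view_order_index, gboolean is_idr)
    {
      if (view_order_index == 0) {
        submit_prefix_nal ();                 /* base view: Prefix NAL first */
        submit_slice_nal (is_idr ? 5 : 1);    /* then the regular slice NAL */
      } else {
        submit_slice_ext_nal ();              /* non-base view: slice extension */
      }
    }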
https://bugzilla.gnome.org/show_bug.cgi?id=732083
Fix search for a picture in the DPB that has a lower POC value than
the current picture. The dpb_find_lowest_poc() function returns the
picture in the DPB with the lowest POC that is marked as "needed for
output", but an additional check against the actual POC value of the
current picture is needed.
This is a regression from 1c46990.
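A sketch of the missing check, with a hypothetical picture type:

    #include <glib.h>

    typedef struct {
      gint poc;
      gboolean output_needed;
    } Picture;

    static gboolean
    dpb_bump_needed (const Picture *lowest /* from dpb_find_lowest_poc() */,
        const Picture *current)
    {
      if (!lowest || !lowest->output_needed)
        return FALSE;
      /* The extra check that was missing: compare against the current POC. */
      return lowest->poc < current->poc;
    }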
https://bugzilla.gnome.org/show_bug.cgi?id=732130
Fix dpb_clear() to clear previous frame buffers only if they actually
exist to begin with. If the decoder bailed out early, e.g. when it
does not support a specific profile, that array of previous frames
might not be allocated beforehand.
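A minimal sketch of the guard, with hypothetical DPB fields:

    #include <glib.h>

    typedef struct {
      gpointer *prev_frames;
      guint num_prev_frames;
    } Dpb;

    static void
    dpb_clear_prev_frames (Dpb *dpb)
    {
      guint i;

      if (!dpb->prev_frames)     /* never allocated: nothing to clear */
        return;
      for (i = 0; i < dpb->num_prev_frames; i++)
        dpb->prev_frames[i] = NULL;
    }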
We can avoid scanning for start codes again if the bitstream is fed
in NALU chunks. Currently, we always scan for start codes, and keep
track of remaining bits in a GstAdapter, even if, in practice, we are
likely receiving one GstBuffer per NAL unit, e.g. from h264parse with
"nal" alignment.
https://bugzilla.gnome.org/show_bug.cgi?id=723284
[use gst_adapter_available_fast() to determine the top buffer size]
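A sketch of that fast path, based on the public GstAdapter API:

    #include <gst/base/gstadapter.h>

    /* If the upstream element (e.g. h264parse with alignment=nal) delivers
     * exactly one NAL unit per buffer, the adapter holds a single contiguous
     * chunk and no start-code scan is needed. */
    static gboolean
    have_whole_nalu_upfront (GstAdapter *adapter)
    {
      /* Size of the first (top) buffer that can be mapped without copying. */
      gsize top_size = gst_adapter_available_fast (adapter);

      /* If that is all the data we have, treat it as one complete NAL unit
       * and skip the start-code scanning path. */
      return top_size > 0 && top_size == gst_adapter_available (adapter);
    }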
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
The `vaapipostproc' element could never determine if the H.264 stream
was interlaced, and thus always assumed it to be progressive. Fix the
H.264 decoder to report interlace-mode accordingly, thus allowing the
vaapipostproc element to automatically enable deinterlacing.
Avoid reaching an assert if dynamic framerates (0/1) are used. One way
to solve this problem is to simply set field_duration to zero.
However, this means that, in presence of interlaced streams, the
very first field will never be displayed if precise presentation
timestamps are honoured.
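One possible way to avoid the assertion, sketched here under the assumption
that an unknown field duration (GST_CLOCK_TIME_NONE) is acceptable
downstream, rather than a zero duration:

    #include <gst/gst.h>

    static GstClockTime
    compute_field_duration (gint fps_n, gint fps_d)
    {
      if (fps_n <= 0 || fps_d <= 0)
        return GST_CLOCK_TIME_NONE;   /* dynamic/unknown framerate */

      /* Two fields per frame: field duration is half the frame duration. */
      return gst_util_uint64_scale (GST_SECOND, fps_d, fps_n * 2);
    }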
https://bugzilla.gnome.org/show_bug.cgi?id=729604
ensure_srcpad_buffer_pool() tries to avoid unnecessarily deleting and
recreating filter_pool. Unfortunately, this also meant it didn't create
it if it did not exist.
Fix it to always create the buffer pool if it does not exist.
https://bugzilla.gnome.org/show_bug.cgi?id=723834
Signed-off-by: Simon Farnsworth <simon.farnsworth@onelan.co.uk>
Reset deinterlacer state, i.e. past reference frames used for advanced
deinterlacing, when there is some discontinuity detected in the course
of processing source buffers.
This fixes support for advanced deinterlacing when a seek occurred.
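A sketch of the discontinuity heuristic (the threshold handling is
illustrative, not the exact heuristic used):

    #include <gst/gst.h>

    /* If the new buffer's PTS is too far from the previous one kept in the
     * history, the past reference frames used for advanced deinterlacing
     * are dropped. */
    static gboolean
    is_discontinuity (GstClockTime prev_pts, GstClockTime pts,
        GstClockTime max_gap)
    {
      GstClockTimeDiff pts_diff;

      if (!GST_CLOCK_TIME_IS_VALID (prev_pts) || !GST_CLOCK_TIME_IS_VALID (pts))
        return FALSE;

      pts_diff = GST_CLOCK_DIFF (prev_pts, pts);
      return pts_diff < 0 || pts_diff > (GstClockTimeDiff) max_gap;
    }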
https://bugzilla.gnome.org/show_bug.cgi?id=720375
[fixed type of pts_diff variable, fetch previous buffer PTS from the
history buffer, reduce heuristic for detecting discontinuity]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Apply video cropping regions stored in GstVideoCropMeta, or in the older
GstVaapiSurfaceProxy representation, to VPP pipelines. In non-VPP modes,
the crop meta is already propagated to the output buffers.
https://bugzilla.gnome.org/show_bug.cgi?id=720730
deinterlace-mode didn't behave in the way you'd expect if you had past
experience with the deinterlace element. There were two bugs:
1. "auto" mode wouldn't deinterlace "interleaved" buffers, only "mixed".
2. "force" mode wouldn't deinterlace "mixed" buffers flagged as progressive.
Fix these up, and add assertions and error messages to detect cases that
aren't handled.
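An illustrative sketch of the intended decision; the mode enum is a
hypothetical stand-in for the element's deinterlace-mode property:

    #include <gst/video/video.h>

    typedef enum {
      DEINTERLACE_MODE_AUTO,
      DEINTERLACE_MODE_FORCE,
      DEINTERLACE_MODE_DISABLED
    } DeinterlaceMode;

    static gboolean
    should_deinterlace (DeinterlaceMode mode, GstVideoInterlaceMode stream_mode,
        GstBuffer *buf)
    {
      switch (mode) {
        case DEINTERLACE_MODE_DISABLED:
          return FALSE;
        case DEINTERLACE_MODE_FORCE:
          /* Force: deinterlace everything, including "mixed" buffers that
           * are flagged progressive. */
          return TRUE;
        case DEINTERLACE_MODE_AUTO:
          /* Auto: deinterlace "interleaved" streams, and "mixed" streams
           * only for buffers actually flagged as interlaced. */
          if (stream_mode == GST_VIDEO_INTERLACE_MODE_INTERLEAVED)
            return TRUE;
          if (stream_mode == GST_VIDEO_INTERLACE_MODE_MIXED)
            return GST_BUFFER_FLAG_IS_SET (buf, GST_VIDEO_BUFFER_FLAG_INTERLACED);
          return FALSE;
      }
      return FALSE;
    }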
https://bugzilla.gnome.org/show_bug.cgi?id=726361
Signed-off-by: Simon Farnsworth <simon.farnsworth@onelan.co.uk>
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
gst_video_info_set_format() does not preserve video info properties. In
order to keep important information in the caps such as interlace mode,
framerate, pixel aspect ratio, ... we need to manually copy back those
properties after setting the new video format.
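A sketch of the save-and-restore workaround (the set of restored fields is
illustrative):

    #include <gst/video/video.h>

    static void
    video_info_change_format (GstVideoInfo *vip, GstVideoFormat format,
        guint width, guint height)
    {
      GstVideoInfo vi = *vip;   /* keep a copy of the original info */

      gst_video_info_set_format (vip, format, width, height);

      /* Restore properties that set_format() does not preserve. */
      vip->interlace_mode = vi.interlace_mode;
      vip->flags          = vi.flags;
      vip->views          = vi.views;
      vip->par_n          = vi.par_n;
      vip->par_d          = vi.par_d;
      vip->fps_n          = vi.fps_n;
      vip->fps_d          = vi.fps_d;
    }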
https://bugzilla.gnome.org/show_bug.cgi?id=722276
It can happen that a pool is provided that does not advertise the
VA-API video meta. We should unref that pool before using our own.
Discovered with vaapidecode ! {glimagesink,cluttersink}
https://bugzilla.gnome.org/show_bug.cgi?id=724957
[fixed compilation by adding the missing semi-colon]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Parse any pending data until a complete frame is obtained. This is a
memory optimization to avoid expansion of video packets stuffed into
the GstAdapter, and a fix to the EOS condition so that it detects there
is actually pending data that needs to be decoded and subsequently
output.
https://bugzilla.gnome.org/show_bug.cgi?id=731831
The packed slice header and packed raw data need to be paired with
the submission of VAEncSliceHeaderParameterBuffer. So handle them
on a per-slice basis instead of a per-picture basis.
[removed useless initializer]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Force early initialization of the GstVaapiDisplay to make sure that the
sink element's display object is presented first to upstream elements,
since it correctly features the display type requested by the user.
Otherwise, we might end up in situations where a VA/X11 display is
initialized in vaapidecode, then we try VA/DRM display in vaapisink
(as requested by the "display" property), but this would cause a failure
because we cannot acquire a DRM display that was previously acquired
through another backend (e.g. VA/X11).
When a new display is settled through GstElement::set_context() (>= 1.2),
or GstVideoContext::set_context() (<= 1.0), then we shall also update the
associated display type.
The built-in video parser elements are built into a single DSO named
libgstvaapi_parse.so. The various video parsers can be accessed as
vaapiparse_CODEC.
For now, this only includes a modified version of h264parse so as to
support H.264 MVC encoded streams.
7d8d045 h264parse: use new gst_h264_video_calculate_framerate()
d2f965a h264parse: set field_pic_flag when parsing a slice header
24c15b8 Import h264parse
a9283e5 bytereader: Use concistant derefence method
a8252c6 bytereader: Use pointer instead of index access
b1bebfc Import GstBitReader and GstByteReader
2f58788 h264: recognize SVC NAL units
4335da5 h264: fix SPS copy code for MVC
cf9b6dc h264: fix quantization matrix conversion routine names
b11ce2a h264: add gst_h264_video_calculate_framerate()
126dc6f add C++ guards for MPEG-4 and VP8 parsers
Factor out the removal process of unused inter-view only reference
pictures from the DPB, prior to the possible insertion of the current
picture.
Ideally, the compiler could still opt for generating two loops, but at
least the code is now clearer to maintain.
Improve process for the removal of pictures from DPB before possible
insertion of the current picture (C.4.4) for H.264 MVC inter-view only
reference components. In particular, handle cases where the picture to
be inserted is not the last one of the access unit, was already output,
and is no longer marked as used for reference, including for decoding
the next view components within the same access unit.
While invoking the DPB bumping process in the presence of many views,
it can be necessary to output previous pictures that are ready as a
whole, i.e. emitting all view components from the very first view order
index (zero) to the very last one in their original access unit, rather
than starting from the view order index of the picture that caused the
DPB bumping process to be invoked.
As a reminder, the maximum number of frames in DPB for MultiView
High profile with more than 2 views is not necessarily a multiple
of the number of views.
This fixes decoding of MVCNV-4.264.
Let the utility layer handle dynamic growth of the inter-view pictures
array. By definition, setting a new size on the array effectively grows
it, but it would also fill the newly created elements with empty (NULL)
entries, thus also increasing the reported length, which is not correct.
When decoding Multiview High profile streams with a large number of
views, it is not possible to make the
VAPictureParameterBufferH264.ReferenceFrames[] array hold the complete
DPB, with all possibly active pictures to be used for inter-view
prediction in the current access unit.
So reduce the scope of the ReferenceFrames[] array to only include
the set of reference pictures that are going to be used for decoding
the current picture. Basically, this is the union of all RefPicListX[]
arrays, for all slices constituting the decoded picture.
The inter-view reference components and inter-view only reference
components that are included in the reference picture lists shall
be considered as not being marked as "used for short-term reference"
or "used for long-term reference". This means that reference flags
should all be removed from VAPictureH264.flags.
This fixes decoding of MVCNV-2.264.
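A minimal sketch of the flag stripping on the VAPictureH264 entries:

    #include <va/va.h>

    /* Inter-view (only) reference components placed in the reference picture
     * lists must not carry short/long-term reference marking, so those flags
     * are stripped from the VAPictureH264 entry. */
    static void
    clear_reference_flags (VAPictureH264 *va_pic)
    {
      va_pic->flags &= ~(VA_PICTURE_H264_SHORT_TERM_REFERENCE |
          VA_PICTURE_H264_LONG_TERM_REFERENCE);
    }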
If the VA driver exposes ad-hoc H.264 MVC profiles, then we have to
be careful to detect profile changes and not reset the underlying
VA context erroneously. In MVC situations, we could indeed get a
profile_idc change for every SPS that gets activated, alternately
(base-view -> non-base view -> base-view, etc.).
An improved fix would be to characterize the exact profile to use
once and for all when SPS NAL units are parsed. This would also
allow for fallbacks to a base-view decoding only mode.
Exclusively use VA drivers that support raw packed headers for encoding,
i.e. simply submit packed Subset SPS and Prefix NAL unit headers. This
provides better compatibility across the various VA drivers and HW
generations since no particular API is needed beyond what readily exists.