Add a libvpx submodule that tracks upstream version 1.3.0. This is
needed to build a libgstcodecparsers_vpx.so library with all symbols
placed into the GSTREAMER namespace.
The gst_h264_parser_parse_sei() function now returns an array of SEI
messages instead of a single SEI message. Reason: a single SEI NAL
unit is allowed to carry several SEI messages packed together,
instead of one message per NAL unit.
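For reference, a minimal sketch of the new calling convention,
assuming parser and nalu are the current GstH264NalParser and the SEI
GstH264NalUnit (structure and enum names per the codecparsers
headers; error handling trimmed):

    GArray *messages = NULL;
    guint i;

    if (gst_h264_parser_parse_sei (parser, nalu, &messages) ==
        GST_H264_PARSER_OK) {
      /* One SEI NAL unit may now carry several SEI messages */
      for (i = 0; i < messages->len; i++) {
        GstH264SEIMessage *const sei =
            &g_array_index (messages, GstH264SEIMessage, i);
        switch (sei->payloadType) {
          case GST_H264_SEI_PIC_TIMING:
            /* ... */
            break;
          default:
            break;
        }
      }
      g_array_unref (messages);
    }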
8fadf40 h264: Fix multiple SEI messages in one SEI RBSP parsing.
644825f h265: remove trailing 0x00 bytes as the spec doesn't allow them
95f9f0f h264: remove trailing 0x00 bytes as the spec doesn't allow them
766007b h265: Initialize pointer correctly that is never assigned but freed in error cases
8ec5816 h265: Fix segfault when parsing HRD parameter
5b1730f h265: Fix segfault when parsing VPS
983b7f7 h265: prevent overrun of chroma_weight_l0_flag
7ba641d h265: Fix debug output
d9f9f9b h264: not all startcodes should have 3-byte 0 prefix
Fix parser and decoder states to synchronize at the right locations.
Previously, the parser state could be reset while the decoder state
had not been copied yet, e.g. when parsing several NAL units spanning
multiple frames while the current frame was not decoded yet.
This is a regression brought in by commit 6fe5496.
Fix gstreamer-vaapi includedir for GStreamer 1.2 setups, i.e. use
the pkgconfig version (1.0) instead of the intended API version (1.2).
libgstvaapi1.0-dev and libgstvaapi1.2-dev packages will now conflict,
as would core GStreamer 1.0 and GStreamer 1.2 dev packages anyway.
Add gst_vaapi_get_config_attribute() helper function that takes a
GstVaapiDisplay and the rest of the arguments with VA types. The aim
is to have thread-safe VA helpers by default.
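A sketch of what the helper looks like, assuming the display locking
macros and the vaapi_check_status() helper used throughout
libgstvaapi; the exact signature is reproduced from memory:

    gboolean
    gst_vaapi_get_config_attribute (GstVaapiDisplay * display,
        VAProfile profile, VAEntrypoint entrypoint,
        VAConfigAttribType type, guint * out_value_ptr)
    {
      VAConfigAttrib attrib;
      VAStatus status;

      attrib.type = type;
      /* Serialize the VA call: this is what makes the helper
       * thread-safe by default */
      GST_VAAPI_DISPLAY_LOCK (display);
      status = vaGetConfigAttributes (
          GST_VAAPI_DISPLAY_VADISPLAY (display),
          profile, entrypoint, &attrib, 1);
      GST_VAAPI_DISPLAY_UNLOCK (display);
      if (!vaapi_check_status (status, "vaGetConfigAttributes()"))
        return FALSE;
      if (out_value_ptr)
        *out_value_ptr = attrib.value;
      return TRUE;
    }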
Make sure to configure the encoder with the set of packed headers we
intend to generate and submit, i.e. make the selection of packed
headers to submit more robust.
Cache the first compatible GstVaapiProfile found if the encoder is not
configured yet. Next, factor out the code that checks for the
supported rate-control modes by moving the vaGetConfigAttributes()
call out to a separate function, while also making sure that the
attribute type is actually supported by the encoder.
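A sketch of the factored-out check, mapping the standard VA
rate-control flags onto GstVaapiRateControl modes; the helper from
above is reused and the encoder plumbing is elided:

    guint va_rc_modes = 0;
    guint32 rc_modes = 0;

    if (gst_vaapi_get_config_attribute (display, profile, entrypoint,
            VAConfigAttribRateControl, &va_rc_modes)) {
      /* GST_VAAPI_RATECONTROL_NONE is deliberately not exposed */
      if (va_rc_modes & VA_RC_CQP)
        rc_modes |= 1U << GST_VAAPI_RATECONTROL_CQP;
      if (va_rc_modes & VA_RC_CBR)
        rc_modes |= 1U << GST_VAAPI_RATECONTROL_CBR;
      if (va_rc_modes & VA_RC_VBR)
        rc_modes |= 1U << GST_VAAPI_RATECONTROL_VBR;
    }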
Also fix the default set of supported rate-control modes to not
include the "none" variant. It is useless to expose it at this point.
Introduce GstVaapiContextUsage so as to explicitly determine the
usage of a VA context. This is useful with a view to simplifying the
creation of VA contexts for VPP too.
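A sketch of the new enum; the member names are assumptions:

    /* Sketch: actual member names may differ */
    typedef enum {
      GST_VAAPI_CONTEXT_USAGE_DECODE = 1,
      GST_VAAPI_CONTEXT_USAGE_ENCODE,
      GST_VAAPI_CONTEXT_USAGE_VPP,
    } GstVaapiContextUsage;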
Unknown attributes, or attributes that are not supported for the
given profile/entrypoint pair, have a return value of
VA_ATTRIB_NOT_SUPPORTED. So, return failure in this case.
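In the helper sketched above, the added check boils down to:

    /* Unknown attribute, or attribute not supported for this
     * profile/entrypoint pair */
    if (attrib.value == VA_ATTRIB_NOT_SUPPORTED)
      return FALSE;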
Move GstVideoOverlayComposition handling to separate source files.
This helps keeping the GstVaapiContext core implementation to the
bare minimum, i.e. simply helpers to create a VA context and handle
the pool of associated VA surfaces.
Improve documentation and debug messages. Clean up APIs, i.e. strip
them down to the minimal set of interfaces. They are private, so
there is no need to expose getters, for instance.
Make sure that libgstvaapi private headers remain internally used to
build libgstvaapi libraries only. All header dependencies were reviewed
and checks for IN_LIBGSTVAAPI definition were added accordingly.
Also rename the GST_VAAPI_CORE definition to IN_LIBGSTVAAPI_CORE for
consistency.
Set a valid GstVideoCodecFrame.system_frame_number when decoding a
stream in standalone mode. While we are at it, improve the debugging
messages to also include that frame number.
When decoding failed, or the frame was dropped, the associated
surface proxy is not guaranteed to be present. Thus, the GST_DEBUG()
message needs to check whether the proxy is actually present or not.
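A sketch of the guarded message; how the proxy is obtained here is
illustrative:

    GstVaapiSurfaceProxy *const proxy =
        get_decoded_proxy (frame);  /* hypothetical; may be NULL */

    GST_DEBUG ("frame %u done, surface 0x%08x",
        frame->system_frame_number,
        proxy ? (guint) gst_vaapi_surface_proxy_get_surface_id (proxy)
              : VA_INVALID_ID);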
https://bugzilla.gnome.org/show_bug.cgi?id=722403
[fixed gst_vaapi_surface_proxy_get_surface_id() to return VA_INVALID_ID]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Don't emit NAL HRD parameters for now in the SPS headers because the
SEI buffering_period() and picture_timing() messages are not handled
yet. Some additional changes are necessary to get it right.
https://bugzilla.gnome.org/show_bug.cgi?id=722734
Fix the default CPB buffer size to something more reasonable
(1500 ms) that still fits the level limits. This is a non-configurable
property for now. The initial CPB removal delay is also fixed to
750 ms.
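In bitrate terms, that amounts to (sketch; variable names are
illustrative):

    /* CPB size covers a 1500 ms window; the initial removal delay
     * is fixed to half of it, i.e. 750 ms */
    cpb_size = (bitrate * 1500) / 1000;  /* bits, bitrate in bps */
    initial_cpb_removal_delay_ms = 750;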
https://bugzilla.gnome.org/show_bug.cgi?id=722087
Round down the calculated, or supplied, bitrate (kbps) to a multiple
of the HRD bitrate scale factor. Use a bitrate scale factor of 64 so
as to lose less precision. Likewise, don't round up, because the
supplied bitrate could be a strict constraint imposed by the user.
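i.e. (sketch; the scale-factor constant follows the description
above):

    #define SX_BITRATE 6  /* HRD bitrate scale factor: 2^6 = 64 */

    /* Round down, never up: the supplied bitrate may be a hard
     * constraint imposed by the user */
    bitrate = (bitrate >> SX_BITRATE) << SX_BITRATE;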
Fix the level calculation involving bitrate limits. Since we are
targeting NAL HRD conformance, the check against MaxBR from the
Table A-1 limits shall involve cpbBrNalFactor, depending on the
active profile.
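For instance (cpbBrNalFactor values as recalled from clause A.3 of
the H.264 spec; variable names are illustrative):

    /* NAL HRD bitrate limit for a candidate level:
     *   cpbBrNalFactor = 1200 for Baseline/Main/Extended,
     *   cpbBrNalFactor = 1500 for High profile */
    if (bitrate_bps > cpb_br_nal_factor * level_limits->max_br)
      continue;  /* level too low, try the next Table A-1 entry */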
Submit sequence parameter buffers only once, or when the bitstream
was reconfigured in a way that requires it. Always submit packed
sequence parameter buffers at the I-frame period, if the VA driver
needs those.
https://bugzilla.gnome.org/show_bug.cgi?id=722737
Make sure to submit the packed headers only if the underlying VA
driver requires them. Currently, only packed sequence and picture
headers are handled.
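i.e. probe VAConfigAttribEncPackedHeaders first and only queue what
the driver asks for (sketch, reusing the helper introduced earlier):

    guint value = 0;
    guint packed_headers = 0;

    if (gst_vaapi_get_config_attribute (display, profile, entrypoint,
            VAConfigAttribEncPackedHeaders, &value)) {
      /* Only packed sequence and picture headers for now */
      packed_headers = value &
          (VA_ENC_PACKED_HEADER_SEQUENCE |
           VA_ENC_PACKED_HEADER_PICTURE);
    }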
https://bugzilla.gnome.org/show_bug.cgi?id=722737
The VAEncSequenceParameterBuffer.ip_period value represents the
distance between the I-frame and the next P-frame. So, this also
accounts for any additional B-frames in between.
This fixes rate control heuristics for certain VA drivers.
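For example, with a GOP of the form I B B P B B P ... (sketch):

    /* ip_period is the I-to-P distance, so B-frames count too:
     * I B B P ... => ip_period = 3 */
    seq_param->ip_period = 1 + num_bframes;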
https://bugzilla.gnome.org/show_bug.cgi?id=722735
Fix level characterization when the bitrate is automatically computed
from the active coding tools, i.e. ensure the bitrate is set once the
profile is completely characterized but before the level calculation
process.
Document and rename a few functions here and there. Drop the code
that caps the num_bframes variable in reset_properties(), since it
shall have been checked beforehand, during properties initialization.
Clean-up GstBitWriter related utility functions and simplify notations.
While we are at it, also make bitstream writing more robust should an
overflow occur. We could later optimize for writing headers capped to
their maximum possible size by using the _unchecked() helper variants.
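e.g. every emit is now checked (sketch; the entry point name is from
the GstBitWriter utilities):

    /* Checked write: fails gracefully on overflow instead of
     * writing past the buffer; _unchecked() variants remain an
     * option for headers capped to a known maximum size */
    if (!gst_bit_writer_put_bits_uint32 (&writer, value, nbits))
      return FALSE;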
Drop the private header since it was originally used to expose
internals to the plugin element. The proper interface is now the
properties API, thus rendering private headers totally obsolete.
Add support for hue, saturation, brightness and contrast adjustments.
Also fix the local copy of the caps info to match the actually
expected caps subtype of interest.
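From the application side, these presumably map to plain element
properties (the element, property names and ranges here are
assumptions):

    /* Sketch: color balance adjustments on the postproc element */
    g_object_set (postproc,
        "hue", 0.0,          /* assumed range: [-180, 180] degrees */
        "saturation", 1.0,   /* assumed neutral value */
        "brightness", 0.0,
        "contrast", 1.0,
        NULL);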
https://bugzilla.gnome.org/show_bug.cgi?id=720376
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Change the submission order of VA objects so as to make that process
more logical, i.e. submit the sequence parameter first, if any; next
the packed headers associated with the sequence, picture or slices;
and finally the actual picture and associated slices.
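In vaRenderPicture() terms, the order becomes (sketch; buffer
variable names are illustrative):

    /* 1. Sequence parameter buffer, if any */
    vaRenderPicture (va_display, va_context, &seq_param_id, 1);
    /* 2. Packed headers (sequence, picture, slices), if any */
    vaRenderPicture (va_display, va_context, packed_header_ids,
        n_packed);
    /* 3. Picture parameter buffer, then the slices */
    vaRenderPicture (va_display, va_context, &pic_param_id, 1);
    vaRenderPicture (va_display, va_context, slice_ids, n_slices);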
Various clean-ups to improve consistency and readability: rename some
variables, drop unused macro definitions, drop initialization of
variables that are zero-initialized from the base class, drop
unnecessary casts, and allocate GPtrArrays with a destroy function.
Fix the frame cropping rectangle calculation to handle horizontal
resolutions that are not a multiple of 16 pixels, and also the
vertical resolution, which was incorrectly computed for progressive
sequences too.
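For the 4:2:0 case, the offsets work out as follows (per the H.264
cropping semantics):

    /* 4:2:0: CropUnitX = 2, CropUnitY = 2 * (2 - frame_mbs_only_flag),
     * i.e. 2 for progressive and 4 for interlaced sequences */
    frame_crop_right_offset = (16 * mb_width - width) / 2;
    frame_crop_bottom_offset =
        (16 * mb_height - height) / (2 * (2 - frame_mbs_only_flag));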
https://bugzilla.gnome.org/show_bug.cgi?id=722089
For non "Constant-QP" modes, we could provide more reasonable
heuristics for the target bitrate. In general, 48 bits per macroblock
with all the useful coding tools enabled looks safe enough. Then, this
rate is raised by +10% to +15% for each coding tool that is disabled.
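i.e. roughly (sketch; the exact per-tool factors and variable names
are illustrative):

    /* ~48 bits per macroblock with all coding tools enabled */
    bitrate = (mbs_per_picture * 48 * frame_rate) / 1000; /* kbps */
    if (!use_cabac)
      bitrate = (bitrate * 115) / 100;  /* +15%: CAVLC only */
    if (!use_dct8x8)
      bitrate = (bitrate * 110) / 100;  /* +10%: no 8x8 transform */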
https://bugzilla.gnome.org/show_bug.cgi?id=719699
Add support for the "high-compression" tuning option. First, determine
the largest profile supported by the hardware. Next, check any target
limit set by the user. Then, enable each individual coding tool based
on the resulting profile_idc value to use.
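A sketch of that final step, keying each tool off the standard
profile_idc values (variable names are illustrative):

    /* Enable individual coding tools from the resulting profile_idc */
    if (profile_idc >= 77) {   /* Main profile and above */
      use_cabac = TRUE;        /* CABAC entropy coding */
      num_bframes = 1;         /* B-slices allowed */
    }
    if (profile_idc >= 100)    /* High profile */
      use_dct8x8 = TRUE;       /* 8x8 transform */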
https://bugzilla.gnome.org/show_bug.cgi?id=719696
Allow the user to specify the largest profile to use for encoding,
due to target decoder constraints. For instance, if the CABAC entropy
coding mode is requested but only the "constrained-baseline" profile
is desired, then an error is returned during codec configuration.
Also make sure that the suitable profile we derived actually matches
what the HW can cope with.
https://bugzilla.gnome.org/show_bug.cgi?id=719694
Refine the heuristic to determine the maximum size of a coded buffer
to account for the exact number of slices. set_context_info() is the
last step during codec reconfiguration; no additional change is done
afterwards, so re-using the num_slices field here is fine.
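e.g. (sketch; the per-macroblock budget and per-slice overhead
constants are illustrative):

    /* Worst-case coded buffer: a generous per-macroblock budget,
     * plus a fixed overhead for each slice header */
    codedbuf_size = (width * height * 400) / (16 * 16);
    codedbuf_size += num_slices * (4 + MAX_SLICE_HDR_SIZE / 8);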
https://bugzilla.gnome.org/show_bug.cgi?id=719953
Automatically derive the minimum profile and level to be used for
encoding, based on the activated coding tools. By default, the
encoder will try to generate a bitstream that has the best chance of
being decoded on most platforms.
Also change the default profile to "constrained-baseline" so as to
ensure maximum compatibility when the stream is decoded.
https://bugzilla.gnome.org/show_bug.cgi?id=719691