Record the underlying native display instance into the toplevel
GstVaapiDisplay object. This is useful for fast lookups of the
underlying native display, e.g. for creating an EGL display.
Add new generic helper functions gst_vaapi_texture_new_wrapped()
and gst_vaapi_texture_new() to create a texture without requiring
the caller to check for the display type themselves, i.e.
internally, there is now a GstVaapiDisplayClass hook to create
textures, and the actual backend implementation fills it in.
This is a simplification in view of supporting EGL.
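A minimal usage sketch, assuming gst_vaapi_texture_new() takes the display, a GL target and format expressed as plain guint values, and the texture size (the exact parameter list is an assumption based on this description):

    #include <gst/vaapi/gstvaapitexture.h>

    /* Hypothetical: create a texture without checking the display type;
     * the GstVaapiDisplayClass hook dispatches to the actual backend. */
    static GstVaapiTexture *
    create_texture (GstVaapiDisplay * display, guint width, guint height)
    {
      /* GL_TEXTURE_2D (0x0DE1) and GL_RGBA (0x1908) passed as plain guint */
      return gst_vaapi_texture_new (display, 0x0DE1, 0x1908, width, height);
    }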
GstVaapiTexture is a generic abstraction that could be moved to the
core libgstvaapi library. While doing this, no extra dependency needs
to be added. This means that a GstVaapiTextureClass is now available
for any specific code that needs to be added, e.g. creation of the
underlying GL texture objects, or backend dependent ways to upload
a surface to the texture object.
Generic OpenGL data types (GLuint, GLenum) are also replaced with a
plain guint.
https://bugzilla.gnome.org/show_bug.cgi?id=736715
The VA/GLX interfaces are obsolete. They used to exist for XvBA, and
ease of use, but they had other caveats to deal with. It's now better
to move on to legacy mode, whereby VA/GLX interop is to be provided
through (i) X11 Pixmap, and (ii) other modern means of buffer sharing.
https://bugzilla.gnome.org/show_bug.cgi?id=736711
The gst_vaapi_texture_put_surface() function is missing a crop_rect
argument that would be used during transfer for cropping the source
surface to the desired dimensions.
Note: from a user's point of view, the GstVaapiTexture object should
be created with the cropped size. That's the default behaviour in
software decoding pipelines that we need to cope with.
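A hedged sketch of what a call with the new argument could look like; the rectangle type and the flags argument are assumptions based on the existing API:

    #include <gst/vaapi/gstvaapitexture.h>
    #include <gst/vaapi/gstvaapisurface.h>

    static gboolean
    upload_cropped (GstVaapiTexture * texture, GstVaapiSurface * surface,
        guint visible_width, guint visible_height, guint flags)
    {
      /* Hypothetical: crop the source surface to its visible region
       * while transferring it to the texture. */
      GstVaapiRectangle crop_rect = { 0, 0, visible_width, visible_height };

      return gst_vaapi_texture_put_surface (texture, surface, &crop_rect, flags);
    }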
This is an API/ABI change, and SONAME version needs to be bumped.
https://bugzilla.gnome.org/show_bug.cgi?id=736712
Add new gst_vaapi_surface_new_full() helper function that allocates
a VA surface from a GstVideoInfo template argument (see the sketch
after this list). Additional flags may include ways to:
- allocate linear storage (GST_VAAPI_SURFACE_ALLOC_FLAG_LINEAR_STORAGE);
- allocate with fixed strides (GST_VAAPI_SURFACE_ALLOC_FLAG_FIXED_STRIDES);
- allocate with fixed offsets (GST_VAAPI_SURFACE_ALLOC_FLAG_FIXED_OFFSETS).
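A hypothetical allocation sketch, assuming the helper takes the display, a filled-in GstVideoInfo template and the allocation flags listed above (the exact signature is an assumption):

    #include <gst/video/video.h>
    #include <gst/vaapi/gstvaapisurface.h>

    static GstVaapiSurface *
    alloc_linear_nv12 (GstVaapiDisplay * display, guint width, guint height)
    {
      GstVideoInfo vi;

      /* Template describing the desired format and size */
      gst_video_info_set_format (&vi, GST_VIDEO_FORMAT_NV12, width, height);

      /* Request linear storage for the underlying VA surface */
      return gst_vaapi_surface_new_full (display, &vi,
          GST_VAAPI_SURFACE_ALLOC_FLAG_LINEAR_STORAGE);
    }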
Add new gst_vaapi_surface_proxy_new() helper to wrap a surface into
a proxy. The main use case for that is to convey additional information
at the proxy level that would not be suitable for the plain surface.
Re-introduce a GST_VAAPI_ID_INVALID value that represents
a non-zero and invalid id. This is useful to have a value
that is still invalid for cases where zero could actually
be a valid value.
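A small illustration of the intent; the underlying type and the actual constant value are assumptions, the point being that the marker is non-zero so that zero remains usable as a valid id:

    #include <glib.h>

    /* Hypothetical definitions for illustration only */
    typedef guintptr GstVaapiID;
    #define GST_VAAPI_ID_INVALID ((GstVaapiID) -1)

    static gboolean
    id_is_valid (GstVaapiID id)
    {
      return id != GST_VAAPI_ID_INVALID;    /* id == 0 is still valid */
    }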
Make it possible for all libgstvaapi backends (libs) to access a
common GstVaapiMiniObject API and implementation. This is a minor step
towards full exposure when needed, but restrict it to libgstvaapi at
this time.
Really report sample aspect ratio (SAR) as present, and make it match
what we have obtained from the user as pixel-aspect-ratio (PAR). i.e.
really make sure VUI parameter aspect_ratio_info_present_flag is set
to TRUE and that the indication from aspect_ratio_idc is Extended_SAR.
This is a leftover from git commit a12662f.
https://bugzilla.gnome.org/show_bug.cgi?id=740360
Fix gst_vaapi_decoder_mpeg4_parse() to initialize the packet type to
GST_MPEG4_USER_DATA so that a parse error would result in skipping
that packet. Also fix gst_vaapi_decoder_mpeg4_decode_codec_data() to
initialize status to GST_VAAPI_DECODER_STATUS_SUCCESS.
Use the SEI pic_timing() message to track and propagate down the repeat
first field (RFF) flag. This is only initial support as there is one
other condition that could induce the RFF flag, which is not handled
yet.
Fix the decoding process for picture order count type 0 when the previous
picture had a memory_management_control_operation = 5. In particular, fix
the actual variable type for prev_pic_structure to hold the full bits of
the picture structure.
In practice, this used to work though, due to the underlying type used to
express a gboolean.
Use the SEI pic_timing() message to track the pic_struct variable when
present, or infer it from the regular slice header flags field_pic_flag
and bottom_field_flag. This fixes temporal sequence ordering when the
output pictures are to be displayed.
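A sketch of the fallback inference when no pic_timing() SEI is available, following the H.264 semantics of field_pic_flag and bottom_field_flag; the enum values mirror Table D-1 of the spec and are illustrative, not the decoder's actual symbols:

    #include <gst/codecparsers/gsth264parser.h>

    enum {
      PIC_STRUCT_FRAME        = 0,    /* progressive frame */
      PIC_STRUCT_TOP_FIELD    = 1,
      PIC_STRUCT_BOTTOM_FIELD = 2
    };

    static guint
    infer_pic_struct (const GstH264SliceHdr * slice_hdr)
    {
      if (!slice_hdr->field_pic_flag)
        return PIC_STRUCT_FRAME;
      return slice_hdr->bottom_field_flag ?
          PIC_STRUCT_BOTTOM_FIELD : PIC_STRUCT_TOP_FIELD;
    }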
https://bugzilla.gnome.org/show_bug.cgi?id=739291
Add support for DRM Render-Nodes. This is a new feature that appeared
in kernel 3.12 for experimentation purposes, but was later declared
stable enough in kernel 3.15 for getting enabled by default.
This allows headless usage without any authentication, i.e. usage
through plain ssh connections is possible.
Fix gst_vaapi_surface_proxy_copy() to copy the view-id element, thus
fixing random frames skipped when vaapipostproc element is used in
passthrough mode. In that mode, GstMemory is copied, thus including
the underlying GstVaapiVideoMeta and associated GstVaapiSurfaceProxy.
Ensure the X11 implementation for GstVaapiWindow::get_geometry() is
thread-safe by default, so that upper layer users don't need to handle
that explicitly.
Add gst_vaapi_window_reconfigure() interface to force an update of
the GstVaapiWindow "soft" size, based on the current geometry of the
underlying native window.
This can be useful for instance to synchronize the window size when
the user changed it.
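A minimal usage sketch, assuming the new interface only needs the window object:

    #include <gst/vaapi/gstvaapiwindow.h>

    /* Hypothetical: resync the GstVaapiWindow "soft" size after the user
     * resized the native window (e.g. on an X11 ConfigureNotify event). */
    static void
    on_native_window_resized (GstVaapiWindow * window)
    {
      gst_vaapi_window_reconfigure (window);
    }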
Thanks to Fabrice Bellet for rebasing the patch.
[changed interface to gst_vaapi_window_reconfigure()]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Add gst_vaapi_display_get_display_name() helper function to determine
the name associated with the underlying native display. Note that for
raw DRM backends, the display name is actually the device path.
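A usage sketch; the returned string is assumed to be owned by the display object:

    #include <gst/gst.h>
    #include <gst/vaapi/gstvaapidisplay.h>

    static void
    log_display_name (GstVaapiDisplay * display)
    {
      /* For raw DRM backends this is actually the device path */
      const gchar *name = gst_vaapi_display_get_display_name (display);

      GST_INFO ("native display: %s", name ? name : "(unknown)");
    }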
The timestamp generator in gstvaapidecoder_mpeg2.c always interpolated
frame timestamps within a GOP, even when it's been fed input PTS for
every frame.
That leads to incorrect output timestamps in some situations - for example
live playback where input timestamps have been scaled based on arrival time
from the network and don't exactly match the framerate.
https://bugzilla.gnome.org/show_bug.cgi?id=732719
Forbid GstVaapiObject to be created without an associated klass spec.
It is mandatory that the subclass implements an adequate .finalize()
hook, so it shall provide a valid GstVaapiObjectClass.
https://bugzilla.gnome.org/show_bug.cgi?id=722757
[made non-NULL klass argument to gst_vaapi_object_new() a requirement]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Call the subclass .init() function in gst_vaapi_object_new(), if
needed. The default behaviour is to zero initialize the subclass
object data, then the .init() function can be used to initialize
fields to non-default values, e.g. VA object ids to VA_INVALID_ID.
Also fix the gst_vaapi_object_new() description, which was merely
copied from GstVaapiMiniObject.
https://bugzilla.gnome.org/show_bug.cgi?id=722757
[changed to always zero initialize the subclass]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
When a DPB flush is required, e.g. at a natural end of stream or issued
explicitly through an IDR, try to detect any frame left in the DPB that
is interlaced but does not contain two decoded fields. In that case, mark
the picture as having a single field only.
This avoids a hang while decoding tv_cut.mkv.
Simplify the dpb_output() function to exclusively rely on the frame store
buffer to output, since this is now always provided. Besides, also fix
cases where split fields would not be displayed.
This is a regression from f48b1e0.
Cope with latest changes from codecparsers/h264. It is now required
to explicitly clear the GstH264PPS structure as it could contain
additional allocations (slice_group_ids).
Add new GstVaapiSurfaceProxy flag FFB, which means "first frame in
bundle", and really expresses the first view component of a multi
view coded frame. e.g. in H.264 MVC, the surface proxy has flag FFB
set if VOIdx = 0.
Likewise, new API is exposed to retrieve the associated "view-id".
Allow decoders to set the "one-field" attribute when the decoded frame
genuinely has a single field, or if the second field was mis-decoded but
we still want to display the first field.
Make sure to output the decoded picture, and push the associated
GstVideoCodecFrame, only once. The frame fully represents what needs
to be output, including for interlaced streams. Otherwise, the base
GstVideoDecoder class would release the frame twice.
Anyway, the general process is to output decoded frames only when
they are complete. By complete, we mean a full frame was decoded or
both fields of a frame were decoded.
Slightly optimize decoding process by submitting the current VA surface
for decoding earlier to the hardware, and perform the reference picture
marking process and DPB update process afterwards.
This is a minor optimization to let the video decode engine kick in work
earlier, thus improving parallel resources utilization.
Fix decoding of interlaced streams where a first field (e.g. B-slice)
was immediately output and the current decoded field is to be paired
with that former frame, which is no longer in DPB.
https://bugzilla.gnome.org/show_bug.cgi?id=701340
Optimize the process to detect new pictures or start of new access
units by checking if the previous NAL unit was the end of a picture,
or the end of the previous access unit.
Add support for MVC streams with multiple SPS and subset SPS headers
emitted regularly, e.g. at around every I-frame. Track the maximum
number of views in ensure_context() and really reset the DPB size to
the expected value, always. i.e. even if it decreased. dpb_reset()
only takes care of ensuring the DPB allocation.
Fix the compaction process when the DPB is cleared for a specific
view, i.e. fix the process of filling in the holes resulting from
removing frame buffers matching the current picture.
It is not necessary to periodically send SPS or subset SPS headers.
It is up to the upper layer (e.g. the transport layer) to decide
if/how to periodically submit them. For now, only generate new SPS
or subset SPS headers when the codec config changed.
Note: the upper layer could readily determine the config headers
(SPS/PPS) through the gst_vaapi_encoder_h264_get_codec_data() function.
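A hypothetical sketch of an upper layer fetching the config headers instead of relying on periodic in-band SPS/PPS; the accessor's exact signature and status code are assumptions based on its name:

    #include <gst/vaapi/gstvaapiencoder_h264.h>

    static GstBuffer *
    fetch_codec_data (GstVaapiEncoder * encoder)
    {
      GstBuffer *codec_data = NULL;

      /* Assumed to return the SPS/PPS configuration as a GstBuffer */
      if (gst_vaapi_encoder_h264_get_codec_data (encoder, &codec_data) !=
          GST_VAAPI_ENCODER_STATUS_SUCCESS)
        return NULL;
      return codec_data;
    }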
https://bugzilla.gnome.org/show_bug.cgi?id=732083
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Report sample aspect ratio (SAR) as present, and make it match what
we have obtained from the user as pixel-aspect-ratio (PAR). i.e. the
VUI parameter aspect_ratio_info_present_flag now defaults to TRUE.
Set the value of num_anchor_refs_l0, num_anchor_refs_l1, num_non_anchor_refs_l0,
and num_non_anchor_refs_l1 to zero since the inter-view prediction is not yet
supported.
When the seq_parameter_set_data() syntax structure is present in a subset
sequence parameter set and vui_parameters_present_flag is equal to 1, then
timing_info_present_flag shall be equal to 0 (H.7.4.2.1.1).
Submit Prefix NAL headers (nal_unit_type = 14) before every packed
slice header (nal_unit_type = 1 or 5) only for the base view. In non
base views, a Coded Slice Extension NAL header (nal_unit_type = 20)
is required, with an appropriate nal_unit_header_mvc_extension() in
the NAL header bytes.
https://bugzilla.gnome.org/show_bug.cgi?id=732083
Fix search for a picture in the DPB that has a lower POC value than
the current picture. The dpb_find_lowest_poc() function will return
a picture with the lowest POC in DPB and that is marked as "needed
for output", but an additional check against the actual POC value
of the current picture is needed.
This is a regression from 1c46990.
https://bugzilla.gnome.org/show_bug.cgi?id=732130
Fix dpb_clear() to clear previous frame buffers only if they actually
exist to begin with. If the decoder bailed out early, e.g. when it
does not support a specific profile, that array of previous frames
might not be allocated beforehand.
We can avoid scanning for start codes again if the bitstream is fed
in NALU chunks. Currently, we always scan for start codes, and keep
track of remaining bits in a GstAdapter, even if, in practice, we
are likely receiving one GstBuffer per NAL unit. i.e. h264parse with
"nal" alignment.
https://bugzilla.gnome.org/show_bug.cgi?id=723284
[use gst_adapter_available_fast() to determine the top buffer size]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
The `vaapipostproc' element could never determine if the H.264 stream
was interlaced, and thus always assumed it to be progressive. Fix the
H.264 decoder to report interlace-mode accordingly, thus allowing the
vaapipostproc element to automatically enable deinterlacing.
The packed slice header and packed raw data need to be paired with
the submission of VAEncSliceHeaderParameterBuffer. So handle them
on a per-slice basis instead of a per-picture basis.
[removed useless initializer]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Factor out the removal process of unused inter-view only reference
pictures from the DPB, prior to the possible insertion of the current
picture.
Ideally, the compiler could still opt for generating two loops. But
at least, the code is now clearer for maintenance.
Improve process for the removal of pictures from DPB before possible
insertion of the current picture (C.4.4) for H.264 MVC inter-view only
reference components. In particular, handle cases where the picture to
be inserted is not the last one of the access unit and was already
output and is no longer marked as used for reference, including for
decoding the next view components within the same access unit.
While invoking the DPB bumping process in presence of many views,
it could be necessary to output previous pictures that are ready,
as a whole, i.e. emitting all view components from the very first
view order index zero to the very last one in its original access
unit; and not starting from the view order index of the picture
that caused the DPB bumping process to be invoked.
As a reminder, the maximum number of frames in DPB for MultiView
High profile with more than 2 views is not necessarily a multiple
of the number of views.
This fixes decoding of MVCNV-4.264.
Let the utility layer handle dynamic growth of the inter-view pictures
array. By definition, setting a new size to the array will effectively
grow the array, but would also fill in the newly created elements with
empty entries (NULL), thus also increasing the reported length, which
is not correct.
When decoding Multiview High profile streams with a large number of
views, it is not possible to make the VAPictureParameterBufferH264.
ReferenceFrames[] array hold the complete DPB, with all possibly
active pictures to be used for inter-view prediction in the current
access unit.
So reduce the scope of the ReferenceFrames[] array to only include
the set of reference pictures that are going to be used for decoding
the current picture. Basically, this is a union of all RefPicListX[]
array, for all slices constituting the decoded picture.
The inter-view reference components and inter-view only reference
components that are included in the reference picture lists shall
be considered as not being marked as "used for short-term reference"
or "used for long-term reference". This means that reference flags
should all be removed from VAPictureH264.flags.
This fixes decoding of MVCNV-2.264.
If the VA driver exposes ad-hoc H.264 MVC profiles, then we have to
be careful to detect profiles changes and not reset the underlying
VA context erroneously. In MVC situations, we could indeed get a
profile_idc change for every SPS that gets activated, alternatively
(base-view -> non-base view -> base-view, etc.).
An improved fix would be to characterize the exact profile to use
once and for all when SPS NAL units are parsed. This would also
allow for fallbacks to a base-view decoding only mode.
Exclusively use VA drivers that support raw packed headers for encoding.
i.e. simply submit packed Subset SPS and Prefix NAL unit headers. This
provides for better compatibility across the various VA drivers and HW
generations since no particular API is needed beyond what readily exists.
Since we are encoding each view independently from each other, we
need a higher number of pre-allocated surfaces to be used as the
reconstructed frames. For Stereo High profile encoding, this means
to effectively double the number of frames to be stored in the DPB.
Add initial support for Subset SPS, Prefix NAL and Slice Extension NAL
for non-base-view streams encoding, and the usual SPS, PPS and Slice
NALs for base-view encoding.
The H.264 Stereo High profile encoding mode will be turned on when the
"num-views" parameter is set to 2. The source (raw) YUV frames will be
considered as Left/Right views, alternately.
Each of the two views has its own frames reordering pool and reference
frames list management system. Inter-view references are not supported
yet, so the views are encoded independently from each other.
Signed-off-by: Li Xiaowei <xiaowei.a.li@intel.com>
[limited to Stereo High profile per the definition of MAX_NUM_VIEWS]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Create structures to maintain the reference frames list (RefPool) and
frames reordering (ReorderPool) logic.
This is a prerequisite for H.264 MVC support.
Signed-off-by: Li Xiaowei <xiaowei.a.li@intel.com>
Add provisions to write subset SPS headers to the bitstream, in view
of supporting the H.264 MVC specification.
This assumes the libva "staging" branch is in use.
Signed-off-by: Li Xiaowei <xiaowei.a.li@intel.com>
Optimize lookups of view ids / view order indices by caching the result
of the calculations right into the GstVaapiParserInfoH264 struct. This
greatly simplifies the is_new_access_unit() and find_first_field() functions.
Add safe fallbacks for MVC profiles (see the sketch after this list):
- all MultiView High profile streams with 2 views at most can be decoded
with a Stereo High profile compliant decoder;
- all Stereo High profile streams with only progressive views can be
decoded with a MultiView High profile compliant decoder;
- all drivers that support slice-level decoding could normally support
MVC profiles when the DPB holds at most 16 frames.
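A simplified sketch of that fallback selection; the profile enum values follow the GstVaapiProfile naming used elsewhere in libgstvaapi, but the helper itself is hypothetical:

    #include <gst/vaapi/gstvaapiprofile.h>

    static GstVaapiProfile
    get_fallback_profile (GstVaapiProfile profile, guint num_views,
        gboolean progressive_only)
    {
      switch (profile) {
        case GST_VAAPI_PROFILE_H264_MULTIVIEW_HIGH:
          /* <= 2 views can be handled by a Stereo High decoder */
          if (num_views <= 2)
            return GST_VAAPI_PROFILE_H264_STEREO_HIGH;
          break;
        case GST_VAAPI_PROFILE_H264_STEREO_HIGH:
          /* progressive-only streams can be handled by a MultiView High decoder */
          if (progressive_only)
            return GST_VAAPI_PROFILE_H264_MULTIVIEW_HIGH;
          break;
        default:
          break;
      }
      return GST_VAAPI_PROFILE_UNKNOWN;
    }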
In order to have a stricter conforming implementation, we need to carefully
detect access unit boundaries. Additional operations may need to be
performed at those boundaries.
Detect the first VCL NAL unit of a picture for MVC, based on the
view_id as per H.7.4.1.2.4. Note that we only need to detect new
view components.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Always cache the previous NAL unit so that we could check whether
there is a Prefix NAL unit immediately preceding the current slice
or IDR NAL unit. In that case, the NAL unit metadata is copied into
the current NAL unit. Otherwise, some default values are inferred,
tentatively. e.g. view_id shall be set to 0 and inter_view_flag to 1.
[infer default values for slice if previous NAL was not a Prefix]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Allow decoding for base views of MVC encoded streams. For now, just skip
the slice extension and prefix NAL units, and skip non-base view frames.
Signed-off-by: Xiaowei Li <xiaowei.a.li@intel.com>
[fixed memory leak, improved check for MVC NAL units]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Factor out process by which the decoded picture with the lowest POC
is found, and possibly output. Likewise, the storage and marking of
a reference decoded, or non-reference decoded picture, into the DPB
could also be simplified as they mostly share the same operations.
Make init_picture_ref_lists() more consistent with other functions
related to the reference marking process by supplying the current
picture as argument.
Add gst_vaapi_display_get_vendor_string() helper function to query
the underlying VA driver name. The display object owns the resulting
string, so it shall not be deallocated.
That function is thread-safe. It could be used for debugging purposes,
for instance.
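A usage sketch for debugging purposes:

    #include <gst/gst.h>
    #include <gst/vaapi/gstvaapidisplay.h>

    static void
    log_va_driver (GstVaapiDisplay * display)
    {
      /* Thread-safe; the string is owned by the display object */
      const gchar *vendor = gst_vaapi_display_get_vendor_string (display);

      GST_DEBUG ("VA driver: %s", vendor ? vendor : "(unknown)");
    }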
Make sure to initialize one GstVaapiDisplay at a time, even in threaded
environments. This makes sure the display cache is also consistent
during the whole display creation process. In the former implementation,
there were risks that display cache got updated in another thread.
Add support for dynamic growth of the VA surfaces pool. For decoding,
this implies the recreation of the underlying VA context, as per the
requirement from VA-API. Besides, only increases are supported, not
decreases.
It is a requirement from VA-API specification that the VA context got
from vaCreateContext(), for decoding purposes, binds the supplied set
of VA surfaces. This means that if the set of VA surfaces is to be
changed for the current decode session, then the VA context needs to
be recreated with the new set of VA surfaces.
Complement fix committed as e95a42e.
The H.264 AVC standard says: if the field is part of a reference
frame or a complementary reference field pair, and the other field of
the same reference frame or complementary reference field pair is also
marked as "used for long-term reference", the reference frame or
complementary reference field pair is also marked as "used for long-term
reference" and assigned LongTermFrameIdx equal to long_term_frame_idx.
This fixes decoding of MR9_BT_B in strict mode.
https://bugs.freedesktop.org/show_bug.cgi?id=64624
https://bugzilla.gnome.org/show_bug.cgi?id=724518
Request the correct chroma format for decoding grayscale streams.
i.e. make lookups of the VA chroma format more generic, thus possibly
supporting more formats in the future.
This means that, if a VA driver doesn't support grayscale formats,
it is now going to fail. We cannot safely assume that maybe grayscale
was implemented on top of some YUV 4:2:0 with the chroma components
all set to 0x80.
Fix reference picture marking process with memory_management_control_op
set to 3 and 6, i.e. assign LongTermFrameIdx to a short-term reference
picture, or the current picture.
This fixes decoding of FRExt_MMCO4_Sony_B.
https://bugs.freedesktop.org/show_bug.cgi?id=64624
https://bugzilla.gnome.org/show_bug.cgi?id=724518
[squashed, edited to use GST_VAAPI_PICTURE_IS_COMPLETE() macro]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
The initialization of reference picture lists (8.2.4.2) applies to all
slices. So, the RefPicList0/1 lists need to be constructed prior to
each slice submission to the HW decoder.
This fixes decoding of video sequences where frames are encoded with
multiple slices of different types, e.g. 4 slices in this order I, P,
I, and P. More precisely, CABAST3_Sony_E and CABASTBR3_Sony_B.
https://bugzilla.gnome.org/show_bug.cgi?id=724518
When NAL units of type 13 (SPS extension) or type 19 (auxiliary slice)
are present in a video, decoders shall perform the (optional) decoding
process specified for these NAL units or shall ignore them (7.4.1).
Implement option 2 (skip) for now, as alpha composition is not
supported yet during the decoding process.
This fixes decoding of the primary coded video in alphaconformanceG.
https://bugzilla.gnome.org/show_bug.cgi?id=703928
https://bugzilla.gnome.org/show_bug.cgi?id=728869
https://bugzilla.gnome.org/show_bug.cgi?id=724518
[skip NAL units earlier, i.e. at parsing time]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
When MVC slice NAL units (coded slice extension and prefix NAL) are
present, the number of NAL header bytes is 3, not 1 as usual.
Signed-off-by: Li Xiaowei <xiaowei.a.li@intel.com>
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
At the time the first VCL NAL unit of a primary coded picture is found,
and if that NAL unit was parsed to be an SPS or PPS, then the entries
in the parser may have been overridden. This means that, when the picture
is to be decoded, slice_hdr->pps could point to an invalid (the next)
PPS entry.
So, one way to solve this problem is to not use the parser PPS and
SPS info but rather maintain our own activation chain in the decoder.
https://bugzilla.gnome.org/show_bug.cgi?id=724519
https://bugzilla.gnome.org/show_bug.cgi?id=724518
Retain the SEI messages that were parsed from the access unit until we
have completely decoded the current frame. This is done so that we can
peek at that data whenever necessary during decoding. e.g. for exposing
3D stereoscopic information at a later stage.
Fix support for grayscale encoded video clips, and possibly others if
the underlying driver supports the non-YUV 4:2:0 formats. i.e. defer
the decision that a surface with the desired chroma format is not
supported to the actual VA driver implementation.
https://bugzilla.gnome.org/show_bug.cgi?id=728144
Don't force allocation of VA surfaces in YUV 4:2:0 format. Rather, allow
for the upper layer to specify the desired chroma type. If the chroma
type field is not set (or yields zero), then YUV 4:2:0 format is used
by default.
Fix possible bug when a per-segment deblocking filter level value
needs to be set in non-absolute mode, i.e. when the loop filter update
value is negative in delta mode.
Also clamp the resulting filter level value to 0..63 range.
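A sketch of the per-segment computation as described; the parameter names are illustrative and do not match the actual VP8 parser structures:

    #include <glib.h>

    static gint
    compute_filter_level (gint base_level, gint segment_value,
        gboolean absolute_mode)
    {
      /* In absolute mode the segment value replaces the base level;
       * in delta mode it is added to it and may be negative. */
      gint level = absolute_mode ? segment_value : base_level + segment_value;

      return CLAMP (level, 0, 63);
    }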
Improve condition to disable the loop filter. The previous heuristic
used to check all filter levels, for all segments. It turns out that
only the base filter_level value defined in the frame header needs
to be checked.
This fixes 00-comprehensive-013.
Fix generation of source tarballs when certain conditionals are not
met. e.g. always include all buildable codecparsers sources in the
distribution tarball, fix plug-in element sources set to include X11
and encoder bits.
The built-in libvpx serves multiple purposes, among which the most
important ones could be: track the most up-to-date, and optimized,
range decoder; allow for future hybrid implementations (non-VLD);
and have a completely independent range decoder implementation.
Apply correct patch from fd.o #722760 to fix several issues: update the
license terms to LGPLv2.1+, fix dependencies to built-in libvpx and fix
make dist.
Add libvpx submodule that tracks the upstream version 1.3.0. This is
needed to build a libgstcodecparsers_vpx.so library with all symbols
placed into the GSTREAMER namespace.
The gst_h264_parse_parse_sei() function now returns an array of SEI
messages, instead of a single SEI message. Reason: it is allowed to
have several SEI messages packed into a single SEI NAL unit, instead
of multiple NAL units.
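A sketch of consuming the returned array; the element type is assumed to be the GstH264SEIMessage structure from the codecparsers library:

    #include <gst/codecparsers/gsth264parser.h>

    static void
    handle_sei_messages (GArray * messages)
    {
      guint i;

      for (i = 0; i < messages->len; i++) {
        GstH264SEIMessage *sei = &g_array_index (messages, GstH264SEIMessage, i);

        switch (sei->payloadType) {
          case GST_H264_SEI_PIC_TIMING:
            /* e.g. track pic_struct for interlacing/RFF handling */
            break;
          default:
            break;
        }
      }
    }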
8fadf40 h264: Fix multiple SEI messages in one SEI RBSP parsing.
644825f h265: remove trailling 0x00 bytes as the spec doesn't allow them
95f9f0f h264: remove trailling 0x00 bytes as the spec doesn't allow them
766007b h265: Initialize pointer correctly that is never assigned but freed in error cases
8ec5816 h265: Fix segfault when parsing HRD parameter
5b1730f h265: Fix segfault when parsing VPS
983b7f7 h265: prevent to overrun chroma_weight_l0_flag
7ba641d h265: Fix debug output
d9f9f9b h264: not all startcodes should have 3-byte 0 prefix
Fix parser and decoder state to sync at the right locations. This is
because we could reset the parser state, while the decoder state was
not copied yet, e.g. when parsing several NAL units from multiple frames
whereas the current frame was not decoded yet.
This is a regression brought in by commit 6fe5496.
Fix gstreamer-vaapi includedir for GStreamer 1.2 setups. i.e. use
the pkgconfig version (1.0) instead of the intended API version (1.2).
libgstvaapi1.0-dev and libgstvaapi1.2-dev packages will now conflict,
as would core GStreamer 1.0 and GStreamer 1.2 dev packages anyway.
Add gst_vaapi_get_config_attribute() helper function that takes a
GstVaapiDisplay and the rest of the arguments with VA types. The aim
is to have thread-safe VA helpers by default.
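A usage sketch, assuming the helper fills in a single attribute value and returns a boolean (the exact signature is an assumption based on the description):

    #include <va/va.h>
    #include <gst/vaapi/gstvaapidisplay.h>

    static gboolean
    get_supported_rc_modes (GstVaapiDisplay * display, guint * out_rc_modes)
    {
      /* Thread-safe wrapper around vaGetConfigAttributes() */
      return gst_vaapi_get_config_attribute (display,
          VAProfileH264High, VAEntrypointEncSlice,
          VAConfigAttribRateControl, out_rc_modes);
    }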
Make sure to configure the encoder with the set of packed headers we
intend to generate and submit. i.e. make selection of packed headers
to submit more robust.
Cache the first compatible GstVaapiProfile found if the encoder is not
configured yet. Next, factor out the code to check for the supported
rate-control modes by moving out vaGetConfigAttributes() to a separate
function, while also making sure that the attribute type is actually
supported by the encoder.
Also fix the default set of supported rate control modes to exclude
the "none" variant. It's totally useless to expose it at this point.
Introduce GstVaapiContextUsage so as to explicitly determine the
usage of a VA context. This is useful in view of simplifying the
creation of VA contexts for VPP too.
Unknown attributes, or attributes that are not supported for the given
profile/entrypoint pair have a return value of VA_ATTRIB_NOT_SUPPORTED.
So, return failure in this case.
Move GstVideoOverlayComposition handling to separate source files.
This helps keeping the GstVaapiContext core implementation to the bare
minimum, i.e. simply helpers to create a VA context and handle the pool
of associated VA surfaces.
Improve documentation and debug messages. Clean-up APIs, i.e. strip
them down to the minimal set of interfaces. They are private, so there
is no need to expose getters, for instance.
Make sure that libgstvaapi private headers remain internally used to
build libgstvaapi libraries only. All header dependencies were reviewed
and checks for IN_LIBGSTVAAPI definition were added accordingly.
Also rename GST_VAAPI_CORE definition to IN_LIBGSTVAAPI_CORE to keep
consistency.
Set a valid GstVideoCodecFrame.system_frame_number when decoding a
stream in standalone mode. While we are at it, improve the debugging
messages to also include that frame number.
When decoding failed, or the frame was dropped, the associated
surface proxy is not guaranteed to be present. Thus, the GST_DEBUG()
message needs to check whether the proxy is actually present or not.
https://bugzilla.gnome.org/show_bug.cgi?id=722403
[fixed gst_vaapi_surface_proxy_get_surface_id() to return VA_INVALID_ID]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Don't emit NAL HRD parameters for now in the SPS headers because the
SEI buffering_period() and picture_timing() messages are not handled
yet. Some additional changes are necessary to get it right.
https://bugzilla.gnome.org/show_bug.cgi?id=722734
Fix default CPB buffer size to something more reasonable (1500 ms)
and that still fits the level limits. This is a non configurable
property for now. The initial CPB removal delay is also fixed to
750 ms.
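The corresponding arithmetic, as a small sketch (bitrate in bits per second; the names are illustrative):

    #include <glib.h>

    /* A 1500 ms CPB window at the target bitrate; the result is still
     * subject to being capped by the level limits. The initial CPB
     * removal delay is half of the window, i.e. 750 ms. */
    static guint
    default_cpb_size_bits (guint bitrate_bps)
    {
      return (guint) (((guint64) bitrate_bps * 1500) / 1000);
    }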
https://bugzilla.gnome.org/show_bug.cgi?id=722087