Make gst_vaapi_display_get_display_type() return the actual VA display
type. Conversely, add a gst_vaapi_display_get_class_type() function to
return the type of the GstVaapiDisplay instance. The former is used to
identify the display server on which the application is running, and
the latter to identify the original object class.
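A minimal sketch of the intended split, assuming the usual GstVaapiDisplayType
values from the X11/GLX backends (the exact values reported are an assumption):

    /* Sketch: the display type reports the display server the application
     * runs on, while the class type reports the original object class. */
    GstVaapiDisplay *display = gst_vaapi_display_glx_new (NULL);

    /* e.g. GST_VAAPI_DISPLAY_TYPE_X11 for a GLX-created display */
    GstVaapiDisplayType server = gst_vaapi_display_get_display_type (display);

    /* e.g. GST_VAAPI_DISPLAY_TYPE_GLX, i.e. the creating backend */
    GstVaapiDisplayType klass = gst_vaapi_display_get_class_type (display);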
Record the underlying native display instance into the toplevel
GstVaapiDisplay object. This is useful for fast lookups to the
underlying native display, e.g. for creating an EGL display.
Add new generic helper functions gst_vaapi_texture_new_wrapped()
and gst_vaapi_texture_new() to create a texture without requiring
the caller to needlessly check for the display type. i.e.
internally, there is now a GstVaapiDisplayClass hook to create
textures, and the actual backend implementation fills it in.
This is a simplification with a view to supporting EGL.
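A hedged sketch of the new helpers; argument order and the raw GL enum values
(now passed as plain guint) are assumptions based on the existing GLX texture API:

    /* Sketch: create a texture through the GstVaapiDisplayClass hook,
     * without checking the display type in the caller. */
    GstVaapiTexture *texture =
        gst_vaapi_texture_new (display, GL_TEXTURE_2D, GL_RGBA,
            width, height);

    /* Sketch: wrap a foreign GL texture object already created by the
     * application (texture_id) into a GstVaapiTexture. */
    GstVaapiTexture *wrapped =
        gst_vaapi_texture_new_wrapped (display, texture_id,
            GL_TEXTURE_2D, GL_RGBA, width, height);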
GstVaapiTexture is a generic abstraction that could be moved to the
core libgstvaapi library. While doing this, no extra dependency needs
to be added. This means that a GstVaapiTextureClass is now available
for any specific code that needs to be added, e.g. creation of the
underlying GL texture objects, or backend-dependent ways to upload
a surface to the texture object.
Generic OpenGL data types (GLuint, GLenum) are also replaced with a
plain guint.
https://bugzilla.gnome.org/show_bug.cgi?id=736715
The VA/GLX interfaces are obsolete. They used to exist for XvBA and
for ease of use, but they had other caveats to deal with. It's now
better to move them to legacy mode, whereby VA/GLX interop is to be
provided through (i) X11 Pixmap, and (ii) other modern means of
buffer sharing.
https://bugzilla.gnome.org/show_bug.cgi?id=736711
The gst_vaapi_texture_put_surface() function is missing a crop_rect
argument that would be used during transfer for cropping the source
surface to the desired dimensions.
Note: from a user point of view, the GstVaapiTexture object should be
created with the cropped size. That's the default behaviour of software
decoding pipelines that we need to cope with.
This is an API/ABI change, and SONAME version needs to be bumped.
https://bugzilla.gnome.org/show_bug.cgi?id=736712
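A sketch of the updated call, assuming the crop rectangle is passed as a
GstVaapiRectangle pointer (NULL keeping the previous full-surface behaviour)
and that the render flags argument is otherwise unchanged:

    /* Sketch: upload only the visible region of the decoded surface. */
    GstVaapiRectangle crop_rect = {
      .x = 0, .y = 0, .width = visible_width, .height = visible_height
    };

    if (!gst_vaapi_texture_put_surface (texture, surface, &crop_rect,
            GST_VAAPI_PICTURE_STRUCTURE_FRAME))
      GST_ERROR ("failed to transfer surface to texture");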
Add new gst_vaapi_surface_new_full() helper function that allocates a
VA surface from the GstVideoInfo template supplied in argument. Additional flags
may include ways to
- allocate linear storage (GST_VAAPI_SURFACE_ALLOC_FLAG_LINEAR_STORAGE) ;
- allocate with fixed strides (GST_VAAPI_SURFACE_ALLOC_FLAG_FIXED_STRIDES) ;
- allocate with fixed offsets (GST_VAAPI_SURFACE_ALLOC_FLAG_FIXED_OFFSETS).
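A hedged sketch of the allocation helper, with the (display, video-info,
flags) argument order assumed:

    /* Sketch: allocate a VA surface matching a GstVideoInfo template,
     * requesting linear storage with one of the flags listed above. */
    GstVideoInfo vi;
    gst_video_info_set_format (&vi, GST_VIDEO_FORMAT_NV12, 1920, 1080);

    GstVaapiSurface *surface = gst_vaapi_surface_new_full (display, &vi,
        GST_VAAPI_SURFACE_ALLOC_FLAG_LINEAR_STORAGE);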
Add new gst_vaapi_surface_proxy_new() helper to wrap a surface into
a proxy. The main use case for that is to convey additional information
at the proxy level that would not be suitable for the plain surface.
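A short sketch of the intended usage (the error handling shown is only
illustrative):

    /* Sketch: wrap a plain surface into a proxy so per-frame information
     * (e.g. crop rectangle, view-id) can ride along with it. */
    GstVaapiSurfaceProxy *proxy = gst_vaapi_surface_proxy_new (surface);
    if (!proxy)
      return GST_VAAPI_DECODER_STATUS_ERROR_ALLOCATION_FAILED;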
Re-introduce a GST_VAAPI_ID_INVALID value that represents
a non-zero and invalid id. This is useful for cases where zero could
actually be a valid id but an explicitly invalid value is still needed.
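A plausible definition, assuming GstVaapiID is an unsigned integer type
(the actual constant may differ):

    /* Sketch: an all-ones pattern is non-zero and cannot collide with a
     * valid id, so zero remains usable as a legitimate value. */
    #define GST_VAAPI_ID_INVALID ((GstVaapiID) -1)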
Make it possible for all libgstvaapi backends (libs) to access a
common GstVaapiMiniObject API and implementation. This is a minor step
towards full exposure when needed, but restrict it to libgstvaapi at
this time.
Really report sample aspect ratio (SAR) as present, and make it match
what we have obtained from the user as pixel-aspect-ratio (PAR). i.e.
really make sure VUI parameter aspect_ratio_info_present_flag is set
to TRUE and that the indication from aspect_ratio_idc is Extended_SAR.
This is a leftover from git commit a12662f.
https://bugzilla.gnome.org/show_bug.cgi?id=740360
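For reference, a hedged sketch of the intended VUI signalling (Extended_SAR
is aspect_ratio_idc value 255 per the H.264 spec); the helper function and
the use of GstBitWriter here are assumptions:

    #define EXTENDED_SAR 255

    /* Sketch: write the VUI aspect-ratio fields so the SAR exactly matches
     * the pixel-aspect-ratio (par_n/par_d) obtained from the user. */
    static void
    write_vui_aspect_ratio (GstBitWriter * bs, guint par_n, guint par_d)
    {
      gst_bit_writer_put_bits_uint32 (bs, 1, 1);            /* aspect_ratio_info_present_flag */
      gst_bit_writer_put_bits_uint32 (bs, EXTENDED_SAR, 8); /* aspect_ratio_idc */
      gst_bit_writer_put_bits_uint32 (bs, par_n, 16);       /* sar_width */
      gst_bit_writer_put_bits_uint32 (bs, par_d, 16);       /* sar_height */
    }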
Fix gst_vaapi_decoder_mpeg4_parse() to initialize the packet type to
GST_MPEG4_USER_DATA so that a parse error would result in skipping
that packet. Also fix gst_vaapi_decoder_mpeg4_decode_codec_data() to
initialize status to GST_VAAPI_DECODER_STATUS_SUCCESS.
Use the SEI pic_timing() message to track and propagate down the repeat
first field (RFF) flag. This is only initial support as there is one
other condition that could induce the RFF flag, which is not handled
yet.
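A hedged sketch of the mapping from pic_struct to the RFF flag, following
Table D-1 of the H.264 spec; the flag-setting macro is internal and its exact
name is an assumption, and the other RFF-inducing condition mentioned above
is not covered here:

    /* Sketch: derive RFF from the SEI pic_timing() pic_struct value. */
    switch (pic_timing->pic_struct) {
      case 5:    /* top field, bottom field, top field repeated */
      case 6:    /* bottom field, top field, bottom field repeated */
        GST_VAAPI_PICTURE_FLAG_SET (picture, GST_VAAPI_PICTURE_FLAG_RFF);
        break;
      default:
        break;
    }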
Fix the decoding process for picture order count type 0 when the previous
picture had a memory_management_control_operation = 5. In particular, fix
the actual variable type for prev_pic_structure to hold the full bits of
the picture structure.
In practice, this used to work though, due to the underlying type used to
express a gboolean.
Use the SEI pic_timing() message to track the pic_struct variable when
present, or infer it from the regular slice header flags field_pic_flag
and bottom_field_flag. This fixes temporal sequence ordering when the
output pictures are to be displayed.
https://bugzilla.gnome.org/show_bug.cgi?id=739291
Add support for DRM Render-Nodes. This is a new feature that appeared
in kernel 3.12 for experimentation purposes, but was later declared
stable enough in kernel 3.15 for getting enabled by default.
This allows headless usage without any authentication at all, i.e.
usage through plain ssh connections is possible.
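A minimal sketch of what headless render-node usage looks like at the libva
level (the device path varies per system):

    #include <fcntl.h>
    #include <va/va_drm.h>

    /* Sketch: open a render node directly; no DRM master privileges and no
     * X authentication are needed, so this works over a plain ssh session. */
    int fd = open ("/dev/dri/renderD128", O_RDWR);
    VADisplay va_display = vaGetDisplayDRM (fd);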
Fix gst_vaapi_surface_proxy_copy() to copy the view-id element, thus
fixing random frames skipped when vaapipostproc element is used in
passthrough mode. In that mode, GstMemory is copied, thus including
the underlying GstVaapiVideoMeta and associated GstVaapiSurfaceProxy.
Ensure the X11 implementation for GstVaapiWindow::get_geometry() is
thread-safe by default, so that upper layer users don't need to handle
that explicitly.
Add gst_vaapi_window_reconfigure() interface to force an update of
the GstVaapiWindow "soft" size, based on the current geometry of the
underlying native window.
This can be useful, for instance, to synchronize the window size after
the user has changed it.
Thanks to Fabrice Bellet for rebasing the patch.
[changed interface to gst_vaapi_window_reconfigure()]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
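A sketch of the intended usage, e.g. from a native resize notification
(the handler itself is hypothetical):

    /* Sketch: resynchronize the GstVaapiWindow "soft" size with the native
     * window geometry after the user resized the window. */
    static void
    on_native_window_resized (GstVaapiWindow * window)
    {
      guint width, height;

      gst_vaapi_window_reconfigure (window);
      gst_vaapi_window_get_size (window, &width, &height);
    }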
Add gst_vaapi_display_get_display_name() helper function to determine
the name associated with the underlying native display. Note that for
raw DRM backends, the display name is actually the device path.
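A short sketch of the helper in use:

    /* Sketch: for X11 this is typically the DISPLAY name (e.g. ":0"); for
     * raw DRM backends it is the device path. */
    const gchar *name = gst_vaapi_display_get_display_name (display);
    GST_INFO ("using display %s", name ? name : "(unknown)");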
The timestamp generator in gstvaapidecoder_mpeg2.c always interpolated
frame timestamps within a GOP, even when it has been fed an input PTS
for every frame.
That leads to incorrect output timestamps in some situations - for example
live playback where input timestamps have been scaled based on arrival time
from the network and don't exactly match the framerate.
https://bugzilla.gnome.org/show_bug.cgi?id=732719
Forbid creating a GstVaapiObject without an associated klass spec.
It is mandatory that the subclass implements an adequate .finalize()
hook, so it shall provide a valid GstVaapiObjectClass.
https://bugzilla.gnome.org/show_bug.cgi?id=722757
[made non-NULL klass argument to gst_vaapi_object_new() a requirement]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Call the subclass .init() function in gst_vaapi_object_new(), if
needed. The default behaviour is to zero initialize the subclass
object data, then the .init() function can be used to initialize
fields to non-default values, e.g. VA object ids to VA_INVALID_ID.
Also fix the gst_vaapi_object_new() description, which was merely
copied from GstVaapiMiniObject.
https://bugzilla.gnome.org/show_bug.cgi?id=722757
[changed to always zero initialize the subclass]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
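A hedged sketch of the resulting pattern in a subclass; the field name below
is hypothetical:

    /* Sketch: gst_vaapi_object_new() zero-fills the subclass data first,
     * then calls the .init() hook so fields whose "empty" value is not
     * zero (e.g. VA object ids) can be set up. */
    static void
    gst_vaapi_surface_init (GstVaapiSurface * surface)
    {
      surface->object_id = VA_INVALID_ID;
    }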
When a DPB flush is required, e.g. at a natural end of stream or issued
explicitly through an IDR, try to detect any frame left in the DPB that
is interlaced but does not contain two decoded fields. In that case, mark
the picture as having a single field only.
This avoids a hang while decoding tv_cut.mkv.
Simplify the dpb_output() function to exclusively rely on the frame store
buffer to output, since this is now always provided. Besides, also fix
cases where split fields would not be displayed.
This is a regression from f48b1e0.
Cope with latest changes from codecparsers/h264. It is now required
to explicitly clear the GstH264PPS structure as it could contain
additional allocations (slice_group_ids).
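For reference, the required cleanup pattern (sketch):

    /* Sketch: a parsed PPS may now own extra allocations (slice_group_ids),
     * so clear it explicitly when the decoder releases it. */
    gst_h264_pps_clear (&pps);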
Add new GstVaapiSurfaceProxy flag FFB, which means "first frame in
bundle", and really expresses the first view component of a multi
view coded frame. e.g. in H.264 MVC, the surface proxy has flag FFB
set if VOIdx = 0.
Likewise, new API is exposed to retrieve the associated "view-id".
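A hedged sketch of how a user of the proxy could act on the base view only;
the flag-test macro and the exact return type of the view-id accessor are
assumptions:

    if (GST_VAAPI_SURFACE_PROXY_FLAG_IS_SET (proxy,
            GST_VAAPI_SURFACE_PROXY_FLAG_FFB)) {
      guintptr view_id = gst_vaapi_surface_proxy_get_view_id (proxy);
      GST_DEBUG ("first view component of the frame, view-id %u",
          (guint) view_id);
    }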
Allow decoders to set the "one-field" attribute when the decoded frame
genuinely has a single field, or if the second field was mis-decoded but
we still want to display the first field.
Make sure to output the decoded picture, and push the associated
GstVideoCodecFrame, only once. The frame fully represents what needs
to be output, including for interlaced streams. Otherwise, the base
GstVideoDecoder class would release the frame twice.
Anyway, the general process is to output decoded frames only when
they are complete. By complete, we mean a full frame was decoded or
both fields of a frame were decoded.
Slightly optimize decoding process by submitting the current VA surface
for decoding earlier to the hardware, and perform the reference picture
marking process and DPB update process afterwards.
This is a minor optimization to let the video decode engine start
working earlier, thus improving parallel resource utilization.
Fix decoding of interlaced streams where a first field (e.g. B-slice)
was immediately output and the current decoded field is to be paired
with that former frame, which is no longer in the DPB.
https://bugzilla.gnome.org/show_bug.cgi?id=701340
Optimize the process to detect new pictures or start of new access
units by checking if the previous NAL unit was the end of a picture,
or the end of the previous access unit.