Add an API to transfer VA surfaces to native pixmaps. Also add an API to
render a native pixmap, for completeness. In general, rendering to a
pixmap is only useful for certain VA drivers and use cases on X11
display servers, e.g. when GLX_EXT_texture_from_pixmap (TFP) is handled
in an upper layer.
After the code was moved into the new gst_vaapi_create_display() helper,
this comparison was not updated to dereference the newly-created
pointer, so the code was comparing the pointer itself to the type and
therefore failing to retrieve the VA display.
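A minimal sketch of the bug class follows; the type and field names are
made up for illustration and do not match the actual gst-vaapi code.

    typedef struct {
      int display_type;         /* hypothetical field holding the display type */
    } Display;

    static int
    display_matches (Display * display, int requested_type)
    {
      /* broken: compared the pointer itself to the type, so it never matched */
      /* return (intptr_t) display == requested_type; */

      /* fixed: dereference the newly-created pointer and compare its type */
      return display != NULL && display->display_type == requested_type;
    }
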
This fixes the following error (and gets gst-vaapi decoding again):
ERROR vaapidecode gstvaapidecode.c:807:gst_vaapidecode_ensure_allowed_caps: failed to retrieve VA display
https://bugzilla.gnome.org/show_bug.cgi?id=704410
Signed-off-by: Emilio López <emilio@elopez.com.ar>
Mark dummy pictures as already output so that we don't try to submit
them to the upper layer, since they are purely internal / temporary
pictures used to help the decoder.
Once the picture has been output, it is no longer necessary to keep an
extra reference to the underlying GstVideoCodecFrame. So, we can release
it earlier, and possibly release the associated surface proxy earlier
as well.
Fix the new internal video format API, based on GstVideoFormat, so that
it does not clash with system symbols. Namely, replace the
gst_video_format_* prefix with the gst_vaapi_video_format_* prefix, even
though the format type remains GstVideoFormat.
Bump the library major version due to API/ABI changes that occurred in
the imaging API. In particular, GstVaapiImageFormat type was replaced
with the standard GstVideoFormat type. All dependent APIs were updated
to match this change.
Fix a memory leak that occurs when processing interlaced pictures: the
first field, represented as a GstVideoCodecFrame, never gets released.
When the picture is completed, which is generally the case once the
second field is successfully decoded, we need to propagate the
GstVideoCodecFrame of the first field back to the original
GstVideoDecoder so that it can reclaim memory.
Otherwise, we keep accumulating the first fields in the GstVideoDecoder
private frames list until end-of-stream is reached. The frames are
eventually released there, but too late, i.e. too much memory may
already have been consumed.
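A hedged sketch of the idea, assuming the base-class helper
gst_video_decoder_release_frame() is available; the way the first
field's frame is tracked here is illustrative only.

    #include <gst/video/gstvideodecoder.h>

    /* When the second field completes the picture, hand the first field's
     * GstVideoCodecFrame back to the base decoder right away so it can
     * reclaim it, instead of letting it pile up until EOS. */
    static void
    release_first_field (GstVideoDecoder * decoder, GstVideoCodecFrame ** frame)
    {
      if (*frame) {
        gst_video_decoder_release_frame (decoder, *frame);
        *frame = NULL;
      }
    }
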
https://bugzilla.gnome.org/show_bug.cgi?id=701257
Simplify the gst_vaapi_create_display() helper, as gst_vaapi_display_XXX_new()
performs the necessary validation checks on the underlying VA display
before returning to the caller. So, if an error occurs, NULL really is
returned to the caller.
If the video buffer pool config doesn't carry new caps, then it's not
necessary to reinstantiate the allocator. That can be a costly
operation, as some extra heavy checking may be performed there.
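A minimal sketch of that check; gst_buffer_pool_config_get_params() and
gst_caps_is_equal() are standard GStreamer calls, the rest is
illustrative.

    #include <gst/gst.h>

    static gboolean
    config_has_new_caps (GstStructure * config, GstCaps * current_caps)
    {
      GstCaps *new_caps = NULL;

      if (!gst_buffer_pool_config_get_params (config, &new_caps, NULL, NULL, NULL))
        return TRUE;            /* malformed config: assume the worst */

      /* same caps as before: keep the existing allocator instead of paying
       * for a full re-instantiation and its extra checks */
      if (current_caps && new_caps && gst_caps_is_equal (current_caps, new_caps))
        return FALSE;

      return TRUE;
    }
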
Fix reference counting issue whereby gst_memory_init() does not hold
an extra reference to the GstAllocator. So, there could be situations
where the last instance of GstVaapiVideoAllocator gets released before
a dangling GstVaapiVideoMemory object, thus possibly leading to a crash.
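A hedged sketch of the fix, with illustrative member names: the memory
object keeps its own reference on the allocator, since gst_memory_init()
only records the pointer.

    #include <gst/gst.h>

    typedef struct {
      GstMemory parent;
      GstAllocator *allocator;  /* our own reference, released on free */
    } VideoMemory;

    static void
    video_memory_init (VideoMemory * mem, GstAllocator * allocator, gsize size)
    {
      /* gst_memory_init() does not take a reference on the allocator */
      gst_memory_init (GST_MEMORY_CAST (mem), 0, allocator, NULL,
          size, 0, 0, size);

      /* keep the allocator alive for as long as this memory object exists */
      mem->allocator = gst_object_ref (allocator);
    }

    static void
    video_memory_free (VideoMemory * mem)
    {
      gst_object_unref (mem->allocator);
    }
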
Always perform conversion of source buffers to NV12 since this is
the way we tested for this capability in ensure_allowed_caps(). This
also saves memory bandwidth for further rendering. However, this may
not preserve quality since the YUV buffers are down-sampled to 4:2:0.
The queue of free objects was deallocated with g_queue_free_full().
However, this convenience function shall only be used if the original queue
was allocated with g_queue_new(). This caused memory corruption, eventually
leading to a crash.
The correct solution is to pair the g_queue_init() with the corresponding
g_queue_clear(), while iterating over all free objects to deallocate them.
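A minimal sketch of the corrected teardown, assuming the free list is an
embedded GQueue that was set up with g_queue_init():

    #include <glib.h>

    static void
    pool_clear_free_objects (GQueue * free_objects, GDestroyNotify destroy_object)
    {
      gpointer object;

      /* destroy every queued object ourselves... */
      while ((object = g_queue_pop_head (free_objects)) != NULL)
        destroy_object (object);

      /* ...then reset the embedded queue; unlike g_queue_free_full(), this
       * does not attempt to g_free() a GQueue that was never heap-allocated */
      g_queue_clear (free_objects);
    }
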
This fixes direct linking of the vaapidownload element to xvimagesink with
VA drivers supporting vaGetImage() from the native VA surface format to
a different VA image format, i.e. color conversion during download.
http://bugzilla.gnome.org/show_bug.cgi?id=703937
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
The image format is now expressed as a standard GstVideoFormat, which is
not a FOURCC but rather a regular enum value.
This is a regression introduced in commit 09397fa.
Fix gst_vaapi_uploader_get_buffer() to not assign caps since they
were already negotiated beforehand, and they are not read from the
buffer by upstream elements.
Clean-up gst_vaapi_uploader_ensure_caps() to use the new image caps
represented as a GstVideoInfo.
Adapt GstVaapiVideoMemory allocator to support creation of VA surfaces
with an explicit pixel format. This allows for direct rendering to
VA surface memory from a software decoder.
Fix creation of surface pool objects to honour the explicit pixel format
specification. If this operation is not supported, then fall back to
the older interface with chroma format.
If a VA surface was allocated with the chroma-format interface, try to
determine the underlying pixel format on gst_vaapi_surface_get_format(),
or return GST_VIDEO_FORMAT_ENCODED if this is not a supported operation.
Make it possible to create VA surfaces with a specific pixel format.
This is a new capability brought in by VA-API >= 0.34.0. If that
capability is not built-in (e.g. using VA-API < 0.34.0), then
gst_vaapi_surface_new_with_format() will return NULL.
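A hedged usage sketch of that fallback; the constructor signatures are
assumed here and may differ slightly from the actual libgstvaapi API.

    static GstVaapiSurface *
    create_surface (GstVaapiDisplay * display, guint width, guint height)
    {
      GstVaapiSurface *surface;

      /* explicit pixel format first (only effective with VA-API >= 0.34.0) */
      surface = gst_vaapi_surface_new_with_format (display,
          GST_VIDEO_FORMAT_NV12, width, height);
      if (!surface) {
        /* fall back to the older chroma-format interface */
        surface = gst_vaapi_surface_new (display,
            GST_VAAPI_CHROMA_TYPE_YUV420, width, height);
      }
      return surface;
    }
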
Add gst_video_format_get_chroma_type() helper function to determine
the GstVaapiChromaType from a standard GStreamer video format. It is
possible to reconstruct that from GstVideoFormatInfo but it is much
simpler (and faster?) to use the local GstVideoFormatMap table.
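A hedged sketch of what such a table-based lookup looks like; the real
GstVideoFormatMap table covers many more formats and carries extra
VA-related fields, and the GstVaapiChromaType values come from the
libgstvaapi headers.

    #include <gst/video/video.h>

    typedef struct {
      GstVideoFormat format;
      GstVaapiChromaType chroma_type;
    } FormatMapEntry;

    /* illustrative subset of the format -> chroma-type mapping */
    static const FormatMapEntry format_map[] = {
      { GST_VIDEO_FORMAT_NV12, GST_VAAPI_CHROMA_TYPE_YUV420 },
      { GST_VIDEO_FORMAT_I420, GST_VAAPI_CHROMA_TYPE_YUV420 },
      { GST_VIDEO_FORMAT_YUY2, GST_VAAPI_CHROMA_TYPE_YUV422 },
      { GST_VIDEO_FORMAT_UNKNOWN, 0 }
    };

    static GstVaapiChromaType
    video_format_get_chroma_type (GstVideoFormat format)
    {
      const FormatMapEntry *m;

      for (m = format_map; m->format != GST_VIDEO_FORMAT_UNKNOWN; m++) {
        if (m->format == format)
          return m->chroma_type;
      }
      return 0;                 /* unsupported format */
    }
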
Add new chroma formats available with VA-API >= 0.34.0. In particular,
this includes "RGB" chroma formats, and more YUV subsampled formats.
Also add a new from_GstVaapiChromaType() helper function to convert
libgstvaapi chroma type to VA chroma format.
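A hedged sketch of such a helper; the actual mapping in the tree may
cover more values, the GST_VAAPI_CHROMA_TYPE_* names are assumed, and
the extra cases below only compile against VA-API >= 0.34.0.

    #include <va/va.h>

    static guint
    from_GstVaapiChromaType (guint chroma_type)
    {
      switch (chroma_type) {
        case GST_VAAPI_CHROMA_TYPE_YUV420:
          return VA_RT_FORMAT_YUV420;
        case GST_VAAPI_CHROMA_TYPE_YUV422:
          return VA_RT_FORMAT_YUV422;
        case GST_VAAPI_CHROMA_TYPE_YUV444:
          return VA_RT_FORMAT_YUV444;
    #if VA_CHECK_VERSION(0,34,0)
        case GST_VAAPI_CHROMA_TYPE_YUV400:
          return VA_RT_FORMAT_YUV400;   /* grayscale */
        case GST_VAAPI_CHROMA_TYPE_RGB32:
          return VA_RT_FORMAT_RGB32;
    #endif
        default:
          return 0;                     /* unsupported chroma type */
      }
    }
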
Make gst_vaapi_image_pool_new() succeed, and thus return a valid
image pool object, only if the underlying VA display supports the
requested VA image format.
Get rid of GstCaps to create surface/image pools, and use GstVideoInfo
structures instead. Those are smaller, and allow for streamlining
libgstvaapi further.
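A hedged sketch of pool creation from a GstVideoInfo;
gst_video_info_set_format() is the standard helper that fills in sizes,
strides and offsets, while the pool constructor signature is assumed.

    #include <gst/video/video.h>

    static GstVaapiVideoPool *
    create_surface_pool (GstVaapiDisplay * display, guint width, guint height)
    {
      GstVideoInfo vi;

      gst_video_info_init (&vi);
      gst_video_info_set_format (&vi, GST_VIDEO_FORMAT_NV12, width, height);

      /* assumed constructor: takes the GstVideoInfo instead of GstCaps */
      return gst_vaapi_surface_pool_new (display, &vi);
    }
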
Add new video format mappings to VA image formats:
- YUV: packed YUV (YUY2, UYVY), grayscale (Y800) ;
- RGB: 32-bit RGB without alpha channel (XRGB, XBGR, RGBX, BGRX).
Fix the debug message string so that the image format is expressed as a
GstVideoFormat instead of the obsolete format, which turned out to be a
fourcc.
This is a regression from git commit e61c5fc.
In particular, use the gst_video_info_from_caps() helper function in VA image
code for implementing gst_vaapi_image_get_buffer() [vaapidownload] and
gst_vaapi_image_update_from_buffer() [subpictures] in GStreamer 0.10 builds.
Drop GstVaapiImageFormat helpers since everything was moved to the new
GstVideoFormat based API. Don't bother with backwards compatibility and
just bump the library major version afterwards.
Fix creation of the GLX texture so that it does not depend on the GstCaps
video size, which could be wrong, especially in the presence of frame
cropping. So, use the size from the source VA surfaces instead.
An optimization could be to reduce the texture size to the actual visible
size on screen, i.e. scale the texture down to match the screen dimensions
while preserving the VA surface aspect ratio. However, some VA drivers don't
honour that.
If the stream has a sequence_display_extension, then attach the
display_horizontal/display_vertical dimensions as the cropping
rectangle width/height to the GstVaapiPicture.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
If the Advanced profile has display_extension fields, then set the display
width/height dimensions as the cropping rectangle on the GstVaapiPicture.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
If the encoded stream has the frame_cropping_flag set, then associate
the cropping rectangle to GstVaapiPicture.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Change the generic decoding of a sample I-frame to return a GstVaapiSurfaceProxy
instead of a plain GstVaapiSurface. This means that we can now retrieve
the frame cropping rectangle from the surface proxy, along with additional
information if ever needed.
Add support for GstVideoCropMeta in GStreamer >= 1.0.x builds and gst-vaapi
specific meta information to hold video cropping details. Make the sink
support video cropping in X11 and GLX modes.
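A hedged sketch of attaching the crop meta in a GStreamer >= 1.0 build;
the crop rectangle values here stand in for whatever the surface proxy
reports.

    #include <gst/video/video.h>

    static void
    buffer_set_crop (GstBuffer * buffer, guint x, guint y, guint width,
        guint height)
    {
      GstVideoCropMeta *crop_meta = gst_buffer_add_video_crop_meta (buffer);

      if (crop_meta) {
        crop_meta->x = x;
        crop_meta->y = y;
        crop_meta->width = width;
        crop_meta->height = height;
      }
    }
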
Some video clips may have a clipping region that needs to be propagated to
the renderer. These helper functions make it possible to attach that
clipping region, as a GstVaapiRectangle, to the video meta associated
with the buffer.
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Make it possible to associate an empty cropping rectangle with the surface
proxy, thus resetting any cropping rectangle that was previously set.
This allows plain NULL to be returned when no cropping rectangle was
initially set on the surface proxy, or when it was reset to defaults.
Add helper macros to retrieve the VA surface information like size
(width, height) or chroma type. This is a micro-optimization to avoid
useless function calls and NULL pointer re-checks in internal routines.
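A hedged illustration of the idea behind such macros; the struct layout
and names below are assumed, not copied from the gst-vaapi headers.

    #include <glib.h>

    /* hypothetical internal surface layout, for illustration only */
    typedef struct {
      guint width;
      guint height;
      guint chroma_type;
    } SurfacePrivate;

    /* direct field access: no function call and no NULL re-check, for
     * internal code paths where the surface is already known to be valid */
    #define SURFACE_WIDTH(s)        (((const SurfacePrivate *) (s))->width)
    #define SURFACE_HEIGHT(s)       (((const SurfacePrivate *) (s))->height)
    #define SURFACE_CHROMA_TYPE(s)  (((const SurfacePrivate *) (s))->chroma_type)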