Fixes a crash when multiple vaapidecode elements are finalized since
the debug category is created once in the class init method.
This is a regression from git commit 7e58d60.
https://bugzilla.gnome.org/show_bug.cgi?id=721390
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Fix formats for various GST_DEBUG et al. invocations. More precisely,
make size_t arguments use the %zu format specifier accordingly; force
XID formats to be a 32-bit unsigned integer; and fix the format used
for gst_vaapi_create_surface_with_format() error cases, since strings
are used there nowadays.
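A minimal sketch of the resulting format usage follows; the category, function and variable names are illustrative, not the actual call sites:

  #include <gst/gst.h>

  GST_DEBUG_CATEGORY_STATIC (gst_debug_example);
  #define GST_CAT_DEFAULT gst_debug_example

  static void
  debug_formats_example (size_t data_size, guint32 xid, const gchar *format_name)
  {
    /* one-time registration of the (illustrative) debug category */
    GST_DEBUG_CATEGORY_INIT (gst_debug_example, "example", 0, "format example");

    GST_DEBUG ("buffer size: %zu bytes", data_size);        /* size_t -> %zu */
    GST_DEBUG ("window XID: 0x%08x", (guint) xid);          /* 32-bit unsigned XID */
    GST_DEBUG ("failed to create surface with format %s",   /* string, not FOURCC */
        format_name);
  }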
Use standard GstVideoInfo related functions to build the output caps,
thus directly preserving additional fields as needed, instead of
manually copying them over through gst_vaapi_append_surface_caps().
Also ensure that the input caps are fixated first.
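A hedged sketch of that approach, with illustrative names and no claim to match the actual vaapipostproc code:

  #include <gst/video/video.h>

  static GstCaps *
  build_output_caps_example (GstCaps *sink_caps, GstVideoFormat out_format)
  {
    GstVideoInfo in_info, out_info;
    GstCaps *fixated, *out_caps;

    /* ensure the input caps are fixated first */
    fixated = gst_caps_fixate (gst_caps_ref (sink_caps));
    if (!gst_video_info_from_caps (&in_info, fixated)) {
      gst_caps_unref (fixated);
      return NULL;
    }
    gst_caps_unref (fixated);

    out_info = in_info;
    gst_video_info_set_format (&out_info, out_format,
        GST_VIDEO_INFO_WIDTH (&in_info), GST_VIDEO_INFO_HEIGHT (&in_info));
    /* restore the additional fields cleared by gst_video_info_set_format() */
    out_info.fps_n = in_info.fps_n;
    out_info.fps_d = in_info.fps_d;
    out_info.par_n = in_info.par_n;
    out_info.par_d = in_info.par_d;
    out_info.interlace_mode = in_info.interlace_mode;

    out_caps = gst_video_info_to_caps (&out_info);
    return out_caps;
  }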
Add new helper functions to build video template caps.
- gst_vaapi_video_format_new_template_caps():
create GstCaps with size, frame rate and PAR set to their full range
- gst_vaapi_video_format_new_template_caps_from_list():
try to create a "simplified" list from the supplied formats
Add new helper functions to build GValues from GstVideoFormat:
- gst_vaapi_value_set_format():
build a GValue from the supplied video format
- gst_vaapi_value_set_format_list():
build a GValue list from the supplied array of video formats
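A hedged usage sketch of the list-based helper; the GArray-of-GstVideoFormat argument type is an assumption taken from the description above, not from the headers:

  GArray *formats = g_array_new (FALSE, FALSE, sizeof (GstVideoFormat));
  GstVideoFormat fmt;
  GstCaps *tmpl_caps;

  fmt = GST_VIDEO_FORMAT_NV12;
  g_array_append_val (formats, fmt);
  fmt = GST_VIDEO_FORMAT_I420;
  g_array_append_val (formats, fmt);

  /* assumed signature: takes a GArray of GstVideoFormat, returns GstCaps */
  tmpl_caps = gst_vaapi_video_format_new_template_caps_from_list (formats);
  g_array_unref (formats);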
Replace the gst_vaapi_display_get_{decode,encode}_caps() APIs with more
convenient APIs that return an array of GstVaapiProfile instead of
GstCaps: gst_vaapi_display_get_{decode,encode}_profiles().
Replace the gst_vaapi_display_get_{image,subpicture}_caps() APIs, which
returned GstCaps, with more convenient APIs that return an array of
GstVideoFormat: gst_vaapi_display_get_{image,subpicture}_formats().
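A hedged usage sketch; the GArray return type and element type follow the description above rather than the exact headers:

  GArray *profiles = gst_vaapi_display_get_decode_profiles (display);
  guint i;

  if (profiles) {
    for (i = 0; i < profiles->len; i++) {
      GstVaapiProfile profile = g_array_index (profiles, GstVaapiProfile, i);
      g_print ("decode profile #%u: %d\n", i, (gint) profile);
    }
    g_array_unref (profiles);
  }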
Make copies of a buffer reference their own GLTextureUploadMeta user
data, and prevent the original buffer from accessing already freed
memory once its copies have been released and freed.
https://bugzilla.gnome.org/show_bug.cgi?id=720336
[Propagate the original meta texture to the copy too]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Factor out propose_allocation() hooks, creation of video buffer pool
for the sink pad, conversion from raw YUV buffers to VA surface backed
buffers. Update vaapidecode, vaapiencode and vaapipostproc to cope
with the new GstVaapiPluginBase abilities.
Fix display creation code to check that any display obtained from a
neighbour actually has the type we expect. Note: if display type is
set to "any", we can then accept any VA display type.
Move common VA display creation code to GstVaapiPluginBase, with the
default display type remaining "any". Also add a "display-changed"
hook so that subclasses could perform additional tasks when/if the
VA display changed, due to a new display type request for instance.
All plug-ins are updated to cope with the new internal APIs.
Introduce a new GstVaapiPluginBase object that will contain all common
data structures and perform all common tasks. First step is to have a
single place to hold VA displays.
While we are at it, also make sure to store and subsequently release
the appropriate debug category for the subclasses.
The GLTextureUploadMeta implementation assumed that for each upload()
sequence, the supplied texture id is always the same as the one that
was previously cached into the underlying GstVaapiTexture. Cope with
any texture id change, at the expense of recreating the underlying
VA/GLX resources.
https://bugzilla.gnome.org/show_bug.cgi?id=719643
Requesting the GLTextureUpload meta on buffers in the bufferpool
prevents such metas from being de-allocated when buffers are released
in the sink.
This is particularly useful in terms of performance when using the
GLTextureUploadMeta API since the GstVaapiTexture associated with
the target texture is stored in the meta.
https://bugzilla.gnome.org/show_bug.cgi?id=712558
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
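A hedged sketch of what requesting the meta looks like at the buffer pool level, assuming the standard GstBufferPool configuration path (names are illustrative):

  #include <gst/video/video.h>

  static void
  configure_pool_example (GstBufferPool *pool, GstCaps *caps, guint size)
  {
    GstStructure *config = gst_buffer_pool_get_config (pool);

    gst_buffer_pool_config_set_params (config, caps, size, 0, 0);
    gst_buffer_pool_config_add_option (config, GST_BUFFER_POOL_OPTION_VIDEO_META);
    /* request the GL texture upload meta so it survives buffer release */
    gst_buffer_pool_config_add_option (config,
        GST_BUFFER_POOL_OPTION_VIDEO_GL_TEXTURE_UPLOAD_META);
    gst_buffer_pool_set_config (pool, config);
  }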
Make GstVideoGLTextureUploadMeta::upload() implementation more robust
when the GstVaapiTexture associated with the supplied texture id could
not be created.
Clean up the public APIs so as to better align with the decoder APIs.
Most importantly, gst_vaapi_encoder_get_buffer() is changed to only
return the VA coded buffer proxy. Also provide useful documentation
for the public APIs.
Refactor the GstVaapiCodedBuffer APIs so as to more clearly separate
public and private interfaces. Besides, the map/unmap APIs should not
be exposed as is but appropriate accessors should be provided instead.
* GstVaapiCodedBuffer: VA coded buffer abstraction
- gst_vaapi_coded_buffer_get_size(): get coded buffer size.
- gst_vaapi_coded_buffer_copy_into(): copy coded buffer into GstBuffer
* GstVaapiCodedBufferPool: pool of VA coded buffer objects
- gst_vaapi_coded_buffer_pool_new(): create a pool of coded buffers of
the specified max size, and bound to the supplied encoder
* GstVaapiCodedBufferProxy: pool-allocated VA coded buffer object proxy
- gst_vaapi_coded_buffer_proxy_new_from_pool(): create coded buf from pool
- gst_vaapi_coded_buffer_proxy_get_buffer(): get underlying coded buffer
- gst_vaapi_coded_buffer_proxy_get_buffer_size(): get coded buffer size
Rationale: more optimized transfer functions might be provided in the
future, thus rendering the map/unmap mechanism obsolete or sub-optimal.
https://bugzilla.gnome.org/show_bug.cgi?id=719775
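A hedged usage sketch of the APIs listed above; the exact signatures, and the encoder and max_coded_buf_size variables, are assumed from the description rather than copied from the headers:

  GstVaapiCodedBufferPool *pool;
  GstVaapiCodedBufferProxy *proxy;
  GstVaapiCodedBuffer *coded_buf;
  GstBuffer *out_buf;
  gssize size;

  pool = gst_vaapi_coded_buffer_pool_new (encoder, max_coded_buf_size);
  proxy = gst_vaapi_coded_buffer_proxy_new_from_pool (pool);
  coded_buf = gst_vaapi_coded_buffer_proxy_get_buffer (proxy);

  /* copy the coded data out without exposing map/unmap */
  size = gst_vaapi_coded_buffer_get_size (coded_buf);
  out_buf = gst_buffer_new_allocate (NULL, size, NULL);
  gst_vaapi_coded_buffer_copy_into (out_buf, coded_buf);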
Fix GstElement::set_context() implementation for all plug-in elements
to avoid leaking an extra reference to the VA display, thus preventing
correct cleanup of VA resources in GStreamer 1.2 builds.
Return earlier if the creation of a VA display failed. Likewise, simplify
gst_vaapi_video_context_propagate() now that we are guaranteed to have a
valid VA display.
When GstVideoMeta maps were used, the supporting functions incorrectly
used gst_buffer_get_memory() instead of gst_buffer_peek_memory(), thus
always increasing the associated GstMemory reference count and giving
zero chance to actually release that, and subsequently the VA display.
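For reference, a minimal illustration of the difference between the two calls:

  GstMemory *mem;

  /* gst_buffer_get_memory() returns a new reference that must be dropped
   * with gst_memory_unref(), otherwise the memory is never released */
  mem = gst_buffer_get_memory (buffer, 0);
  gst_memory_unref (mem);

  /* gst_buffer_peek_memory() returns a borrowed pointer, no extra ref */
  mem = gst_buffer_peek_memory (buffer, 0);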
Simplify GstVaapiVideoMeta to only hold a surface proxy, which is
now allocated from a surface pool. This also means that the local
reference to the VA surface is gone, as it can be extracted from
the associated surface proxy.
Drop the following functions that are no longer used:
- gst_vaapi_video_buffer_new_with_surface()
- gst_vaapi_video_meta_new_with_surface()
- gst_vaapi_video_meta_set_surface()
- gst_vaapi_video_meta_set_surface_from_pool()
Fix gst_vaapi_video_meta_new_from_pool() to allocate VA surface proxies
from surface pools instead of plain VA surfaces. This is to simplify
allocations now that surface proxies are created from a surface pool.
Optimize gst_vaapiencode_handle_frame() to avoid extra memory allocation,
and in particular the GstVaapiEncObjUserData object. i.e. directly use
the VA surface proxy from the source buffer. This also makes the user
data attached to the GstVideoCodecFrame more consistent between both
the decoder and encoder plug-in elements.
Simplify gst_vaapiencode_push_frame(), while also removing the call
to gst_video_encoder_negotiate() since this is implicit in _finish()
if caps changed. Also fix memory leaks that occurred on error.
Constify pointers wherever possible. Drop unused variables, and use
consistent variable names. Fix gst_vaapiencode_h264_allocate_buffer()
to correctly report errors, especially when in-place conversion from
bytestream to avcC format failed.
Move "rate-control" mode and "bitrate" properties to the GstVaapiEncode
base class. The actual range of supported rate control modes is currently
implemented as a plug-in element hook. This ought to be determined from
the GstVaapiEncoder object instead, i.e. from libgstvaapi.
Align the plug-in debug category to its actual name. i.e. enable debug
logs through vaapiencode_<CODEC> where <CODEC> is mpeg2, h264, etc. Fix
the plug-in element description to make it more consistent with other
VA-API plug-ins.
Add a GST_VAAPIENCODE_CAST() helper to avoid run-time checks against
the GObject type system. We are guaranteed to only deal with the same
plug-in element object.
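A plausible definition of such a helper (the actual macro may differ): a plain C cast instead of the checked cast that walks the GObject type system at run time.

  /* hypothetical definition: bypasses G_TYPE_CHECK_INSTANCE_CAST() */
  #define GST_VAAPIENCODE_CAST(obj) ((GstVaapiEncode *) (obj))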
Allow vaapiencode plug-in elements to encode from raw YUV buffers.
The most efficient way to do so is to let the vaapiencode elements
allocate a buffer pool, and subsequently allocate buffers from it. This
means that upstream elements are expected to honour downstream pools.
If upstream elements insist on providing their own allocated buffers
to the vaapiencode elements, then it possibly would be more efficient
to insert a vaapipostproc element before the vaapiencode element.
This is because vaapipostproc currently has better support than other
elements for "foreign" raw YUV buffers.
Add GstVaapiEncodeMPEG2 element object. The actual plug-in element
is called "vaapiencode_mpeg2".
Valid properties:
- rate-control: rate control mode (default: cqp - constant QP)
- bitrate: desired bitrate in kbps (default: auto-calculated)
- key-period: maximal distance between two key frames (default: 30)
- max-bframes: number of B-frames between I and P (default: 2)
- quantizer: constant quantizer (default: 8)
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Add GstVaapiEncodeH264 element object. The actual plug-in element
is called "vaapiencode_h264".
Valid properties:
- rate-control: rate control mode (default: none)
- bitrate: desired bitrate in kbps (default: auto-calculated)
- key-period: maximal distance between two key frames (default: 30)
- num-slices: number of slices per frame (default: 1)
- max-bframes: number of B-frames between I and P (default: 0)
- min-qp: minimal quantizer (default: 1)
- init-qp: initial quantizer (default: 26)
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
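A hedged example of configuring the element from application code; the "cbr" rate-control nick is an assumption about the enum nicks:

  GstElement *enc = gst_element_factory_make ("vaapiencode_h264", NULL);

  /* enum property set by nick, numeric properties set directly */
  gst_util_set_object_arg (G_OBJECT (enc), "rate-control", "cbr");
  g_object_set (enc, "bitrate", 4000, "key-period", 30, "max-bframes", 0, NULL);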
Fix build when Wayland headers don't live in plain system include dirs
like /usr/include but rather in /usr/include/wayland for instance.
Original patch written by Dominique Leuenberger <dimstar@opensuse.org>
https://bugzilla.gnome.org/show_bug.cgi?id=712282
Destroy the VPP output surface pool on exit. Also avoid a possible crash
in a double-free situation caused by an insufficiently reference-counted
array of formats returned during initialization.
Fix advanced deinterlacing modes with VPP to track only up to 2 past
reference buffers. This used to be 3 past reference buffers but this
doesn't fit with the existing decode pipeline that only has 4 extra
scratch surfaces.
Also optimize references tracking to be only enabled when needed, i.e.
when advanced deinterlacing mode is used. This means that we don't
need to track past references for basic bob or weave deinterlacing.
In "mixed" interlaced streams, the buffer contains additional flags that
specify whether the frame contained herein is interlaced or not. This means
that we can alternatively get progressive or interlaced frames. Make sure
to disable deinterlacing at the VPP level when the source buffer is no longer
interlaced.
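A hedged sketch of the per-buffer decision for "mixed" interlace-mode; vinfo and inbuf are illustrative names for the negotiated video info and the incoming buffer:

  gboolean deinterlace;

  if (GST_VIDEO_INFO_INTERLACE_MODE (&vinfo) == GST_VIDEO_INTERLACE_MODE_MIXED)
    deinterlace = GST_BUFFER_FLAG_IS_SET (inbuf, GST_VIDEO_BUFFER_FLAG_INTERLACED);
  else
    deinterlace = GST_VIDEO_INFO_IS_INTERLACED (&vinfo);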
Fix memory leaks with advanced deinterlacing, i.e. when we keep track
of past buffers. Completely reset the deinterlace state, thus destroying
any buffer currently held, on _start(), _stop() and _destroy().
Port vaapipostproc element to GStreamer 1.2. Support is quite minimal
right now, so as to cope with auto-plugging issues/regressions, e.g.
those that happen when the correct set of expected caps is exposed.
This means that, currently, the proposed caps are not fully accurate.
Fix basic deinterlacing flags provided to gst_vaapi_set_deinterlacing()
for the first field. Render flags were supplied instead of the actual
deinterlacing flags (deint_flags).
Fix GstBaseTransform::transform_caps() implementation to always return
the complete set of allowed sink pad caps (unfixated) even if the src
pad caps we are getting are fixated. Rationale: there are just so many
possible combinations, and it was wrong to provide a unique set anyway.
As a side effect, this greatly simplifies the ability to derive src pad
caps from fixated sink pad caps.
Fix deinterlacing flags to make more sense. The TFF (top-field-first)
flag is meant to specify the organization of reference frames used in
advanced deinterlacing modes. Introduce the more explicit flag TOPFIELD
to specify that the top-field of the supplied input surface is to be
used for deinterlacing. Conversely, if not set, this means that the
bottom field of the supplied input surface will be used instead.
There are situations where gst_video_decoder_flush() is called, and
this subsequently produces a gst_video_decoder_reset() that kills the
currently active GstVideoCodecFrame. This means that it no longer
exists by the time we reach GstVideoDecoder::finish() callback, thus
possibly resulting in a crash if we assumed spare data was still
available for decode (current_frame_size > 0).
Try to honour the GstVideoDecoder::reset() behaviour from GStreamer 1.0,
which implies a flush, thus performing the actual operations there, like
calling gst_video_decoder_have_frame() if pending data is available.
Review all interactions between the main video decoder stream thread
and the decode task to derive a correct sequence of operations for
decoding. Also avoid extra atomic operations that become implicit under
the GstVideoDecoder stream lock.
Fix hard reset for seek cases by flushing the GstVaapiDecoder queue
and completely purge any decoded output frame that may come out from
it. At this stage, the GstVaapiDecoder shall be in a complete clean
state to start decoding over new buffers.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
vaapidecode used to wait up to one second past the expected time of
presentation for the last decoded frame. This is not realistic in
practice when it comes to video pause/resume. Changed behaviour to
unconditionally wait for a free VA surface prior to continuing the
decoding. The decode task will continue pushing the output frames to
the downstream element while also reporting errors at the same time
to the main thread.
https://bugzilla.gnome.org/show_bug.cgi?id=707108
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
The srcpad caps exposed for GStreamer 1.2 were missing any useful info
like framerate, pixel-aspect-ratio, interlace-mode et al. Not to mention
that they relied on possibly un-initialized data. Fix srcpad caps to be
initialized from a sanitized copy of GstVideoDecoder output state caps.
Note: the correct way to expose the srcpad caps triggers an additional
issue in core GStreamer auto-plugging capabilities as the correct caps
to be exposed should be format=ENCODED with memory:VASurface caps feature
at the minimum. In some situations, we could determine the underlying
VA surface format, but this is not always possible. e.g. cases where it
is not allowed to expose the underlying VA surface data, or when the
VA driver implementation cannot actually provide such information.
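A hedged sketch of the minimal caps described in the note above, using the standard caps-features API (width/height are illustrative):

  GstCaps *caps = gst_caps_new_simple ("video/x-raw",
      "format", G_TYPE_STRING, "ENCODED",
      "width", G_TYPE_INT, width,
      "height", G_TYPE_INT, height, NULL);

  gst_caps_set_features (caps, 0,
      gst_caps_features_new ("memory:VASurface", NULL));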
Currently, the decoder only supports YUV 4:2:0 output. So, restrict the
output formats exposed in caps for GStreamer 1.2 to a realistic subset. This
means NV12, I420 or YV12 but also ENCODED if we cannot determine the
underlying VA surface format, or if it is actually not allowed to get
access to the surface contents.
Fix vaapidecode srcpad caps to only expose RGBA video format for the
meta:GstVideoGLTextureUploadMeta feature. That's only what is supported
so far. Besides, drop this meta from the vaapisink sinkpad caps since
we really don't support that for rendering.
https://bugzilla.gnome.org/show_bug.cgi?id=711828
Fix upload of raw YUV data, as in the following pipeline:
$ gst-launch-1.0 filesrc location=video.yuv ! videoparse ! vaapipostproc ! vaapisink
The main reason why it failed was that the videoparse element simply
allocates GstBuffer with raw data chunk'ed off the sink pad without
any prior knowledge of the actual frame info. i.e. it basically just
calls gst_adapter_take_buffer().
We could avoid the extra copy performed in vaapipostproc if the videoparse
element was aware of the downstream pool and bothered to copy line by
line, for each plane. This means that, for a single frame per buffer,
the optimization would be to allocate the video buffer downstream, map
it, and copy each incoming line until the successive planes are
filled in.
Still, optimized raw YUV uploads already worked with the following:
$ gst-launch-1.0 videotestsrc ! vaapipostproc ! vaapisink
https://bugzilla.gnome.org/show_bug.cgi?id=711250
[clean-ups, fixed error cases to unmap and unref outbuf]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
If the currently selected deinterlacing method is not supported by the
underlying hardware, then try to downgrade the method to a supported one.
At the minimum, basic bob-deinterlacing shall always be supported.
Allow basic bob-deinterlacing to work when VPP is enabled. Currently,
this only covers bob-deinterlacing when the output pixel format is
explicitly set.
Add initial support for basic scaling with size specified through the
"width" and "height" properties. If either user-provided dimension is
zero and "force-aspect-ratio" is set to true (the default), then the
other dimension is scaled to preserve the aspect ratio.
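A hedged sketch of the scaling rule, ignoring pixel-aspect-ratio for brevity (names are illustrative):

  #include <gst/gst.h>

  static void
  fixate_size_example (guint *width, guint *height, guint src_width, guint src_height)
  {
    /* force-aspect-ratio=TRUE: derive the missing dimension from the source */
    if (*width == 0 && *height > 0)
      *width = gst_util_uint64_scale_int (*height, src_width, src_height);
    else if (*height == 0 && *width > 0)
      *height = gst_util_uint64_scale_int (*width, src_height, src_width);
  }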
If VPP is available, we always try to implicitly convert the source
buffer to the "native" surface format for the underlying accelerator.
This means that no optimization is performed yet to propagate raw YUV
buffers to the downstream element as is, if VPP is available. i.e. it
will always cause a color conversion.
Even if we only support deinterlacing for now, use flags to specify
which filters are to be applied to each frame we receive in transform().
This is preparatory work for integrating new filters.
Add support for "mixed" interlace-mode, whereby the video frame buffer
shall be deinterlaced only if its flags mention that's actually an
interlaced frame buffer.
Reset the buffer pool allocator only if the config caps changed in a
sensible way: format or resolution change. i.e. don't bother with
other caps like colorimetry et al. as this doesn't affect the way to
allocate VA surfaces or images.
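A hedged sketch of the "sensible change" test; reset_allocator() and the video info variables are hypothetical stand-ins for the actual buffer pool state:

  if (GST_VIDEO_INFO_FORMAT (&new_vip) != GST_VIDEO_INFO_FORMAT (&old_vip) ||
      GST_VIDEO_INFO_WIDTH (&new_vip) != GST_VIDEO_INFO_WIDTH (&old_vip) ||
      GST_VIDEO_INFO_HEIGHT (&new_vip) != GST_VIDEO_INFO_HEIGHT (&old_vip))
    reset_allocator ();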
Enable read and write mappings only if direct-rendering is supported.
Otherwise, this means that we may need to download data from the VA
surface first for correctness, even if the VA surface doesn't need to
be read at all. i.e. sometimes, READWRITE mappings are meant for
surfaces that are written to first, and read afterwards for further
processing.
https://bugzilla.gnome.org/show_bug.cgi?id=704078
Fix check for direct-rendering if the creation of VA surfaces with
an explicit pixel format is not supported, e.g. VA-API < 0.34.0, and
we tried to allocate a VA surface based on the corresponding chroma
type. i.e. in that particular case, we have to make sure that the
derived image actually has the expected format.
Fix GstVaapiVideoBufferPool::reset_buffer() to reset the underlying
memory resources, and more particularly the VA surface proxy. Most
importantly, the GstVaapiVideoMeta is retained. Cached surfaces in
memory are released, thus triggering a new allocation the next time
we need to map the buffer.
Make sure GstVaapiVideoMemory allocates VA surface proxies from a
pool stored in the parent VA memory allocator.
This fixes the following scenario:
- VA video buffer 1 is allocated from a buffer pool
- Another video buffer is created, and inherits info from buffer 1
- Buffer 1 is released, thus pushing it back to the buffer pool
- New buffer alloc request comes in, this yields buffer 1 back
- At this stage, buffers 1 and 2 still share the same underlying VA
surface, but buffer 2 was already submitted downstream for further
processing, thus conflicting with additional processing we were
about to perform on buffer 1.
Maybe the core GstBufferPool implementation should have been fixed
instead to actually make sure that the returned GstBuffer memory we
found from the pool is writable?
Always make sure to allocate a VA surface proxy for GstVaapiUploader
allocated buffers, i.e. make gst_vaapi_uploader_get_buffer() allocate
a proxy surface.
This fixes cases where we want to retain the underlying surface longer,
instead of releasing it back to the surface pool right away.
Add gst_caps_set_interlaced() helper function that would reset the
interlace-mode field to "progressive" for GStreamer >= 1.0, or the
interlaced field to "false" for GStreamer 0.10.
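A hedged sketch of what such a helper boils down to; the function name and exact shape are assumptions:

  #include <gst/gst.h>

  static void
  set_progressive_example (GstCaps *caps)
  {
  #if GST_CHECK_VERSION(1,0,0)
    gst_caps_set_simple (caps, "interlace-mode", G_TYPE_STRING, "progressive", NULL);
  #else
    gst_caps_set_simple (caps, "interlaced", G_TYPE_BOOLEAN, FALSE, NULL);
  #endif
  }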
Fix gst_vaapi_video_context_prepare() to also query upstream elements
for a valid GstContext. Improve comments regarding the steps used to
lookup or build that context, thus conforming to the GstContext API
recommendations.
https://bugzilla.gnome.org/show_bug.cgi?id=709112
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
If the allocation meta GST_VIDEO_GL_TEXTURE_UPLOAD_META_API_TYPE is
requested, and more specifically under a GLX configuration, then add
the GstVideoGLTextureUploadMeta to the output buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=703236
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Move VA video buffer memory from "video/x-surface,type=vaapi" format,
as expressed in caps, to the more standard use of caps features. i.e.
add "memory:VASurface" feature attribute to the associated caps.
https://bugzilla.gnome.org/show_bug.cgi?id=703271
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Fix gst_vaapidecode_query() to correctly display the query type name,
instead of randomly displaying that we shared the underlying display.
Also add debug info to the GstVaapiSink::query() handler, i.e. print
the supplied query type name there as well.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Add support for the new GstContext API from GStreamer 1.2.x.
- implement the GstElement::set_context() hook;
- reply to the `context' query from downstream elements.
https://bugzilla.gnome.org/show_bug.cgi?id=703235
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
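A hedged sketch of the query reply path; the "gst.vaapi.Display" context type string and the cached_context variable are assumptions:

  #include <gst/gst.h>

  static gboolean
  context_query_example (GstQuery *query, GstContext *cached_context)
  {
    const gchar *context_type = NULL;

    if (GST_QUERY_TYPE (query) != GST_QUERY_CONTEXT || !cached_context)
      return FALSE;

    gst_query_parse_context_type (query, &context_type);
    if (g_strcmp0 (context_type, "gst.vaapi.Display") != 0)
      return FALSE;

    gst_query_set_context (query, cached_context);
    return TRUE;
  }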
Add thin compatibility layer for the deprecated GstVideoContext API.
For GStreamer API >= 1.2, this involves the following two functions:
- gst_vaapi_video_context_prepare(): queries if a context is already
set in the pipeline;
- gst_vaapi_video_context_propagate(): propagates the newly-created
context to the rest of the pipeline.
https://bugzilla.gnome.org/show_bug.cgi?id=703235
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Port vaapidecode and vaapisink plugins to GStreamer API >= 1.2. This
is rather minimalistic, so as to test the basic functionality.
Disable vaapipostproc plugin for now as further polishing is needed.
Also disable GstVideoContext interface support since this API is now
gone in 1.2.x. This is preparatory work for GstContext support.
https://bugzilla.gnome.org/show_bug.cgi?id=703235
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
In GStreamer 0.10 builds, make sure that the GstVaapiUploader helper
is set up in case upstream elements allocate buffers themselves without
honouring our GstVaapiSink::buffer_alloc() hook.
In particular, this fixes support for OGG video streams with WebKit.
https://bugzilla.gnome.org/show_bug.cgi?id=703934
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Handle raw video buffers that were not created from a VA video buffer
pool. Use the generic GstVideo API to copy buffers in GStreamer 1.0.x
builds instead of the GstVaapiUploader.
https://bugs.freedesktop.org/show_bug.cgi?id=55818
Fix _getcaps() implementation to not report codecs with size information
filled in the returned caps. That's totally useless nowadays. Ideally,
this is a hint to insert a video parser element, thus allowing future
optimizations, but this is not a strict requirement for gstreamer-vaapi,
which is able to parse the elementary bitstreams itself.
https://bugzilla.gnome.org/show_bug.cgi?id=704734
If there is no frame delimiter at the end of the stream, e.g. no
end-of-stream or end-of-sequence marker, and the current frame was
fully parsed correctly, then assume that the last frame is complete
and submit it to the decoder.
https://bugzilla.gnome.org/show_bug.cgi?id=705123
Signed-off-by: Guangxin.Xu <Guangxin.Xu@intel.com>
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Fix creation of GstVaapiVideoBuffer objects (i) to have that type for real;
and (ii) to correctly extract the GstSurfaceConverter from the video buffer
object meta.
This fixes support for cluttersink with GStreamer 0.10 builds.
Other GStreamer sinks, like xvimagesink, have a force-aspect-ratio property,
which allows you to say that you don't want the sink to respect aspect
ratio. Add the same property to vaapisink.
http://lists.freedesktop.org/archives/libva/2012-September/001298.html
Signed-off-by: Simon Farnsworth <simon.farnsworth at onelan.co.uk>
Fix GstBaseSink::get_caps() implementation for GStreamer 1.0.X builds
by honouring the filter caps argument. More precisely, this fixes the
following pipeline: gst-launch-1.0 videotestsrc ! vaapisink
https://bugzilla.gnome.org/show_bug.cgi?id=705192
Signed-off-by: Guangxin.Xu <Guangxin.Xu@intel.com>
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
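A hedged sketch of the filter handling; the surrounding caps construction is omitted:

  #include <gst/gst.h>

  static GstCaps *
  getcaps_with_filter_example (GstCaps *caps, GstCaps *filter)
  {
    GstCaps *result;

    if (!filter)
      return caps;

    /* honour the filter argument: intersect in filter order */
    result = gst_caps_intersect_full (filter, caps, GST_CAPS_INTERSECT_FIRST);
    gst_caps_unref (caps);
    return result;
  }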
Add basic deinterlacing support, i.e. bob-deinterlacing whereby only
the selected field from the input surface is kept for the target surface.
Setting the gst_vaapi_filter_set_deinterlacing() method argument to
GST_VAAPI_DEINTERLACE_METHOD_NONE disables deinterlacing.
Also move GstVaapiDeinterlaceMethod definition from vaapipostproc plug-in
to libgstvaapi core library.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Install a new video converter that supports X11 pixmap targets for X11
backends only, or make the GLX converter creation function chain up to
the X11 converter whenever requested.
After the code got moved to create the gst_vaapi_create_display() helper,
this comparison was not updated to dereference the newly-created
pointer, so the code was comparing the pointer itself to the type, and
therefore failing to retrieve the VA display.
This fixes the following error (and gets gst-vaapi decoding again):
ERROR vaapidecode gstvaapidecode.c:807:gst_vaapidecode_ensure_allowed_caps: failed to retrieve VA display
https://bugzilla.gnome.org/show_bug.cgi?id=704410
Signed-off-by: Emilio López <emilio@elopez.com.ar>
Fix the new internal video format API, based on GstVideoFormat, so as
not to clobber system symbols. So replace the gst_video_format_* prefix
with the gst_vaapi_video_format_* prefix, even if the format type remains
GstVideoFormat.
Simplify gst_vaapi_create_display() helper as gst_vaapi_display_XXX_new()
performs the necessary validation checks for the underlying VA display
prior to returning to the caller. So, if an error occurred, then NULL is
really returned in that case.
If the video buffer pool config doesn't have new caps, then it's not
necessary to reinstantiate the allocator. That could be a costly
operation as we could do some extra heavy checking in there.
Fix reference counting issue whereby gst_memory_init() does not hold
an extra reference to the GstAllocator. So, there could be situations
where the last instance of GstVaapiVideoAllocator gets released before
a dangling GstVaapiVideoMemory object, thus possibly leading to a crash.
Always perform conversion of source buffers to NV12 since this is
the way we tested for this capability in ensure_allowed_caps(). This
also saves memory bandwidth for further rendering. However, this may
not preserve quality since the YUV buffers are down-sampled to 4:2:0.
This fixes direct linking of vaapidownload element to xvimagesink with
VA drivers supporting vaGetImage() from the native VA surface format to
a different VA image format. i.e. color conversion during download.
http://bugzilla.gnome.org/show_bug.cgi?id=703937
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
The image is now expressed as a standard GstVideoFormat, which is not
a FOURCC but rather a regular enum value.
This is a regression introduced in commit 09397fa.
Fix gst_vaapi_uploader_get_buffer() to not assign caps since they
were already negotiated beforehand, and they are not used from the
buffer in upstream elements.
Clean-up gst_vaapi_uploader_ensure_caps() to use the new image caps
represented as a GstVideoInfo.
Adapt GstVaapiVideoMemory allocator to support creation of VA surfaces
with an explicit pixel format. This allows for direct rendering to
VA surface memory from a software decoder.
Get rid of GstCaps to create surface/image pools, and use GstVideoInfo
structures instead. Those are smaller, and allow for streamlining
libgstvaapi more.
Fix creation of the GLX texture to not depend on the GstCaps video size,
which could be wrong, especially in the presence of frame cropping. So,
use the size from the source VA surfaces.
An optimization could be to reduce the texture size to the actual visible
size on screen. i.e. scale down the texture size to match the screen dimensions,
while preserving the VA surface aspect ratio. However, some VA drivers don't
honour that.
Add support for GstVideoCropMeta in GStreamer >= 1.0.x builds and gst-vaapi
specific meta information to hold video cropping details. Make the sink
support video cropping in X11 and GLX modes.
Some video clips may have a clipping region that needs to propagate to
the renderer. These helper functions make it possible to attach that
clipping region, as a GstVaapiRectangle, to the video meta associated
with the buffer.
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
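A hedged sketch of the GStreamer >= 1.0 path, attaching the region as a standard GstVideoCropMeta; the gst-vaapi specific meta and the crop_rect_* values are stand-ins, not the actual code:

  GstVideoCropMeta *crop = gst_buffer_add_video_crop_meta (buffer);

  /* crop_rect_* stand in for the GstVaapiRectangle fields */
  crop->x = crop_rect_x;
  crop->y = crop_rect_y;
  crop->width = crop_rect_w;
  crop->height = crop_rect_h;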
Expose all raw video formats in the static caps template since the
vaapisink element supports raw data. We will get the exact set of formats
supported by the driver dynamically through the _get_caps() routine.
This also fixes an inconsistency wrt. GStreamer 0.10 builds.
https://bugzilla.gnome.org/show_bug.cgi?id=702178
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Now that VA/GLX capable buffers are generated by default on X11, thus
depending on a VA/GLX display, we still want to use vaPutSurface() for
rendering since it is faster.
Anyway, OpenGL rendering in vaapisink was only meant for testing and
enabling "fancy" effects to play with. This has no real value. So,
disable OpenGL rendering by default.
If the gstreamer-vaapi plug-in elements are built with GLX support, then
try to allocate a GstVaapiDisplayGLX first before resorting to a VA/X11
display next.
https://bugzilla.gnome.org/show_bug.cgi?id=701742
Allow plain gst_buffer_map() interface to work with gstreamer-vaapi
video buffers, i.e. expose the underlying GstVaapiSurfaceProxy to the
caller. This is the only sensible thing to do in this mode, as
the underlying surface pixels need to be extracted through an explicit
call to the gst_video_frame_map() function instead.
A possible use-case of this is to implement a "handoff" signal handler
to fakesink or identity element for further processing.
Fix gst_vaapi_video_allocator_new() to silently check for direct-rendering
mode support, and not trigger fatal-criticals if either test surface or
image could not be created. Typical case: pixel format mismatch, e.g. NV12
supported by most hardware vs. I420 supported by most software decoders.
On map, ensure we have GST_MAP_WRITE flags since this is all we
support for now. Likewise, on unmap, make sure that the VA image is
unmapped for either read or write, while still committing it to the
VA surface if write was requested.
In GStreamer 0.10 builds, gst_vaapi_uploader_get_buffer() was used
but it exhibited a memory leak because the surface generated for the
GstVaapiVideoMeta totally lost its parent video pool. So, it was not
possible to release that surface back to the parent pool when the meta
gets released, and the memory consumption kept growing.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Since GST_VAAPI_IS_xxx_VIDEO_POOL() was only testing for NULL and not
the underlying object type, gst_vaapi_video_meta_new_from_pool() was
thereby totally broken. Fix this regression by using the newly
provided gst_vaapi_video_pool_get_object_type() function.
Add gst_vaapi_decoder_get_frame_with_timeout() helper function that will
wait for a frame to be decoded, until the specified timeout in microseconds,
prior to returning to the caller.
This is a fix to a performance regression from 851cc0, whereby the vaapidecode
loop executed on the srcpad task was called too often, thus starving all CPU
resources.
Rework heuristics to detect when the downstream element ran into errors,
and thus failing to release any VA surface in due time for the current
frame to get decoded. In particular, recalibrate the render time base
when the first frame gets submitted downstream, or when there is no
timestamp that could be inferred.
Rework GstVideoDecoder::handle_frame() to decode the current frame,
while possibly waiting for a free surface, and separately submit all
decoded frames from a task. This makes it possible to pop and render
decoded frames as soon as possible.
Fix reference counting bug for passthrough mode, whereby the input buffer
was propagated as is downstream through gst_pad_push() without increasing
its reference count first. This was a problem when gst_pad_push() returned
an error and we further decreased the reference count of the input buffer.
Add support for interlaced streams with GStreamer 1.0 too. Basically,
this enables vaapipostproc, though it is not auto-plugged yet. We also
make sure to reply to CAPS queries, and happily handle CAPS events.
Make gst_vaapi_decoder_get_codec_state() return the original codec state,
i.e. make the GstVaapiDecoder object own the returned state so that callers
that want an extra reference to it would just gst_video_codec_state_ref()
it before usage. This aligns the behaviour with what we had before with
gst_vaapi_decoder_get_caps().
This is an ABI incompatible change, library major version was bumped from
previous release (0.5.2).
Mark the following functions as internal, i.e. private to the vaapi plug-in:
- gst_vaapi_video_buffer_pool_get_type()
- gst_vaapi_video_converter_glx_get_type()
- gst_vaapi_video_converter_glx_new()
Implement GstSurfaceMeta API for GStreamer 1.0.x. Even though this is
an unstable/deprecated API, this makes it possible to support Clutter
sink with minimal changes. Tested against clutter-gst 1.9.92.
When render-mode is "overlay", it is not really useful to peek into
the GstBaseSink::last_buffer, since we have our own video_buffer already
recorded and maintained in GstVaapiSink.
Fix memory leak of GstSample objects in GstVideoOverlayInterface::expose().
This also fixes extra unreferencing of the underlying GstBuffer in the common
path afterwards (for both 0.10 or 1.0).
Fix the name of the plug-in element reported to gst-inspect-1.0, i.e. we
need an explicit definition for GStreamer >= 1.0 because GST_PLUGIN_DEFINE
incorrectly uses #name for creating the plug-in name, instead of using macro
expansion (letting macros expand further) through e.g. G_STRINGIFY().
Fix make dist to allow build for either GStreamer 0.10 or 1.0. i.e. make
sure to include all source files in either case while generating source
tarballs.