Add a GST_VAAPIENCODE_CAST() helper to avoid run-time checks against
the GObject type system. We are guaranteed to only deal with the same
plug-in element object.
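As a rough illustration, such a helper boils down to a plain C cast
(sketch only; the actual definition in the tree may differ slightly):

  /* Skip the GObject run-time type check that a regular
   * G_TYPE_CHECK_INSTANCE_CAST()-based macro would perform. */
  #define GST_VAAPIENCODE_CAST(obj) \
      ((GstVaapiEncode *)(obj))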
Allow vaapiencode plug-in elements to encode from raw YUV buffers.
The most efficient way to do so is to let the vaapiencode elements
allocate a buffer pool, and subsequently allocate buffers from it. This
means that upstream elements are expected to honour downstream pools.
If upstream elements insist on providing their own allocated buffers
to the vaapiencode elements, then it may be more efficient to insert
a vaapipostproc element before the vaapiencode element.
This is because vaapipostproc currently has better support than other
elements for "foreign" raw YUV buffers.
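As a sketch of what honouring a downstream pool means here: the encoder
answers the ALLOCATION query from upstream in its propose_allocation()
vfunc with its own VA-aware pool. The video_buffer_pool/video_buffer_size
fields below are illustrative placeholders, not the exact plug-in code:

  static gboolean
  gst_vaapiencode_propose_allocation (GstVideoEncoder * venc, GstQuery * query)
  {
    GstVaapiEncode *const encode = GST_VAAPIENCODE_CAST (venc);

    /* Expose the encoder's VA-aware buffer pool to upstream, so that
       raw YUV buffers are written straight into VA memory. */
    if (encode->video_buffer_pool) {
      gst_query_add_allocation_pool (query, encode->video_buffer_pool,
          encode->video_buffer_size, 0, 0);
      gst_query_add_allocation_meta (query, GST_VIDEO_META_API_TYPE, NULL);
    }
    return TRUE;
  }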
Add GstVaapiEncodeMPEG2 element object. The actual plug-in element
is called "vaapiencode_mpeg2".
Valid properties:
- rate-control: rate control mode (default: cqp - constant QP)
- bitrate: desired bitrate in kbps (default: auto-calculated)
- key-period: maximal distance between two key frames (default: 30)
- max-bframes: number of B-frames between I and P (default: 2)
- quantizer: constant quantizer (default: 8)
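For instance, the properties listed above can be set from application
code as follows (usage sketch only; "cqp" is the rate-control nick
matching the default listed above):

  GstElement *const encoder =
      gst_element_factory_make ("vaapiencode_mpeg2", NULL);

  /* Constant-QP encoding, GOP of 30 frames, 2 B-frames between I/P */
  gst_util_set_object_arg (G_OBJECT (encoder), "rate-control", "cqp");
  g_object_set (encoder,
      "key-period", 30,
      "max-bframes", 2,
      "quantizer", 8,
      NULL);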
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Add GstVaapiEncodeH264 element object. The actual plug-in element
is called "vaapiencode_h264".
Valid properties:
- rate-control: rate control mode (default: none)
- bitrate: desired bitrate in kbps (default: auto-calculated)
- key-period: maximal distance between two key frames (default: 30)
- num-slices: number of slices per frame (default: 1)
- max-bframes: number of B-frames between I and P (default: 0)
- min-qp: minimal quantizer (default: 1)
- init-qp: initial quantizer (default: 26)
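A quick way to exercise the element, given the raw YUV support added
earlier, is a parse-launch description (sketch; property values are
arbitrary):

  GError *error = NULL;
  GstElement *pipeline;

  /* Encode 300 raw test frames with H.264, 4000 kbps, 4 slices */
  pipeline = gst_parse_launch (
      "videotestsrc num-buffers=300 ! "
      "vaapiencode_h264 bitrate=4000 num-slices=4 max-bframes=0 ! "
      "fakesink", &error);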
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Fix build when Wayland headers don't live in plain system include dirs
like /usr/include but rather in /usr/include/wayland for instance.
Original patch written by Dominique Leuenberger <dimstar@opensuse.org>
https://bugzilla.gnome.org/show_bug.cgi?id=712282
Destroy VPP output surface pool on exit. Also avoid a possible crash
in a double-free situation caused by an insufficiently reference-counted
array of formats returned during initialization.
Fix advanced deinterlacing modes with VPP to track only up to 2 past
reference buffers. This used to be 3 past reference buffers but this
doesn't fit with the existing decode pipeline that only has 4 extra
scratch surfaces.
Also optimize references tracking to be only enabled when needed, i.e.
when advanced deinterlacing mode is used. This means that we don't
need to track past references for basic bob or weave deinterlacing.
In "mixed" interlaced streams, the buffer contains additional flags that
specify whether the frame contained herein is interlaced or not. This means
that we can alternately get progressive or interlaced frames. Make sure
to disable deinterlacing at the VPP level when the source buffer is no longer
interlaced.
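In code terms, the per-buffer decision looks roughly like this
(illustrative helper, not the exact vaapipostproc code):

  static gboolean
  frame_needs_deinterlacing (const GstVideoInfo * vip, GstBuffer * buf)
  {
    switch (GST_VIDEO_INFO_INTERLACE_MODE (vip)) {
      case GST_VIDEO_INTERLACE_MODE_INTERLEAVED:
        return TRUE;
      case GST_VIDEO_INTERLACE_MODE_MIXED:
        /* In mixed mode, each buffer tells us whether it is interlaced */
        return GST_BUFFER_FLAG_IS_SET (buf, GST_VIDEO_BUFFER_FLAG_INTERLACED);
      default:
        return FALSE;
    }
  }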
Fix memory leaks with advanced deinterlacing, i.e. when we keep track
of past buffers. Completely reset the deinterlace state, thus destroying
any buffer currently held, on _start(), _stop() and _destroy().
Port vaapipostproc element to GStreamer 1.2. Support is quite minimal
right now, so as to cope with auto-plugging issues/regressions, e.g.
those triggered when the correct set of expected caps is exposed.
This means that, currently, the proposed caps are not fully accurate.
Fix basic deinterlacing flags provided to gst_vaapi_set_deinterlacing()
for the first field. Render flags were supplied instead of the actual
deinterlacing flags (deint_flags).
Fix GstBaseTransform::transform_caps() implementation to always return
the complete set of allowed sink pad caps (unfixated) even if the src
pad caps we are getting are fixated. Rationale: there are just so many
possible combinations, and it was wrong to provide a unique set anyway.
As a side effect, this greatly simplifies the ability to derive src pad
caps from fixated sink pad caps.
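In other words, the implementation now behaves roughly like the sketch
below (pad template names are illustrative; the real code also merges
filter-specific caps):

  static GstCaps *
  gst_vaapipostproc_transform_caps (GstBaseTransform * trans,
      GstPadDirection direction, GstCaps * caps, GstCaps * filter)
  {
    GstCaps *out_caps;

    /* Always return the full, unfixated set of caps supported on the
       other pad, even if the incoming caps are already fixated. */
    if (direction == GST_PAD_SRC)
      out_caps = gst_static_pad_template_get_caps (&sink_factory);
    else
      out_caps = gst_static_pad_template_get_caps (&src_factory);

    if (filter) {
      GstCaps *const tmp = gst_caps_intersect_full (filter, out_caps,
          GST_CAPS_INTERSECT_FIRST);
      gst_caps_unref (out_caps);
      out_caps = tmp;
    }
    return out_caps;
  }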
Fix deinterlacing flags to make more sense. The TFF (top-field-first)
flag is meant to specify the organization of reference frames used in
advanced deinterlacing modes. Introduce the more explicit flag TOPFIELD
to specify that the top-field of the supplied input surface is to be
used for deinterlacing. Conversely, if not set, this means that the
bottom field of the supplied input surface will be used instead.
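The resulting per-field flag selection amounts to something like this
(sketch; buf and rendering_first_field are placeholders, and the enum
names simply follow the TFF/TOPFIELD naming used above):

  guint flags = 0;
  const gboolean is_tff =
      GST_BUFFER_FLAG_IS_SET (buf, GST_VIDEO_BUFFER_FLAG_TFF);

  if (is_tff)
    flags |= GST_VAAPI_DEINTERLACE_FLAG_TFF;      /* reference field order */

  /* Deinterlace from the top field for the first output field of a TFF
     frame (or the second field of a BFF frame), bottom field otherwise. */
  if (is_tff == rendering_first_field)
    flags |= GST_VAAPI_DEINTERLACE_FLAG_TOPFIELD;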
There are situations where gst_video_decoder_flush() is called, and
this subsequently produces a gst_video_decoder_reset() that kills the
currently active GstVideoCodecFrame. This means that it no longer
exists by the time we reach GstVideoDecoder::finish() callback, thus
possibly resulting in a crash if we assumed spare data was still
available for decode (current_frame_size > 0).
Try to honour the GStreamer 1.0 semantics of GstVideoDecoder::reset(),
which amount to a flush, by performing the actual operations there,
e.g. calling gst_video_decoder_have_frame() if pending data is available.
Review all interactions between the main video decoder stream thread
and the decode task to derive a correct sequence of operations for
decoding. Also avoid extra atomic operations that become implicit under
the GstVideoDecoder stream lock.
Fix hard reset for seek cases by flushing the GstVaapiDecoder queue
and completely purge any decoded output frame that may come out from
it. At this stage, the GstVaapiDecoder shall be in a complete clean
state to start decoding over new buffers.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
vaapidecode used to wait up to one second past the expected time of
presentation for the last decoded frame. This is not realistic in
practice when it comes to video pause/resume. Changed behaviour to
unconditionally wait for a free VA surface prior to continuing the
decoding. The decode task will continue pushing the output frames to
the downstream element while also reporting errors at the same time
to the main thread.
https://bugzilla.gnome.org/show_bug.cgi?id=707108
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
The srcpad caps exposed for GStreamer 1.2 were missing any useful info
like framerate, pixel-aspect-ratio, interlace-mode et al. Not to mention
that it relied on possibly uninitialized data. Fix srcpad caps to be
initialized from a sanitized copy of GstVideoDecoder output state caps.
Note: the correct way to expose the srcpad caps triggers an additional
issue in core GStreamer auto-plugging capabilities as the correct caps
to be exposed should be format=ENCODED with memory:VASurface caps feature
at the minimum. In some situations, we could determine the underlying
VA surface format, but this is not always possible. e.g. cases where it
is not allowed to expose the underlying VA surface data, or when the
VA driver implementation cannot actually provide such information.
Currently, the decoder only supports YUV 4:2:0 output. So, restrict the
output formats exposed in the GStreamer 1.2 caps to a realistic subset. This
means NV12, I420 or YV12 but also ENCODED if we cannot determine the
underlying VA surface format, or if it is actually not allowed to get
access to the surface contents.
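Concretely, the advertised srcpad caps end up along these lines (sketch;
the actual caps are built programmatically):

  /* VA surface output, restricted to the raw formats we can realistically
     map, plus ENCODED when the surface format cannot be determined */
  GstCaps *const caps = gst_caps_from_string (
      "video/x-raw(memory:VASurface), "
      "format = (string) { ENCODED, NV12, I420, YV12 }");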
Fix vaapidecode srcpad caps to only expose RGBA video format for the
meta:GstVideoGLTextureUploadMeta feature. That's all that is supported
so far. Besides, drop this meta from the vaapisink sinkpad caps since
we really don't support that for rendering.
https://bugzilla.gnome.org/show_bug.cgi?id=711828
Fix upload of raw YUV data in pipelines such as:
$ gst-launch-1.0 filesrc location=video.yuv ! videoparse ! vaapipostproc ! vaapisink
The main reason why it failed was that the videoparse element simply
allocates GstBuffer with raw data chunk'ed off the sink pad without
any prior knowledge of the actual frame info. i.e. it basically just
calls gst_adapter_take_buffer().
We could avoid the extra copy performed in vaapipostproc if the videoparse
element was aware of the downstream pool and bothered to copy line by
line, for each plane. This means that, for a single frame per buffer,
the optimization would be to allocate the video buffer downstream, map
it, and copy each line that comes through until the successive planes
are all filled in.
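The per-plane, line-by-line copy meant here would look roughly like the
sketch below, using the public GstVideoFrame API and assuming plane and
component indices line up (true for the planar/semi-planar YUV formats
at hand); gst_video_frame_copy() already provides this logic ready-made:

  #include <string.h>
  #include <gst/video/video.h>

  static void
  copy_video_frame_lines (GstVideoFrame * dst, const GstVideoFrame * src)
  {
    guint p, i;

    for (p = 0; p < GST_VIDEO_FRAME_N_PLANES (src); p++) {
      guint8 *dst_line = GST_VIDEO_FRAME_PLANE_DATA (dst, p);
      const guint8 *src_line = GST_VIDEO_FRAME_PLANE_DATA (src, p);
      const guint dst_stride = GST_VIDEO_FRAME_PLANE_STRIDE (dst, p);
      const guint src_stride = GST_VIDEO_FRAME_PLANE_STRIDE (src, p);
      const guint bytes = GST_VIDEO_FRAME_COMP_WIDTH (src, p) *
          GST_VIDEO_FRAME_COMP_PSTRIDE (src, p);
      const guint height = GST_VIDEO_FRAME_COMP_HEIGHT (src, p);

      /* Copy only the visible part of each line, as strides may differ */
      for (i = 0; i < height; i++) {
        memcpy (dst_line, src_line, bytes);
        dst_line += dst_stride;
        src_line += src_stride;
      }
    }
  }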
Still, optimized raw YUV uploads already worked with the following:
$ gst-launch-1.0 videotestsrc ! vaapipostproc ! vaapisink
https://bugzilla.gnome.org/show_bug.cgi?id=711250
[clean-ups, fixed error cases to unmap and unref outbuf]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
If the currently selected deinterlacing method is not supported by the
underlying hardware, then try to downgrade the method to a supported one.
At the minimum, basic bob-deinterlacing shall always be supported.
Allow basic bob-deinterlacing to work when VPP is enabled. Currently,
this only covers bob-deinterlacing when the output pixel format is
explicitly set.
Add initial support for basic scaling with size specified through the
"width" and "height" properties. If either user-provided dimension is
zero and "force-aspect-ratio" is set to true (the default), then the
other dimension is scaled to preserve the aspect ratio.
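The dimension computation boils down to the following (sketch;
width/height are the element properties, from_w/from_h the source
dimensions, and the real code also takes the pixel-aspect-ratio into
account):

  if (force_aspect_ratio) {
    if (width && !height)
      height = gst_util_uint64_scale_int (width, from_h, from_w);
    else if (height && !width)
      width = gst_util_uint64_scale_int (height, from_w, from_h);
  }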
If VPP is available, we always try to implicitly convert the source
buffer to the "native" surface format for the underlying accelerator.
This means that no optimization is performed yet to propagate raw YUV
buffers to the downstream element as is, if VPP is available. i.e. it
will always cause a color conversion.
Even if we only support deinterlacing for now, use flags to specify
which filters are to be applied to each frame we receive in transform().
This is preparatory work for integrating new filters.
Add support for "mixed" interlace-mode, whereby the video frame buffer
shall be deinterlaced only if its flags indicate that it is actually an
interlaced frame buffer.
Reset the buffer pool allocator only if the config caps changed in a
sensible way: format or resolution change. i.e. don't bother with
other caps like colorimetry et al. as this doesn't affect the way to
allocate VA surfaces or images.
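i.e. the "changed in a sensible way" check amounts to something like
(illustrative helper):

  /* Only a format or resolution change requires new VA resources */
  static gboolean
  video_info_changed (const GstVideoInfo * old_vip, const GstVideoInfo * new_vip)
  {
    return GST_VIDEO_INFO_FORMAT (old_vip) != GST_VIDEO_INFO_FORMAT (new_vip) ||
        GST_VIDEO_INFO_WIDTH (old_vip) != GST_VIDEO_INFO_WIDTH (new_vip) ||
        GST_VIDEO_INFO_HEIGHT (old_vip) != GST_VIDEO_INFO_HEIGHT (new_vip);
  }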
Enable read and write mappings only if direct-rendering is supported.
Otherwise, this means that we may need to download data from the VA
surface first for correctness, even if the VA surface doesn't need to
be read at all. i.e. sometimes, READWRITE mappings are meant for
surfaces that are written to first, and read afterwards for further
processing.
https://bugzilla.gnome.org/show_bug.cgi?id=704078
Fix the check for direct-rendering when the creation of VA surfaces with
an explicit pixel format is not supported, e.g. VA-API < 0.34.0, and
we tried to allocate a VA surface based on the corresponding chroma
type. i.e. in that particular case, we have to make sure that the
derived image actually has the expected format.
Fix GstVaapiVideoBufferPool::reset_buffer() to reset the underlying
memory resources, and more particularly the VA surface proxy. Most
importantly, the GstVaapiVideoMeta is retained. Cached surfaces in
memory are released, thus triggering a new allocation the next time
we need to map the buffer.
Make sure GstVaapiVideoMemory allocates VA surface proxies from a
pool stored in the parent VA memory allocator.
This fixes the following scenario:
- VA video buffer 1 is allocated from a buffer pool
- Another video buffer (buffer 2) is created, and inherits info from buffer 1
- Buffer 1 is released, thus pushing it back to the buffer pool
- A new buffer alloc request comes in, and this yields buffer 1 back
- At this stage, buffers 1 and 2 still share the same underlying VA
surface, but buffer 2 was already submitted downstream for further
processing, thus conflicting with additional processing we were
about to perform on buffer 1.
Maybe the core GstBufferPool implementation should have been fixed
instead to actually make sure that the returned GstBuffer memory we
found from the pool is writable?
Always make sure to allocate a VA surface proxy for GstVaapiUploader
allocated buffers, i.e. make gst_vaapi_uploader_get_buffer() allocate
a surface proxy.
This fixes cases where we want to retain the underlying surface longer,
instead of releasing it back to the surface pool right away.
Add gst_caps_set_interlaced() helper function that would reset the
interlace-mode field to "progressive" for GStreamer >= 1.0, or the
interlaced field to "false" for GStreamer 0.10.
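A sketch of such a helper (the actual signature in the tree may differ):

  /* Mark caps as non-interlaced for either GStreamer API generation */
  static void
  gst_caps_set_interlaced (GstCaps * caps)
  {
  #if GST_CHECK_VERSION(1,0,0)
    gst_caps_set_simple (caps, "interlace-mode", G_TYPE_STRING,
        "progressive", NULL);
  #else
    gst_caps_set_simple (caps, "interlaced", G_TYPE_BOOLEAN, FALSE, NULL);
  #endif
  }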
Fix gst_vaapi_video_context_prepare() to also query upstream elements
for a valid GstContext. Improve comments regarding the steps used to
lookup or build that context, thus conforming to the GstContext API
recommendations.
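The lookup order then follows the usual GstContext dance, roughly as
below (sketch; run_context_query() is an illustrative helper that
peer-queries the element's src or sink pads, and the context type name
is only indicative):

  GstQuery *const query = gst_query_new_context ("gst.vaapi.Display");

  if (run_context_query (element, query, GST_PAD_SRC) ||   /* downstream */
      run_context_query (element, query, GST_PAD_SINK)) {  /* upstream */
    GstContext *context = NULL;
    gst_query_parse_context (query, &context);
    gst_element_set_context (element, context);
  } else {
    /* Nobody had one: ask the application through the bus */
    gst_element_post_message (element,
        gst_message_new_need_context (GST_OBJECT_CAST (element),
            "gst.vaapi.Display"));
  }
  gst_query_unref (query);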
https://bugzilla.gnome.org/show_bug.cgi?id=709112
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
If the allocation meta GST_VIDEO_GL_TEXTURE_UPLOAD_META_API_TYPE is
requested, and more specifically under a GLX configuration, then add
the GstVideoGLTextureUploadMeta to the output buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=703236
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>