Add gst_vaapi_filter_set_deinterlacing_references() API to submit the
list of surfaces used for forward or backward reference in advanced
deinterlacing modes, e.g. Motion-Adaptive, Motion-Compensated.
The list of surfaces used as deinterlacing references shall remain live
until the next call to gst_vaapi_filter_process().
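For illustration, usage could look roughly as follows (a minimal sketch,
assuming the forward/backward array-plus-count signature and the
GST_VAAPI_FILTER_STATUS_SUCCESS return code from gstvaapifilter.h):
    #include <gst/vaapi/gstvaapifilter.h>

    /* Sketch: one forward and one backward reference for motion-adaptive
     * deinterlacing. The reference arrays stay in scope until after
     * gst_vaapi_filter_process() has been called, as required. */
    static gboolean
    deinterlace_with_references (GstVaapiFilter * filter,
        GstVaapiSurface * src, GstVaapiSurface * dst,
        GstVaapiSurface * prev, GstVaapiSurface * next)
    {
      GstVaapiSurface *forward_refs[1] = { prev };
      GstVaapiSurface *backward_refs[1] = { next };

      if (!gst_vaapi_filter_set_deinterlacing_references (filter,
              forward_refs, 1, backward_refs, 1))
        return FALSE;
      return gst_vaapi_filter_process (filter, src, dst, 0) ==
          GST_VAAPI_FILTER_STATUS_SUCCESS;
    }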
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Fix deinterlacing flags to make more sense. The TFF (top-field-first)
flag is meant to specify the organization of reference frames used in
advanced deinterlacing modes. Introduce the more explicit TOPFIELD flag
to specify that the top field of the supplied input surface is to be
used for deinterlacing. Conversely, if it is not set, the bottom field
of the supplied input surface will be used instead.
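A minimal sketch of the intended flag usage (the enum values are assumed
from gstvaapifilter.h):
    /* References are organized top-field-first, and the top field of the
     * input surface is the one to deinterlace. */
    static gboolean
    set_top_field_deinterlacing (GstVaapiFilter * filter)
    {
      return gst_vaapi_filter_set_deinterlacing (filter,
          GST_VAAPI_DEINTERLACE_METHOD_MOTION_ADAPTIVE,
          GST_VAAPI_DEINTERLACE_FLAG_TFF | GST_VAAPI_DEINTERLACE_FLAG_TOPFIELD);
    }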
Add a couple of helper functions, sketched below:
- gst_vaapi_filter_has_operation(): checks whether the VA driver
  advertises support for the supplied operation;
- gst_vaapi_filter_use_operation(): checks whether the supplied
  operation was already enabled, i.e. set to a non-default value.
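A minimal sketch of how both helpers are expected to be used together
(the GST_VAAPI_FILTER_OP_DENOISE identifier is assumed):
    /* Only program denoising if the VA driver supports it and the user
     * did not already enable it to a non-default value. */
    static void
    maybe_enable_denoise (GstVaapiFilter * filter, gfloat level)
    {
      if (!gst_vaapi_filter_has_operation (filter, GST_VAAPI_FILTER_OP_DENOISE))
        return;
      if (gst_vaapi_filter_use_operation (filter, GST_VAAPI_FILTER_OP_DENOISE))
        return;
      gst_vaapi_filter_set_denoising_level (filter, level);
    }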
The user-defined destroy notify function is meant to be called only when
the surface proxy is fully released, i.e. once it has actually returned
the VA surface to the underlying pool.
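For illustration (a sketch, assuming the set_destroy_notify() entry point
of GstVaapiSurfaceProxy):
    #include <gst/vaapi/gstvaapisurfaceproxy.h>

    /* The callback only runs once the proxy has actually returned its
     * VA surface to the underlying pool. */
    static void
    on_surface_released (gpointer user_data)
    {
      g_debug ("VA surface returned to the pool");
    }

    static void
    watch_surface_release (GstVaapiSurfaceProxy * proxy)
    {
      gst_vaapi_surface_proxy_set_destroy_notify (proxy,
          on_surface_released, NULL);
    }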
Fix GstVaapiPicture, GstVaapiSlice and GstVaapiSurfaceProxy initialization
sequences to have the expected default values set beforehand, in case an
error is raised later during creation, i.e. make it possible to cleanly
destroy those partially initialized objects.
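The general pattern, illustrated with a hypothetical object (not actual
libgstvaapi code):
    #include <glib.h>

    typedef struct {
      gpointer surface;         /* hypothetical resource, NULL until acquired */
    } FooObject;

    static void
    foo_object_destroy (FooObject * obj)
    {
      /* Safe even if creation failed half-way, since every field was
       * given its default value first. */
      g_clear_pointer (&obj->surface, g_free);
    }

    static gboolean
    foo_object_create (FooObject * obj)
    {
      /* Expected defaults first, before anything that can fail. */
      obj->surface = NULL;

      obj->surface = g_try_malloc (64);   /* stands in for real acquisition */
      return obj->surface != NULL;
    }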
https://bugzilla.gnome.org/show_bug.cgi?id=707108
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Fix ensure_operations() to release the VPP operations array, if non-NULL,
prior to returning to the caller. The function was also renamed to the
more meaningful get_operations() since the caller owns the returned array
and needs to release it.
Fix first-time operation lookup through find_operation() if the set
of supported operations was not initially determined through the
gst_vaapi_filter_get_operations() helper function.
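For illustration (a sketch, assuming the function returns a GPtrArray
owned by the caller):
    static void
    dump_operations (GstVaapiFilter * filter)
    {
      GPtrArray *const ops = gst_vaapi_filter_get_operations (filter);

      if (!ops)
        return;
      g_print ("%u VPP operations advertised\n", ops->len);

      /* The caller owns the array and must release it. */
      g_ptr_array_unref (ops);
    }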
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Fix initialization of GstVaapiFilterOpData for colorbalance related
operations. In particular, fill in the va_subtype field accordingly.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
VASurfaceAttrib and VAProcFilterParameterBufferType are symbols
that need to be guarded for libva 0.34 and 0.33, respectively.
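E.g., guards along these lines (a sketch using the VA_CHECK_VERSION
macro from <va/va_version.h>; the actual code may rely on configure-time
checks instead):
    #include <va/va.h>

    #if VA_CHECK_VERSION(0,34,0)
    /* VASurfaceAttrib only exists since libva 0.34 */
    static VASurfaceAttrib surface_attrib;
    #endif

    #if VA_CHECK_VERSION(0,33,0)
    /* VAProcFilterParameterBufferType only exists since libva 0.33 */
    static VABufferType filter_param_type = VAProcFilterParameterBufferType;
    #endif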
https://bugzilla.gnome.org/show_bug.cgi?id=709102
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Fix calculation of the MCU count for image sizes that are not a multiple
of 8 pixels in either dimension, and also for uncommon sampling factors
like 4:2:2 in non-interleaved mode.
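For reference, the MCU counts boil down to the following arithmetic (a
sketch of the standard JPEG formulas, not the exact decoder code):
    #include <glib.h>

    /* Interleaved scan: the MCU size depends on the maximum sampling
     * factors across all components. */
    static guint
    mcu_count_interleaved (guint width, guint height,
        guint max_h_samp, guint max_v_samp)
    {
      const guint mcus_per_row = (width + 8 * max_h_samp - 1) / (8 * max_h_samp);
      const guint mcu_rows = (height + 8 * max_v_samp - 1) / (8 * max_v_samp);

      return mcus_per_row * mcu_rows;
    }

    /* Non-interleaved scan: each MCU is a single 8x8 block of the scanned
     * component, whose dimensions are scaled by its own sampling factors
     * (e.g. 4:2:2 chroma). */
    static guint
    mcu_count_non_interleaved (guint width, guint height,
        guint h_samp, guint v_samp, guint max_h_samp, guint max_v_samp)
    {
      const guint comp_width = (width * h_samp + max_h_samp - 1) / max_h_samp;
      const guint comp_height = (height * v_samp + max_v_samp - 1) / max_v_samp;

      return ((comp_width + 7) / 8) * ((comp_height + 7) / 8);
    }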
Add support for images with multiple scans per frame. The Huffman tables
can be updated before each SOS, thus possibly requiring multiple uploads
of Huffman tables to the VA driver. So, the latter must be able to cope
with multiple VA buffers of type 'huffman-table', submitted in the correct
sequential order.
Improve robustness when some expected packets were not received yet, or
were not correctly decoded. For example, don't try to decode a picture
if there was no valid frame header.
Improve debugging and error messages. Rename a few variables to fit the
existing naming conventions. Change some fatal asserts to non-fatal
error codes.
Split the input buffer data into decoder units that each represent a JPEG
segment. Handle the scan decoder unit specifically so that it can include
both the scan header (SOS) and any following ECS or RSTi segment.
That way, we parse the input buffer stream only once at the gst-vaapi
level instead of (i) in gst_vaapi_decoder_jpeg_parse() to split the
stream into frames (SOI .. EOI) and (ii) in decode_buffer() to further
determine segment boundaries and decode them.
In practice, this is a +15 to +25% performance improvement.
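For illustration, the boundary of a scan decoder unit amounts to
something like this (a simplified raw-marker sketch, not the actual
parser code):
    #include <glib.h>

    /* A scan unit starts at the SOS marker and swallows the scan header,
     * the entropy-coded data and any RSTn (0xFFD0..0xFFD7) restart
     * markers, ending right before the next real marker segment. */
    static gsize
    scan_unit_size (const guint8 * data, gsize size, gsize sos_offset)
    {
      gsize i;

      if (sos_offset + 4 > size)
        return 0;

      /* Skip the 0xFFDA marker (2 bytes) plus the SOS header, whose
       * length field includes its own 2 bytes. */
      i = sos_offset + 2 + ((data[sos_offset + 2] << 8) | data[sos_offset + 3]);

      while (i + 1 < size) {
        if (data[i] == 0xff && data[i + 1] != 0x00 &&
            !(data[i + 1] >= 0xd0 && data[i + 1] <= 0xd7))
          break;                /* next non-RST marker ends the scan unit */
        i++;
      }
      return i - sos_offset;
    }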
Fix the decode_buffer() function to gracefully skip comment (COM) segments.
This fixes decoding of streams generated by certain cameras, e.g. the
Logitech Pro C920.
https://bugzilla.gnome.org/show_bug.cgi?id=708208
Signed-off-by: Junfeng Xu <jun.feng.xu@intel.com>
Look for the exact image bounds characterised by the <SOI> and <EOI>
markers. Use the gst_jpeg_parse() codec parser utility function to
optimize the lookup for the next marker segment.
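A simplified sketch of the frame-bound lookup (raw byte scan; the actual
code relies on gst_jpeg_parse() to skip from marker to marker):
    #include <glib.h>

    /* Find the byte range delimited by SOI (0xFFD8) and EOI (0xFFD9). */
    static gboolean
    find_image_bounds (const guint8 * data, gsize size,
        gsize * soi_offset, gsize * eoi_offset)
    {
      gsize i, soi = G_MAXSIZE;

      for (i = 0; i + 1 < size; i++) {
        if (data[i] != 0xff)
          continue;
        if (data[i + 1] == 0xd8 && soi == G_MAXSIZE)
          soi = i;
        else if (data[i + 1] == 0xd9 && soi != G_MAXSIZE) {
          *soi_offset = soi;
          *eoi_offset = i + 2;  /* one past the EOI marker */
          return TRUE;
        }
      }
      return FALSE;
    }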
https://bugzilla.gnome.org/show_bug.cgi?id=707447
Fix calculation of the offset to the next marker segment, following the
correction of the codecparser part to match the API specification, i.e.
the GstJpegMarkerSegment.size field represents the size in bytes of the
segment, excluding any marker prefix.
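In other words, skipping to the next segment is now simply (a sketch,
assuming the offset/size fields of GstJpegMarkerSegment):
    #include "gstjpegparser.h"  /* assumed header for GstJpegMarkerSegment */

    /* seg->offset points right past the marker prefix, and seg->size is
     * the segment size without that prefix. */
    static guint
    next_segment_offset (const GstJpegMarkerSegment * seg)
    {
      return seg->offset + seg->size;
    }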
https://bugzilla.gnome.org/show_bug.cgi?id=707447
Signed-off-by: Junfeng Xu <jun.feng.xu@intel.com>
Fix detection of VA/JPEG decoding API with non-standard libva packages.
More precisely, some packages were shipping with a <va/va.h> header that
did not include <va/va_dec_jpeg.h>.
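The fix boils down to checking the header at configure time and guarding
the include, roughly (standard AC_CHECK_HEADERS naming assumed):
    /* configure.ac: AC_CHECK_HEADERS([va/va_dec_jpeg.h], [], [],
     *                                [#include <va/va.h>])
     * defines HAVE_VA_VA_DEC_JPEG_H when the header is really usable. */
    #include <va/va.h>
    #ifdef HAVE_VA_VA_DEC_JPEG_H
    # include <va/va_dec_jpeg.h>
    #endif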
https://bugzilla.gnome.org/show_bug.cgi?id=706055
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
As a last resort, if video processing capabilities (VPP) are not available,
or if they did not produce anything conclusive enough, then fall back to
the original rendering code path whereby the whole VA surface is rendered
as is, regardless of video cropping or deinterlacing requests.
Note: under those conditions, the visual outcome won't be correct, but at
least something gets displayed instead of bailing out.
Try to use VA/VPP processing capabilities to handle video cropping and
additional rendering flags that may not be directly supported by the
underlying hardware when exposing a suitable Wayland buffer for the
supplied VA surface, e.g. deinterlacing, color primaries other than
BT.601, etc.
Update the frame redraw infrastructure with a new FrameState structure
that holds all the necessary information used to display the next pending
surface.
While we are at it, delay the sync operation down to the point where it
is actually needed. That way, we can keep performing additional tasks in
the meantime.
Disable video cropping in the MPEG-2 codec because it is only partially
implemented, nobody implements it that way in practice, and the standard
spec does not specify the display process anyway.
Most notably, there are two possible use cases for the sequence_display_extension()
horizontal_display_size & vertical_display_size values: (i) guesstimating the
pixel-aspect-ratio, or (ii) implementing some kind of pan & scan process
in conjunction with picture_display_extension() information.
https://bugzilla.gnome.org/show_bug.cgi?id=704848
This fixes the following issue:
CC libgstvaapi_0.10_la-gstvaapidecoder_mpeg4.lo
gstvaapidecoder_mpeg4.c:113: error: redefinition of typedef
'GstVaapiDecoderMpeg4Class'
gstvaapidecoder_mpeg4.c:44: note: previous declaration of
'GstVaapiDecoderMpeg4Class' was here
make[5]: *** [libgstvaapi_0.10_la-gstvaapidecoder_mpeg4.lo] Error 1
make[5]: Leaving directory
`/builddir/build/BUILD/gstreamer-vaapi-0.5.5.1/gst-libs/gst/vaapi'
https://bugzilla.gnome.org/show_bug.cgi?id=705148
Add basic deinterlacing support, i.e. bob-deinterlacing whereby only
the selected field from the input surface is kept for the target surface.
Setting the gst_vaapi_filter_set_deinterlacing() method argument to
GST_VAAPI_DEINTERLACE_METHOD_NONE disables deinterlacing.
Also move GstVaapiDeinterlaceMethod definition from vaapipostproc plug-in
to libgstvaapi core library.
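A minimal usage sketch (assuming the method enum from gstvaapifilter.h):
    /* Enable bob deinterlacing, or disable deinterlacing altogether by
     * selecting GST_VAAPI_DEINTERLACE_METHOD_NONE. */
    static gboolean
    toggle_deinterlacing (GstVaapiFilter * filter, gboolean enable)
    {
      return gst_vaapi_filter_set_deinterlacing (filter,
          enable ? GST_VAAPI_DEINTERLACE_METHOD_BOB :
          GST_VAAPI_DEINTERLACE_METHOD_NONE, 0);
    }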
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Add ProcAmp (color balance) adjustments for hue, saturation, brightness
and contrast. The respective range for each filter shall be the same as
for the VA display attributes.
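For illustration (a sketch, assuming the per-channel setters declared in
gstvaapifilter.h):
    #include <gst/vaapi/gstvaapifilter.h>

    /* Program the four color balance filters; the value ranges are meant
     * to match those of the corresponding VA display attributes. */
    static void
    set_color_balance (GstVaapiFilter * filter,
        gfloat hue, gfloat saturation, gfloat brightness, gfloat contrast)
    {
      gst_vaapi_filter_set_hue (filter, hue);
      gst_vaapi_filter_set_saturation (filter, saturation);
      gst_vaapi_filter_set_brightness (filter, brightness);
      gst_vaapi_filter_set_contrast (filter, contrast);
    }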
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Sharpening is configured with a float value. The supported range is
-1.0 .. 1.0 with 0.0 being the default, which means no sharpening
operation at all.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Noise reduction is configured with a float value. The supported range
is 0.0 .. 1.0 with 0.0 being the default, which means no denoise
operation at all.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Add helper functions to ensure an operation VA buffer is allocated with
the right size; that filter caps get parsed and assigned to the right
operation; and that float parameters are correctly scaled to fit the
range reported by the VA driver.
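For the float scaling part, the mapping involved is roughly as follows
(an illustrative sketch around the default value, not the exact helper):
    #include <glib.h>

    /* Map a value from the exposed [min..max] range, centered on def,
     * onto the [va_min..va_max] range reported by the VA driver, centered
     * on va_def. Ranges are assumed to be non-degenerate. */
    static gfloat
    scale_to_va_range (gfloat value, gfloat def, gfloat min, gfloat max,
        gfloat va_def, gfloat va_min, gfloat va_max)
    {
      if (value > def)
        return va_def + (value - def) * (va_max - va_def) / (max - def);
      if (value < def)
        return va_def - (def - value) * (va_def - va_min) / (def - min);
      return va_def;
    }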
Add gst_vaapi_video_format_from_string() helper function to convert from
a video format string representation to a suitable GstVideoFormat. This
is just an alias to gst_video_format_from_string() for GStreamer 1.0.x
builds, and a proper iteration over all GstVideoFormat string
representations for earlier GStreamer 0.10.x builds.
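For illustration (a minimal sketch; GST_VIDEO_FORMAT_UNKNOWN is assumed
to signal an unrecognized string):
    #include <gst/video/video.h>

    static gboolean
    parse_video_format (const gchar * str, GstVideoFormat * out_format)
    {
      const GstVideoFormat format = gst_vaapi_video_format_from_string (str);

      if (format == GST_VIDEO_FORMAT_UNKNOWN)
        return FALSE;
      *out_format = format;
      return TRUE;
    }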
Add helper functions to describe GstVaapiPoint and GstVaapiRectangle
structures as a standard GType. This could be useful to have them
described as a GValue later on.
Don't expose the GstVaapiContext API and make it totally private to the
libgstvaapi core library. That API is also likely to disappear in a
future revision. Likewise, don't expose the GstVaapiDisplayCache API,
but keep its symbols visible so that the various render backends can
share a common display cache implementation in libgstvaapi.
Also try to clean up the documentation by removing any stale entries.
Don't expose functions that reference a GstVaapiImageRaw; those are
meant to be internal only, for implementing subpicture sync. Also add
a few private definitions to avoid function calls when retrieving
image size and format information.
Use the hardware accelerated XRenderComposite() function, from the RENDER
extension, to blit a pixmap to the screen. Besides, this can also support
cropping and scaling.
Implement the new render-to-pixmap API. The only supported pixmap format
that will work is xRGB, with native byte ordering. Others might work but
they were not tested.
Add an API to transfer VA surfaces to native pixmaps. Also add an API to
render a native pixmap, for completeness. In general, rendering to a
pixmap would only be useful to certain VA drivers and use cases on
X11 display servers, e.g. GLX_EXT_texture_from_pixmap (TFP) handled
in an upper layer.
Mark dummy pictures as already output so that we don't try to submit
them to the upper layer, since these are purely internal / temporary
pictures for helping the decoder.
Once the picture is output, it is no longer necessary to keep an extra
reference to the underlying GstVideoCodecFrame. So, we can release it
earlier, and maybe subsequently release the associated surface proxy
earlier too.
Fix the new internal video format API, based on GstVideoFormat, so that
it does not clobber system symbols: replace the gst_video_format_* prefix
with the gst_vaapi_video_format_* prefix, even if the format type remains
GstVideoFormat.
Fix a memory leak when processing interlaced pictures, which occurs
because the first field, represented as a GstVideoCodecFrame, never
gets released. I.e. when the picture is completed, which is generally
the case when the second field is successfully decoded, we need to
propagate the GstVideoCodecFrame of the first field to the original
GstVideoDecoder so that it can reclaim memory.
Otherwise, we keep accumulating the first fields in the GstVideoDecoder
private frames list until the end-of-stream is reached. The frames
are eventually released there, but too late, i.e. too much memory
may have been consumed by then.
https://bugzilla.gnome.org/show_bug.cgi?id=701257
The queue of free objects was deallocated with g_queue_free_full().
However, this convenience function shall only be used if the original queue
was allocated with g_queue_new(). This caused memory corruption, eventually
leading to a crash.
The correct solution is to pair the g_queue_init() with the corresponding
g_queue_clear(), while iterating over all free objects to deallocate them.
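A sketch of the corrected pattern for a queue embedded in a larger
object:
    #include <glib.h>

    typedef struct {
      GQueue free_objects;      /* embedded, not created with g_queue_new() */
    } Pool;

    static void
    pool_init (Pool * pool)
    {
      g_queue_init (&pool->free_objects);
    }

    static void
    pool_clear (Pool * pool)
    {
      gpointer object;

      /* Deallocate every free object, then clear the embedded queue;
       * g_queue_free_full() would wrongly free the GQueue struct itself. */
      while ((object = g_queue_pop_head (&pool->free_objects)))
        g_free (object);        /* stands in for the real object destructor */
      g_queue_clear (&pool->free_objects);
    }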