Instead of recalculating the miscellaneous buffer parameters for
every buffer, they are now calculated only once, when the encoder is
configured. For every buffer, the same structures are simply copied.
https://bugzilla.gnome.org/show_bug.cgi?id=783449
This patch intends to decouple the assignment of the values in the
parameter structures from the setting of the VA buffer's parameters.
It may lead to some issues, since HRD, framerate or control rate may
not be handled by the specific encoder, but they are still set in
the VA buffer's parameters.
I leave it as is because this is just a transitional patch.
https://bugzilla.gnome.org/show_bug.cgi?id=783449
According to the VA documentation:
The framerate is specified as a number of frames per second,
as a fraction. The denominator of the fraction is given in
the top half (the high two bytes) of the framerate field, and
the numerator is given in the bottom half (the low two bytes).
For example, if framerate is set to (100 << 16 | 750), this is
750 / 100, hence 7.5fps.
If the denominator is zero (the high two bytes are both zero)
then it takes the value one instead, so the framerate is just
the integer in the low 2 bytes.
This patch fixes the framerate calculation in the VP8 encoder
accordingly.
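For reference, a minimal sketch of the packing described above (the
helper name and types are illustrative, not the actual encoder code):

    #include <glib.h>

    /* Pack a fps_n/fps_d fraction into the VA framerate field:
     * numerator in the low 16 bits, denominator in the high 16 bits.
     * A denominator of 1 may be left as 0, which VA treats as 1. */
    static guint32
    pack_va_framerate (guint fps_n, guint fps_d)
    {
      if (fps_d == 1)
        return fps_n & 0xffff;
      return (fps_d << 16) | (fps_n & 0xffff);
    }

For example, pack_va_framerate (30, 1) yields 30, and
pack_va_framerate (750, 100) yields (100 << 16 | 750), i.e. 7.5 fps.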
https://bugzilla.gnome.org/show_bug.cgi?id=783449
Move the frame-rate parameter from ensure_misc_params() to
ensure_control_rate_params(), since it only has meaning when the
control rate is either VBR or CBR.
https://bugzilla.gnome.org/show_bug.cgi?id=783449
Move the Hypothetical Reference Decoder (HRD) parameter from
ensure_misc_params() to ensure_control_rate_params(), since it
shall only be defined when the control rate is either VBR or CBR.
https://bugzilla.gnome.org/show_bug.cgi?id=783449
Instead of filling the control rate param in ensure_misc_params(),
this patch refactors it out, as a first step towards merging the same
code for all the encoders.
https://bugzilla.gnome.org/show_bug.cgi?id=783449
Instead of using a proxy to store the buffer quality level, the
encoder now uses the native VA structure, which is copied into the
dynamically allocated VAEncMiscParameterBuffer.
This approach is computationally less expensive.
Right now, H264 and HEVC expose the number of slices to process as a
property. But each driver can set a maximum number of slices, depending
on the supported profile & entry point.
This patch verifies the requested num_slices against the maximum
permitted by the driver and the media size.
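A rough sketch of the kind of clamping involved (names are illustrative;
the real code queries the driver's limit, e.g. through
VAConfigAttribEncMaxSlices, via the encoder helpers):

    #include <glib.h>

    /* Clamp the requested slice count to the driver's limit and to the
     * number of macroblock/CTU rows in the picture. */
    static guint
    clamp_num_slices (guint requested, guint driver_max, guint mb_rows)
    {
      guint max_slices = MIN (driver_max, mb_rows);
      return CLAMP (requested, 1, max_slices);
    }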
https://bugzilla.gnome.org/show_bug.cgi?id=780955
Since we started using VPP in VaapiWindowX11, we need to handle the
case where the source rect and the window's size are different.
So, once VPP has converted to another format, we should honor the
size of the VPP's surface as the source rect. Otherwise, it is cropped
according to the previous size of the source rect.
https://bugzilla.gnome.org/show_bug.cgi?id=782542
Set the ROI params while encoding each frame; they are set via
gst_vaapi_encoder_add_roi().
https://bugzilla.gnome.org/show_bug.cgi?id=768248
Signed-off-by: Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
Queries if the driver supports "Region of Interest" (ROI) during the config
creation.
This attribute conveys whether the driver supports region-of-interest (ROI)
encoding, based on user provided ROI rectangles. The attribute value is
partitioned into fields as defined in the VAConfigAttribValEncROI union.
If ROI encoding is supported, the ROI information is passed to the driver
using VAEncMiscParameterTypeROI.
https://bugzilla.gnome.org/show_bug.cgi?id=768248
Signed-off-by: Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
This patch adds the handling of VAEncMiscParameterTypeQualityLevel,
in gstreamer-vaapi encoders:
The encoding quality could be set through this structure, if the
implementation supports multiple quality levels. The quality level set
through this structure is persistent over the entire coded sequence, or
until a new structure is being sent. The quality level range can be queried
through the VAConfigAttribEncQualityRange attribute. A lower value means
higher quality, and a value of 1 represents the highest quality. The quality
level setting is used as a trade-off between quality and speed/power
consumption, with higher quality corresponds to lower speed and higher power
consumption.
The quality level is set by the element's parameter "quality-level" with a
hard-coded range of 1 to 8.
Later, when the encoder is configured at run time, just before it
starts processing, the quality level is scaled to the codec range. If
VAConfigAttribEncQualityRange is not available in the used VA backend,
then the quality level is set to zero, which means "disabled".
All the available codecs now process this parameter if it is available.
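A minimal sketch of the scaling step described above (plain integer
arithmetic; the function and parameter names are illustrative, not the
actual encoder code):

    /* Map the property range 1..8 onto the driver range 1..va_max
     * reported by VAConfigAttribEncQualityRange (1 = best quality).
     * A va_max of 0 means the attribute is unsupported: disable it. */
    static unsigned int
    scale_quality_level (unsigned int level, unsigned int va_max)
    {
      if (va_max == 0)
        return 0;
      return (level * va_max + 7) / 8;
    }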
https://bugzilla.gnome.org/show_bug.cgi?id=778733
Signed-off-by: Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
Sometimes gst_vaapi_window_wayland_sync() returns FALSE when poll
returns EBUSY during destruction.
In this case, if GstVaapiWindow is using VPP, the VPP surface is leaked.
This surface is not attached to anything at this moment, so we should
release it manually.
https://bugzilla.gnome.org/show_bug.cgi?id=781695
When the frame listener's 'done' callback is invoked, the number of
pending frames is decreased. Nonetheless, there might be occasions
where the buffer listener's 'release' callback is invoked without the
frame's 'done' callback having been called first. This leads to
problems with the gst_vaapi_window_wayland_sync() operation.
This patch marks as done those frames whose callback was invoked; and
if the buffer's 'release' callback is invoked while the associated
frame is not yet marked as 'done', it is marked there, so the number
of pending frames stays correct.
https://bugzilla.gnome.org/show_bug.cgi?id=780442
Signed-off-by: Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
Don't call gst_vaapi_window_wayland_sync() when destroying the
wayland window instance, since it might lead to a lock at
gst_poll_wait() when more than one instance of vaapisink is
rendering in the same pipeline; this is because they share the
same window.
Since all the frames are freed now, we don't need to free the
private last_frame, since its address is invalid at this point.
https://bugzilla.gnome.org/show_bug.cgi?id=780442
Signed-off-by: Hyunjun Ko <zzoon@igalia.com>
Fix leakage of the last wl buffer.
The VAAPI wayland sink needs to send a null buffer during destruction;
this ensures that all the wl buffers are released. Otherwise, the last
buffer's callback might not be called, which leads to a leak of the
GstVaapiDisplay.
This was inspired by gstwaylandsink.
https://bugzilla.gnome.org/show_bug.cgi?id=774029
Signed-off-by: Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
The proxy object of the wl_buffer for the last frame remains in the
wl_map. Even though we call wl_buffer_destroy() in
frame_release_callback(), the proxy object remains without being
removed, since the proxy object is only deleted when the wayland
server sees the delete request and sends the 'delete_id' event.
We need to call roundtrip before destroying the event_queue so that
the proxy object is removed. Otherwise, things get messed up by
receiving the 'delete_id' event from the previous playback when
playing in the next va/wayland window with the same wl_display
connection.
https://bugzilla.gnome.org/show_bug.cgi?id=773689
Signed-off-by: Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
Clear decoders out on a flush but keep the same instance,
rather than completely recreating them. That avoids
unnecessarily freeing and recreating surface pools
and contexts, which can be quite expensive.
https://bugzilla.gnome.org/show_bug.cgi?id=781142
The macro GST_VAAPI_OBJECT_DEFINE_CLASS_WITH_CODE only defines
a function that is never used, thus when compiling we might see
this warning (clang):
gstvaapiwindow.c:147:1: warning: unused function 'gst_vaapi_window_class' [-Wunused-function]
GST_VAAPI_OBJECT_DEFINE_CLASS_WITH_CODE (GstVaapiWindow,
^
https://bugzilla.gnome.org/show_bug.cgi?id=759533
Now that gst_vaapi_window_vpp_convert_internal() exists,
GstVaapiWindowX11/Wayland can use it for conversion.
Note that once it chooses to use VPP, it keeps using VPP
until the session is finished.
https://bugzilla.gnome.org/show_bug.cgi?id=759533
If a backend doesn't support a specific format, we can use VPP for
conversion and keep playback working.
This API originated in GstVaapiWindowWayland and is moved to
GstVaapiWindow, so that GstVaapiWindowX11 can use it too.
https://bugzilla.gnome.org/show_bug.cgi?id=759533
Currently, GstVaapiWindowX11/Wayland are not descendants of GstVaapiWindow.
This patch chains them up to GstVaapiWindow to handle common members in GstVaapiWindow.
https://bugzilla.gnome.org/show_bug.cgi?id=759533
If the profile is main-10, the bit_depth_luma_minus8 in the sequence
parameter buffer shall be the color format bit depth minus 8: 10-8,
which is 2. The same goes for bit_depth_chroma_minus8.
This patch gets the negotiated sink caps format, queries its
luma depth and uses that value to fill the mentioned parameters.
https://bugzilla.gnome.org/show_bug.cgi?id=771291
When the function gst_vaapi_encoder_get_surface_formats() was added,
it was under the assumption that every VA profile of a specific codec
supported the same color formats. But that is not the case, for
example with the profiles that support 10-bit formats.
In other words, different VA profiles of the same codec may support
different color formats for their upload surfaces.
In order to expose all the possible color formats, if no profile is
specified via source caps, or if the encoder doesn't yet have a
context, all the possible VA profiles for the specific codec are
iterated and their color formats are merged.
https://bugzilla.gnome.org/show_bug.cgi?id=771291
In order to get the supported surface formats for a specific
profile, this patch adds GstVaapiProfile as a parameter to
gst_vaapi_encoder_get_surface_formats().
Currently the extracted formats are only those related to the
default profile of the element's codec.
https://bugzilla.gnome.org/show_bug.cgi?id=771291
Instead of creating the encoder's context (if it doesn't exist yet),
the method gst_vaapi_encoder_get_surface_formats() now creates
dummy contexts, unless the encoder has already created one.
The purpose of this is to avoid setting an encoder's context with a
wrong profile.
https://bugzilla.gnome.org/show_bug.cgi?id=771291
In order to generate vaapi contexts iteratively, the function
init_context_info() is refactored to receive the GstVaapiContextInfo
and the GstVaapiProfile as parameters.
https://bugzilla.gnome.org/show_bug.cgi?id=771291
Instead of initializing the chroma_type with an undefined value, which
will be converted to GST_VAAPI_CHROMA_TYPE_YUV420 by GstVaapiContext,
this patch queries the VA config, given the received
GstVaapiContextInfo's parameters, and takes the first response.
In order to get the GstVaapiChromaType value, a new utility function
was also needed: to_GstVaapiChromaType(), which, given a
VA_RT_FORMAT_* value, returns the associated GstVaapiChromaType.
https://bugzilla.gnome.org/show_bug.cgi?id=771291
gcc 7.0.1 gives a memset-elt-size warning in gst_vaapi_encoder_vp9_init:
'memset' used with length equal to number of elements without
multiplication by element size [-Werror=memset-elt-size]
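The fix boils down to passing the full byte size of the array to
memset() (an illustrative example, not the exact encoder field):

    #include <string.h>
    #include <glib.h>

    static void
    clear_flags (void)
    {
      guint32 flags[4];

      /* wrong: clears only 4 bytes (the element count), not the array */
      /* memset (flags, 0, G_N_ELEMENTS (flags)); */

      /* right: clears the whole array */
      memset (flags, 0, sizeof (flags));
    }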
https://bugzilla.gnome.org/show_bug.cgi?id=780947
The current implementation is updating the POC values only in the
slice parameter buffer. But we are not filling the picture order
count and the reference flags in VAPictureH264 while populating the
VA picture/slice structures. The latest intel-vaapi-driver directly
accesses those fields from the VAPictureH264 entries provided as
RefPicLists, which resulted in some wrong math and prediction errors
in the driver.
https://bugzilla.gnome.org/show_bug.cgi?id=780620
Implements gst_vaapi_utils_h26x_write_nal_unit(), which writes NAL
unit length and data to a bitwriter.
Note that this helper function applies EPB (Emulation Prevention
Bytes), since otherwise the produced codec_data might be broken when
a decoder/parser that considers EPB starts parsing.
See sections 7.3 and 7.4 of the H.264 and H.265 specifications, which
describe the emulation_prevention_three_byte.
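For illustration, a sketch of the emulation-prevention rule (the real
helper writes into a GstBitWriter; the function here is illustrative):

    #include <glib.h>

    /* Append NAL payload bytes, inserting an emulation-prevention 0x03
     * whenever two consecutive zero bytes would be followed by a byte
     * in the 0x00..0x03 range. */
    static void
    append_with_epb (GByteArray * out, const guint8 * data, guint size)
    {
      static const guint8 epb = 0x03;
      guint i, zeros = 0;

      for (i = 0; i < size; i++) {
        if (zeros == 2 && data[i] <= 0x03) {
          g_byte_array_append (out, &epb, 1);
          zeros = 0;
        }
        zeros = (data[i] == 0x00) ? zeros + 1 : 0;
        g_byte_array_append (out, &data[i], 1);
      }
    }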
https://bugzilla.gnome.org/show_bug.cgi?id=778750
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
Since there is duplicated code in the H.264/H.265 encoders, refactor
it into shared helpers to avoid the duplication.
https://bugzilla.gnome.org/show_bug.cgi?id=778750
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
From man open(2):
The O_CLOEXEC, O_DIRECTORY, and O_NOFOLLOW flags are not specified
in POSIX.1-2001, but are specified in POSIX.1-2008. Since glibc
2.12, one can obtain their definitions by defining either
_POSIX_C_SOURCE with a value greater than or equal to 200809L or
_XOPEN_SOURCE with a value greater than or equal to 700. In glibc
2.11 and earlier, one obtains the definitions by defining
_GNU_SOURCE.
And indeed, with the uClibc C library, O_CLOEXEC is not exposed if
_GNU_SOURCE is not defined. Therefore, this commit fixes the build of
gstreamer-vaapi with the uClibc C library.
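A minimal illustration of the requirement described above (the actual
patch simply ensures _GNU_SOURCE is defined; where it does so is not
shown here):

    /* must come before any system header so O_CLOEXEC is visible on
     * uClibc and on glibc 2.11 and earlier */
    #define _GNU_SOURCE
    #include <fcntl.h>

    static int
    open_cloexec (const char *path)
    {
      return open (path, O_RDWR | O_CLOEXEC);
    }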
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
https://bugzilla.gnome.org/show_bug.cgi?id=779953
Currently, it sets the rate control information even when the query
fails. In addition, it isn't updated any more since the flag
got_rate_control_mask is set to TRUE.
https://bugzilla.gnome.org/show_bug.cgi?id=779120
The encoder should be able to change its caps even when it is already
processing a stream.
This is supposed to happen after a flush, so the codedbuf_queue should
be empty.
https://bugzilla.gnome.org/show_bug.cgi?id=775490
Configuring GCC to verify possible usage of uninitialized variables
shows that found_index might be used without a previous assignment.
This patch assigns an initial value to found_index and also avoids a
branch when returning the result value.
https://bugzilla.gnome.org/show_bug.cgi?id=778782
Rename to be consistent with H.264 and also H.265 encoder. The
meson build assumed this was already consistently named, and so
previously was not able to actually build the H.265 decoder.
https://bugzilla.gnome.org/show_bug.cgi?id=778576
This patch enables the Constant BitRate (CBR) encoding mode in the VP8
encoder. Basically, it adds the configuration parameters required by
libva for CBR encoding.
Original-Patch-By: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=749950
Reduce frame_num gaps so that we don't have to create unnecessary
dummy pictures that are just thrown away.
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=777506
This patch adds a GstMemory as a member variable of the buffer proxy,
because we will need to associate the buffer proxy with the memory
which exposes it. Later, we will know which memory in the video buffer
pool is attached to the processed surface.
https://bugzilla.gnome.org/show_bug.cgi?id=755072
Thus, when generating the allowed caps, the element will throw a
warning and it will use its caps template.
This behavior might be a bug in the VA driver.
https://bugzilla.gnome.org/show_bug.cgi?id=775490
When new sink caps arrive, the internal decoder state is updated and,
if it changed, a downstream renegotiation is requested.
Previously, when new caps arrived, the whole decoder was destroyed
and recreated. Now, if the caps are compatible or have the same codec,
the internal decoder is kept, but a downstream renegotiation is
requested.
https://bugzilla.gnome.org/show_bug.cgi?id=776979
Redirect libva's logs to the GStreamer logging mechanism. This is
particularly useful when VA is initialized, because it always logs
out the driver's details.
In order to achieve this a new helper function was added as a wrapper
for the vaInitialize() function.
https://bugzilla.gnome.org/show_bug.cgi?id=777115
If the frame is a cloned picture, its PTS comes from its parent
picture. In addition, the base decoder doesn't set a valid PTS to
the frame corresponding to the cloned picture.
https://bugzilla.gnome.org/show_bug.cgi?id=774254
This method will return the valid surface formats in the current
config. If there is no VAConfig, it is created with the available
information.
https://bugzilla.gnome.org/show_bug.cgi?id=769266
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
Split set_context_info() adding init_context_info() which only
initialises the GstVaapiContextInfo structure inside GstVaapiEncoder
required for VAConfig.
https://bugzilla.gnome.org/show_bug.cgi?id=769266
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
If the GstVaapiContextInfo has only initial information, without the
frame's width and height, skip the creation of the VAContext and just
keep the VAConfig.
https://bugzilla.gnome.org/show_bug.cgi?id=769266
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
Split the function context_create() into context_create() and
config_create().
By decoupling VAConfig and VAContext during context creation, we can
query the VAConfig for the supported surface formats without creating
a VAContext.
https://bugzilla.gnome.org/show_bug.cgi?id=769266
Originally the drm backend only tried to open the first render node
found. But in hybrid systems this first render node might not support
VA-API (the proprietary NVIDIA driver, for example).
This patch tries all the available nodes until finding one with a
VA-API supported driver.
https://bugzilla.gnome.org/show_bug.cgi?id=774811
Original-patch-by: Stirling Westrup <swestrup@gmail.com> and
Reza Razavi <reza@userful.com>
In case libva-wayland has its headers installed outside the default
locations (like /usr/include), the build fails to include "wayland-client.h":
CC libgstvaapi_egl_la-gstvaapiutils_egl.lo
In file included from gstvaapidisplay_wayland.h:27:0,
from gstvaapidisplay_egl.c:35:
/usr/include/va/va_wayland.h:31:28: fatal error: wayland-client.h: No such file or directory
#include <wayland-client.h>
As we already passed VA_CFLAGS, /usr/include/va/va_wayland.h could be found, but it is
our fault not to instruct the system that we ALSO care about va_wayland. We correctly query
for libva-wayland.pc in configure and use it in other places as well. It is thus only
correct and consistent to do it also at this spot.
https://bugzilla.gnome.org/show_bug.cgi?id=773946
This patch ensures getting the available formats, as the filter does,
in the decoder / encoder context.
The context fills up the array as soon as it is created, otherwise the
pipeline could get stalled (perhaps this is a bug in my HSW backend).
https://bugzilla.gnome.org/show_bug.cgi?id=752958
The query of all the supported formats for a VA config was only used by the
postprocessor (vaapifilter). But, in order to enable the vaapidecoder to
negotiate a suitable raw format with downstream, we need to query these
formats against the decoder's config.
This patch is the first step: it moves the code in the filter's ensure_image() to a
generic gst_vaapi_get_surface_formats() in vaapiutils_core, so it can be
shared later by the decoder.
https://bugzilla.gnome.org/show_bug.cgi?id=752958
When creating a GstVaapiWindowEGL, it also creates the native window
with its own native display. It should instead pass the native display,
either X11 or Wayland.
https://bugzilla.gnome.org/show_bug.cgi?id=768266
This patch changes the inheritance of GstVaapiDisplay to GstObject,
instead of GstVaapiMiniObject. In this way we can use all the available
infrastructure for GObject/GstObject, such as GstTracer, GIR, etc.
In addition, a new debug category for GstVaapiDisplay is created to make
it easier to trace debug messages. It is named "vaapidisplay" and it
traverses all the VA display backends (DRM, GLX, EGL, Wayland, ...).
This patch is a step forward to expose GstVaapiDisplay for users in a future
library.
https://bugzilla.gnome.org/show_bug.cgi?id=768266
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
GstVaapiDecode is a descendant of GstVaapiMiniObject, so we should
use its methods, even though it doesn't change functionality.
GstVaapiPixmap, GstVaapiTexture and GstVaapiWindow are descendants of
GstVaapiObject, hence their methods shall be used.
https://bugzilla.gnome.org/show_bug.cgi?id=772554
instances when created and reuse
This patch improves performance when glimagesink uploads a GL texture.
It caches the GstVaapiTexture instances in GstVaapiDisplay{GLX,EGL},
using an instance of GstVaapiTextureMap, so our internal texture
structure can be found by matching the GL texture id for each frame
upload process, avoiding the internal texture structure's creation and
its subsequent destruction.
https://bugzilla.gnome.org/show_bug.cgi?id=769293
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
Implement the GstVaapiTextureMap object, which caches VAAPI textures so
they can be reused. Internally it is a hash table.
Note that it is GstObject based rather than GstVaapiObject, as part of
the future conversion of most of the code to GstObject.
https://bugzilla.gnome.org/show_bug.cgi?id=769293
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
We are not getting enough compression for some streams, and encoded
frames end up bigger than the allocated size.
Assume a compression ratio of 4 instead, which should be good enough
for holding the frames.
https://bugzilla.gnome.org/show_bug.cgi?id=771528
While doing the mode-1 reference picture selection,
the circular buffer logic was not correctly setting the
refresh frame flags as per the VP9 spec.
Make sure refresh_flag[0] gets updated correctly after
each cycle of GST_VP9_REF_FRAMES.
https://bugzilla.gnome.org/show_bug.cgi?id=771507
When the format of a H.264 stream is AVC3, the SPS and PPS are inside the
stream, not in the codec_data, so the size of codec_data might be 7.
This patch reduces the minimal size of the codec_data buffer from 8 to 7.
https://bugzilla.gnome.org/show_bug.cgi?id=771441
In commit 2eb4394 the frame coding mode was verified for progressive
regardless of the profile. But the FCM is only valid in the advanced
profile. This patch checks for the advanced profile before verifying
the FCM for progressive.
https://bugzilla.gnome.org/show_bug.cgi?id=769250
In the earlier patch:
f31d9f3 decoder: vc1: Print error on interlaced content
Decoding would error out if the interlace flag was set in the
sequence bdu. This isn't quite right because a video can have this
flag set and yet not have any interlaced pictures.
Here instead we error out when either parsing a field bdu or
decoding a frame bdu which has fcm set to anything other than
progressive.
Signed-off-by: Scott D Phillips <scott.d.phillips@intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=769250
Validate that the fps numerator is non-zero so it can be used to
calculate the duration of the B frame.
Also, gst_util_uint64_scale() is used instead of plain arithmetic in
order to avoid overflows, underflows and loss of precision.
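For reference, the kind of computation involved, sketched (not the
exact decoder code):

    #include <gst/gst.h>

    /* Duration of one frame; gst_util_uint64_scale() avoids overflow
     * and loss of precision (fps_n was validated to be non-zero). */
    static GstClockTime
    frame_duration (gint fps_n, gint fps_d)
    {
      return gst_util_uint64_scale (GST_SECOND, fps_d, fps_n);
    }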
https://bugzilla.gnome.org/show_bug.cgi?id=768458
Since gstreamer-vaapi was upstreamed, the library is not a public
shared object anymore. But the EGL support depended on this dynamic
library, so the EGL support was broken.
This patch removes the dynamic library loading code and instantiates the
EGL display using either X11 or Wayland if available.
https://bugzilla.gnome.org/show_bug.cgi?id=767203
This reverts commit 1dbcc8a0e1 and commit
372a03a9e3.
While the dmabuf handle is exported, the derived image must exist,
otherwise the image's VA buffer is invalid and thus the dmabuf handle
is never released, leading to a file descriptor leak.
A planar (or some other) buffer allocation may fail in the driver;
then the wayland connection becomes invalid, unable to send requests
or receive any events. So we need to set up a new wayland connection if
there's an error detected on the cached wl_display.
https://bugzilla.gnome.org/show_bug.cgi?id=768761
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
Change defaults for max-bframes, cabac, and dct8x8 to be enabled
by default. This will cause the default profile to be high instead
of baseline. In most situations this is the right decision, and
the profile can still be lowered in the case of caps restrictions.
Signed-off-by: Scott D Phillips <scott.d.phillips@intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=757941
Without this, not all DPB pictures are released during flush,
because we used the global dpb_count variable for checking the
DPB fullness, which gets decremented in the dpb_remove_index()
routine during each loop iteration.
https://bugzilla.gnome.org/show_bug.cgi?id=767934
Clarify that vaapi context resets are never needed for vp9, but
that ensure_context() needs to be called when the size increases so
that new surfaces can be allocated.
Signed-off-by: Scott D Phillips <scott.d.phillips@intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=767474
Added two modes (as properties) for reference picture selection:
ref-mode-0: AltRef and GoldRef point to the most recent keyframe
and LastRef points to the previous frame.
ref-mode-1: the previous frame (n) as LastRef, the (n-1)th frame as
GoldRef and the (n-2)th frame as AltRef.
https://bugzilla.gnome.org/show_bug.cgi?id=766048
gst_video_info_set_format() and gst_video_info_from_caps() call, internally,
gst_video_info_init(), hence it is not required to call it before them. This
patch removes these spurious calls.
This is a workaround since vaapi-intel-driver doesn't have
support for B-frame encoding when utilizing the low-power-enc
hardware block.
Fixme: We should query VAConfigAttribEncMaxRefFrames
instead of blindly disabling B-frame support, and set the B/P frame
count, buffer pool size, etc. based on the query result.
https://bugzilla.gnome.org/show_bug.cgi?id=766050
Added a new property "low-power-enc" for enabling low power
encoding mode. Certain encoding tools may not be available
with the VAEntrypointEncSliceLP.
https://bugzilla.gnome.org/show_bug.cgi?id=766050
Call gst_vaapi_video_pool_finalize() in coded_buffer_pool_finalize().
Otherwise it is not called when the pool is destroyed and all objects
referenced by the GstVaapiVideoPool are never released.
https://bugzilla.gnome.org/show_bug.cgi?id=764993
If gst_vaapi_image_new_with_image() fails, the created derived image
should be destroyed, otherwise the surface cannot be processed because
it is being used.
https://bugzilla.gnome.org/show_bug.cgi?id=764607
The subsampling_x, subsampling_y, bit_depth, color_space and
color_range fields are moved from GstVp9FrameHdr to the global
GstVp9Parser structure. These fields are only present in a keyframe or
an intra-only frame; there is no need to duplicate them for inter-frames.
https://bugzilla.gnome.org/show_bug.cgi?id=764082
The array_completeness, reserved bit and num_nal_units fields in
HEVCDecoderConfigurationRecord will be present for each VPS/SPS/PPS
array list, but not for each occurrence of similar headers.
https://bugzilla.gnome.org/show_bug.cgi?id=764274
The P010 video format is the native format used by the vaapi intel
driver for HEVC Main10 decode. Add support for planes and images of
this video format.
https://bugzilla.gnome.org/show_bug.cgi?id=759181
The colour depth is clamped to 24 when it is not equal to {15,16,24,32}.
But this fails with the NVIDIA binary driver, as it doesn't advertise a
TrueColor visual with a depth of 24 (only 30 and 32). Allowing the depth
to be 30 lets everything work as expected.
https://bugzilla.gnome.org/show_bug.cgi?id=764256
Copy the data into the dependent slice segment header from the
corresponding independent slice segment header during parsing.
Previously the reference to the "previous" independent header was
held through the parsing phase and then dereferenced during the
decoding phase. This caused all dependent headers to be populated
with the data of the AU's last independent header instead of the
proper corresponding header.
https://bugzilla.gnome.org/show_bug.cgi?id=762352
Changes since v1:
- Reworded commit message
gst_vaapi_buffer_proxy_{acquire_handle,release_handle,finalize,class}
functions are used only when libva's API version is greater than 0.36.0
This patch guards those functions completely rather than just their
content. The patch is a continuation of commit 38f8fea4
Original-patch-by: Vineeth TM <vineeth.tm@samsung.com>
https://bugzilla.gnome.org/show_bug.cgi?id=762055
After setting the release flags, the compiler warns about a couple of
possibly uninitialized variables.
Also, a couple of set-but-unused variables are marked as unused, because
they are only used for assertions.
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
As part of the upstreaming process of gstreamer-vaapi into the GStreamer
umbrella, we need to comply with the project's code style. This meant
changing a lot of code.
It was decided to use a single massive patch to update the code style.
I would like to apologize to the original developers of this code for
the history breakage.
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
As gst-indent generated ugly code in these cases, this patch changes
the used idiom into another one.
No functional changes were introduced.
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
Somehow this didn't show up earlier, but gst_adapter_prev_timestamp()
has been deprecated since GStreamer 1.0.
This patch replaces it with gst_adapter_prev_pts().
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
This is a very pathological situation: when we have a HEVC encoder but
not a HEVC decoder.
The encoder needs functions that are only available when the decoder is
enabled.
This patch moves the utils functions into the generic sources, such as the
rest of the utils.
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
As gstreamer-vaapi now only supports from GStreamer 1.6, this patch removes
all the old GStreamer version guards.
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
Since we don't install libraries anymore, it makes no sense to keep
versioning them according to the gstreamer's version.
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
This basically reverts 62c3888b76 (wayland:
decouple wl_buffer from frame).
Otherwise the frame may be overwritten while it is still used by the
compositor:
The frame done callback (frame_done_callback()) is called, when the
compositor is done processing the frame and hands it to the hardware.
The buffer release callback (frame_release_callback()) is called when the
buffer memory is no longer used.
This can be quite some time later: E.g. if weston (with the DRM backend)
puts the buffer on a hardware plane, then the buffer release callback is
called when the kernel is done with the buffer. This is usually when the
next frame is shown, so most likely after the frame done callback for the
next frame!
Since 70eff01d36 "wayland: sync() when
destroy()" the mentioned possible leak should no longer be a problem, so
reverting this change should cause no leaking buffers.
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=758848
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
Only supporting vaapidecode ! vaapisink combination for now.
Missing dependencies:
1: No support for P010 video format in GStreamer
2: No support for P010 vaGetImage()/vaPutImage() in vaapi-intel-driver
3: As a result of 1 & 2, we have no support for VAAPI video memory
mapping through GstVideoMeta.
Right now we only set the chroma format (YUV420 with more than 8 bits per
channel) for the surface pool, keeping GST_VIDEO_FORMAT as ENCODED. The
underlying format
of the surfaces is implementation (driver) defined, which is P010.
This new API, gst_vaapi_surface_pool_new_with_chroma_type(), is for
creating a new GstVaapiVideoPool of GstVaapiSurfaces with the specified
chroma type and dimensions. The underlying format of the surfaces is
implementation (driver) defined.
When receiving the texture from the application or the video sink, we
must know its size and border. To query the texture, the API changes
according to the OpenGL version used in the GL context of the
application/vsink.
This patch checks the current context API type and queries the texture
according to the detected API.
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=753099
gst_vaapi_texture_glx_new_wrapped() only handles the GL_TEXTURE_2D
target and the formats GL_RGBA or GL_BGRA.
This patch adds a debug-time verification of those values.
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=753099
In order to know which OpenGL API to use, we must detect the API type
of the current context. This patch adds the function
gl_get_current_api(), which returns the OpenGL API type.
This function is an adaptation of gst_gl_context_get_current_gl_api() from
GstGL.
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=753099
We always set GST_VAAPI_PICTURE_FLAG_SKIPPED for DECODE_ONLY frames and
the gstvaapidecoder base class is responsible for handling those frames
later on. There is no need for an explicit verification of the frame
header's show_frame in order to do picture outputting.
Set the crop rectangle if:
- there is a display_width and display_height different from the actual
width/height, or
- the changed resolution is less than the actual configured dimension of
the surfaces.
Unlike other decoders, the vp9 decoder doesn't need to reset the
whole context and surfaces for each resolution change. A context
reset is only needed if the resolution of any frame is greater than
what was actually configured. There are streams where a bigger
resolution is set in the ivf or webm header but the actual resolution
of all frames is less. Also, it is possible to have inter-prediction
between these multi-resolution frames.
clang complains about a couple of variables and one label which are
not used. This patch removes them.
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=757958
Since gstvaapidisplay_glx.h does not expose gl.h/glx.h structures, it
is not required to include them in the header. It is also not required
to include them in gstvaapidisplay_glx.c, since gstvaapiutils_glx.h
includes them and exposes their structures (e.g. GLXPixmap).
Nonetheless, neither glext.h nor glxext.h need to be included; they are
already included conditionally by gl.h and glx.h, respectively.
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=757577
When the GstVaapiParserInfoH264 is allocated, the memory is not initialized,
so it contains random data.
When gst_h264_parser_parse_pps() fails, the PPS structure keeps slice_group_id
pointer uninitialized, leading to a segmentation fault when the memory is
freed.
This patch prevents this by initializing the slice_group_id before the PPS
parsing.
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=754845
Explicit flushing of the DPB for EOS and EOB NAL decoding is wrong;
dpb_add() itself will handle the flushing (if needed) of the DPB
for end of sequence and end of bitstream.
https://bugzilla.gnome.org/show_bug.cgi?id=754010
This fix is based on the V3 version of the spec, which was missing in older versions.
When the current picture has PicOutputFlag equal to 1, for each picture in the
DPB that is marked as "needed for output" and follows the current picture in output order,
the associated variable PicLatencyCount is set equal to PicLatencyCount + 1 (C.5.2.3).
https://bugzilla.gnome.org/show_bug.cgi?id=754010
The default scan order of scaling lists is up-right-diagonal
as per the HEVC specification. Use the newly implemented
uprightdiagonal_to_raster conversion codecparser APIs to
get the scaling_list values in raster order, which is
what the VA intel driver requires.
Setting the sink to flushing causes gst_vaapi_window_wayland_sync() to
return FALSE, which makes gst_vaapi_window_wayland_render() return
FALSE, which ends up posting an ERROR message in
gst_vaapisink_show_frame_unlocked(). The solution is to just return TRUE
in the EBUSY case.
https://bugzilla.gnome.org/show_bug.cgi?id=753598
First, the function gst_vaapi_video_format_get_best_native() is added,
which returns the best native format that matches a particular chroma
type: YUV 4:2:0 -> NV12, YUV 4:2:2 -> YUY2, YUV 4:0:0 -> Y800.
The RGB32 chroma and the encoded format map to NV12 too.
That format is used to configure, initially, the surface pool for the
allocator.
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=744042
Before, only the YUV420 color space was supported. With this patch, the
encoder is queried to know the supported formats, and the YUV422 color
space is admitted if it is available.
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=744042
This is necessary for finding the ChromaOffsetL0/ChromaOffsetL1
prediction weight table values without using any hard coding.
Fixme: We don't have a parser API for sps_range_extension, so a zero
value is assumed for high_precision_offsets_enabled_flag.
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
Based on ITU-T Rec. H.265 (04/2015): 7-56.
This was a wrong equation in Rec. H.265 (04/2013): 7-44.
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
HACK: This is a work-around to identify some main profile streams
having a wrong profile_idc.
There are some wrongly encoded main profile streams (eg: ENTP_C_LG_3.bin)
which don't have any of the profile_idc values mentioned in Annex-A;
instead, general_profile_idc has been set to zero while
general_profile_compatibility_flag[general_profile_idc] is TRUE.
Assume them to be MAIN profile for now.
https://bugzilla.gnome.org/show_bug.cgi?id=753226
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
We are calculating the dpb size based on max_dec_pic_buffering.
But if there is more than one temporal sublayer, we are supposed
to use max_dec_pic_buffering[max_sub_layers_minus1] for the dpb
size calculation (assuming HighestTid as max_sub_layers_minus1).
Sample streams: TSCL_A_VIDYO_5.bin, TSCL_B_VIDYO_4.bin
https://bugzilla.gnome.org/show_bug.cgi?id=753226
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
The decoding process for reference picture list construction needs to
be invoked only for P and B slices, and the value of slice_type for a
dependent slice segment should be taken from the previous independent
slice segment header of the same picture.
https://bugzilla.gnome.org/show_bug.cgi?id=753226
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
When we query for VAConfigAttribEncJPEG, we get a value which packs the
VAConfigAttribValEncJPEG structure, but we did not assign it. This patch
assigns the returned value to the attribute.
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=744042
Framerate 0/1 is valid, and it is particularly useful for picture
encoding, such as JPEG. This patch makes the encoder admit that
framerate.
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=744042
Improve closure of gaps in frame_num by strictly following and trying
to fill them with previous reference frames. So, they are now tracked,
thus avoiding the insertion of dummy ("greenish") frames.
If the new picture to be added to the DPB is not a first field, then
it shall be the second field of the previous picture that was added
before.
This removes the need for dpb_find_picture() now that we track the
immediately preceding decoded picture, in decode order.
When a dummy "other-field" is inserted, it is assumed to inherit the
reference flags from the first field, and the sliding window decoded
reference picture marking process is also executed so that corrupted
frames are moved out as early as possible.
While doing so, we also try to output frames that now contain a single
valid field picture, prior to inserting any other picture into the DPB.
Note: this may be superfluous currently based on the fact that dpb_add()
combines the two most recent pairable fields, but this process would be
further simplified later on.
Mark the picture as "corrupted" if it is reconstructed from corrupted
references or if those references are fake, e.g. resulting from lost
frames.
This is useful for notifying the upper layer, or downstream elements,
that the decoded frame may contain artefacts.
https://bugzilla.gnome.org/show_bug.cgi?id=703921
Add initial infrastructure in core codec library and vaapidecode to mark
corrupted frames as such. A corrupted frame is such a frame that was
reconstructed from invalid references for instance.
https://bugzilla.gnome.org/show_bug.cgi?id=751434
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
This patch fixes the auto-plugging problem in GStreamer 1.2 and 1.4.
Right now there is no primary-ranked parser for VC1 and the demuxers
deliver caps without specifying the profile. This situation is not an
issue for avdec_vc1, but it is for vaapidecode, which refuses to
negotiate without an explicit profile defined in the negotiated caps.
Nonetheless, in GStreamer 1.5 it seems not to be a problem, since the
negotiation admits trying out caps subsets.
This patch solves the issue by ignoring the profile in the caps
negotiation. For GStreamer < 1.5 the profile string is not handled, so
the auto-plugging gets done without the VC1 parser, as happens in
GStreamer 1.5.
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
Fix the regression introduced in commit eb465fb.
VAProcFilterSkinToneEnhancement is available from VA >= 0.36.
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
One buffering_period() SEI message shall be present in every IDR access unit
when NalHrdBpPresentFlag is inferred to be equal to 1. This is the case when we
use a non-CQP mode, e.g. CBR. In other words, when
nal_hrd_parameters_present_flag is set to 1.
One picture_timing() SEI message shall be present in every access unit
if CpbDpbDelaysPresentFlag is equal to 1 or pic_struct_present_flag is
equal to 1.
https://bugzilla.gnome.org/show_bug.cgi?id=722734
https://bugzilla.gnome.org/show_bug.cgi?id=751831
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
This is a work-around to satisfy the va-intel-driver.
Normalize the quality factor and scale the QM values (only for packed
header generation) similar to what the VA intel driver is doing.
Otherwise the generated packed headers will be wrong, since the driver
itself is scaling the QM values using the normalized quality factor.
https://bugzilla.gnome.org/show_bug.cgi?id=748335
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
Fix uninitialized variables when decoding SPS and PPS NAL units from
"codec-data" buffers. This is particularly important when seeking ops
are involved, and the new persistent states are used more often.
https://bugzilla.gnome.org/show_bug.cgi?id=750094
The approximation of a 6 times compression ratio might not
work in all cases. Especially when enabling I frames.
Provide large enough size for coded-buffer creation.
The approximation of 4 times compression ratio will not
work in all cases. Especially when enabling I frames.
Provide large enough size for coded-buffer creation.
Implement decoding process for gaps in frame_num (8.5.2). This
also somewhat supports unintentional loss of pictures.
https://bugzilla.gnome.org/show_bug.cgi?id=745048
https://bugzilla.gnome.org/show_bug.cgi?id=703921
Original-patch-by: Wind Yuan <feng.yuan@intel.com>
[fixed derivation of POC, ensured clone is valid for reference,
actually fixed detection of gaps in FrameNum by PrevRefFrameNum]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Try to identify missing first fields too, thus disregarding any
intermediate gaps in frames. We also assume that we keep the same
field sequence, i.e. if previous frames were in top-field-first
(TFF) order, then so are subsequent frames.
Note that insertion of dummy first fields need to operate in two
steps: (i) create the original first field that the current field
will inherit from, and (ii) submit that field into the DPB prior
to initializing the current (other) field POC values but after any
reference flag was set. i.e. copy reference flags from the child
(other field) to the parent (first field).
https://bugzilla.gnome.org/show_bug.cgi?id=745048
Interlaced H.264 video frames always have two fields to decode and
display. However, in some cases, e.g. packet loss, one of the field
can be missing. This perturbs the reference picture marking process,
whereby the number of references available in DPB no longer matches
the expected value.
This patch adds initial support for missing field within a decoded
frame. The current strategy taken is to find out the nearest field,
by POC value, and with the same parity.
https://bugzilla.gnome.org/show_bug.cgi?id=745048
Try to maintain a "top-field-first" (TFF) flag, even if the H.264 standard
does not mandate it. This will be useful for tracking missing fields, and
also for more correct _split_fields() implementation for frames in the DPB.
Don't try to decode pictures until the first I-frame is received within
the currently active sequence. There is no point in decoding and then
displaying frames with artifacts.
Fix decoding of end_of_seq() NAL unit so that to not submit the current
picture for decoding again. This is pretty vintage code that dates back
before the existing of the whole decoder units machinery.
One issue that could arise if that code was kept is that we could
have submitted a picture, and subsequently a GstVideoCodec frame, twice.
Once without the decode_only flag set, and once with that flag set. The
end result is that the GstVideoDecoder would release the codec frame
twice, thus releasing stale data.
In short, the piece of code removed by this patch has, for one, been
completely obsolete for a while, and is, secondly, error-prone in
corner cases.
If the GOP temporal sequence number (TSN) is interpolated from a valid
PTS, then we need to compensate that PTS corresponding to the start of
GOP with the next picture to be decoded, which shall be an I-frame,
based on its sequence number.
https://bugzilla.gnome.org/show_bug.cgi?id=748676
Before pushing a new frame, the render() method calls sync() to flush
the pending frames. Nonetheless, the last pushed frame never gets
rendered, leading to a memory leak too.
This patch calls sync() in destroy() to flush the pending frames before
destroying the window.
Also an is_cancelled flag is added. This flag tells us not to flush the
event queue again when the method failed previously or was cancelled by
the user.
https://bugzilla.gnome.org/show_bug.cgi?id=749078
Otherwise wl_display_dispatch_queue() might prevent the pipeline from
shutting down. This can happen e.g. if the wayland compositor exits while
the pipeline is running.
Changes:
* renamed unlock()/unlock_stop() to unblock()/unblock_cancel() in gstvaapiwindow
* split the patch removing wl_display_dispatch_queue()
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=747492
https://bugzilla.gnome.org/show_bug.cgi?id=749078
wl_display_dispatch_queue() might prevent the pipeline from shutting
down. This can happen e.g. if the wayland compositor exits while the
pipeline is running.
This patch replaces it with these steps:
- With wl_display_prepare_read() all threads announce their intention
to read.
- wl_display_read_events() is thread safe. One thread reads; the others
wait for it to finish.
- With wl_display_dispatch_queue_pending() each thread dispatches its
own events.
wl_display_dispatch_queue_pending() was defined since wayland 1.0.2
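A simplified sketch of the pattern described above (error handling,
flushing and polling of the display fd are omitted; names are
illustrative):

    #include <wayland-client.h>

    static void
    dispatch_own_queue (struct wl_display *display,
        struct wl_event_queue *queue)
    {
      /* announce the intention to read; if events are already pending
       * for the default queue, skip reading */
      if (wl_display_prepare_read (display) == 0) {
        /* one thread reads the events; the others block here until it
         * is done */
        wl_display_read_events (display);
      }
      /* each thread dispatches only its own pending events */
      wl_display_dispatch_queue_pending (display, queue);
    }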
Original-patch-by: Michael Olbrich <m.olbrich@pengutronix.de>
* stripped out the unlock() unlock_stop() logic
* stripped out the poll handling
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=749078
https://bugzilla.gnome.org/show_bug.cgi?id=747492
Since 'frame' in the private data means the last frame sent, it would
be semantically better to use last_frame.
Also, this patch makes use of g_atomic_pointer_{compare_and_exchange, set}()
functions.
https://bugzilla.gnome.org/show_bug.cgi?id=749078
The Wayland window has a pointer to the last pushed frame and uses it
to set the flag for stopping the queue dispatch loop. This may lead to
memory leaks, since we are not keeping track of all the queued frame
structures.
This patch removes the last-pushed-frame pointer and changes the binary
flag into an atomic counter, keeping track of the number of queued
frames and using it for the queue dispatch loop.
https://bugzilla.gnome.org/show_bug.cgi?id=749078
This patch takes the wayland buffer out of the frame structure. The
buffer is queued to wayland and destroyed in the "release" callback. The
frame is freed in the surface's "done" callback.
In this way a buffer may be leaked, but not the whole frame structure.
- the surface 'done' callback is used to throttle the rendering operation
and to free the frame, but not the buffer.
- the buffer 'release' callback is used to destroy the wl_buffer.
Original-patch-by: Zhao Halley <halley.zhao@intel.com>
* code rebase
* kept the event_queue for the buffer's proxy
Signed-off-by: Víctor Manuel Jáquez Leal <victorx.jaquez@intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=749078
This patch fixes several issues found when running the `make distcheck`
target:
- In commit c561b8da, the update of gstcompat.h in Makefile.am was
forgotten.
- In commit c5756a91, adding simple_encoder_source_h to EXTRA_DIST was
forgotten.
- vpx.build.stamp is not generated at all, only vpx.configure.stamp.
- The make target distcleancheck failed because some autogenerated files
were not handled with the DISTCLEANFILES variable.
Note: `make distcheck -jXX` is not currently supported.
GST_VAAPI_ENCODER_STATUS_NO_SURFACE and GST_VAAPI_ENCODER_STATUS_NO_BUFFER
are not errors, so they should not carry the ERROR namespace.
This patch fixes this typo in the documentation.
On s390x, guintptr and GstVaapiID are not compatible types. The
implementation of gst_vaapi_window_new_internal() and all its callers
seem to assume that its third argument is a GstVaapiID, while the
header gives it guintptr type.
https://bugzilla.gnome.org/show_bug.cgi?id=744559
Since bug #745728 was fixed the oldest supported version of GStreamer is
1.2. That GStreamer release requires glib 2.32, so we can upgrade our
requirement too.
This patch changes the required version of glib in configure.ac and removes
the hacks in glibcompat.h
https://bugzilla.gnome.org/show_bug.cgi?id=748698
This patch only intends to improve readability: in the method
gst_vaapi_window_wayland_sync() the if/do instructions are squashed into
a single while loop.
It also renames the frame_redraw_callback() callback to
frame_done_callback(), which is a bit more aligned with the Wayland API.
The Wayland compositor may still use the buffer when the frame done
callback is called.
This patch defers destroying the frame (which contains the buffer) until
the release callback is called. The draw termination callback only
controls the display queue dispatching.
Signed-off-by: Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
https://bugzilla.gnome.org/show_bug.cgi?id=747492
Based upon the value of uniform_spacing_flag in the Picture Parameter
Set, the tile column widths and tile row heights should be calculated.
Equations: 6-1, 6-2
Tiled video Descriptions: 7.3.2.3, 7.4.3.3
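For reference, the uniform-spacing case of equations 6-1/6-2 reduces
to the following (a sketch using spec-style names, not the exact
decoder code; integer division is intended):

    #include <glib.h>

    static void
    get_uniform_tile_sizes (guint pic_width_in_ctbs, guint pic_height_in_ctbs,
        guint num_tile_columns, guint num_tile_rows,
        guint * col_width, guint * row_height)
    {
      guint i, j;

      for (i = 0; i < num_tile_columns; i++)
        col_width[i] = ((i + 1) * pic_width_in_ctbs) / num_tile_columns -
            (i * pic_width_in_ctbs) / num_tile_columns;

      for (j = 0; j < num_tile_rows; j++)
        row_height[j] = ((j + 1) * pic_height_in_ctbs) / num_tile_rows -
            (j * pic_height_in_ctbs) / num_tile_rows;
    }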
-- Set NoRaslOutputFlag based on EOS and EOB Nal units
-- Fix PicOutputFlag setting for RASL picture
-- Fix prev_poc_lsb/prev_poc_msb calculation
-- Drop the RASL frames if NoRaslOutputFlag is TRUE for the associated IRAP picture
-- Fixed a couple of crashes and added cosmetics
There is a race condition where g_drm_device_type can be left set to
DRM_DEVICE_RENDERNODES when it shouldn't.
If thread 1 comes in and falls into the last else statement, setting up
both RENDERNODES and LEGACY types, and begins to process the first type
(RENDERNODES), it sets g_drm_device_type = RENDERNODES.
Now when thread 2 comes in and sees g_drm_device_type is RENDERNODES, it queues
up that type to be tried but then encounters the lock and has to wait until the
first thread finishes. Once the lock is acquired it will then proceed to ONLY try
RENDERNODES and fail it. But it doesn't try LEGACY. And from then on, all future
attempts will only try RENDERNODES.
So to avoid this situation I have simply moved the acquisition of the lock higher
up in the attached patch.
https://bugzilla.gnome.org/show_bug.cgi?id=747914
The video pool can be accessed with the display lock held, for example
when releasing a buffer from inside vaapisink_render, but allocating
a new object may also take the display lock. This means a possible
deadlock.
https://bugzilla.gnome.org/show_bug.cgi?id=747944
The support for buffer exports in VA-API was added in version 0.36.
These interfaces are for interop with EGL, OpenCL, etc.
GStreamer-VAAPI uses them for a dmabuf memory allocator. Though,
gstreamer-vaapi has to support VA-API versions starting from 0.30.4,
which doesn't support them.
This patch guards all the buffer export handling (and the dmabuf
allocator) if the detected VA-API version is below 0.36.
https://bugzilla.gnome.org/show_bug.cgi?id=746405
The member size in GstMpeg4Packet is a gsize, which is unsigned and
cannot be less than zero. Hence this pre-condition test is a no-op.
This patch removes that code.
https://bugzilla.gnome.org/show_bug.cgi?id=747312
slice_type in slice_param is defined as (char *), but it is compared against a
signed integer. clang complains about this comparison.
This patch casts the variable.
https://bugzilla.gnome.org/show_bug.cgi?id=747312
The symbol GstVaapiCodedBuffer is already defined in
gst-libs/gst/vaapi/gstvaapicodedbuffer.h which is loaded, at the end, by
gstvaapiencoder_objects.h. Clang complains about the symbol re-definition.
This patch removes that redefinition.
https://bugzilla.gnome.org/show_bug.cgi?id=747312
The member value in frame_rate_tab is a float, so the result of the
abs() function should be a float too. But abs() only handles integers.
This patch replaces abs() with fabsf() to correctly handle the possible
float values.
https://bugzilla.gnome.org/show_bug.cgi?id=747312
The purpose of gstcompat.h is to paper over the API differences between
gstreamer-1.0 and gstreamer-0.10. Since gstreamer-0.10 is obsolete, the
code in this compatibility layer shall be removed.
Nevertheless, the gstcompat.h header should be kept in case new
incompatibilities appear in the future, but it shall live in gst/vaapi,
not in gst-libs.
This patch removes the crumbs defined in gstcompat.h and moves it to
gst/vaapi. In order to avoid layer violations, gstcompat.h includes
sysdeps.h, and all the includes of sysdeps.h in gst/vaapi are replaced
with gstcompat.h.
https://bugzilla.gnome.org/show_bug.cgi?id=745728
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
This library was intended to add the base classes for video decoders
which were not included in gstreamer-0.10.
Since the support of gstreamer-0.10 is deprecated, those classes are no
longer required, thus the whole library is removed.
https://bugzilla.gnome.org/show_bug.cgi?id=745728
https://bugzilla.gnome.org/show_bug.cgi?id=732666
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Add support for H.264 MVC Multiview High profile encoding with
more than 2 views. All views within the same access unit are
provided in increasing order of view order index (VOIdx).
Up to 10 views are supported for now.
A new property "view-ids" has been provided for the plugins to
set the view ids (which is an array of guint values) to be used
for MVC encoding.
https://bugzilla.gnome.org/show_bug.cgi?id=732453
Add support for GstVideoGLTextureOrientation modes. In particular,
add orientation flags to the GstVaapiTexture wrapper and the GLX
implementations. Default mode is that texture memory is laid out
with top lines first, left row first. Flags indicate whether the
X or Y axis need to be inverted.
Add GstVaapiTextureEGL abstraction that can create its own GL texture,
or import a foreign allocated one, while still allowing updates from a
VA surface.
Add helpers to import EGLImage objects into VA surfaces. There are
two operational modes: (i) gst_vaapi_surface_new_from_egl_image(),
which allows for implicit conversion from EGLImage to a VA surface
in native video format, and (ii) gst_vaapi_surface_new_with_egl_image(),
which exactly wraps the source EGLImage, typically in RGBA format
with linear storage.
Note: in case of (i), the EGLImage can be disposed right after the
VA surface creation call, unlike in (ii) where the user shall ensure
that the EGLImage is live until the associated VA surface is no longer
needed.
https://bugzilla.gnome.org/show_bug.cgi?id=743847
Add initial support for EGL to libgstvaapi core library. The target
display server and the desired OpenGL API can be programmatically
selected at run-time.
A comprehensive set of EGL utilities is provided to support those
dynamic selection needs, but also most importantly to ensure that
the GL command stream is executed from within a single thread.
https://bugzilla.gnome.org/show_bug.cgi?id=743846
Added rounding control handling for VC1 Simple and Main profiles,
based on the VC1 standard spec: section 8.3.7
https://bugzilla.gnome.org/show_bug.cgi?id=743958
Signed-off-by: Lim Siew Hoon <siew.hoon.lim@intel.com>
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
In practice we should be able to support more formats, e.g. the
JPEG encoder can support YUV422, RGBA and more.
But this is causing more issues that need a proper fix here and there.
Otherwise the condition could become true before the lock
is taken and the g_cond_signal() could be called
before the g_cond_wait(), so the g_cond_wait() is never
awoken.
https://bugzilla.gnome.org/show_bug.cgi?id=740645
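For reference, the safe GLib pattern checks the predicate under the mutex
and waits in a loop; this is a generic sketch, not the exact code being
fixed:

    #include <glib.h>

    static GMutex   mutex;
    static GCond    cond;
    static gboolean ready;   /* the condition, protected by the mutex */

    static void
    wait_until_ready (void)
    {
      g_mutex_lock (&mutex);
      /* Re-check the predicate in a loop: g_cond_wait() releases the
       * mutex while waiting and may also wake up spuriously. */
      while (!ready)
        g_cond_wait (&cond, &mutex);
      g_mutex_unlock (&mutex);
    }

    static void
    signal_ready (void)
    {
      g_mutex_lock (&mutex);
      ready = TRUE;
      /* Signal with the mutex held, so the waiter cannot miss it. */
      g_cond_signal (&cond);
      g_mutex_unlock (&mutex);
    }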
Add support for GEM buffer imports. This is useful for VA/EGL interop
with legacy Mesa implementations, or when it is desired or required to
support outbound textures for instance.
https://bugzilla.gnome.org/show_bug.cgi?id=736718
Add new gst_vaapi_surface_new_with_dma_buf_handle() helper function
to allow for creating VA surfaces from a foreign DRM PRIME fd. The
resulting VA surface owns the supplied buffer handle.
https://bugzilla.gnome.org/show_bug.cgi?id=735362
Add gst_vaapi_surface_new_from_buffer_proxy() helper function to
create a VA surface from an external buffer provided through the
new GstVaapiBufferProxy object.
Add support for GEM buffer exports. This will only work with VA drivers
based off libdrm, e.g. the Intel HD Graphics VA driver. This is needed
to support interop with EGL and the "Desktop" GL specification. Indeed,
the EXT_image_dma_buf_import extension is not going to be supported in
Desktop GL, due to the lack of support for GL_TEXTURE_EXTERNAL_OES targets
there.
This is useful for implementing VA/EGL interop with legacy Mesa stacks,
in Desktop OpenGL context.
https://bugzilla.gnome.org/show_bug.cgi?id=736717
Use the new VA buffer export APIs to allow for a VA surface to be
exposed as a plain PRIME fd. This is in view to simplifying interop
with EGL or OpenCL for instance.
https://bugzilla.gnome.org/show_bug.cgi?id=735364
The VA buffer export APIs work for a particular lifetime, starting from
vaAcquireBufferHandle() and ending with vaReleaseBufferHandle(). As such,
it is much more convenient to support implicit releases by simply
releasing the handle when the refcount reaches zero.
https://bugzilla.gnome.org/show_bug.cgi?id=736721
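A rough sketch of the idea, using the libva export entry points; the
proxy type and its fields are illustrative, not the library's actual
implementation:

    #include <glib.h>
    #include <va/va.h>

    /* Illustrative refcounted wrapper around an exported VA buffer handle. */
    typedef struct {
      gint         ref_count;
      VADisplay    display;
      VABufferID   buf_id;
      VABufferInfo buf_info;
    } BufferHandleProxy;

    static BufferHandleProxy *
    buffer_handle_proxy_new (VADisplay display, VABufferID buf_id,
        guint32 mem_type)
    {
      BufferHandleProxy *proxy = g_new0 (BufferHandleProxy, 1);

      proxy->ref_count = 1;
      proxy->display = display;
      proxy->buf_id = buf_id;
      proxy->buf_info.mem_type = mem_type;
      if (vaAcquireBufferHandle (display, buf_id, &proxy->buf_info)
          != VA_STATUS_SUCCESS) {
        g_free (proxy);
        return NULL;
      }
      return proxy;
    }

    static BufferHandleProxy *
    buffer_handle_proxy_ref (BufferHandleProxy *proxy)
    {
      g_atomic_int_inc (&proxy->ref_count);
      return proxy;
    }

    static void
    buffer_handle_proxy_unref (BufferHandleProxy *proxy)
    {
      if (g_atomic_int_dec_and_test (&proxy->ref_count)) {
        /* Implicit release: the export lifetime ends with the last ref. */
        vaReleaseBufferHandle (proxy->display, proxy->buf_id);
        g_free (proxy);
      }
    }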
Rework the surface pool allocation helpers to allow for a simple form,
e.g. gst_vaapi_surface_pool_new(format, width, height); and a somewhat
more elaborate/flexible form with optional allocation flags and a
precise GstVideoInfo specification.
This is an API/ABI change, and SONAME version needs to be bumped.
Add GstVaapiDisplay::get_{visual_id,colormap}() helpers to help determine
the best suitable window visual id and colormap. This is an indirection in
view to supporting EGL and custom/generic replacements.
Add GstVaapiWindowClass::get_colormap() hook to help determine the
currently active colormap bound to the supplied window, or actually
create it if it does not exist yet.
Add GstVaapiWindowClass::get_visual_id() function hook to help find
the best suitable visual id for the supplied window. While doing so,
also simplify the process by which an X11 window is created with a
desired Visual, i.e. now use a visual id instead of a Visual object.
Add a new generic helper function gst_vaapi_window_new() to create
a window without requiring the caller to check the display type,
i.e. internally, there is now a GstVaapiDisplayClass hook to create
windows, and the actual backend implementation fills it in.
Add a new generic helper function, gst_vaapi_texture_new_wrapped().
This is a simplification in view to supporting EGL.
Add gst_vaapi_display_has_opengl() helper function to help determine
whether the display can support an OpenGL context bound to it, i.e.
if the class is of type GST_VAAPI_DISPLAY_TYPE_GLX.
Make gst_vaapi_display_get_display_type() return the actual VA display
type. Conversely, add a gst_vaapi_display_get_class_type() function to
return the type of the GstVaapiDisplay instance. The former is used to
identify the display server on which the application is running, and
the latter to identify the original object class.
Record the underlying native display instance into the toplevel
GstVaapiDisplay object. This is useful for fast lookups to the
underlying native display, e.g. for creating an EGL display.
Add new generic helper functions gst_vaapi_texture_new_wrapped()
and gst_vaapi_texture_new() to create a texture without requiring
the caller to needlessly check the display type, i.e. internally,
there is now a GstVaapiDisplayClass hook to create textures, and
the actual backend implementation fills it in.
This is a simplification in view to supporting EGL.
GstVaapiTexture is a generic abstraction that could be moved to the
core libgstvaapi library. While doing this, no extra dependency needs
to be added. This means that a GstVaapiTextureClass is now available
for any specific code that needs to be added, e.g. creation of the
underlying GL texture objects, or backend dependent ways to upload
a surface to the texture object.
Generic OpenGL data types (GLuint, GLenum) are also replaced with a
plain guint.
https://bugzilla.gnome.org/show_bug.cgi?id=736715
The VA/GLX interfaces are obsolete. They used to exist for XvBA, and
ease of use, but they had other caveats to deal with. It's now better
to move on to legacy mode, whereby VA/GLX interop is to be provided
through (i) X11 Pixmap, and (ii) other modern means of buffer sharing.
https://bugzilla.gnome.org/show_bug.cgi?id=736711
The gst_vaapi_texture_put_surface() function is missing a crop_rect
argument that would be used during transfer for cropping the source
surface to the desired dimensions.
Note: from a user's point of view, the GstVaapiTexture object should be
created with the cropped size. That's the default behaviour in software
decoding pipelines that we need to cope with.
This is an API/ABI change, and SONAME version needs to be bumped.
https://bugzilla.gnome.org/show_bug.cgi?id=736712
Add new gst_vaapi_surface_new_full() helper function that allocates
VA surface from a GstVideoInfo template in argument. Additional flags
may include ways to
- allocate linear storage (GST_VAAPI_SURFACE_ALLOC_FLAG_LINEAR_STORAGE) ;
- allocate with fixed strides (GST_VAAPI_SURFACE_ALLOC_FLAG_FIXED_STRIDES) ;
- allocate with fixed offsets (GST_VAAPI_SURFACE_ALLOC_FLAG_FIXED_OFFSETS).
Add new gst_vaapi_surface_proxy_new() helper to wrap a surface into
a proxy. The main use case for that is to convey additional information
at the proxy level that would not be suitable to the plain surface.
Re-introduce a GST_VAAPI_ID_INVALID value that represents
a non-zero and invalid id. This is useful to have a value
that is still invalid for cases where zero could actually
be a valid value.
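For illustration, such a sentinel is typically an all-ones id, distinct
from zero; the typedef and the exact value below are assumptions, not a
restatement of the library's definition:

    #include <glib.h>

    /* Hypothetical illustration: a non-zero "invalid" id lets zero remain
     * a usable, valid value. */
    typedef guintptr GstVaapiID;

    #define GST_VAAPI_ID_INVALID ((GstVaapiID) (guintptr) -1)

    static gboolean
    id_is_valid (GstVaapiID id)
    {
      return id != GST_VAAPI_ID_INVALID;
    }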
Make it possible to have all libgstvaapi backends (libs) access to a
common GstVaapiMiniObject API and implementation. This is a minor step
towards full exposure when needed, but restrict it to libgstvaapi at
this time.
Really report sample aspect ratio (SAR) as present, and make it match
what we have obtained from the user as pixel-aspect-ratio (PAR). i.e.
really make sure VUI parameter aspect_ratio_info_present_flag is set
to TRUE and that the indication from aspect_ratio_idc is Extended_SAR.
This is a leftover from git commit a12662f.
https://bugzilla.gnome.org/show_bug.cgi?id=740360
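A minimal sketch of the intended VUI settings; the container struct and
field names are illustrative, while Extended_SAR being value 255 comes
from the H.264 specification:

    /* Illustrative VUI setup: signal the exact SAR taken from the
     * user-provided pixel-aspect-ratio (par_n/par_d). */
    #define EXTENDED_SAR 255

    typedef struct {
      int aspect_ratio_info_present_flag;
      int aspect_ratio_idc;
      int sar_width;
      int sar_height;
    } VuiParams;   /* hypothetical container */

    static void
    set_vui_sar (VuiParams *vui, int par_n, int par_d)
    {
      vui->aspect_ratio_info_present_flag = 1;
      vui->aspect_ratio_idc = EXTENDED_SAR;
      vui->sar_width  = par_n;
      vui->sar_height = par_d;
    }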
Fix gst_vaapi_decoder_mpeg4_parse() to initialize the packet type to
GST_MPEG4_USER_DATA so that a parse error would result in skipping
that packet. Also fix gst_vaapi_decoder_mpeg4_decode_codec_data() to
initialize status to GST_VAAPI_DECODER_STATUS_SUCCESS.
Use the SEI pic_timing() message to track and propagate down the repeat
first field (RFF) flag. This is only initial support as there is one
other condition that could induce the RFF flag, which is not handled
yet.
Fix the decoding process for picture order count type 0 when the previous
picture had a memory_management_control_operation = 5. In particular, fix
the actual variable type for prev_pic_structure to hold the full bits of
the picture structure.
In practice, this used to work though, due to the underlying type used to
express a gboolean.
Use the SEI pic_timing() message to track the pic_struct variable when
present, or infer it from the regular slice header flags field_pic_flag
and bottom_field_flag. This fixes temporal sequence ordering when the
output pictures are to be displayed.
https://bugzilla.gnome.org/show_bug.cgi?id=739291
Add support for DRM Render-Nodes. This is a new feature that appeared
in kernel 3.12 for experimentation purposes, but was later declared
stable enough in kernel 3.15 for getting enabled by default.
This allows headless usage without any authentication at all, i.e. usage
through plain ssh connections is possible.
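A minimal sketch of what a render-node based, headless VA display setup
looks like; the device path is just an example, and real code should
iterate over the available render nodes:

    #include <fcntl.h>
    #include <unistd.h>
    #include <va/va_drm.h>

    static VADisplay
    open_render_node_display (void)
    {
      int major, minor;
      /* Render nodes (renderD128, renderD129, ...) allow unauthenticated,
       * headless access, unlike the legacy card0 node. */
      int fd = open ("/dev/dri/renderD128", O_RDWR);
      if (fd < 0)
        return NULL;

      VADisplay display = vaGetDisplayDRM (fd);
      if (!display || vaInitialize (display, &major, &minor) != VA_STATUS_SUCCESS) {
        close (fd);
        return NULL;
      }
      /* Note: the fd must stay open for as long as the display is used;
       * teardown handling is omitted in this sketch. */
      return display;
    }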
Fix gst_vaapi_surface_proxy_copy() to copy the view-id element, thus
fixing random frames skipped when vaapipostproc element is used in
passthrough mode. In that mode, GstMemory is copied, thus including
the underlying GstVaapiVideoMeta and associated GstVaapiSurfaceProxy.
Ensure the X11 implementation for GstVaapiWindow::get_geometry() is
thread-safe by default, so that upper layer users don't need to handle
that explicitly.
Add gst_vaapi_window_reconfigure() interface to force an update of
the GstVaapiWindow "soft" size, based on the current geometry of the
underlying native window.
This can be useful for instance to synchronize the window size when
the user changed it.
Thanks to Fabrice Bellet for rebasing the patch.
[changed interface to gst_vaapi_window_reconfigure()]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Add gst_vaapi_display_get_display_name() helper function to determine
the name associated with the underlying native display. Note that for
raw DRM backends, the display name is actually the device path.
The timestamp generator in gstvaapidecoder_mpeg2.c always interpolated
frame timestamps within a GOP, even when it's been fed input PTS for
every frame.
That leads to incorrect output timestamps in some situations - for example
live playback where input timestamps have been scaled based on arrival time
from the network and don't exactly match the framerate.
https://bugzilla.gnome.org/show_bug.cgi?id=732719
Forbid GstVaapiObject to be created without an associated klass spec.
It is mandatory that the subclass implements an adequate .finalize()
hook, so it shall provide a valid GstVaapiObjectClass.
https://bugzilla.gnome.org/show_bug.cgi?id=722757
[made non-NULL klass argument to gst_vaapi_object_new() a requirement]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Call the subclass .init() function in gst_vaapi_object_new(), if
needed. The default behaviour is to zero initialize the subclass
object data, then the .init() function can be used to initialize
fields to non-default values, e.g. VA object ids to VA_INVALID_ID.
Also fix the gst_vaapi_object_new() description, which was merely
copied from GstVaapiMiniObject.
https://bugzilla.gnome.org/show_bug.cgi?id=722757
[changed to always zero initialize the subclass]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
When a DPB flush is required, e.g. at a natural end of stream or issued
explicitly through an IDR, try to detect any frame left in the DPB that
is interlaced but does not contain two decoded fields. In that case, mark
the picture as having a single field only.
This avoids a hang while decoding tv_cut.mkv.
Simplify the dpb_output() function to exclusively rely on the frame store
buffer to output, since this is now always provided. Besides, also fix
cases where split fields would not be displayed.
This is a regression from f48b1e0.
Cope with latest changes from codecparsers/h264. It is now required
to explicitly clear the GstH264PPS structure as it could contain
additional allocations (slice_group_ids).
Add new GstVaapiSurfaceProxy flag FFB, which means "first frame in
bundle", and really expresses the first view component of a multi
view coded frame. e.g. in H.264 MVC, the surface proxy has flag FFB
set if VOIdx = 0.
Likewise, new API is exposed to retrieve the associated "view-id".
Allow decoders to set the "one-field" attribute when the decoded frame
genuinely has a single field, or if the second field was mis-decoded but
we still want to display the first field.
Make sure to output the decoded picture, and push the associated
GstVideoCodecFrame, only once. The frame fully represents what needs
to be output, including for interlaced streams. Otherwise, the base
GstVideoDecoder class would release the frame twice.
Anyway, the general process is to output decoded frames only when
they are complete. By complete, we mean a full frame was decoded or
both fields of a frame were decoded.
Slightly optimize decoding process by submitting the current VA surface
for decoding earlier to the hardware, and perform the reference picture
marking process and DPB update process afterwards.
This is a minor optimization to let the video decode engine kick in work
earlier, thus improving parallel resources utilization.
Fix decoding of interlaced streams where a first field (e.g. B-slice)
was immediately output and the current decoded field is to be paired
with that former frame, which is no longer in DPB.
https://bugzilla.gnome.org/show_bug.cgi?id=701340
Optimize the process to detect new pictures or start of new access
units by checking if the previous NAL unit was the end of a picture,
or the end of the previous access unit.
Add support for MVC streams with multiple SPS and subset SPS headers
emitted regularly, e.g. at around every I-frame. Track the maximum
number of views in ensure_context() and really reset the DPB size to
the expected value, always, i.e. even if it decreased. dpb_reset()
only takes care of ensuring the DPB allocation.
Fix the compaction process when the DPB is cleared for a specific
view, i.e. fix the process of filling in the holes resulting from
removing frame buffers matching the current picture.
It is not necessary to periodically send SPS or subset SPS headers.
This is up to the upper layer (e.g. transport layer) to decide on
if/how to periodically submit those. For now, only generate new SPS
or subset SPS headers when the codec config changed.
Note: the upper layer could readily determine the config headers
(SPS/PPS) through the gst_vaapi_encoder_h264_get_codec_data() function.
https://bugzilla.gnome.org/show_bug.cgi?id=732083
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Report sample aspect ratio (SAR) as present, and make it match what
we have obtained from the user as pixel-aspect-ratio (PAR). i.e. the
VUI parameter aspect_ratio_info_present_flag now defaults to TRUE.
Set the value of num_anchor_refs_l0, num_anchor_refs_l1, num_non_anchor_refs_l0,
and num_non_anchor_refs_l1 to zero since the inter-view prediction is not yet
supported.
When the seq_parameter_set_data() syntax structure is present in a subset
sequence parameter set and vui_parameters_present_flag is equal to 1, then
timing_info_present_flag shall be equal to 0 (H.7.4.2.1.1).
Submit Prefix NAL headers (nal_unit_type = 14) before every packed
slice header (nal_unit_type = 1 or 5) only for the base view. In non
base views, a Coded Slice Extension NAL header (nal_unit_type = 20)
is required, with an appropriate nal_unit_header_mvc_extension() in
the NAL header bytes.
https://bugzilla.gnome.org/show_bug.cgi?id=732083
Fix search for a picture in the DPB that has a lower POC value than
the current picture. The dpb_find_lowest_poc() function will return
a picture with the lowest POC in DPB and that is marked as "needed
for output", but an additional check against the actual POC value
of the current picture is needed.
This is a regression from 1c46990.
https://bugzilla.gnome.org/show_bug.cgi?id=732130
Fix dpb_clear() to clear previous frame buffers only if they actually
exist to begin with. If the decoder bailed out early, e.g. when it
does not support a specific profile, that array of previous frames
might not be allocated beforehand.
We can avoid scanning for start codes again if the bitstream is fed
in NALU chunks. Currently, we always scan for start codes, and keep
track of remaining bits in a GstAdapter, even if, in practice, we
are likely receiving one GstBuffer per NAL unit. i.e. h264parse with
"nal" alignment.
https://bugzilla.gnome.org/show_bug.cgi?id=723284
[use gst_adapter_available_fast() to determine the top buffer size]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
The `vaapipostproc' element could never determine if the H.264 stream
was interlaced, and thus always assumed it to be progressive. Fix the
H.264 decoder to report interlace-mode accordingly, thus allowing the
vaapipostproc element to automatically enable deinterlacing.
The packed slice header and packed raw data need to be paired with
the submission of VAEncSliceHeaderParameterBuffer. So handle them
on a per-slice basis instead of a per-picture basis.
[removed useless initializer]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Factor out the removal process of unused inter-view only reference
pictures from the DPB, prior to the possible insertion of the current
picture.
Ideally, the compiler could still opt for generating two loops. But
at least, the code is now clearer for maintenance.
Improve process for the removal of pictures from DPB before possible
insertion of the current picture (C.4.4) for H.264 MVC inter-view only
reference components. In particular, handle cases where picture to be
inserted is not the last one of the access unit and if it was already
output and is no longer marked as used for reference, including for
decoding next view components within the same access unit.
While invoking the DPB bumping process in presence of many views,
it could be necessary to output previous pictures that are ready,
as a whole, i.e. emitting all view components from the very first
view order index zero to the very last one in its original access
unit; and not starting from the view order index of the picture
that caused the DPB bumping process to be invoked.
As a reminder, the maximum number of frames in DPB for MultiView
High profile with more than 2 views is not necessarily a multiple
of the number of views.
This fixes decoding of MVCNV-4.264.
Let the utility layer handle dynamic growth of the inter-view pictures
array. By definition, setting a new size to the array will effectively
grow the array, but would also fill in the newly created elements with
empty entries (NULL), thus also increasing the reported length, which
is not correct.
When decoding Multiview High profile streams with a large number of
views, it is not possible to make the VAPictureParameterBufferH264.
ReferenceFrames[] array hold the complete DPB, with all possibly
active pictures to be used for inter-view prediction in the current
access unit.
So reduce the scope of the ReferenceFrames[] array to only include
the set of reference pictures that are going to be used for decoding
the current picture. Basically, this is a union of all RefPicListX[]
array, for all slices constituting the decoded picture.
The inter-view reference components and inter-view only reference
components that are included in the reference picture lists shall
be considered as not being marked as "used for short-term reference"
or "used for long-term reference". This means that reference flags
should all be removed from VAPictureH264.flags.
This fixes decoding of MVCNV-2.264.
If the VA driver exposes ad-hoc H.264 MVC profiles, then we have to
be careful to detect profiles changes and not reset the underlying
VA context erroneously. In MVC situations, we could indeed get a
profile_idc change for every SPS that gets activated, alternatively
(base-view -> non-base view -> base-view, etc.).
An improved fix would be to characterize the exact profile to use
once and for all when SPS NAL units are parsed. This would also
allow for fallbacks to a base-view decoding only mode.
Exclusively use VA drivers that support raw packed headers for encoding.
i.e. simply submit packed headers Subset SPS and Prefix NAL units. This
provides for better compatibility across the various VA drivers and HW
generations since no particular API is needed beyond what readily exists.
Since we are encoding each view independently from each other, we
need a higher number of pre-allocated surfaces to be used as the
reconstructed frames. For Stereo High profile encoding, this means
to effectively double the number of frames to be stored in the DPB.
Add initial support for Subset SPS, Prefix NAL and Slice Extension NAL
for non-base-view streams encoding, and the usual SPS, PPS and Slice
NALs for base-view encoding.
The H.264 Stereo High profile encoding mode will be turned on when the
"num-views" parameter is set to 2. The source (raw) YUV frames will be
considered as Left/Right views, alternately.
Each of the two views has its own frames reordering pool and reference
frames list management system. Inter-view references are not supported
yet, so the views are encoded independently from each other.
Signed-off-by: Li Xiaowei <xiaowei.a.li@intel.com>
[limited to Stereo High profile per the definition of MAX_NUM_VIEWS]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Create structures to maintain the reference frames list (RefPool) and
frames reordering (ReorderPool) logic.
This is a prerequisite for H.264 MVC support.
Signed-off-by: Li Xiaowei <xiaowei.a.li@intel.com>
Add provisions to write subset SPS headers to the bitstream in view
to supporting the H.264 MVC specification.
This assumes the libva "staging" branch is in use.
Signed-off-by: Li Xiaowei <xiaowei.a.li@intel.com>
Optimize lookups of view ids / view order indices by caching the result
of the calculations right into the GstVaapiParserInfoH264 struct. This
greatly simplifies the is_new_access_unit() and find_first_field() functions.
Add safe fallbacks for MVC profiles:
- all MultiView High profile streams with 2 views at most can be decoded
with a Stereo High profile compliant decoder ;
- all Stereo High profile streams with only progressive views can be
decoded with a MultiView High profile compliant decoder ;
- all drivers that support slice-level decoding could normally support
MVC profiles when the DPB holds at most 16 frames.
In order to have a stricter conforming implementation, we need to carefully
detect access unit boundaries. Additional operations could be necessary to
perform at those boundaries.
Detect the first VCL NAL unit of a picture for MVC, based on the
view_id as per H.7.4.1.2.4. Note that we only need to detect new
view components.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Always cache the previous NAL unit so that we could check whether
there is a Prefix NAL unit immediately preceding the current slice
or IDR NAL unit. In that case, the NAL unit metadata is copied into
the current NAL unit. Otherwise, some default values are inferred,
tentatively. e.g. view_id shall be set to 0 and inter_view_flag to 1.
[infer default values for slice if previous NAL was not a Prefix]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Allow decoding for base views of MVC encoded streams. For now, just skip
the slice extension and prefix NAL units, and skip non-base view frames.
Signed-off-by: Xiaowei Li <xiaowei.a.li@intel.com>
[fixed memory leak, improved check for MVC NAL units]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Factor out process by which the decoded picture with the lowest POC
is found, and possibly output. Likewise, the storage and marking of
a reference decoded, or non-reference decoded picture, into the DPB
could also be simplified as they mostly share the same operations.
Make init_picture_ref_lists() more consistent with other functions
related to the reference marking process by supplying the current
picture as argument.
Add gst_vaapi_display_get_vendor_string() helper function to query
the underlying VA driver name. The display object owns the resulting
string, so it shall not be deallocated.
That function is thread-safe. It could be used for debugging purposes,
for instance.
Make sure to initialize one GstVaapiDisplay at a time, even in threaded
environments. This makes sure the display cache is also consistent
during the whole display creation process. In the former implementation,
there were risks that display cache got updated in another thread.
Add support for dynamic growth of the VA surfaces pool. For decoding,
this implies the recreation of the underlying VA context, as per the
requirement from VA-API. Besides, only increases are supported, not
shrinks.
It is a requirement from VA-API specification that the VA context got
from vaCreateContext(), for decoding purposes, binds the supplied set
of VA surfaces. This means that if the set of VA surfaces is to be
changed for the current decode session, then the VA context needs to
be recreated with the new set of VA surfaces.
Complement fix committed as e95a42e.
The H.264 AVC standard has this to say: if the field is part of a reference
frame or a complementary reference field pair, and the other field of
the same reference frame or complementary reference field pair is also
marked as "used for long-term reference", the reference frame or
complementary reference field pair is also marked as "used for long-term
reference" and assigned LongTermFrameIdx equal to long_term_frame_idx.
This fixes decoding of MR9_BT_B in strict mode.
https://bugs.freedesktop.org/show_bug.cgi?id=64624
https://bugzilla.gnome.org/show_bug.cgi?id=724518
Request the correct chroma format for decoding grayscale streams.
i.e. make lookups of the VA chroma format more generic, thus possibly
supporting more formats in the future.
This means that, if a VA driver doesn't support grayscale formats,
it is now going to fail. We cannot safely assume that maybe grayscale
was implemented on top of some YUV 4:2:0 with the chroma components
all set to 0x80.
Fix reference picture marking process with memory_management_control_op
set to 3 and 6, i.e. assign LongTermFrameIdx to a short-term reference
picture, or the current picture.
This fixes decoding of FRExt_MMCO4_Sony_B.
https://bugs.freedesktop.org/show_bug.cgi?id=64624
https://bugzilla.gnome.org/show_bug.cgi?id=724518
[squashed, edited to use GST_VAAPI_PICTURE_IS_COMPLETE() macro]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
The initialization of reference picture lists (8.2.4.2) applies to all
slices. So, the RefPicList0/1 lists need to be constructed prior to
each slice submission to the HW decoder.
This fixes decoding of video sequences where frames are encoded with
multiple slices of different types, e.g. 4 slices in this order I, P,
I, and P. More precisely, CABAST3_Sony_E and CABASTBR3_Sony_B.
https://bugzilla.gnome.org/show_bug.cgi?id=724518
When NAL units of type 13 (SPS extension) or type 19 (auxiliary slice)
are present in a video, decoders shall perform the (optional) decoding
process specified for these NAL units or shall ignore them (7.4.1).
Implement option 2 (skip) for now, as alpha composition is not
supported yet during the decoding process.
This fixes decoding of the primary coded video in alphaconformanceG.
https://bugzilla.gnome.org/show_bug.cgi?id=703928
https://bugzilla.gnome.org/show_bug.cgi?id=728869
https://bugzilla.gnome.org/show_bug.cgi?id=724518
[skip NAL units earlier, i.e. at parsing time]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
When MVC slice NAL units (coded slice extension and prefix NAL) are
present, the number of NAL header bytes is 3, not 1 as usual.
Signed-off-by: Li Xiaowei <xiaowei.a.li@intel.com>
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
At the time the first VCL NAL unit of a primary coded picture is found,
and if that NAL unit was parsed to be an SPS or PPS, then the entries
in the parser may have been overridden. This means that, when the picture
is to be decoded, slice_hdr->pps could point to an invalid (the next)
PPS entry.
So, one way to solve this problem is to not use the parser PPS and
SPS info but rather maintain our own activation chain in the decoder.
https://bugzilla.gnome.org/show_bug.cgi?id=724519
https://bugzilla.gnome.org/show_bug.cgi?id=724518
Retain the SEI messages that were parsed from the access unit until we
have completely decoded the current frame. This is done so that we can
peek at that data whenever necessary during decoding. e.g. for exposing
3D stereoscopic information at a later stage.
Fix support for grayscale encoded video clips, and possibly others if
the underlying driver supports the non-YUV 4:2:0 formats. i.e. defer
the decision that a surface with the desired chroma format is not
supported to the actual VA driver implementation.
https://bugzilla.gnome.org/show_bug.cgi?id=728144
Don't force allocation of VA surfaces in YUV 4:2:0 format. Rather, allow
for the upper layer to specify the desired chroma type. If the chroma
type field is not set (or yields zero), then YUV 4:2:0 format is used
by default.
Fix possible bug when a per-segment deblocking filter level value
needs to be set in non-absolute mode, i.e. when the loop filter update
value is negative in delta mode.
Also clamp the resulting filter level value to the 0..63 range.
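A minimal sketch of the computation, assuming the usual VP8 per-segment
feature data layout; the function and parameter names are illustrative:

    #include <glib.h>

    /* Compute the effective per-segment loop filter level.
     * In delta (non-absolute) mode the update is added to the frame-level
     * value and may be negative, so the result must be clamped. */
    static int
    get_segment_filter_level (int frame_filter_level,
        gboolean absolute_mode, int segment_update)
    {
      int level;

      if (absolute_mode)
        level = segment_update;
      else
        level = frame_filter_level + segment_update;

      return CLAMP (level, 0, 63);
    }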
Improve condition to disable the loop filter. The previous heuristic
used to check all filter levels, for all segments. It turns out that
only the base filter_level value defined in the frame header needs
to be checked.
This fixes 00-comprehensive-013.
The gst_h264_parse_parse_sei() function now returns an array of SEI
messages, instead of a single SEI message. Reason: it is allowed to
have several SEI messages packed into a single SEI NAL unit, instead
of multiple NAL units.
Fix parser and decoder state to sync at the right locations. This is
because we could reset the parser state, while the decoder state was
not copied yet, e.g. when parsing several NAL units from multiple frames
whereas the current frame was not decoded yet.
This is a regression brought in by commit 6fe5496.
Fix gstreamer-vaapi includedir for GStreamer 1.2 setups. i.e. use
the pkgconfig version (1.0) instead of the intended API version (1.2).
libgstvaapi1.0-dev and libgstvaapi1.2-dev packages will now conflict,
as would core GStreamer 1.0 and GStreamer 1.2 dev packages anyway.
Add gst_vaapi_get_config_attribute() helper function that takes a
GstVaapiDisplay and the rest of the arguments with VA types. The aim
is to have thread-safe VA helpers by default.
Make sure to configure the encoder with the set of packed headers we
intend to generate and submit. i.e. make selection of packed headers
to submit more robust.
Cache the first compatible GstVaapiProfile found if the encoder is not
configured yet. Next, factor out the code to check for the supported
rate-control modes by moving out vaGetConfigAttributes() to a separate
function, while also making sure that the attribute type is actually
supported by the encoder.
Also fix the default set of supported rate control modes to not include
the "none" variant. It's totally useless to expose it at this point.
Introduce GstVaapiContextUsage so as to explicitly determine the
usage of a VA context. This is useful in view to simplifying the
creation of a VA context for VPP too.
Unknown attributes, or attributes that are not supported for the given
profile/entrypoint pair have a return value of VA_ATTRIB_NOT_SUPPORTED.
So, return failure in this case.
Move GstVideoOverlayComposition handling to separate source files.
This helps keeping the GstVaapiContext core implementation to the bare
minimum, i.e. simply helpers to create a VA context and handle the pool
of associated VA surfaces.
Improve documentation and debug messages. Clean up APIs, i.e. strip
them down to the minimal set of interfaces. They are private, so there
is no need to expose getters, for instance.
Make sure that libgstvaapi private headers remain internally used to
build libgstvaapi libraries only. All header dependencies were reviewed
and checks for IN_LIBGSTVAAPI definition were added accordingly.
Also rename GST_VAAPI_CORE definition to IN_LIBGSTVAAPI_CORE to keep
consistency.
Set a valid GstVideoCodecFrame.system_frame_number when decoding a
stream in standalone mode. While we are at it, improve the debugging
messages to also include that frame number.
When decoding failed, or the frame was dropped, the associated
surface proxy is not guaranteed to be present. Thus, the GST_DEBUG()
message needs to check whether the proxy is actually present or not.
https://bugzilla.gnome.org/show_bug.cgi?id=722403
[fixed gst_vaapi_surface_proxy_get_surface_id() to return VA_INVALID_ID]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Don't emit NAL HRD parameters for now in the SPS headers because the
SEI buffering_period() and picture_timing() messages are not handled
yet. Some additional changes are necessary to get it right.
https://bugzilla.gnome.org/show_bug.cgi?id=722734
Fix the default CPB buffer size to something more reasonable (1500 ms)
that still fits the level limits. This is a non-configurable
property for now. The initial CPB removal delay is also fixed to
750 ms.
https://bugzilla.gnome.org/show_bug.cgi?id=722087
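The sizing arithmetic is roughly as follows; this is a sketch with
illustrative names, the 1500/750 ms values come from the message above,
and the bitrate is assumed to be expressed in bits per second:

    /* CPB sized for 1500 ms worth of data at the target bitrate, with
     * the initial removal delay set to half of that window (750 ms). */
    #define CPB_WINDOW_MS        1500
    #define CPB_REMOVAL_DELAY_MS  750

    static unsigned int
    get_cpb_size_bits (unsigned int bitrate_bps)
    {
      return (unsigned int)
          (((unsigned long long) bitrate_bps * CPB_WINDOW_MS) / 1000);
    }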
Round down the calculated, or supplied, bitrate (kbps) to a multiple
of the HRD bitrate scale factor. Use a bitrate scale factor of 64 so
as to lose less precision. Likewise, don't round up, because the supplied
bitrate could be a strict constraint imposed by the user.
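A sketch of the rounding; the scale factor of 64 is stated above, the
macro and function names are illustrative:

    #define SX_BITRATE 64   /* HRD bitrate scale factor */

    /* Round the bitrate down to a multiple of the scale factor; rounding
     * up could violate a hard limit imposed by the user. */
    static unsigned int
    round_bitrate (unsigned int bitrate_kbps)
    {
      return (bitrate_kbps / SX_BITRATE) * SX_BITRATE;
    }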
Fix the level calculation involving bitrate limits. Since we are
targeting NAL HRD conformance, the check against MaxBR from the
Table A-1 limits shall involve cpbBrNalFactor depending on the
active profile.
Submit sequence parameter buffers only once, or when the bitstream
was reconfigured in a way that requires such. Always submit packed
sequence parameter buffers at I-frame period, if the VA driver needs
those.
https://bugzilla.gnome.org/show_bug.cgi?id=722737
Make sure to submit the packed headers only if the underlying VA driver
requires those. Currently, only handle packed sequence and picture
headers.
https://bugzilla.gnome.org/show_bug.cgi?id=722737
The VAEncSequenceParameterBuffer.ip_period value represents the distance
between the I-frame and the next P-frame. So, this also accounts for
any additional B-frame in the middle of it.
This fixes rate control heuristics for certain VA drivers.
https://bugzilla.gnome.org/show_bug.cgi?id=722735
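In other words, a sketch of the relationship, assuming a fixed number of
B-frames between consecutive reference frames (the helper name is
illustrative):

    /* With num_bframes B-frames between reference frames, the distance
     * from an I-frame to the next P-frame includes those B-frames. */
    static unsigned int
    get_ip_period (unsigned int num_bframes)
    {
      return 1 + num_bframes;
    }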
Fix level characterisation when the bitrate is automatically computed
from the active coding tools. i.e. ensure the bitrate once the profile
is completely characterized but before the level calculation process.
Document and rename a few functions here and there. Drop code that
caps the num_bframes variable in reset_properties(), since the value shall
have been checked beforehand, during properties initialization.
Clean-up GstBitWriter related utility functions and simplify notations.
While we are at it, also make bitstream writing more robust should an
overflow occur. We could later optimize for writing headers capped to
their maximum possible size by using the _unchecked() helper variants.
Drop private header since it was originally used to expose internals
to the plugin element. The proper interface is now the properties API,
thus rendering private headers totally obsolete.
Add support for hue, saturation, brightness and contrast adjustments.
Also fix the local copy of the caps info to match the actually expected
caps subtype of interest.
https://bugzilla.gnome.org/show_bug.cgi?id=720376
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Change the submission order of VA objects so as to make that process
more logical, i.e. submit the sequence parameter first, if any; next the
packed headers associated with the sequence, picture or slices; and
finally the actual picture and associated slices.
Various clean-ups to improve consistency and readability: rename some
variables, drop unused macro definitions, drop initialization of vars
that are zero-initialized from the base class, drop un-necessary casts,
allocate GPtrArrays with a destroy function.
Fix frame cropping rectangle calculation to handle horizontal resolutions
that don't match a multiple of 16 pixels, but also the vertical resolution
that was incorrectly computed for progressive sequences too.
https://bugzilla.gnome.org/show_bug.cgi?id=722089
For non "Constant-QP" modes, we could provide more reasonable heuristics
for the target bitrate. In general, 48 bits per macroblock with all the
useful coding tools enable looks safe enough. Then, this rate is raised
by +10% to +15% for each coding tool that is disabled.
https://bugzilla.gnome.org/show_bug.cgi?id=719699
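A sketch of that heuristic; the factors follow the description above,
and the function and parameter names are illustrative, not the exact
source:

    /* Rough target bitrate heuristic: ~48 bits per macroblock at the
     * macroblock processing rate, raised by ~10-15% per disabled tool. */
    static unsigned int
    estimate_bitrate_bps (unsigned int width, unsigned int height,
        unsigned int fps, unsigned int num_disabled_tools)
    {
      unsigned int mbs = ((width + 15) / 16) * ((height + 15) / 16);
      double bitrate = (double) mbs * fps * 48;

      while (num_disabled_tools--)
        bitrate *= 1.10;   /* +10% per disabled tool (up to +15%) */

      return (unsigned int) bitrate;
    }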
Add support for "high-compression" tuning option. First, determine the
largest supported profile by the hardware. Next, check any target limit
set by the user. Then, enable each individual coding tool based on the
resulting profile_idc value to use.
https://bugzilla.gnome.org/show_bug.cgi?id=719696
Allow the user to specify the largest profile to use for encoding, due
to target decoder constraints. For instance, if CABAC entropy coding
mode is requested but only the "constrained-baseline" profile is desired,
then an error is returned during codec configuration.
Also make sure that the suitable profile we derived actually matches
what the HW can cope with.
https://bugzilla.gnome.org/show_bug.cgi?id=719694
Refine the heuristic to determine the maximum size of a coded buffer
to account for the exact number of slices. set_context_info() is the
last step during codec reconfiguration, no additional change is done
afterwards, so re-using the num_slices field here is fine.
https://bugzilla.gnome.org/show_bug.cgi?id=719953
Automatically derive the minimum profile and level to be used for
encoding, based on the activated coding tools. The encoder will
try to generate a bitstream that has the best chances to be
decoded on most platforms by default.
Also change the default profile to "constrained-baseline" so as
to ensure maximum compatibility when the stream is decoded.
https://bugzilla.gnome.org/show_bug.cgi?id=719691
Fix lookup for a suitable HW profile, as to be used by the underlying
hardware, based on heuristics that lead to characterize the SW profile,
i.e. the one used by the SW level encoding logic.
Also fix constraint_set0_flag (A.2.1) and constraint_set1_flag (A.2.2)
as they should respectively match the baseline and main profile.
https://bugzilla.gnome.org/show_bug.cgi?id=719827
The libgstvaapi core encoders are meant to support raw bitstreams only.
Henceforth, we are always producing a stream in "byte-stream" format.
However, the "codec-data" buffer which holds SPS and PPS headers is
always available. The "lengthSizeMinusOne" field is always set to 3
so that in-place "byte-stream" format to "avc" format conversion could
be performed.
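For illustration, with lengthSizeMinusOne set to 3 an in-place conversion
just rewrites each 4-byte start code as a 4-byte big-endian NAL length.
A rough sketch, assuming every NAL unit is preceded by a 4-byte
00 00 00 01 start code (the helper name is illustrative):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* In-place "byte-stream" to "avc" conversion sketch: replace each
     * 4-byte start code with the big-endian length of the following NAL. */
    static void
    byte_stream_to_avc (uint8_t *data, size_t size)
    {
      size_t pos = 0;

      while (pos + 4 <= size) {
        /* Find the start of the next NAL unit (or the end of the data). */
        size_t next = pos + 4;
        while (next + 4 <= size &&
            memcmp (data + next, "\x00\x00\x00\x01", 4) != 0)
          next++;
        if (next + 4 > size)
          next = size;

        uint32_t nal_size = (uint32_t) (next - (pos + 4));
        data[pos + 0] = (nal_size >> 24) & 0xff;
        data[pos + 1] = (nal_size >> 16) & 0xff;
        data[pos + 2] = (nal_size >> 8) & 0xff;
        data[pos + 3] = nal_size & 0xff;
        pos = next;
      }
    }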
Various clean-ups to improve consistency and readability: rename some
variables, drop unused macro definitions, drop initialization of vars
that are zero-initialized from the base class, drop un-necessary casts.
Fix lookup for a suitable HW profile, as to be used by the underlying
hardware, based on heuristics that lead to characterize the SW profile,
i.e. the one used by the SW level encoding logic.
Automatically derive the minimum profile and level to be used for
encoding, based on the activated coding tools. Improve lookup for
the best suitable level with the new MPEG-2 helper functions.
Also change the default profile to "simple" so as to ensure maximum
compatibility when the stream is decoded.
https://bugzilla.gnome.org/show_bug.cgi?id=719703
Various clean-ups to improve consistency and readability: drop unused
macro definitions, drop initialization of vars that are zero-initialized
from the base class, drop un-necessary casts.
Add encoder "tune" option to override the default behaviour that is to
favor maximum decoder compatibility at the expense of lower compression
ratios.
Expected tuning options to be developed are:
- "high-compression": improve compression, target best-in-class decoders;
- "low-latency": tune for low-latency decoding;
- "low-power": tune for encoding in low power / resources conditions.
Only expose the exact static set of supported rate-control properties
to the upper layer. For instance, if the GstVaapiEncoderXXX class does
only support CQP rate control, then only add it to the exposed enum
type.
Add helper macros and functions to build a GType for an enum subset.
Add gst_vaapi_encoder_set_keyframe_period() interface to allow the
user to control the maximum distance between two keyframes. This new
property can only be set prior to gst_vaapi_encoder_set_codec_state().
A value of zero for "keyframe-period" gets it re-evaluated to the
actual framerate during encoder reconfiguration.
Improve codec reconfiguration to be performed only through a single
function. That is, remove the _set_context_info() hook as subclass
should not alter the parent GstVaapiContextInfo itself. Besides, the
VA context is constructed only at the final stages of reconfigure().
Add interface to communicate the encoder resolution and related info
like framerate, interlaced vs. progressive, etc. This new interface
supersedes gst_vaapi_encoder_set_format() and doesn't use any GstCaps
but rather uses GstVideoCodecState.
Note that gst_vaapi_encoder_set_codec_state() is also a synchronization
point for codec config. This means that the encoder is reconfigured
there to match the latest properties.
Add interface to communicate configurable properties to the encoder.
This covers both the common ones (rate-control, bitrate), and the
codec specific properties.
https://bugzilla.gnome.org/show_bug.cgi?id=719529
Add gst_vaapi_encoder_set_bitrate() interface to allow the user to control
the bitrate for encoding. Currently, changing this parameter is only
valid before the first frame is encoded. Should the value be modified
afterwards, then GST_VAAPI_ENCODER_STATUS_ERROR_OPERATION_FAILED is
returned.
https://bugzilla.gnome.org/show_bug.cgi?id=719529
Add gst_vaapi_encoder_set_rate_control() interface to request a new
rate control mode for encoding. Changing the rate control mode is
only valid prior to encoding the very first frame. Afterwards, an
error ("operation-failed") is issued.
https://bugzilla.gnome.org/show_bug.cgi?id=719529
Install the <gst/vaapi/gstvaapiutils_h264.h> header, but only expose the
H.264 levels in there. The additional helper functions are meant
to be private for now.
Fix formats for various GST_DEBUG et al. invocations. More precisely,
make size_t arguments use the %zu format specifier accordingly; force
XID formats to be a 32-bit unsigned integer; and fix the format used
for gst_vaapi_create_surface_with_format() error cases, since we are
using strings nowadays.
The following helper functions are no longer used, thus are removed:
- gst_vaapi_video_format_from_structure()
- gst_vaapi_video_format_from_caps()
- gst_vaapi_video_format_to_caps()
Replace gst_vaapi_display_get_{decode,encode}_caps() APIs with more
convenient APIs that return an array of GstVaapiProfile instead
of GstCaps: gst_vaapi_display_get_{decode,encode}_profiles().
Replace gst_vaapi_display_get_{image,subpicture}_caps() APIs, that
returned GstCaps, with more convenient APIs that return an array of
GstVideoFormat: gst_vaapi_display_get_{image,subpicture}_formats().
Fix display creation code to check that any display obtained from a
neighbour actually has the type we expect. Note: if display type is
set to "any", we can then accept any VA display type.
The GLTextureUploadMeta implementation assumed that for each upload()
sequence, the supplied texture id is always the same as the one that
was previously cached into the underlying GstVaapiTexture. Cope with
any texture id change at the expense of recreating the underlying
VA/GLX resources.
https://bugzilla.gnome.org/show_bug.cgi?id=719643
Improve robustness when some expected packets were not received yet,
or were not correctly decoded. For example, don't try to decode
a picture if there was no valid frame headers parsed so far.
https://bugs.freedesktop.org/show_bug.cgi?id=57902
Conformance test Base_Ext_Main_profiles/BA3_SVA_C.264 complies with
extended profile specifications. However, the SPS header has the
constraint_set1_flag syntax element set to 1. This means that, if
a Main profile compliant decoder is available, then it should be
able to decode this stream.
This change makes it possible to fall back from the Extended profile
to the Main profile if constraint_set1_flag is set to 1.
https://bugzilla.gnome.org/show_bug.cgi?id=720190
Recognize streams marked as conforming to the "Constrained Baseline
Profile". If VA driver supports that as is, fine. Otherwise, fallback
to baseline, main or high profile.
Constrained Baseline Profile conveys coding tools that are common
to baseline profile and main profile.
https://bugzilla.gnome.org/show_bug.cgi?id=719947
[Added fallbacks to main and high profiles]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
The GStreamer codecparser layer now parses the scaling lists in zigzag
scan order, as expected, so as to match the original bitstream layout
and specification. However, further convert the scaling lists into
raster scan order to fit the existing practice in most VA drivers.
https://bugzilla.gnome.org/show_bug.cgi?id=706406
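For reference, converting a 4x4 scaling list from zigzag scan order to
raster order is a simple permutation; the 4x4 zigzag scan below is the
one from the H.264 specification, while the helper name is illustrative:

    #include <stdint.h>

    /* Zigzag scan for a 4x4 block (H.264): the raster position of each
     * coefficient in zigzag order. */
    static const uint8_t zigzag_4x4[16] = {
      0, 1, 4, 8, 5, 2, 3, 6, 9, 12, 13, 10, 7, 11, 14, 15
    };

    /* Convert a 4x4 scaling list from zigzag order (as parsed from the
     * bitstream) to raster order (as most VA drivers expect). */
    static void
    scaling_list_4x4_zigzag_to_raster (uint8_t raster[16],
        const uint8_t zigzag[16])
    {
      int i;

      for (i = 0; i < 16; i++)
        raster[zigzag_4x4[i]] = zigzag[i];
    }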
- gst_vaapi_utils_h264_get_level():
Returns GstVaapiLevelH264 from H.264 level_idc value
- gst_vaapi_utils_h264_get_level_idc():
Returns H.264 level_idc value from GstVaapiLevelH264
- gst_vaapi_utils_h264_get_level_limits():
Returns level limits as specified in Table A-1 of the H.264 standard
- gst_vaapi_utils_h264_get_level_limits_table():
Returns the Table A-1 specification