gst_vaapi_video_buffer_pool_reset_buffer() is called when the sink
releases the last reference on an exported DMA buffer. This should
release the underlying surface proxy. To avoid releasing the wrong
surface due to a stale surface proxy reference in the buffer's
GstVaapiVideoMeta, always update the reference to the correct surface
in gst_vaapi_video_buffer_pool_acquire_buffer().
When baseline-as-constrained is set, the decoder will expose support
for baseline decoding and assume that the baseline content is
constrained-baseline. This can be handy for decoding, in hardware,
streams that it would otherwise not be possible to decode. A lot of
baseline content is in fact constrained.
Also, if downstream requests a specific level, the caps are not
negotiated, because there is no mechanism right now to specify a
custom level in the internal encoder.
When the available caps don't intersect with the allowed caps in the
pipeline, new caps are proposed rather than just expecting to
iterate.
Later, the intersected caps (profile_caps) are fixated in order to
extract the configuration.
FEI encoders are neither actively maintained nor tested, and they use
infrastructure that is changing; FEI is holding back this effort.
It is also necessary to rethink how FEI can be used in GStreamer.
Use the gst_video_aggregator_pad_has_current_buffer API
to check if the current sinkpad has a queued buffer before
attempting to obtain an input buffer from the base plugin.
If the sinkpad does not have a current buffer, then it is
either not producing them yet (e.g. current time < sinkpad
start time) or it has reached EOS.
Previously, we only handled the EOS case.
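A rough sketch of the intended check (only
gst_video_aggregator_pad_has_current_buffer() is the real API; the
surrounding function is hypothetical):

#include <gst/video/gstvideoaggregator.h>

/* Hypothetical helper: skip pads that have no queued buffer instead of
 * asking the base plugin for an input buffer. */
static gboolean
prepare_overlay_pad (GstVideoAggregatorPad * pad)
{
  if (!gst_video_aggregator_pad_has_current_buffer (pad)) {
    /* Either the pad has not produced buffers yet (current time < pad
     * start time) or it has reached EOS; nothing to fetch for it. */
    return TRUE;
  }
  /* ... obtain the input buffer from the base plugin and blend it ... */
  return TRUE;
}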
Example:
gst-launch-1.0 videotestsrc num-buffers=100 \
! vaapipostproc ! vaapioverlay name=overlay \
! vaapisink videotestsrc timestamp-offset=1000000000 \
num-buffers=100 ! video/x-raw,width=160,height=120 \
! overlay.
Recursive functions are elegant but dangerous since they might
overflow the stack. It is better to turn them into a list traversal
if possible, as in this case.
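A minimal illustration of the idea with generic GList code (not the actual
function):

#include <glib.h>

/* Recursive walk: each element adds a stack frame, so a very long list
 * can overflow the stack. */
static void
visit_recursive (GList * node)
{
  if (node == NULL)
    return;
  /* ... process node->data ... */
  visit_recursive (node->next);
}

/* Same traversal as a loop: constant stack usage. */
static void
visit_iterative (GList * list)
{
  GList *node;

  for (node = list; node != NULL; node = node->next) {
    /* ... process node->data ... */
  }
}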
Instead of using a parent structure that has to be derived by API
consumers, this change proposes a simplification: the common GTK
pattern of passing a function pointer and user data, where the user
data is passed as the function's parameter. That user data contains
the state and the function is called to update that state.
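A hedged sketch of that pattern (the names below are illustrative, not the
real gstreamer-vaapi API):

#include <glib.h>

/* Illustrative callback type: the caller's state travels as user_data. */
typedef gboolean (*ItemVisitor) (gpointer item, gpointer user_data);

typedef struct
{
  guint count;                  /* caller-owned state updated by the callback */
} VisitorState;

static gboolean
count_items (gpointer item, gpointer user_data)
{
  VisitorState *state = user_data;

  state->count++;
  return TRUE;                  /* keep iterating */
}

/* The API consumer no longer derives a parent structure; it only passes
 * count_items() and a pointer to its own VisitorState. */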
The current get_profile just returns one possible profile for the encode
element, which is not enough. For example, if we want to support the HEVC
4:4:4 profile, the input of the encode element should be VUYA rather than
the NV12 used for the HEVC main profile. So the command line:
gst-launch-1.0 videotestsrc num-buffers=200 ! capsfilter \
caps=video/x-raw,format=VUYA,width=800,height=600 ! vaapih265enc \
tune=low-power init-qp=30 ! fakesink
cannot work, because vaapih265enc only reports NV12 in its sink caps; we
need to specify the profile explicitly, like:
gst-launch-1.0 videotestsrc num-buffers=200 ! capsfilter \
caps=video/x-raw,format=VUYA,width=800,height=600 ! vaapih265enc \
tune=low-power init-qp=30 ! capsfilter caps=video/x-h265, \
profile=main-444 ! fakesink
The encode element should have the ability to choose the profile based on
the input format automatically. If the input video format is VUYA, the
main-444 profile should be chosen automatically.
We modify get_allowed_profiles of each encode subclass to return an array
of all supported profiles based on downstream's allowed caps, or NULL if
downstream does not specify any valid profile.
If no allowed profiles are found, all profiles belonging to the current
encoder's codec become the candidates.
The function gst_vaapi_encoder_get_surface_attributes collects the surface
attributes for the profile list we just obtained.
So for this case, both NV12 and VUYA should be returned.
TODO: some codecs, like VP9, need to implement the get_profile() function.
A plugin similar to the base compositor element, but which
uses VA-API VPP blend functions to accelerate the
overlay/compositing.
Simple example:
gst-launch-1.0 -vf videotestsrc ! vaapipostproc \
! tee name=testsrc ! queue \
! vaapioverlay sink_1::xpos=300 sink_1::alpha=0.75 \
name=overlay ! vaapisink testsrc. ! queue ! overlay.
We can get all the information about the video format in one shot
when we create the test context for getting the supported formats.
The current way to get the width and height ranges is inefficient,
since it calls the function gst_vaapi_profile_caps_append_encoder()
and creates another temporary context to detect the resolution
information.
Signed-off-by: Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
We now set encode->allowed_sinkpad_caps to NULL if we fail to get the
surface formats. This causes two problems:
1. gst_video_encoder_proxy_getcaps uses NULL as its caps parameter,
which changes its behavior: it will use the encode element's sinkpad
template rather than empty caps to do the clipping. So even if we fail
to set allowed_sinkpad_caps, gst_video_encoder_proxy_getcaps can still
return valid caps.
2. We should only set allowed_sinkpad_caps once. The NULL pointer makes
the ensure_allowed_sinkpad_caps function run again and again.
Don't reset the can_dmabuf field. This restores the
close/reset logic that existed prior to commit
ca2942176b with regard to
dmabuf support.
Plugins only call gst_vaapi_plugin_base_set_srcpad_can_dmabuf
once during startup, but may need to reset the other private
fields multiple times during negotiation. Thus, can_dmabuf
should be exempt from the resets.
Fixes #208
GstVaapiMiniObject and GstVaapiObject are deprecated.
This is the first step to remove them, by porting GstVaapiSurface as
a GstMiniObject descendant.
Signed-off-by: Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
GstVaapiMiniObject and GstVaapiObject are deprecated. This is the
first step to remove them, by porting GstVaapiImage as a
GstMiniObject.
Signed-off-by: Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
The base plugin public API function implementations determine
which pad should be passed to the internal helper functions.
Currently, only the base plugin static sinkpad and static
srcpad are supported/used. However, this change enables future
API functions to be added that can accept a pad (i.e. request pad)
from an element subclass (e.g. a GstVideoAggregator subclass).
Define a struct (GstVaapiPadPrivate) to encapsulate the
pad-specific data (i.e. buffer pool, allocator, info,
caps, etc.).
Add an interface to retrieve the data struct for a given
pad.
Finally, update the base plugin to use the data struct
throughout the implementation.
This will enable us to easily extend the base plugin in the
future to allow for N-to-1 pad subclasses (e.g. overlay/
composite).
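A rough sketch of the kind of per-pad data the struct gathers (the actual
GstVaapiPadPrivate fields may differ):

#include <gst/gst.h>
#include <gst/video/video.h>

/* Sketch only: pad-specific data kept by the base plugin. */
typedef struct _GstVaapiPadPrivate GstVaapiPadPrivate;
struct _GstVaapiPadPrivate
{
  GstPad *pad;                  /* static or request pad this data belongs to */
  GstBufferPool *buffer_pool;   /* pad-local buffer pool */
  GstAllocator *allocator;      /* pad-local allocator */
  GstVideoInfo info;            /* negotiated video info for this pad */
  GstCaps *caps;                /* negotiated caps for this pad */
};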
We do not need to maintain a standalone list of the decoder's output
template raw formats; it is easy to make mistakes there (for example,
AYVU in that list is wrong, it should be VUYA).
Just use GST_VAAPI_FORMATS_ALL to replace the raw formats list in the
src template.
We need to provide more precise template caps for postproc's src
and sink pads. GST_VIDEO_FORMATS_ALL makes all video formats
available, which is really superfluous.
GST_VAAPI_FORMATS_ALL collects all the formats declared in video-format
as a caps template string, and makes them available in caps with the
memory:VASurface feature.
Fixes: #199
When translating navigation x,y coordinates for
video-direction, it is necessary to subtract 1
when using the video dimensions to compute the
new x,y coordinates. That is, a 100x200 image
should map coordinates in x=[0-99],y=[0-199].
This issue was found with unit tests provided
in !182.
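For illustration, a horizontal flip mapped with the width (the helper name
is hypothetical):

#include <glib.h>

/* With width = 100, valid x coordinates are 0..99, so the mirrored
 * coordinate must be computed with (width - 1), not width. */
static gdouble
mirror_x (gdouble x, guint width)
{
  return (gdouble) (width - 1) - x;
}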
Currently the parameter of the skin-tone-enhancement filter is forced
to zero. In fact it could be set to a different value by the user.
So create a new property named "skin-tone-enhancement-level"
to accept the user-defined parameter value.
At the same time, skin-tone-enhancement is marked as deprecated.
When skin-tone-enhancement-level is set, skin-tone-enhancement
will be ignored.
The expression "len >= 0" is always true since "len"
is an unsigned type. And it is clear that the writer's
intention was not to write "len > 0", since we handle
len == 0 in the ensuing "if (len < 3)" conditional
block.
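A minimal illustration (not the actual code):

#include <glib.h>

static void
parse_payload (const guint8 * data, guint len)
{
  if (len >= 0) {               /* always true: len is unsigned */
    if (len < 3)
      return;                   /* len == 0 is already handled here */
    /* ... parse data ... */
  }
}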
This patch makes use of GST_PARAM_USER_SHIFT to define the internal
param in encoders that decides which parameters to expose. Thus
gstreamer-vaapi will not interfere with any change in GStreamer in the
future.
Also, the internal symbol was changed to
GST_VAAPI_PARAM_ENCODER_EXPOSURE to keep the namespacing.
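A hedged sketch of such a definition (the exact bit offset used by
gstreamer-vaapi is an assumption here):

#include <gst/gst.h>

/* Assumption: place the private flag in the user range so it can never
 * collide with GParamFlags bits that GStreamer adds in the future. */
#define GST_VAAPI_PARAM_ENCODER_EXPOSURE (1 << (GST_PARAM_USER_SHIFT + 0))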
The Mesa Gallium driver is poorly tested currently, leading to bad user
experience for AMD users. The driver can be added back to the white list at
runtime using the GST_VAAPI_ALL_DRIVERS environment variable.
Adding crop meta x,y to w,h only compensates for left,top
cropping. But we also need to compensate for right,bottom
cropping.
The video meta contains the appropriate w,h (uncropped)
values, so use it instead.
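For illustration, fetching the uncropped dimensions from the video meta (a
sketch, not the actual patch):

#include <gst/video/video.h>

/* Sketch: the uncropped width/height come from GstVideoMeta, so both
 * left/top and right/bottom cropping are accounted for. */
static void
get_uncropped_size (GstBuffer * buf, guint * width, guint * height)
{
  GstVideoMeta *vmeta = gst_buffer_get_video_meta (buf);
  GstVideoCropMeta *crop = gst_buffer_get_video_crop_meta (buf);

  if (vmeta != NULL) {
    *width = vmeta->width;
    *height = vmeta->height;
  } else if (crop != NULL) {
    /* old behavior: only compensates for left/top cropping */
    *width = crop->x + crop->width;
    *height = crop->y + crop->height;
  }
}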
Test:
gst-launch-1.0 -vf videotestsrc num-buffers=500 \
! videocrop top=50 bottom=30 left=40 right=20 \
! vaapipostproc ! vaapisink & \
gst-launch-1.0 -vf videotestsrc num-buffers=500 \
! videocrop top=50 bottom=30 left=40 right=20 \
! vaapipostproc ! identity drop-allocation=1 \
! vaapisink
Mapping a pointer event needs to consider both size and
video-direction operations together, not just one or the other.
This fixes an issue where x,y were not being mapped correctly
for 90r, 90l, ur-ll and ul-lr video-direction. In these directions,
the WxH are swapped and GST_VAAPI_POSTPROC_FLAG_SIZE is set. Thus,
the first condition in the pointer event handling was entered and
the x,y scale factors were incorrectly computed due to the srcpad
WxH swap.
This also fixes all cases where both video-direction and scaling
are enabled at the same time.
Test that all pointer events map appropriately:
for i in `seq 0 7`
do
GST_DEBUG=vaapipostproc:5 gst-launch-1.0 -vf videotestsrc \
! vaapipostproc video-direction=${i} width=300 \
! vaapisink
GST_DEBUG=vaapipostproc:5 gst-launch-1.0 -vf videotestsrc \
! vaapipostproc video-direction=${i} width=300 height=200 \
! vaapisink
GST_DEBUG=vaapipostproc:5 gst-launch-1.0 -vf videotestsrc \
! vaapipostproc video-direction=${i} height=200 \
! vaapisink
GST_DEBUG=vaapipostproc:5 gst-launch-1.0 -vf videotestsrc \
! vaapipostproc video-direction=${i} \
! vaapisink
done
Advertise to upstream that vaapipostproc can handle
crop meta.
When used in conjunction with videocrop plugin, the
videocrop plugin will only do in-place transform on the
crop meta when vaapipostproc advertises the ability to
handle it. This allows vaapipostproc to apply the crop
meta on the output buffer using vaapi acceleration.
Without this advertisement, the videocrop plugin will
crop the output buffer directly via software methods,
which is not what we desire.
vaapipostproc will not apply the crop meta if downstream
advertises crop meta handling; vaapipostproc will just
forward the crop meta to downstream. If crop meta is
not advertised by downstream, then vaapipostproc will
apply the crop meta.
Examples:
1. vaapipostproc will forward crop meta to vaapisink
gst-launch-1.0 videotestsrc \
! videocrop left=10 \
! vaapipostproc \
! vaapisink
2. vaapipostproc will do the cropping
gst-launch-1.0 videotestsrc \
! videocrop left=10 \
! vaapipostproc \
! identity drop-allocation=1 \
! vaapisink
The encoder is a true GstObject now and all the properties use GObject's
property mechanism. Add helper functions to handle the properties between
the encode and encoder classes.
The basic idea is mapping the same property between encoder and encode. All
the encoder's properties will have the same name and the same type in encode.
The set/get property functions just forward the property setting/getting to
the encoder using the same property name and value. Because the encoder is
created on demand, we need to cache the property settings in encode.
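A hedged sketch of the forwarding idea (the cache structure shown here is an
assumption, not the actual member):

#include <gst/gst.h>

/* Sketch: forward a property to the internal encoder by name, or cache it
 * until the encoder is created. */
static void
forward_or_cache_property (GObject * internal_encoder,
    GstStructure * prop_cache, GParamSpec * pspec, const GValue * value)
{
  if (internal_encoder != NULL)
    g_object_set_property (internal_encoder, pspec->name, value);
  else
    gst_structure_set_value (prop_cache, pspec->name, value);
}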
While ensuring the allowed sink pad caps, the filter attributes set
the frame size restriction, but it is not ensured, at that moment,
that the filter is already instantiated.
In order to silence the GLib logs, this patch only calls
gst_vaapi_filter_append_caps() if the filter is instantiated.
Since commit 18031dc6 the surface's parent context is not assigned
because of circular references. Since then (2013), there have been no
issues with subpictures attached to a context, currently the only users
of this API.
This patch cleans up all the code related to the unused surface's
parent context.
This code doesn't define the profile used by the internal encoder, but
it is used to "predict" which one is going to be used and to get the
caps restrictions.
Before, the profile was predicted by checking the downstream caps, but
sometimes they are not defined, resulting in an unknown profile. In
order to improve this situation, the encode element asks the internal
encoder if it already has one. If so, it is used.
To query the internal encoder's profile a new accessor function was
added: gst_vaapi_encoder_get_profile()
Now that vaapipostproc can handle video-direction, it should also
handle the image-orientation event from upstream when the
video-direction property is set to auto.
It is required to know if postproc is capable of changing the video
direction before fixating the source caps.
In order to do that, it's required to know if there's a functional VPP,
but that's checked in the create() vmethod, which occurs after caps
fixation.
This patch checks for a functional VPP when fixating caps and, if so,
checks for the enabled filters and then does the caps fixation.
If the video direction is unsupported by the driver, an element
warning is posted on the bus to notify the application.
gst_vaapi_enum_type_get_nick() was added to the library so it can
be used elsewhere. It retrieves the nick from an enum GType.
The main reason to demote the message's level is that it is not an
error: it's a possible outcome of the trial and there's a code path
that handles it.
Secondly, it's very annoying when using the Gallium driver for Radeon.
Otherwise the queue will swallow all the decoder's available surfaces,
reaching a deadlock.
This setting might impact the bin's performance, but it's a trade-off.
When the code path pushes buffers downstream because no surface is
available in the decoder context, and that push fails, the code bails
out with a fatal error.
That behavior is wrong, since it shouldn't be fatal. The use case is
when the video stream is disabled.
This patch just ignores the errors in this situation and demotes the
level of a log message.
For surfaces with a different chroma type, it is preferable to
initialize a format whose chroma type is the same as the surface's
chroma type, instead of using a fixed NV12.
When creating the VA-API surface, it is better to use the chroma type
obtained from the JPEG file instead of a fixed 4:2:0 format. The
correct chroma type can be determined from the
horizontal_factor/vertical_factor flags obtained from jpegparse.
If the allocation caps and negotiated caps are the same,
then ensure any previously negotiated video info is also
removed. This can occur when multi-resolution video
decoding returns to its original resolution.
Fixes #170
This fixes a deadlock in gst_vaapiencode_change_state, caused by the
srcpad's chain function being locked while waiting for available buffers.
Since the coded buffers in codedbuf_queue become available only after the
sinkpad consumes the encoded frames, a Paused -> Ready state change leads
to a deadlock: coded buffers are never consumed and marked free, hence
gst_vaapiencode_handle_frame waits for available buffers while holding
the srcpad's stream_lock.
Adds VPP mirroring support to vaapipostproc. Use the
video-direction property. Valid values are identity,
horiz or vert. Default is identity (no mirror).
Closes #89
v2: Use GstVideoOrientationMethod enum
v3: Don't warn for VA_MIRROR_NONE.
Use GST_TYPE_VIDEO_ORIENTATION_METHOD type.
v4: Query VAAPI caps when setting mirror value
instead of during per-frame processing.
v5: Return TRUE in warning cases when setting mirror value.
https://bugzilla.gnome.org/show_bug.cgi?id=748184 has resurfaced
with commit 3e992d8a
Since gst_vaapi_find_preferred_caps_feature() returns a color format
from caps negotiation, different from the default one (NV12), the
postproc enables the color transformation. But when the
GL_TEXTURE_UPLOAD feature is negotiated, no color transformation shall
be done.
Nonetheless, with commit 3e992d8a the requested format changes
first, because there's no video sink yet, so ANY caps are
negotiated; but later, when there's a video sink and a caps
renegotiation, GL_TEXTURE_UPLOAD is negotiated while the color
format conversion is still ongoing. It is required to reset that
conversion.
This patch forces the default color format when GL_TEXTURE_UPLOAD is
selected as preferred, thus avoiding the color conversion.
Fixes: #157
When downstream has ANY caps, the raw video feature will
be used. In this situation, the preferred format should be chosen
from the caps that contain the "video/x-raw" feature instead of from
the first allowed caps.
Fixes #142
The vaapi internal encoder's property ids are negative, thus they are
different from GObject's property ids.
gst_vaapi_encoder_set_property() should map to the internal encoder
property id assigned in gst_vaapiencode_default_set_property().
When downstream has ANY caps, vaapi should not shovel VA-API featured
buffers, but rather plain raw video, always assuming the worst-case
scenario (downstream cannot handle featured video memory, only raw
system memory buffers).
This patch queries the peer caps without any filter, to know if
downstream just asks for ANY caps; if so, it jumps to the color space
checking, otherwise it does the caps intersection and continues with
the feature selection algorithm.
Fixes: #139
We prefer to use the same format for image and surface in the gst
vaapi allocator. The old way may choose different formats for image
and surface; for example, an RGBA image may get an NV12 surface, so we
need to do a format conversion when we put/get the image to/from the
surface. Some drivers, such as iHD, cannot support such a conversion
and always cause a data flow error. There may also be some performance
cost from format conversion when putting/getting images.
So we prefer to use the same format for image and surface in the
allocator. If the surface cannot support that format, we then fall
back to finding the best one as the surface format.
Co-authored-by: Víctor Jáquez <vjaquez@igalia.com>
[wl_shell] is officially [deprecated], so provide support for the
XDG-shell protocol, which should be supported by all desktop-like
compositors. (In case they don't, we can of course fall back to
wl_shell.)
Note that the XML file is directly provided by the `wayland-protocols`
dependency and is used to generate the protocol marshalling code.
[wl_shell]: https://people.freedesktop.org/~whot/wayland-doxygen/wayland/Client/group__iface__wl__shell.html
[deprecated]: 698dde1958
Fix black frames when deinterlacing and playing with glimagesink:
gst-launch-1.0 filesrc location=test.264 ! h264parse ! vaapih264dec \
! vaapipostproc deinterlace-mode=1 deinterlace-method=1 ! glimagesink
gst_vaapi_plugin_base_get_allowed_raw_caps is used for both the sink
pad and the src pad, which causes some bugs. For the sink pad we need
to verify vaPutImage(), while for the src pad we need to verify
vaGetImage().
For vaapidecoderXXX kinds of plugins, the case is more complex: we need
to verify whether the decoded result (in some surface, NV12 format most
of the time) can be vaGetImage()'d to some raw image format. Add more
checks to fix all these problems.
https://gitlab.freedesktop.org/gstreamer/gstreamer-vaapi/issues/123
Signed-off-by: He Junyan <junyan.he@hotmail.com>
Add the 4:4:4 10-bit YUV format Y410, which can be used to decode
main-444 10-bit streams. Currently, this feature is only
supported by media-driver on Icelake.
The extract_allowed_surface_formats function just checks whether
we can support some kind of surface/image format pair. We just
need to create a surface, create an image with the same video format
and vaPutImage() from the image to the surface. If all these operations
succeed, that video format is supported.
The old manner does not work for some video formats. For example,
an RGBA-like format would create an NV12 surface and an RGBA image,
and the vaPutImage() would fail because the formats are not the same.
So the RGBA format would be reported as unsupported even though it
actually is supported.
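A hedged sketch of that probe using plain libva calls (attribute setup,
cleanup and error details omitted):

#include <va/va.h>

/* Sketch: a video format is usable when a surface and an image of the same
 * format can be created and vaPutImage() between them succeeds. */
static int
probe_surface_image_format (VADisplay dpy, unsigned int rt_format,
    VAImageFormat * img_fmt, unsigned int width, unsigned int height)
{
  VASurfaceID surface;
  VAImage image;

  if (vaCreateSurfaces (dpy, rt_format, width, height, &surface, 1, NULL, 0)
      != VA_STATUS_SUCCESS)
    return 0;
  if (vaCreateImage (dpy, img_fmt, width, height, &image) != VA_STATUS_SUCCESS)
    return 0;
  return vaPutImage (dpy, surface, image.image_id,
      0, 0, width, height, 0, 0, width, height) == VA_STATUS_SUCCESS;
}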
gst_vaapi_plugin_base_close() removed the raw caps that are used indirectly
in gst_vaapipostproc_transform_caps(). The usage is already protected by
the mutex.
This is needed when the pipeline is stopped during startup.
gl_platform_type in gst_vaapi_get_display_type_from_gl_env generates an
unused-variable warning and may break the build when -Werror is enabled.
Several functions, like gst_vaapi_display_egl_new_with_native_display,
produce no-prototype warnings and link errors when GL is enabled but EGL
is disabled. Fix all these warnings and link errors.
https://bugzilla.gnome.org/show_bug.cgi?id=797358
Signed-off-by: Junyan He <junyan.he@intel.com>
Add the 4:2:2 10-bit YUV format Y210, which can be used to decode
main-10-422 10-bit streams. Currently, this feature is only
supported by media-driver on Icelake.
https://bugzilla.gnome.org/show_bug.cgi?id=797264
The current vaapipostproc calls the driver's video processing
pipeline for deinterlacing only if it is advanced deinterlacing.
Modify it so that it always tries the driver's video processing
pipeline for deinterlacing, and falls back to the software method of
appending picture-structure metadata only if the driver's method
fails.
https://bugzilla.gnome.org/show_bug.cgi?id=797095
Removed the macros GST_VAAPI_OBJECT_DISPLAY() and
GST_VAAPI_OBJECT_ID() exposed to plugins, keeping them only for
internal library usage.
The purpose is readability.
https://bugzilla.gnome.org/show_bug.cgi?id=797139
If the unit could not be parsed, just skip this NAL and keep parsing
what is left in the adapter. We need to flush the broken unit in the
decoder-specific parser because the generic code does not know about
unit boundaries. This increases error resilience.
Before this, the broken unit would stay in the adapter and EOS would be
returned, which stopped the streaming. Just removing the EOS would have
led to the adapter size growing indefinitely.
https://bugzilla.gnome.org/show_bug.cgi?id=796863
On systems with an Nvidia card, this error is output each time
the registry is rebuilt, which happens pretty often when
using gst-build as a development environment.
https://bugzilla.gnome.org/show_bug.cgi?id=796663
gst_vaapi_display_egl_new_with_native_display() has been broken since
it wasn't being used.
Currently it's needed to call this API to create a display, providing
the EGL display, so duplicated calls to the native display
(e.g. eglTerminate) can be avoided.
Signed-off-by: Victor Jaquez <vjaquez@igalia.com>
https://bugzilla.gnome.org/show_bug.cgi?id=795391
The commit 67e33d3de2 ("vaapiencode: h264:
find best profile in those available") changed the code to pick a profile
that is actually supported by the hardware. Unfortunately it dropped the
downstream constraints. This can cause negotiation failures under certain
circumstances.
The fix is split in two cases:
1\ the available VA-API caps don't intersect with the pipeline's allowed
caps:
* The best allowed profile (pipeline's caps) is set as the encoding
target profile (it will be adjusted later by the available profiles
and properties)
2\ the available VA-API caps do intersect with the pipeline's allowed
caps:
* The intersected caps are fixated, and their profile is set as the
encoding target profile. In this case it is not the best profile,
but the minimal one (if VA-API reports the profiles in order).
Setting the minimal profile of the intersected caps is better for
compatibility.
This patch fixes other tests related to caps negotiation; for
example, it handles the baseline profile, even when VA only supports
constrained-baseline.
Original-patch-by: Michael Olbrich <m.olbrich@pengutronix.de>
https://bugzilla.gnome.org/show_bug.cgi?id=794306
Instead of using our own context handling for looking for GstGL
parameters (display, context and other context), this patch changes
the logic to use the utility function offered by GstGL.
https://bugzilla.gnome.org/show_bug.cgi?id=793643
The parameters of gst_gl_ensure_element_data() must not be
local variables, since they are going to be checked to see if they're
set in gst_element_set_context() inside the API.
This is basically a revert of commit 3d56306c
https://bugzilla.gnome.org/show_bug.cgi?id=793643
Through the "gst.vaapi.app.Display" context, users can set their own
VADisplay and the native display of their backend.
So far we supported only X11 displays; from now on we also support
Wayland displays.
Attributes:
- wl-display: pointer to a struct wl_display
https://bugzilla.gnome.org/show_bug.cgi?id=705821
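A hedged sketch of how an application might provide those displays through
the "gst.vaapi.app.Display" context (the "va-display" field name is an
assumption; "wl-display" comes from the attribute list above):

#include <gst/gst.h>

static void
set_app_display_context (GstElement * pipeline, gpointer va_display,
    gpointer wl_display)
{
  GstContext *context = gst_context_new ("gst.vaapi.app.Display", TRUE);
  GstStructure *s = gst_context_writable_structure (context);

  gst_structure_set (s,
      "va-display", G_TYPE_POINTER, va_display,
      "wl-display", G_TYPE_POINTER, wl_display, NULL);

  gst_element_set_context (pipeline, context);
  gst_context_unref (context);
}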
Instead of looking for the best profile in the profiles allowed by
downstream, the encoder should look for the best profile in those
available in VA-API.
https://bugzilla.gnome.org/show_bug.cgi?id=794306
Instead of copying the metadata in the prepare_output_buffer() vmethod,
it is done in append_output_buffer_metadata(), so deinterlaced
buffers can also have the proper metas.
GstVideoCropMeta is now copied internally and the decision is made via
the transform_meta() vmethod.
A new internal method, copy_metadata(), was added to handle the VPP
transformation where non-GstVideoVaapiMeta metas were lost.
When importing buffers into a VA-based buffer, it is required to copy
the metas from the original buffer, otherwise information such as
GstVideoRegionOfInterestMeta will be lost.
https://bugzilla.gnome.org/show_bug.cgi?id=768248
Generate system allocated output buffers when downstream doesn't
support GstVideoMeta.
The VA buffer content is copied to the new output buffer, and it
replaces the VA buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=785054
When downstream can't handle GstVideoMeta it is required to send
system-allocated buffers.
The system-allocated buffers are produced in the prepare_output_buffer()
vmethod if downstream can't handle GstVideoMeta.
In the transform() vmethod, if the buffer is a system-allocated buffer,
a VA buffer is instantiated and replaces the output buffer. Later
the VA buffer is copied to the system-allocated buffer, which then
replaces the output buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=785054
This patch adds the member copy_output_frame and sets it to TRUE when
downstream didn't request the GstVideoMeta API, the caps are raw
and the internal allocator is the VA-API one.
https://bugzilla.gnome.org/show_bug.cgi?id=785054
This function will inform the element whether it shall copy the buffer
generated by the pool to a system-allocated buffer before pushing it
downstream.
https://bugzilla.gnome.org/show_bug.cgi?id=785054