We do not need to maintain a standalone list of raw formats for the
decoder's output (src pad) template; such a list is easy to get wrong
(for example, AYUV in that list is wrong and should be VUYA).
Just use GST_VAAPI_FORMATS_ALL to replace the raw format list in the
src template.
We need to provide more precise template caps for postproc's src
and sink pads. GST_VIDEO_FORMATS_ALL makes all video formats
available, which is really superfluous.
Don't try to decode until the first I-frame of the currently active
sequence is received. Without this, the i965 H265 decoder does not
merely show artifacts, it crashes.
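A minimal sketch of the guard, with illustrative field and status
names (the actual check lives in the H265 slice handling):

  /* drop everything until the first I frame of the active sequence */
  if (!priv->got_i_frame && !GST_H265_IS_I_SLICE (slice_hdr))
    return GST_VAAPI_DECODER_STATUS_DROP_FRAME;  /* status name illustrative */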
Fixes: #98
GST_VAAPI_FORMATS_ALL collects all the formats declared in
video-format as a caps template string, and makes them available in
caps with the memory:VASurface feature.
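As a sketch of how such a string can feed a static src pad template,
assuming the macro expands to a "{ FMT1, FMT2, ... }" format-list
string (the exact template in the element may differ):

  static GstStaticPadTemplate src_template = GST_STATIC_PAD_TEMPLATE ("src",
      GST_PAD_SRC, GST_PAD_ALWAYS,
      GST_STATIC_CAPS (
          GST_VIDEO_CAPS_MAKE_WITH_FEATURES ("memory:VASurface",
              GST_VAAPI_FORMATS_ALL) ";"
          GST_VIDEO_CAPS_MAKE (GST_VAAPI_FORMATS_ALL)));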
Fixes: #199
When translating navigation x,y coordinates for
video-direction, it is necessary to subtract 1
when using the video dimensions to compute the
new x,y coordinates. That is, a 100x200 image
should map coordinates to the ranges x=[0,99], y=[0,199].
This issue was found with unit tests provided
in !182.
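A minimal sketch of the off-by-one for the horizontal-flip case (names
are illustrative):

  /* valid x is [0, width - 1], so mirror against (width - 1), not width */
  static gint
  mirror_x (gint x, gint width)
  {
    return width - 1 - x;   /* width - x would yield "width" for x == 0 */
  }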
We should halt meson configuration if there is no render API
installed (either DRM, Wayland or X11).
That behavior already existed in autotools but was missing in meson.
This patch brings it back.
Fixes: #196
The code is essentially the same for getting all op default
values. Thus, use a macro to help minimize code duplication
and [hopefully] encourage using the same mechanism for all
default getters.
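A minimal sketch of the idea, with hypothetical names (find_op_pspec()
is not a real helper):

  /* every default getter is the same lookup + default-value fetch */
  #define GET_OP_DEFAULT_VALUE(type, op_id) \
    (g_value_get_##type (g_param_spec_get_default_value ( \
        find_op_pspec (op_id))))

  /* e.g.: return GET_OP_DEFAULT_VALUE (float, OP_DENOISE); */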
Currently the parameter of the skin-tone-enhancement filter is forced
to zero. In fact, it could be set to a different value by the user.
So create a new property named "skin-tone-enhancement-level" to accept
a user-defined parameter value.
At the same time, skin-tone-enhancement is marked as deprecated.
When skin-tone-enhancement-level is set, skin-tone-enhancement
will be ignored.
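A minimal sketch of the stated precedence, with hypothetical field and
helper names:

  if (vpp->skin_tone_level_is_set)        /* new integer property */
    apply_skin_tone (vpp, vpp->skin_tone_level);
  else if (vpp->skin_tone_enhancement)    /* deprecated boolean */
    apply_skin_tone (vpp, 0);             /* old fixed value */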
g_return_val_if_fail() documentation says:
If expr evaluates to FALSE, the current function should be
considered to have undefined behaviour (a programmer error).
The only correct solution to such an error is to change the
module that is calling the current function, so that it avoids
this incorrect call.
So it was misused in a couple of places in the internal H264 and H265
decoders. This patch changes those to plain conditionals.
A couple of code-style fixes are also included.
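Illustrative only (the condition and status are placeholders for the
actual checks in the decoders):

  /* before: broken bitstream input treated as a programmer error */
  g_return_val_if_fail (priv->current_picture,
      GST_VAAPI_DECODER_STATUS_ERROR_UNKNOWN);

  /* after: a plain conditional, since bad input is not a caller bug */
  if (!priv->current_picture)
    return GST_VAAPI_DECODER_STATUS_ERROR_UNKNOWN;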
The expression "len >= 0" is always true since "len"
is an unsigned type. And it is clear that the writer's
intention was not to write "len > 0", since we handle
len == 0 in the ensuing "if (len < 3)" conditional
block.
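A simplified sketch of the pattern (do_parse() is hypothetical):

  static gboolean
  parse_chunk (const guint8 * data, guint len)
  {
    if (len >= 0) {      /* always TRUE for unsigned; dropped by this patch */
      if (len < 3)       /* len == 0 is already handled here */
        return FALSE;
    }
    return do_parse (data, len);
  }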
Found by static analysis. encoder->mb_width * encoder->mb_height
is evaluated using 32-bit arithmetic before being widened. Thus, cast
at least one of these to guint64 to avoid overflow.
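For instance (illustrative only):

  /* widen one operand so the multiplication itself is done in 64 bits */
  guint64 num_mbs = (guint64) encoder->mb_width * encoder->mb_height;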
The YUV formats have no ambiguity for drivers, so we can add them all.
Some old drivers (i965) do not implement the full get/put image
functions but can use the derive image functions for YUV formats. Such
a driver does not report those formats correctly in the image query,
yet it can still derive a YUV image from a surface. Dynamically
mapping the YUV formats would break that usage.
Adding more YUV format mappings has no side effect, so for conformance
with legacy drivers we add all YUV format mappings statically and map
the RGB formats dynamically.
Fixes: #189
Fixes: #190
Multiple different scenarios could break the display thread creation
and end up blocked waiting for the thread to be created. Fix them all
by correctly waiting for a new boolean flag to become valid.
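A minimal sketch of the pattern, with hypothetical field and function
names:

  g_mutex_lock (&priv->lock);
  priv->thread = g_thread_new ("vaapi-display", display_thread_func, self);
  while (!priv->thread_ready && !priv->thread_failed)
    g_cond_wait (&priv->cond, &priv->lock);   /* signalled by the thread */
  g_mutex_unlock (&priv->lock);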
Some streams contain corrupt data that introduces unknown NAL unit
types, and there are also NAL types we simply do not want to handle.
The old behaviour set a decoder error when encountering such units,
which caused a latent crash: the decoder may successfully decode the
picture and insert it into the DPB, but bogus NAL units after the AU
trigger the post-unit error and get that frame dropped, so the later
output of the picture still wants to ref that frame and crashes.
There is no need to set a decoder error when we cannot recognize or
handle a NAL unit; just skip it and continue.
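A sketch of the change, with illustrative names:

  switch (nalu_type) {
    /* ... known NAL unit types handled as before ... */
    default:
      /* previously: an error status that poisoned the whole frame */
      GST_WARNING ("unsupported NAL unit type %d, skipping", nalu_type);
      status = GST_VAAPI_DECODER_STATUS_SUCCESS;
      break;
  }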
Fixes: #191
This patch makes use of GST_PARAM_USER_SHIFT to define the internal
param flag used by the encoders to decide which parameters to expose.
Thus gstreamer-vaapi will not clash with any future flag additions in
GStreamer.
Also, the internal symbol was changed to
GST_VAAPI_PARAM_ENCODER_EXPOSURE to keep the namespacing.
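A minimal sketch of the idea (the actual definition may differ):

  /* internal flags start after the bits GStreamer reserves for itself,
   * so future GST_PARAM_* additions cannot collide with ours */
  #define GST_VAAPI_PARAM_ENCODER_EXPOSURE (1 << GST_PARAM_USER_SHIFT)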
The command line:
gst-launch-1.0 filesrc location=some_name.mjpeg ! jpegparse !
vaapijpegdec ! videoconvert ! video/x-raw,format=I420 ! vaapisink
will crash on the i965 driver because of a missing pointer check.
We now generate the video format map between GST format and VA format
dynamically based on the image format returned by vaQueryImageFormats.
The i965 driver does not report image formats for the 444P and Y800
fourccs, while the jpeg decoder context uses them in
VASurfaceAttribPixelFormat. We cannot recognize these formats and so
pass a NULL pointer to gst_vaapi_surface_new_from_formats.
We need to add a pointer check here and let the fallback logic handle
this case correctly.
Other drivers work well.
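A minimal sketch of the added guard, with hypothetical variable names:

  if (!formats)       /* fourcc not found in the dynamically built map */
    goto fallback;    /* the existing fallback picks a usable format */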
Improve the mapping between VA formats and GST formats. The new map
is generated dynamically, based on the image formats the VA driver
reports when queried. Also consider the ambiguity of RGB color
formats in LSB mode.
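A minimal sketch of the standard VA-API query the map is built from
(error handling omitted; "va_display" is assumed to be a valid
VADisplay):

  int n = vaMaxNumImageFormats (va_display);
  VAImageFormat *va_formats = g_new0 (VAImageFormat, n);
  if (vaQueryImageFormats (va_display, va_formats, &n) == VA_STATUS_SUCCESS) {
    /* build GstVideoFormat <-> VAImageFormat entries from va_formats[0..n-1] */
  }
  g_free (va_formats);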
The Mesa Gallium driver is poorly tested currently, leading to bad user
experience for AMD users. The driver can be added back to the white list at
runtime using the GST_VAAPI_ALL_DRIVERS environment variable.
Adding crop meta x,y to w,h only compensates for left,top
cropping. But we also need to compensate for right,bottom
cropping.
The video meta contains the appropriate w,h (uncropped)
values, so use it instead.
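A minimal sketch, with hypothetical variable names:

  GstVideoMeta *vmeta = gst_buffer_get_video_meta (buf);
  GstVideoCropMeta *cmeta = gst_buffer_get_video_crop_meta (buf);
  if (vmeta && cmeta) {
    width  = vmeta->width;   /* full frame, not cmeta->x + cmeta->width */
    height = vmeta->height;  /* full frame, not cmeta->y + cmeta->height */
  }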
Test:
gst-launch-1.0 -vf videotestsrc num-buffers=500 \
! videocrop top=50 bottom=30 left=40 right=20 \
! vaapipostproc ! vaapisink & \
gst-launch-1.0 -vf videotestsrc num-buffers=500 \
! videocrop top=50 bottom=30 left=40 right=20 \
! vaapipostproc ! identity drop-allocation=1 \
! vaapisink
Mapping a pointer event needs to consider both size and
video-direction operations together, not just one or the other.
This fixes an issue where x,y were not being mapped correctly
for 90r, 90l, ur-ll and ul-lr video-direction. In these directions,
the WxH are swapped and GST_VAAPI_POSTPROC_FLAG_SIZE is set. Thus,
the first condition in the pointer event handling was entered and
the x,y scale factors were incorrectly computed due to the srcpad
WxH swap.
This also fixes all cases where both video-direction and scaling
are enabled at the same time.
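A sketch of the idea for the 90r case, with hypothetical names and
ignoring rounding details:

  /* undo the rotation in the srcpad frame first, then undo the scaling
   * against the un-rotated frame size, which is srcpad HxW */
  x_unrot = y;
  y_unrot = srcpad_width - 1 - x;
  x = x_unrot * sinkpad_width  / srcpad_height;
  y = y_unrot * sinkpad_height / srcpad_width;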
Test that all pointer events map appropriately:
for i in `seq 0 7`
do
GST_DEBUG=vaapipostproc:5 gst-launch-1.0 -vf videotestsrc \
! vaapipostproc video-direction=${i} width=300 \
! vaapisink
GST_DEBUG=vaapipostproc:5 gst-launch-1.0 -vf videotestsrc \
! vaapipostproc video-direction=${i} width=300 height=200 \
! vaapisink
GST_DEBUG=vaapipostproc:5 gst-launch-1.0 -vf videotestsrc \
! vaapipostproc video-direction=${i} height=200 \
! vaapisink
GST_DEBUG=vaapipostproc:5 gst-launch-1.0 -vf videotestsrc \
! vaapipostproc video-direction=${i} \
! vaapisink
done
Advertise to upstream that vaapipostproc can handle
crop meta.
When used in conjunction with the videocrop plugin, videocrop will
only do an in-place transform on the crop meta when vaapipostproc
advertises the ability to handle it. This allows vaapipostproc to
apply the crop meta on the output buffer using
meta on the output buffer using vaapi acceleration.
Without this advertisement, the videocrop plugin will
crop the output buffer directly via software methods,
which is not what we desire.
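As a sketch, this is the standard way a transform element advertises
the meta to upstream in its propose_allocation() vmethod (illustrative,
not the exact patch):

  gst_query_add_allocation_meta (query, GST_VIDEO_CROP_META_API_TYPE, NULL);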
vaapipostproc will not apply the crop meta if downstream
advertises crop meta handling; vaapipostproc will just
forward the crop meta to downstream. If crop meta is
not advertised by downstream, then vaapipostproc will
apply the crop meta.
Examples:
1. vaapipostproc will forward crop meta to vaapisink
gst-launch-1.0 videotestsrc \
! videocrop left=10 \
! vaapipostproc \
! vaapisink
2. vaapipostproc will do the cropping
gst-launch-1.0 videotestsrc \
! videocrop left=10 \
! vaapipostproc \
! identity drop-allocation=1 \
! vaapisink