Until now, the encoder ignored the profile in the src caps and chose one
according to the given parameters. But the encoder must honor the profile
specified in the src caps.
This patch does that, and if the encoder still needs to choose the profile,
it will do it by following these rules:
1. If the given parameters are not compatible with the given profile, the
   encoder will bail out with an error.
2. The encoder will choose the highest profile indicated in the
   src caps (see the sketch below).
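As an illustration of rule 2, a minimal sketch (all helper names and the
profile ranking here are assumptions, not the real encoder code, and it
ignores list-typed "profile" fields):

    #include <gst/gst.h>

    /* Keep the most capable H.264 profile found in the allowed caps,
     * according to an explicit ranking (illustrative only). */
    static const gchar *ranked_profiles[] = { "baseline", "main", "high" };

    static const gchar *
    select_highest_profile (GstCaps * allowed_caps)
    {
      const gchar *best = NULL;
      gint best_rank = -1;
      guint i;
      gint r;

      for (i = 0; i < gst_caps_get_size (allowed_caps); i++) {
        GstStructure *s = gst_caps_get_structure (allowed_caps, i);
        const gchar *name = gst_structure_get_string (s, "profile");

        if (!name)
          continue;
        for (r = 0; r < (gint) G_N_ELEMENTS (ranked_profiles); r++) {
          if (g_strcmp0 (name, ranked_profiles[r]) == 0 && r > best_rank) {
            best_rank = r;
            best = name;
          }
        }
      }
      return best;
    }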
https://bugzilla.gnome.org/show_bug.cgi?id=757941
So far the vaapi encoder does not set the profile in the src caps. This
patch makes it set the profile, determined by the encoder itself, in the
src caps.
In addition, if the encoder chose a different profile, one not negotiated
with downstream, we should set a compatible profile to keep the negotiation
working.
https://bugzilla.gnome.org/show_bug.cgi?id=757941
Check whether the profile requested in the source caps is supported by the
VA driver. If it is not, an info log message is emitted saying that
another (compatible?) profile will be used.
https://bugzilla.gnome.org/show_bug.cgi?id=757941
First check whether downstream requests ANY caps. If so, byte-stream is
used and the profile will be chosen by the encoder. If downstream
requests EMPTY caps, the negotiation will fail.
Afterwards, byte-stream and profile are looked up in the allowed caps.
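A rough sketch of that order (a simplified illustration, not the actual
code):

    #include <gst/gst.h>

    static gboolean
    negotiate_stream_format (GstPad * srcpad)
    {
      GstCaps *allowed = gst_pad_get_allowed_caps (srcpad);
      gboolean ret = TRUE;

      if (!allowed || gst_caps_is_any (allowed)) {
        /* downstream accepts anything: use byte-stream and let the
         * encoder choose the profile */
      } else if (gst_caps_is_empty (allowed)) {
        /* nothing in common with downstream: negotiation fails */
        ret = FALSE;
      } else {
        /* look up "stream-format" and "profile" in the allowed caps */
      }

      if (allowed)
        gst_caps_unref (allowed);
      return ret;
    }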
https://bugzilla.gnome.org/show_bug.cgi?id=757941
The check for the avc stream format was done in the vaapi encoder's
get_caps() vmethod, but that is wrong, since it has to be checked
in the encoder's set_format() vmethod.
https://bugzilla.gnome.org/show_bug.cgi?id=757941
vaapipostproc didn't negotiate the proper multiview caps, losing
downstream information.
This patch enables the playback of MVC encoded streams by setting
the proper multiview mode/flags and views in the src caps, according
to the sink caps.
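A simplified sketch of the caps forwarding (the real patch also handles
the multiview-flags field; names and scope here are illustrative):

    #include <gst/gst.h>

    static void
    forward_multiview_fields (GstCaps * sinkcaps, GstCaps * srccaps)
    {
      GstStructure *s = gst_caps_get_structure (sinkcaps, 0);
      const gchar *mode = gst_structure_get_string (s, "multiview-mode");
      gint views = 1;

      if (!mode)
        return;
      gst_structure_get_int (s, "views", &views);
      gst_caps_set_simple (srccaps,
          "multiview-mode", G_TYPE_STRING, mode,
          "views", G_TYPE_INT, views, NULL);
    }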
https://bugzilla.gnome.org/show_bug.cgi?id=784320
There is another regression from 7a206923: when setting the video
info for the video meta, it should be the one from the image's
allocator, rather than the one from the allocation caps.
Test pipeline:
gst-launch-1.0 filesrc location=bug766184.flv ! decodebin \
! tee ! videoconvert ! videoscale \
! video/x-raw, width=1920, height=1080 ! xvimagesink
There is a regression in 7a206923: the buffer pool ditches all
the buffers it generates because the pool config size is
different from the buffers' size.
Test pipeline:
gst-launch-1.0 filesrc location=big_buck_bunny_1080p_h264.mov \
! qtdemux ! vaapih264dec ! vaapipostproc ! xvimagesink \
--gst-debug=GST_PERFORMANCE:5
The allocator may update the buffer size according to the VA surface
properties. In order to do this, the video info is modified when the
allocator is created; the updated size is then reported through the
allocation info and set in the pool config.
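A sketch of that last step, assuming an updated video info coming from
the allocator (names are illustrative):

    #include <gst/video/video.h>

    static void
    update_pool_config_size (GstStructure * config, GstCaps * caps,
        const GstVideoInfo * allocation_vinfo, guint min_bufs, guint max_bufs)
    {
      /* the allocator may have enlarged the size to match the VA surface */
      gst_buffer_pool_config_set_params (config, caps,
          GST_VIDEO_INFO_SIZE (allocation_vinfo), min_bufs, max_bufs);
    }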
The commits that landed in https://bugzilla.gnome.org/show_bug.cgi?id=781142
introduced a regression in seeking.
Formerly, once a seek was done, the decoder dropped P-frames until an
I-frame arrived. But since those commits landed, it no longer drops
P-frames and keeps trying to decode them, because active_sps is still
alive (see the ensure_sps function). However, prev_frames and
prev_ref_frames have already been reset, which causes an assertion.
So it's necessary to also reset active_sps/pps in the reset method.
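The gist of the fix, sketched (assuming the decoder's parser-info replace
helper; the actual patch may differ):

    /* in the decoder's reset path: drop the active parameter sets so
     * decoding resumes only once a new SPS/PPS arrives after the seek */
    gst_vaapi_parser_info_h264_replace (&priv->active_sps, NULL);
    gst_vaapi_parser_info_h264_replace (&priv->active_pps, NULL);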
https://bugzilla.gnome.org/show_bug.cgi?id=783726
There are some symbols that are not used when compiling with old
versions of libva, and those generate a compilation error.
Original-patch-by: Matt Staples <staples255@gmail.com>
Change the hard-coded range of quality-level from {1-8} to {1-7},
since that is the range the Intel open source driver supports.
Also perform the range clamping only if the user-provided
quality-level is greater than the maximum supported by the driver,
because there could be non-Intel drivers reporting a lower value than
the hard-coded maximum of 7.
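Sketched, the clamping rule looks like this (names are illustrative):

    #include <glib.h>

    static guint
    clamp_quality_level (guint requested, guint driver_max)
    {
      /* only clamp when the request exceeds what the driver supports;
       * a driver may report a maximum lower than the hard-coded 7 */
      if (requested > driver_max)
        return driver_max;
      return requested;
    }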
https://bugzilla.gnome.org/show_bug.cgi?id=783567
Refactor the set_config() virtual method considering a cleaner
approach to allocator instantiation, when it is not set or when it is
not valid for the pool.
https://bugzilla.gnome.org/show_bug.cgi?id=783599
The vaapi video decoders might have different allocation caps from
the negotiation caps, thus the GstVideoMeta shall use the negotiation
caps, not the allocation caps.
This was done before by reusing gst_allocator_get_vaapi_video_info(),
storing there the negotiation caps if they differed from the allocation
ones, but this strategy fell short when the allocator had to be reset
in the vaapi buffer pool, since we need both.
This patch adds gst_allocator_set_vaapi_negotiated_video_info() and
gst_allocator_get_vaapi_negotiated_video_info() to store the
negotiated video info in the allocator, and distinguish it from
the allocation video info.
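One plausible implementation of those helpers, sketched under the
assumption that GObject qdata is used as the storage (the suffixed names
are illustrative):

    #include <gst/video/video.h>

    static GQuark
    negotiated_vinfo_quark (void)
    {
      return g_quark_from_static_string ("vaapi-negotiated-vinfo");
    }

    void
    gst_allocator_set_vaapi_negotiated_video_info_sketch (
        GstAllocator * allocator, const GstVideoInfo * vinfo)
    {
      g_object_set_qdata_full (G_OBJECT (allocator),
          negotiated_vinfo_quark (), gst_video_info_copy (vinfo),
          (GDestroyNotify) gst_video_info_free);
    }

    GstVideoInfo *
    gst_allocator_get_vaapi_negotiated_video_info_sketch (
        GstAllocator * allocator)
    {
      return g_object_get_qdata (G_OBJECT (allocator),
          negotiated_vinfo_quark ());
    }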
https://bugzilla.gnome.org/show_bug.cgi?id=783599
Renamed the local video info structures in the set_config() virtual
method. The purpose of the renaming is to clarify the origin
of those structures: whether they come from the passed caps parameter
(new_allocation_vinfo) or from the configured allocator
(allocator_vinfo).
https://bugzilla.gnome.org/show_bug.cgi?id=783599
Renamed the private GstVideoInfo structure video_info to allocation_vinfo
and alloc_info to vmeta_vinfo.
The purpose of this renaming is to clarify the origin and purpose of
these private variables:
video_info (now allocation_vinfo) comes from the bufferpool
configuration. It describes the physical video resolution to be
allocated by the allocator, which may be different from the
negotiated one.
alloc_info (now vmeta_vinfo) comes from the negotiated caps in
the pipeline. It represents how the frame is going to be mapped
using the video meta.
In Intel's VA-API backend, the allocation_vinfo resolution can be
bigger than the vmeta_vinfo one.
https://bugzilla.gnome.org/show_bug.cgi?id=783599
Only set the framerate parameter if the framerate numerator and
denominator are bigger than zero.
Otherwise, the Intel Gen6 driver raises a warning and disables the
bitrate control.
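Sketched, the guard is simply (illustrative):

    #include <glib.h>

    static gboolean
    framerate_is_valid (guint fps_n, guint fps_d)
    {
      /* a zero numerator or denominator would trigger the warning */
      return fps_n > 0 && fps_d > 0;
    }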
Original-patch-by: Hyunjun Ko <zzoon@igalia.com>
https://bugzilla.gnome.org/show_bug.cgi?id=783532
Instead of recalculating the miscellaneous buffer parameters for
every buffer, they are now calculated only once, when the encoder is
configured, and for every buffer the same structures are just copied.
https://bugzilla.gnome.org/show_bug.cgi?id=783449
This patch aims to decouple the assignment of the values in the
parameter structures from the setting of the VA buffer's parameters.
It may lead to some issues, since HRD, framerate or control rate may
not be handled by the specific encoder, but they are still set in
the VA buffer's parameters.
It is left as is because this is just a transitional patch.
https://bugzilla.gnome.org/show_bug.cgi?id=783449
According to the VA documentation:
The framerate is specified as a number of frames per second,
as a fraction. The denominator of the fraction is given in
the top half (the high two bytes) of the framerate field, and
the numerator is given in the bottom half (the low two bytes).
For example, if framerate is set to (100 << 16 | 750), this is
750 / 100, hence 7.5fps.
If the denominator is zero (the high two bytes are both zero)
then it takes the value one instead, so the framerate is just
the integer in the low 2 bytes.
This patch fixes the framerate calculation in the vp8 encoder
accordingly.
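A worked sketch of the packing described above (not the actual encoder
code):

    #include <stdint.h>

    static uint32_t
    pack_va_framerate (uint32_t fps_n, uint32_t fps_d)
    {
      if (fps_d == 1)
        return fps_n;   /* integer framerate in the low two bytes */
      /* denominator in the high two bytes, numerator in the low ones */
      return ((fps_d & 0xffff) << 16) | (fps_n & 0xffff);
    }

    /* pack_va_framerate (750, 100) == (100 << 16 | 750) -> 7.5 fps */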
https://bugzilla.gnome.org/show_bug.cgi?id=783449
Move the frame-rate parameter from ensure_misc_params() to
ensure_control_rate_params(), since it only has meaning when the
control rate is either VBR or CBR.
https://bugzilla.gnome.org/show_bug.cgi?id=783449
Move the Hypothetical Reference Decoder (HRD) parameter from
ensure_misc_params() to ensure_control_rate_params(), since it
shall only be defined when the control rate is either VBR or CBR.
https://bugzilla.gnome.org/show_bug.cgi?id=783449
Instead of filling the control rate parameter in ensure_misc_params(),
this patch refactors it out, as a first step towards merging the same
code for all the encoders.
https://bugzilla.gnome.org/show_bug.cgi?id=783449
Instead of using a proxy to store the buffer quality level, the
encoder now uses the native VA structure, which is copied into the
dynamically allocated VAEncMiscParameterBuffer.
This approach is computationally less expensive.
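A sketch of that copy, assuming plain malloc and no error handling (the
helper name is illustrative):

    #include <stdlib.h>
    #include <string.h>
    #include <va/va.h>

    static VAEncMiscParameterBuffer *
    make_quality_level_misc_param (
        const VAEncMiscParameterBufferQualityLevel * ql)
    {
      VAEncMiscParameterBuffer *misc;

      misc = malloc (sizeof (*misc) + sizeof (*ql));
      misc->type = VAEncMiscParameterTypeQualityLevel;
      /* copy the native VA structure right after the header */
      memcpy (misc->data, ql, sizeof (*ql));
      return misc;
    }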
Right now, the H264 and HEVC encoders can set the number of slices to
process as a property. But each driver can set a maximum number of slices,
depending on the supported profile & entrypoint.
This patch verifies the current num_slices to process against the maximum
permitted by the driver and the media size.
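Sketched, the verification amounts to (names illustrative; the media
limit is approximated by the picture height in macroblocks):

    #include <glib.h>

    static guint
    clamp_num_slices (guint requested, guint driver_max, guint mb_height)
    {
      guint max_slices = MIN (driver_max, mb_height);
      return MIN (requested, max_slices);
    }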
https://bugzilla.gnome.org/show_bug.cgi?id=780955