There is another regression from 7a206923: when setting the video
info for the video meta, it should be the one from the image's
allocator rather than the one from the allocation caps.
Test pipeline:
gst-launch-1.0 filesrc location=bug766184.flv ! decodebin \
! tee ! videoconvert ! videoscale \
! video/x-raw, width=1920, height=1080 ! xvimagesink
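A minimal sketch of the fix, assuming the pool already holds the
GstVideoInfo reported by the image's allocator in a variable called
vinfo (how it is retrieved is not shown here):

  /* attach the video meta using the allocator's video info, not the
   * info derived from the allocation caps */
  gst_buffer_add_video_meta_full (buffer, GST_VIDEO_FRAME_FLAG_NONE,
      GST_VIDEO_INFO_FORMAT (&vinfo), GST_VIDEO_INFO_WIDTH (&vinfo),
      GST_VIDEO_INFO_HEIGHT (&vinfo), GST_VIDEO_INFO_N_PLANES (&vinfo),
      vinfo.offset, vinfo.stride);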
There is a regression in 7a206923: the buffer pool ditches all the
buffers it generates because the pool config size is different from
the buffers' size.
Test pipeline:
gst-launch-1.0 filesrc location=big_buck_bunny_1080p_h264.mov \
! qtdemux ! vaapih264dec ! vaapipostproc ! xvimagesink \
--gst-debug=GST_PERFORMANCE:5
The allocator may update the buffer size according to the VA surface
properties. To handle this, the video info is modified when the
allocator is created, the updated size is reported through the
allocation info, and that size is set in the pool config.
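A rough sketch of that last step, assuming allocator_vinfo holds the
video info updated by the allocator (variable name illustrative):

  guint size, min_buffers, max_buffers;
  GstCaps *caps;

  gst_buffer_pool_config_get_params (config, &caps, &size, &min_buffers,
      &max_buffers);
  /* the allocator may have enlarged the per-buffer size to match the VA
   * surface, so push the updated size back into the pool config */
  gst_buffer_pool_config_set_params (config, caps,
      GST_VIDEO_INFO_SIZE (&allocator_vinfo), min_buffers, max_buffers);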
Since the commits from https://bugzilla.gnome.org/show_bug.cgi?id=781142
landed, a regression in seeking was introduced.
Formerly, after a seek, the decoder dropped P-frames until an I-frame
arrived. But since those commits landed, it no longer drops P-frames and
keeps trying to decode them, because active_sps is still alive (see the
ensure_sps function). However, prev_frames and prev_ref_frames have
already been reset, which triggers an assertion.
So it is necessary to also reset active_sps/active_pps in the reset
method.
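A minimal sketch of the reset, assuming the private decoder state keeps
active_sps/active_pps as refcounted parser infos (the replace helper
name is an assumption):

  /* clear the active parameter sets so decoding waits for new SPS/PPS
   * and an I-frame after the seek */
  gst_vaapi_parser_info_h264_replace (&priv->active_sps, NULL);
  gst_vaapi_parser_info_h264_replace (&priv->active_pps, NULL);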
https://bugzilla.gnome.org/show_bug.cgi?id=783726
There are some symbols that are not used when compiling with old
versions of libva, and those generate a compilation error.
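The usual way to handle this is to guard those symbols with libva's
version-check macro; a sketch (the version number here is only a
placeholder):

  #include <va/va.h>

  #if VA_CHECK_VERSION(0, 39, 0)
  /* declarations and code that reference the newer libva symbols */
  #endif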
Original-patch-by: Matt Staples <staples255@gmail.com>
Change the hard-coded range of quality-level from {1-8} to {1-7},
since that is the range the Intel open source driver supports.
Also, perform the range clamping only if the user-provided
quality-level is greater than the maximum supported by the driver,
because non-Intel drivers could report a lower value than the
hard-coded maximum of 7.
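A sketch of the intended clamping, assuming quality_level_max holds the
driver's maximum (e.g. queried through VAConfigAttribEncQualityRange);
the field names are illustrative:

  /* clamp only when the user-provided value exceeds the driver's
   * maximum; drivers reporting a maximum lower than 7 are honored too */
  if (encoder->quality_level > quality_level_max)
    encoder->quality_level = quality_level_max;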
https://bugzilla.gnome.org/show_bug.cgi?id=783567
Refactor the set_config() virtual method with a cleaner approach to
allocator instantiation when the allocator is not set or is not valid
for the pool.
https://bugzilla.gnome.org/show_bug.cgi?id=783599
The vaapi video decoders might have different allocation caps from
the negotiation caps, thus the GstVideoMeta shall use the negotiation
caps, not the allocation caps.
This was done before by reusing gst_allocator_get_vaapi_video_info(),
storing there the negotiation caps if they differed from the allocation
ones, but this strategy fell short when the allocator had to be reset
in the vaapi buffer pool, since we need both.
This patch adds gst_allocator_set_vaapi_negotiated_video_info() and
gst_allocator_get_vaapi_negotiated_video_info() to store the
negotiated video info in the allocator, and distinguish it from
the allocation video info.
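A sketch of how such accessors could look, stashing a copy of the
negotiated GstVideoInfo in the allocator's GObject qdata; the function
names come from this patch, but the storage mechanism shown here is an
assumption:

  #define NEGOTIATED_VINFO_QUARK \
      g_quark_from_static_string ("negotiated-vinfo")

  void
  gst_allocator_set_vaapi_negotiated_video_info (GstAllocator * allocator,
      const GstVideoInfo * vinfo)
  {
    g_object_set_qdata_full (G_OBJECT (allocator), NEGOTIATED_VINFO_QUARK,
        vinfo ? gst_video_info_copy (vinfo) : NULL,
        (GDestroyNotify) gst_video_info_free);
  }

  GstVideoInfo *
  gst_allocator_get_vaapi_negotiated_video_info (GstAllocator * allocator)
  {
    return g_object_get_qdata (G_OBJECT (allocator), NEGOTIATED_VINFO_QUARK);
  }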
https://bugzilla.gnome.org/show_bug.cgi?id=783599
Renamed the local video info structures in the set_config() virtual
method. The purpose of the renaming is to clarify the origin of those
structures: whether they come from the passed caps parameter
(new_allocation_vinfo) or from the configured allocator
(allocator_vinfo).
https://bugzilla.gnome.org/show_bug.cgi?id=783599
Renamed the private GstVideoInfo structures: video_info to
allocation_vinfo and alloc_info to vmeta_vinfo.
The purpose of this renaming is to clarify the origin and purpose of
these private variables:
video_info (now allocation_vinfo) comes from the bufferpool
configuration. It describes the physical video resolution to be
allocated by the allocator, which may be different from the
negotiated one.
alloc_info (now vmeta_vinfo) comes from the negotiated caps in
the pipeline. It represents how the frame is going to be mapped
using the video meta.
In Intel's VA-API backend, the allocation_vinfo resolution is
bigger than the vmeta_vinfo one.
https://bugzilla.gnome.org/show_bug.cgi?id=783599
Only set the framerate parameter if the framerate numerator and
denominator are bigger than zero.
Otherwise, with the Intel Gen6 driver, a warning is raised and the
bitrate control is disabled.
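A sketch of the guard, assuming vip points to the encoder's
GstVideoInfo and fill_framerate_param() stands in for the actual
parameter filling:

  /* skip the frame-rate parameter for invalid fractions, so the Gen6
   * driver does not raise the warning and disable bitrate control */
  if (GST_VIDEO_INFO_FPS_N (vip) > 0 && GST_VIDEO_INFO_FPS_D (vip) > 0)
    fill_framerate_param (encoder, vip);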
Original-patch-by: Hyunjun Ko <zzoon@igalia.com>
https://bugzilla.gnome.org/show_bug.cgi?id=783532
Instead of recalculating the miscellaneous buffer parameters for
every buffer, they are now calculated only once, when the encoder is
configured, and the same structures are just copied for every buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=783449
This patch intends to decouple the assignment of the values in the
parameter structures from the setting of the VA buffer's parameters.
It may lead to some issues, since HRD, framerate or control rate may
not be handled by the specific encoder but are still set in the VA
buffer's parameters.
It is left as is because this is just a transitional patch.
https://bugzilla.gnome.org/show_bug.cgi?id=783449
According to the VA documentation:
The framerate is specified as a number of frames per second,
as a fraction. The denominator of the fraction is given in
the top half (the high two bytes) of the framerate field, and
the numerator is given in the bottom half (the low two bytes).
For example, if framerate is set to (100 << 16 | 750), this is
750 / 100, hence 7.5fps.
If the denominator is zero (the high two bytes are both zero)
then it takes the value one instead, so the framerate is just
the integer in the low 2 bytes.
This patch fixes the framerate calculation in the vp8 encoder
accordingly.
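A small sketch of that packing (helper name illustrative):

  static guint32
  pack_va_framerate (guint fps_n, guint fps_d)
  {
    if (fps_d <= 1)
      return fps_n;                 /* plain integer frame rate */
    return (fps_d << 16) | fps_n;   /* e.g. (100 << 16 | 750) == 7.5 fps */
  }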
https://bugzilla.gnome.org/show_bug.cgi?id=783449
Move the frame-rate parameter from ensure_misc_params() to
ensure_control_rate_params(), since it is only meaningful when the
control rate is either VBR or CBR.
https://bugzilla.gnome.org/show_bug.cgi?id=783449
Move the Hypothetical Reference Decoder (HRD) parameter from
ensure_misc_params() to ensure_control_rate_params(), since it shall
only be defined when the control rate is either VBR or CBR.
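A sketch of the resulting gate, covering both this and the previous
patch, assuming gstreamer-vaapi's rate-control enum; the fill helpers
are illustrative:

  switch (GST_VAAPI_ENCODER_RATE_CONTROL (encoder)) {
    case GST_VAAPI_RATECONTROL_CBR:
    case GST_VAAPI_RATECONTROL_VBR:
      fill_hrd_param (encoder);
      fill_framerate_param (encoder);
      break;
    default:
      break;
  }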
https://bugzilla.gnome.org/show_bug.cgi?id=783449
Instead of filling the control rate param in ensure_misc_params(),
this patch refactors it out, as a first step towards merging the same
code for all the encoders.
https://bugzilla.gnome.org/show_bug.cgi?id=783449
Instead of using a proxy to store the buffer quality level, the
encoder now uses the native VA structure, which is copied to the
dynamically allocated VAEncMiscParameterBuffer.
This approach is computationally less expensive.
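A sketch of that copy, assuming the encoder caches the native structure
in a field named va_quality_level (allocation and error handling are
simplified):

  VAEncMiscParameterBuffer *misc;

  misc = g_malloc0 (sizeof (VAEncMiscParameterBuffer)
      + sizeof (VAEncMiscParameterBufferQualityLevel));
  misc->type = VAEncMiscParameterTypeQualityLevel;
  memcpy (misc->data, &encoder->va_quality_level,
      sizeof (VAEncMiscParameterBufferQualityLevel));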
Right now, the H264 and HEVC encoders expose the number of slices to
process as a property. But each driver may set a maximum number of
slices, depending on the supported profile and entry point.
This patch verifies the requested num_slices against the maximum
permitted by the driver and the media size.
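A sketch of the driver-side check, using the libva attribute for the
slice limit (the clamp against the media size, e.g. the number of
macroblock rows, is omitted; the surrounding variables are assumed):

  VAConfigAttrib attrib = { .type = VAConfigAttribEncMaxSlices };

  vaGetConfigAttributes (va_display, profile, entrypoint, &attrib, 1);
  if (attrib.value != VA_ATTRIB_NOT_SUPPORTED && num_slices > attrib.value)
    num_slices = attrib.value;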
https://bugzilla.gnome.org/show_bug.cgi?id=780955
- Use gst_element_send_event() instead of gst_pad_push_event()
- Don't zero the App structure
- Check for pipeline parsing errors
- Only get vaapisink when setting a property
When the state of vaapisink changes from PLAYING to NULL, the
handle_events flag is set to FALSE and never recovered, so the event
thread never runs again.
We should only allow setting this flag when the user requests it.
https://bugzilla.gnome.org/show_bug.cgi?id=782543
Since we started using VPP in VaapiWindowX11, we need to handle the
case where the src rect and the window's size are different.
So, once VPP has converted to another format, we should honor the
size of the VPP's surface as the source rect. Otherwise, the picture
is cropped according to the previous size of the source rect.
https://bugzilla.gnome.org/show_bug.cgi?id=782542
This implements a pipeline to show the difference between ROI and
non-ROI encoding.
See the comments in the code for details.
https://bugzilla.gnome.org/show_bug.cgi?id=768248
Signed-off-by: Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
Handles the new custom event GstVaapiEncoderRegionOfInterest to
enable or disable a ROI region.
Also documents how to use the new event.
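A sketch of how an application could emit the event; the structure name
comes from this patch, but the field names below are illustrative (see
the added documentation for the exact ones), and `encoder` stands for
the vaapi encoder element:

  GstStructure *s;

  s = gst_structure_new ("GstVaapiEncoderRegionOfInterest",
      "roi-x", G_TYPE_UINT, 0, "roi-y", G_TYPE_UINT, 0,
      "roi-width", G_TYPE_UINT, 320, "roi-height", G_TYPE_UINT, 240,
      "roi-value", G_TYPE_INT, -10, NULL);
  gst_element_send_event (encoder,
      gst_event_new_custom (GST_EVENT_CUSTOM_UPSTREAM, s));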
https://bugzilla.gnome.org/show_bug.cgi?id=768248
Signed-off-by: Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
$ simple-encoder -r inputfile.y4m
And you'll get an output file in H264 with two regions of interest.
https://bugzilla.gnome.org/show_bug.cgi?id=768248
Signed-off-by: Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
Set the ROI params, previously added via gst_vaapi_encoder_add_roi(),
while encoding each frame.
https://bugzilla.gnome.org/show_bug.cgi?id=768248
Signed-off-by: Víctor Manuel Jáquez Leal <vjaquez@igalia.com>
Queries whether the driver supports "Region of Interest" (ROI) during
the config creation.
This attribute conveys whether the driver supports region-of-interest (ROI)
encoding, based on user provided ROI rectangles. The attribute value is
partitioned into fields as defined in the VAConfigAttribValEncROI union.
If ROI encoding is supported, the ROI information is passed to the driver
using VAEncMiscParameterTypeROI.
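A sketch of that query with libva (simplified, no error handling; the
surrounding variables are assumed):

  VAConfigAttrib attrib = { .type = VAConfigAttribEncROI };

  vaGetConfigAttributes (va_display, profile, entrypoint, &attrib, 1);
  if (attrib.value != VA_ATTRIB_NOT_SUPPORTED) {
    VAConfigAttribValEncROI roi_attrib = { .value = attrib.value };
    guint max_roi = roi_attrib.bits.num_roi_regions;
    /* ROI rectangles, up to max_roi regions, can then be passed to the
     * driver with VAEncMiscParameterTypeROI */
  }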
https://bugzilla.gnome.org/show_bug.cgi?id=768248
Signed-off-by: Víctor Manuel Jáquez Leal <vjaquez@igalia.com>