When downstream can't handle GstVideoMeta, system-allocated buffers
have to be sent.
The system-allocated buffers are produced in the prepare_output_buffer()
vmethod when downstream can't handle GstVideoMeta.
In the transform() vmethod, if the output buffer is a system-allocated
one, a VA buffer is instantiated and takes its place as the output
buffer. Afterwards the VA buffer is copied into the system-allocated
buffer, which is restored as the output buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=785054
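A minimal sketch of that copy step, assuming both buffers are described
by the same GstVideoInfo and letting GstVideoFrame deal with the
per-plane stride differences (names are illustrative, not the element's
actual code):

    #include <gst/video/video.h>

    static gboolean
    copy_va_buffer_to_sysmem (GstVideoInfo * vinfo, GstBuffer * va_buffer,
        GstBuffer * sys_buffer)
    {
      GstVideoFrame src_frame, dst_frame;
      gboolean ret;

      if (!gst_video_frame_map (&src_frame, vinfo, va_buffer, GST_MAP_READ))
        return FALSE;
      if (!gst_video_frame_map (&dst_frame, vinfo, sys_buffer, GST_MAP_WRITE)) {
        gst_video_frame_unmap (&src_frame);
        return FALSE;
      }

      /* copies plane by plane, honouring each buffer's own strides */
      ret = gst_video_frame_copy (&dst_frame, &src_frame);

      gst_video_frame_unmap (&dst_frame);
      gst_video_frame_unmap (&src_frame);
      return ret;
    }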
This patch adds the member copy_output_frame and sets it to TRUE when
downstream didn't request the GstVideoMeta API, the caps are raw and
the internal allocator is the VA-API one.
https://bugzilla.gnome.org/show_bug.cgi?id=785054
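A minimal sketch of how those three conditions could be evaluated,
assuming the downstream allocation query and the internal allocator are
at hand; the header location of GST_VAAPI_VIDEO_MEMORY_NAME is an
assumption here:

    #include <gst/video/video.h>
    #include "gstvaapivideomemory.h"  /* assumed location of GST_VAAPI_VIDEO_MEMORY_NAME */

    static gboolean
    should_copy_output_frame (GstQuery * query, GstCaps * outcaps,
        GstAllocator * allocator)
    {
      gboolean has_video_meta =
          gst_query_find_allocation_meta (query, GST_VIDEO_META_API_TYPE, NULL);
      gboolean is_raw =
          gst_structure_has_name (gst_caps_get_structure (outcaps, 0),
          "video/x-raw");
      gboolean is_va_allocator = allocator != NULL &&
          g_strcmp0 (allocator->mem_type, GST_VAAPI_VIDEO_MEMORY_NAME) == 0;

      return !has_video_meta && is_raw && is_va_allocator;
    }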
This function informs the element whether it shall copy the buffer
generated by the pool into a system-allocated buffer before pushing it
downstream.
https://bugzilla.gnome.org/show_bug.cgi?id=785054
A VA-API based buffer might need a video meta because of its
different strides. But when downstream doesn't support video meta, we
still need to force its usage internally.
Before, we changed the buffer pool configuration, but that is actually
a hack and downstream cannot rely on it.
This patch adds a check for raw video caps and the VA-API allocator;
when both match, the option is enabled without changing the pool
configuration. In this case the element is responsible for copying
the frame into a plain buffer with the expected strides.
https://bugzilla.gnome.org/show_bug.cgi?id=785054
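A rough sketch of that check from the pool's point of view, reading the
caps and allocator out of the pool configuration (an illustrative
helper, not the pool's real set_config()):

    #include <gst/gst.h>
    #include "gstvaapivideomemory.h"  /* assumed location of GST_VAAPI_VIDEO_MEMORY_NAME */

    static gboolean
    config_needs_forced_video_meta (GstStructure * config)
    {
      GstCaps *caps = NULL;
      GstAllocator *allocator = NULL;

      if (!gst_buffer_pool_config_get_params (config, &caps, NULL, NULL, NULL))
        return FALSE;
      if (!gst_buffer_pool_config_get_allocator (config, &allocator, NULL))
        return FALSE;
      if (caps == NULL || allocator == NULL)
        return FALSE;

      return gst_structure_has_name (gst_caps_get_structure (caps, 0),
          "video/x-raw") &&
          g_strcmp0 (allocator->mem_type, GST_VAAPI_VIDEO_MEMORY_NAME) == 0;
    }

The pool would set its internal video-meta flag from this result
instead of adding the video-meta option to the downstream-visible
configuration.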
Increase the maximum value of the periodic keyframe setting for the
H.26x codecs.
This allows finer tuning of the encoder in scenarios that want a
longer keyframe period, for example a keyframe every 120 seconds
instead of every 10 seconds.
https://bugzilla.gnome.org/show_bug.cgi?id=786320
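For instance, at 30 fps a keyframe every 120 seconds means a period of
30 * 120 = 3600 frames, beyond the previous maximum. A hedged example,
assuming the encoders' "keyframe-period" property:

    #include <gst/gst.h>

    static void
    set_long_keyframe_period (void)
    {
      GstElement *enc = gst_element_factory_make ("vaapih264enc", NULL);

      if (enc != NULL) {
        /* 30 fps * 120 s = 3600 frames between keyframes */
        g_object_set (enc, "keyframe-period", 30 * 120, NULL);
        gst_object_unref (enc);
      }
    }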
vaapisink, when used with the Intel VA-API driver, tries to display
surfaces with NV12 format, which are handled correctly by Weston.
Nonetheless, COGL cannot display YUV surfaces, making pipelines fail
on mutter.
This shall be solved either in COGL or by making the driver paint
RGB surfaces. In the meanwhile, let's just demote vaapisink to
marginal rank when a Wayland environment is detected, no matter
whether it is Weston.
https://bugzilla.gnome.org/show_bug.cgi?id=775698
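A sketch of what the rank demotion amounts to at registration time; how
the Wayland environment is actually detected is not stated in this log,
so the WAYLAND_DISPLAY check below is only an assumption:

    #include <gst/gst.h>

    static gboolean
    register_vaapisink (GstPlugin * plugin, GType sink_type)
    {
      guint rank = GST_RANK_PRIMARY;

      /* assumption: treat a set WAYLAND_DISPLAY as "Wayland detected" */
      if (g_getenv ("WAYLAND_DISPLAY") != NULL)
        rank = GST_RANK_MARGINAL;

      return gst_element_register (plugin, "vaapisink", rank, sink_type);
    }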
WARNING: Trying to compare values of different types (str, int).
The result of this is undefined and will become a hard error
in a future Meson release.
In propose_allocation(), if the number of allocation params is zero,
the system's allocator is added first, and lastly the native VA-API
allocator.
In decide_allocation(), the allocation params in the query are
traversed, looking for a native VA-API allocator. If one is found, it
is reused as the src pad allocator. Otherwise, a new allocator is
instantiated and appended to the query.
https://bugzilla.gnome.org/show_bug.cgi?id=789476
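A hedged sketch of the decide_allocation() half of that logic;
create_native_va_allocator() is a hypothetical stand-in for whatever
call actually instantiates the native VA-API allocator:

    #include <gst/gst.h>
    #include "gstvaapivideomemory.h"  /* assumed location of GST_VAAPI_VIDEO_MEMORY_NAME */

    static GstAllocator *create_native_va_allocator (void);  /* hypothetical */

    static GstAllocator *
    find_or_add_va_allocator (GstQuery * query)
    {
      GstAllocationParams params;
      GstAllocator *allocator = NULL;
      guint i, n = gst_query_get_n_allocation_params (query);

      for (i = 0; i < n; i++) {
        gst_query_parse_nth_allocation_param (query, i, &allocator, &params);
        if (allocator != NULL &&
            g_strcmp0 (allocator->mem_type, GST_VAAPI_VIDEO_MEMORY_NAME) == 0)
          return allocator;           /* reuse what the query already has */
        if (allocator != NULL)
          gst_object_unref (allocator);
        allocator = NULL;
      }

      /* no native VA-API allocator in the query: create and append one */
      allocator = create_native_va_allocator ();
      gst_allocation_params_init (&params);
      gst_query_add_allocation_param (query, allocator, &params);
      return allocator;
    }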
GST_VAAPI_VIDEO_ALLOCATOR_NAME was added in commit 5b11b8332 but it
was never used, since the native VA-API allocator name has been
GST_VAAPI_VIDEO_MEMORY_NAME.
This patch removes the GST_VAAPI_VIDEO_ALLOCATOR_NAME macro.
https://bugzilla.gnome.org/show_bug.cgi?id=789476
Don't subscribe to button press events when using a foreign window,
because the user-created window would trap those events, preventing
the frames from being shown.
https://bugzilla.gnome.org/show_bug.cgi?id=791615
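A sketch of the idea, assuming the X11 backend; the foreign/own
distinction is the point and the exact event mask is illustrative:

    #include <X11/Xlib.h>

    static void
    select_window_events (Display * dpy, Window win, Bool is_foreign)
    {
      long mask = StructureNotifyMask | ExposureMask | KeyPressMask;

      /* per the fix above: do not subscribe to button press events on a
       * foreign (application-created) window */
      if (!is_foreign)
        mask |= ButtonPressMask;

      XSelectInput (dpy, win, mask);
    }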
Check for the display's color-balance properties, made available by
the VA-API driver, before setting them.
Also log an info message for each unavailable property.
https://bugzilla.gnome.org/show_bug.cgi?id=792638
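A sketch of the defensive pattern; has_display_property() is a
hypothetical stand-in for however the library exposes that check:

    #include <gst/gst.h>

    static gboolean has_display_property (GstObject * sink,
        const gchar * name);            /* hypothetical */

    static void
    maybe_set_cb_value (GstObject * sink, const gchar * prop_name,
        gfloat value)
    {
      if (!has_display_property (sink, prop_name)) {
        GST_INFO_OBJECT (sink, "%s is not supported by the driver", prop_name);
        return;
      }
      g_object_set (G_OBJECT (sink), prop_name, value, NULL);
    }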
eglGetDisplay() is currently broken in Mesa for Wayland. Also using
eglGetDisplay() is rather fragile, and it is recommended to use
eglGetPlatformDisplay() when possible.
In order to do that, this patch uses the helper in GstGL. If
gstreamer-vaapi is not compiled with GstGL support, eglGetDisplay()
will be used.
https://bugzilla.gnome.org/show_bug.cgi?id=790493
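A sketch of that selection; USE_GST_GL_HELPERS stands for the
build-time GstGL check and the Wayland display type is just an example:

    #include <EGL/egl.h>
    #if USE_GST_GL_HELPERS
    #include <gst/gl/egl/gstgldisplay_egl.h>
    #endif

    static EGLDisplay
    get_egl_display (gpointer native_display)
    {
    #if USE_GST_GL_HELPERS
      /* the GstGL helper uses eglGetPlatformDisplay() when available */
      return (EGLDisplay) gst_gl_display_egl_get_from_native
          (GST_GL_DISPLAY_TYPE_WAYLAND, (guintptr) native_display);
    #else
      /* no GstGL support: fall back to the legacy entry point */
      return eglGetDisplay ((EGLNativeDisplayType) native_display);
    #endif
    }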
gst_vaapipostproc_ensure_filter might free the allowed_srcpad_caps
and allowed_sinkpad_caps. This can race with copying these caps in
gst_vaapipostproc_transform_caps and lead to segfaults.
The gst_vaapipostproc_transform_caps function already locks
postproc_lock before copying the caps. Make sure that calls to
gst_vaapipostproc_ensure_filter also acquire this lock.
https://bugzilla.gnome.org/show_bug.cgi?id=791404
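A generic illustration of the fix (types and names here are only for
the example, not gstreamer-vaapi's): whoever may drop the cached caps
must hold the same lock that the caps-copying path holds:

    #include <gst/gst.h>

    typedef struct {
      GRecMutex lock;                  /* plays the role of postproc_lock */
      GstCaps *allowed_srcpad_caps;
    } Example;

    static void
    example_reset_caps (Example * self)
    {
      g_rec_mutex_lock (&self->lock);
      gst_caps_replace (&self->allowed_srcpad_caps, NULL);  /* may free */
      g_rec_mutex_unlock (&self->lock);
    }

    static GstCaps *
    example_copy_caps (Example * self)
    {
      GstCaps *copy;

      g_rec_mutex_lock (&self->lock);
      copy = self->allowed_srcpad_caps ?
          gst_caps_copy (self->allowed_srcpad_caps) : NULL;
      g_rec_mutex_unlock (&self->lock);
      return copy;
    }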
Null-checking op_info suggests that it may be null, but it has already
been dereferenced on all paths leading to the check.
There may be a null pointer dereference, or else the comparison
against null is unnecessary.
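In miniature, the pattern the warning points at, and the obvious fix
(OpInfo and n_params are made up for the example):

    typedef struct { int n_params; } OpInfo;   /* illustrative only */

    static int
    before (const OpInfo * op_info)
    {
      int n = op_info->n_params;   /* dereferenced here ...            */
      if (op_info == NULL)         /* ... so this check comes too late */
        return -1;
      return n;
    }

    static int
    after (const OpInfo * op_info)
    {
      if (op_info == NULL)         /* check first, or drop the check if the
                                      pointer can never be NULL */
        return -1;
      return op_info->n_params;
    }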
The pointers passed to parse_int() point to unsigned int values
(32 bits, unsigned), but they are dereferenced as a wider long
(64 bits, signed). This may lead to memory corruption.
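An illustration of the fix: parse into a long first and range-check it
before storing through the caller's 32-bit pointer, instead of writing
a 64-bit long through an unsigned int pointer (parse_int_fixed is a
made-up name):

    #include <errno.h>
    #include <limits.h>
    #include <stdlib.h>

    static int
    parse_int_fixed (const char *str, unsigned int *out)
    {
      char *end = NULL;
      long value;

      errno = 0;
      value = strtol (str, &end, 10);
      if (errno != 0 || end == str || value < 0 ||
          (unsigned long) value > UINT_MAX)
        return 0;

      *out = (unsigned int) value;   /* only 32 bits are written */
      return 1;
    }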
The gst.vaapi.app.Display context is meant for applications that will
provide the VA display and the native display to be used by the
pipeline, when vaapisink is used as an overlay. There is no use
case for encoders, decoders, or the postprocessor.
In the case of vaapisink, it shall query upstream for
gst.vaapi.Display first, and then, if there is no reply, the
gst.vaapi.app.Display context will be posted on the bus for the
application. If the application replies, a GstVaapiDisplay object
is instantiated from the context info; otherwise a GstVaapiDisplay
is created with the normal algorithm to guess the graphics
platform. Either way, the instantiated GstVaapiDisplay is
propagated through the pipeline and in the have-context bus
message.
Also, only vaapisink will process gst.vaapi.app.Display, and only
if it doesn't already have a display set. This is because, if
vaapisink is in a bin (playsink, for example), the need-context
message is posted twice, leading to an error state.
https://bugzilla.gnome.org/show_bug.cgi?id=790999
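From the application side, replying to that context request could look
roughly like this; the structure field names ("va-display",
"x11-display") are assumptions, not taken from this log:

    #include <gst/gst.h>

    static void
    reply_app_display_context (GstElement * vaapisink, gpointer va_display,
        gpointer native_display)
    {
      GstContext *context = gst_context_new ("gst.vaapi.app.Display", FALSE);
      GstStructure *s = gst_context_writable_structure (context);

      gst_structure_set (s,
          "va-display", G_TYPE_POINTER, va_display,
          "x11-display", G_TYPE_POINTER, native_display, NULL);

      gst_element_set_context (vaapisink, context);
      gst_context_unref (context);
    }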
Frames are encoded in different layers. A frame in a particular
layer will use pictures in the same or a lower layer as references,
which means the decoder can drop the frames in upper layers but still
decode the lower-layer frames.
B-frames, except the ones in the topmost layer, are reference frames.
All the base layer frames are I or P.
E.g. with 3 temporal layers:
T3: B1 B3 B5 B7
T2: B2 B6
T1: I0 P4 P8
T1, T2, T3: temporal layers
P1...Pn: P-frames
B1...Bn: B-frames
T1: I0->P4, P4->P8, etc.
T2: I0--> B2 <-- P4
T3: I0--> B1 <-- B2, B2 --> B3 <-- P4
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=788918
Frames are encoded in different layers. A frame in a particular
layer will use pictures in the same or a lower layer as references,
which means the decoder can drop the frames in upper layers but still
decode the lower-layer frames.
E.g. with 3 temporal layers:
T3: P1 P3 P5 P7
T2: P2 P6
T1: P0 P4 P8
T1, T2, T3: temporal layers
P0...Pn: P-frames
P0->P1, P0->P2, P2->P3, P0->P4 ... repeat
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=788918
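A minimal sketch of how the temporal layer in both layouts above
follows from the frame's display index inside the GOP (an illustration
of the pattern, not the encoder's code):

    /* For 3 layers: indexes divisible by 4 land in T1, the remaining even
     * indexes in T2 and the odd ones in T3. Every factor of two in the
     * index moves the frame one layer down. */
    static unsigned int
    temporal_layer_id (unsigned int frame_index, unsigned int num_layers)
    {
      unsigned int layer = num_layers;       /* start at the topmost layer */

      while (layer > 1 && (frame_index % 2) == 0) {
        frame_index /= 2;
        layer--;
      }
      return layer;                          /* 1-based: T1 .. Tn */
    }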
The frame_num generation was not correctly implemented.
According to the H.264 spec, frame_num should be incremented for each
frame if the previous frame was a reference frame.
For example, an IPBPB sequence should have the frame numbers
0, 1, 2, 2, 3.
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
https://bugzilla.gnome.org/show_bug.cgi?id=788918
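A sketch of the rule stated above (illustrative, not the encoder's
code):

    /* frame_num advances only when the previous frame, in encoding order,
     * was a reference frame, and wraps at MaxFrameNum; with a non-reference
     * B in IPBPB this yields 0, 1, 2, 2, 3 as in the example above. */
    static unsigned int
    next_frame_num (unsigned int prev_frame_num, int prev_was_reference,
        unsigned int max_frame_num)
    {
      if (!prev_was_reference)
        return prev_frame_num;
      return (prev_frame_num + 1) % max_frame_num;
    }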