Make sure that the GstVaapiPluginBase instance receives the new src
pad caps whenever they get updated from within the GstVaapiDecoder
decode routines.
This also ensures that downstream elements receive correctly sized
SW decoded buffers if needed.
https://bugs.tizen.org/jira/browse/TC-114
Split GstVideoDecoder::set_format() handler to first update the sink
pad caps and reset the active VA decoder instance based on those, and
then update the src pad caps whenever possible, e.g. when the caps
specify a valid video resolution.
On some occasions, a buffer pool is created for pre-initialization
purposes regardless of whether a valid image size is available.
However, during the actual decode stage, the vaapidecode element
is expected to update the srcpad caps with the new dimensions, thus
also triggering a reset of the underlying bufferpool.
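A minimal sketch of the set_format() split described above, with an
illustrative update_src_caps() helper (the real vaapidecode code
differs in details):

  GstVideoInfo vi;

  /* Only negotiate src pad caps once the sink caps carry a usable
   * resolution; the buffer pool is then reset from the new caps. */
  if (gst_video_info_from_caps (&vi, sink_caps) &&
      GST_VIDEO_INFO_WIDTH (&vi) > 0 && GST_VIDEO_INFO_HEIGHT (&vi) > 0)
    update_src_caps (decode, &vi);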
If the best downstream capsfeature turns out to be GLMemory, then make
sure to propagate RGBA video format in caps to that element. This fixes
the following pipeline: ... ! vaapipostproc ! glimagesink.
When an explicit output video format is selected, from a src pad
capsfilter, make sure that the downstream element actually supports
that format. In particular, fix crash with the following pipelines:
... ! vaapipostproc ! video/x-raw,format=XXX ! xvimagesink ; where
XXX is a format not supported by xvimagesink.
While doing so, also reduce the set of src pad filter caps to the
actual set of allowed src pad caps.
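A rough sketch of the check, with illustrative names; the point is to
intersect the format forced by the capsfilter with what the downstream
element (e.g. xvimagesink) actually accepts:

  static gboolean
  format_is_supported_downstream (GstPad * srcpad, GstCaps * forced_caps)
  {
    GstCaps *peer_caps, *res;
    gboolean ok;

    /* what the downstream element can accept */
    peer_caps = gst_pad_peer_query_caps (srcpad, NULL);
    if (!peer_caps)
      return TRUE;              /* no peer yet, defer the decision */

    res = gst_caps_intersect (forced_caps, peer_caps);
    ok = !gst_caps_is_empty (res);

    gst_caps_unref (res);
    gst_caps_unref (peer_caps);
    return ok;
  }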
The ensure_surface() and ensure_image() functions shall only relate
to the underlying backing store. The actual current flags are to be
updated only through ensure_{surface,image}_is_current(), or in a few
other particular cases in the GstMemory hooks.
If the surface proxy is updated into the GstVaapiVideoMemory, then
it is assumed it is the most current representation of the current
video frame. Likewise, make a few more arrangements to have the
"current " flags set more consistently.
When interacting with SW elements, the buffers and underlying video
memory could be mapped as read/write. However, since we need to use
those buffers again as plain VA surfaces, we have to make sure the VA
image is committed back to VA surface memory.
This fixes pipelines involving avdec_* and vaapi{postproc,sink}.
Hence vaapisink can display buffers decoded by gst-libav, or HW decoded
buffers can be further processed in-place, e.g. with a textoverlay.
https://bugzilla.gnome.org/show_bug.cgi?id=704078
[ported to current git master branch, amended commit message]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
In the current implementation, gst_video_info_set_format() would reset
the whole GstVideoInfo structure first, prior to setting video format
and size. So, collateral information like the framerate or the
pixel-aspect-ratio is lost.
Provide and use a unique gst_video_info_change_format() to overcome
this issue, i.e. only have it change the format and video size, and
copy over the rest of the fields.
https://bugzilla.gnome.org/show_bug.cgi?id=734665
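A minimal sketch of the idea behind gst_video_info_change_format();
the actual helper in gstreamer-vaapi may copy a different set of fields:

  static void
  video_info_change_format (GstVideoInfo * vip, GstVideoFormat format,
      guint width, guint height)
  {
    GstVideoInfo vi = *vip;

    gst_video_info_set_format (vip, format, width, height);

    /* restore the collateral fields that set_format() reset */
    vip->interlace_mode = vi.interlace_mode;
    vip->flags = vi.flags;
    vip->views = vi.views;
    vip->par_n = vi.par_n;
    vip->par_d = vi.par_d;
    vip->fps_n = vi.fps_n;
    vip->fps_d = vi.fps_d;
  }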
This is for helping decodebin to autoplug the vaapidecode element.
Decodebin is selecting decoder elements only based on rank and caps.
Without overriding the autoplug-* signals there is no way to autoplug
HW decoders inside decodebin. An easier solution is to raise the
rank of vaapidecode, so that it gets selected first.
https://bugzilla.gnome.org/show_bug.cgi?id=739332
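Illustrative registration with a raised rank; the exact rank value used
is an assumption here:

  gst_element_register (plugin, "vaapidecode",
      GST_RANK_PRIMARY + 1, GST_TYPE_VAAPIDECODE);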
Added the same Klass specifications used in other upstream
video postprocessing elements like videoconvert, videoscale,
videobalance and deinterlace.
An example use case for this is to help the playsink
to autoplug the hardware-accelerated deinterlacer.
This is a workaround until auto-plugging is fixed when
format=ENCODED + memory:VASurface caps feature are provided.
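Illustrative element metadata with the added Klass categories; the
strings below are modeled on the upstream elements and are not
necessarily verbatim from this change:

  gst_element_class_set_static_metadata (element_class,
      "VA-API video postprocessing",
      "Filter/Converter/Video;Filter/Converter/Video/Scaler;"
      "Filter/Effect/Video;Filter/Effect/Video/Deinterlace",
      "A VA-API based video postprocessing filter",
      "Gwenole Beauchesne <gwenole.beauchesne@intel.com>");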
Use the downstream negotiated video format as the output video format
if the user didn't ask for the colorspace conversion explicitly.
Use case: this helps to connect elements like videoscale, videorate,
etc. to vaapipostproc.
https://bugzilla.gnome.org/show_bug.cgi?id=739443
Add new "scale-method" property to expose the scaling mode to use during
video processing. Note that this is only a hint, and the actual behaviour
may differ from implementation (VA driver) to implementation.
Attach the codec_data to out_caps only if downstream needs it.
For example, the h264 encoder doesn't need to stuff codec_data into
the src caps if the negotiated caps have a stream format of byte-stream.
https://bugzilla.gnome.org/show_bug.cgi?id=734902
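A hedged sketch of the decision, with illustrative names; only the
stream-format check is the point:

  static gboolean
  need_codec_data (GstCaps * out_caps)
  {
    GstStructure *s = gst_caps_get_structure (out_caps, 0);
    const gchar *stream_format = gst_structure_get_string (s, "stream-format");

    /* byte-stream carries in-band SPS/PPS, so no codec_data needed */
    return stream_format && g_strcmp0 (stream_format, "byte-stream") != 0;
  }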
Fix arguments to XkbKeycodeToKeysym() for converting an X11 keycode
to a KeySym. In particular, there is no such Window argument. Also
make sure to check for, and use, the correct <X11/XKBlib.h> header
where that new function is defined. Otherwise, default to the older
XKeycodeToKeysym() function.
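The corrected call looks roughly like this, assuming a HAVE_XKBLIB_H
define coming from the configure check:

  #include <X11/Xlib.h>
  #ifdef HAVE_XKBLIB_H
  # include <X11/XKBlib.h>
  #endif

  static KeySym
  translate_keycode (Display * dpy, KeyCode keycode)
  {
  #ifdef HAVE_XKBLIB_H
    /* no Window argument here: (display, keycode, group, level) */
    return XkbKeycodeToKeysym (dpy, keycode, 0, 0);
  #else
    return XKeycodeToKeysym (dpy, keycode, 0);
  #endif
  }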
Really use the motion event coordinates to propagate the "mouse-move"
event to the upper layer, instead of those from a button event. Those are
technically the same though.
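A minimal sketch, with an illustrative wrapper, of taking the
coordinates from the MotionNotify event itself:

  static void
  handle_xevent (GstVaapiSink * sink, XEvent * e)
  {
    switch (e->type) {
      case MotionNotify:
        /* use the motion event's own coordinates */
        gst_navigation_send_mouse_event (GST_NAVIGATION (sink),
            "mouse-move", 0, e->xmotion.x, e->xmotion.y);
        break;
      default:
        break;
    }
  }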
When we copy a buffer because we're moving it into VA-API memory, we
need to copy flags. Otherwise, interlaced YUV buffers from a capture
source (e.g. V4L2) don't get flagged as interlaced.
https://bugzilla.gnome.org/show_bug.cgi?id=726270
Signed-off-by: Simon Farnsworth <simon.farnsworth@onelan.co.uk>
[reversed order of gst_buffer_copy_into() flags to match <1.0 code]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
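The copy then looks roughly like this (buffer names are illustrative):

  /* carry over the buffer flags (interlacing, TFF, ...) along with
   * the timestamps when copying into VA-API backed memory */
  gst_buffer_copy_into (outbuf, inbuf,
      GST_BUFFER_COPY_FLAGS | GST_BUFFER_COPY_TIMESTAMPS, 0, -1);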
Allow implicit conversions to raw video formats, while still keeping
VA surfaces underneath. This allows for chaining the vaapipostproc
element to a software-only element that takes care of maps/unmaps.
e.g. xvimagesink.
https://bugzilla.gnome.org/show_bug.cgi?id=720174
Use pooled GstVaapiVideoMeta information, i.e. always allocate that on
video buffer allocation. Also optimize copy of additional metadata info
into the resulting video buffer: only copy the video cropping info and
the source surface proxy.
https://bugzilla.gnome.org/show_bug.cgi?id=720311
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
[fixed proxy leak, fixed double free on error, optimized meta copy]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
If no explicit output surface format is supplied, try to keep the one
supplied through the sink pad caps. This avoids a useless copy, even
if things are kept in GPU memory.
This is a performance regression from git commit dfa70b9.
If only advanced deinterlacing is requested, i.e. deinterlacing is
the only active algorithm to apply with source and output surface
formats being the same, then make sure to enable VPP processing.
Otherwise, allow fallback to bob-deinterlacing with simple rendering
flags alteration.
Add a default decide_allocation() hook to GstVaapiPluginBase. The caps
feature argument can be used to force a bufferpool with a specific kind
of memory.
Add GST_VAAPI_VIDEO_BUFFER_POOL_ACQUIRE_FLAG_NO_ALLOC params flag that
can be used to disable early allocations of vaapi video metas on buffers,
thus delegating that to the bufferpool user.
Default to I420 format for output surfaces so as to match the usual
GStreamer pipelines. Though, internally, we could still opt for NV12
surface formats, i.e. default format=ENCODED is a hint for that, thus
delegating the decision to the VA driver.
Use the new gst_caps_has_vaapi_surface() helper function to detect
whether the sink pad caps contain native VA surfaces, or not, i.e.
no raw video caps.
Also rename is_raw_yuv to get_va_surfaces to make the variable more
explicit, as we actually just want a way to differentiate raw video
caps from VA surfaces.
The "discontinuity" tracking code, whereby lost frames are tentatively
detected, is inoperative if the sink pad buffer timestamps are not right
to begin with.
This is a temporary workaround until the following bug is fixed:
https://bugzilla.gnome.org/show_bug.cgi?id=734386
In order to make the discontinuity detection code useful, we need to
detect the lost frames in the history as early as the previous frame.
This is because some VA implementations only support one reference
frame for advanced deinterlacing.
In practice, turn the condition for detecting a new frame that is
beyond the previous frame from field_duration*2 to field_duration*3,
i.e. nothing received for the past frame and a half, so as to account
for possible rounding errors when calculating the field duration either
in this element (vaapipostproc) or in the upstream element (parser).
This is a regression introduced with commit faefd62.
https://bugzilla.gnome.org/show_bug.cgi?id=734135
Introduce new gst_caps_has_vaapi_surface() helper function to detect
whether the supplied caps have VA surfaces. With GStreamer >= 1.2, this
implies a check for memory:VASurface caps features, and format=ENCODED
for earlier versions of GStreamer.
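A minimal sketch of the GStreamer >= 1.2 path; the format=ENCODED
fallback for older versions is omitted here:

  static gboolean
  caps_has_vaapi_surface (GstCaps * caps)
  {
    guint i, n = gst_caps_get_size (caps);

    for (i = 0; i < n; i++) {
      GstCapsFeatures *features = gst_caps_get_features (caps, i);

      if (gst_caps_features_contains (features, "memory:VASurface"))
        return TRUE;
    }
    return FALSE;
  }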
Simplify the creation and installation process of properties, by first
accumulating them into a g_properties[] array, and next calling into
g_object_class_install_properties().
Also add missing docs and flags to some properties.
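The pattern, sketched with illustrative property names:

  enum { PROP_0, PROP_DISPLAY_NAME, PROP_FULLSCREEN, N_PROPERTIES };

  static GParamSpec *g_properties[N_PROPERTIES] = { NULL, };

  static void
  install_properties (GObjectClass * object_class)
  {
    g_properties[PROP_DISPLAY_NAME] =
        g_param_spec_string ("display-name", "Display name",
        "Name of the display to output to", NULL,
        G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS);

    g_properties[PROP_FULLSCREEN] =
        g_param_spec_boolean ("fullscreen", "Fullscreen",
        "Requests window in fullscreen state", FALSE,
        G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS);

    /* install everything in one call */
    g_object_class_install_properties (object_class, N_PROPERTIES,
        g_properties);
  }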
Move code around in a more logical way. Introduce GST_VAAPISINK_CAST()
helper macro and use it wherever we know the object is a GstBaseSink or
any base class. Drop explicit initializers for values that have defaults
set to zero.
Introduce a new backend vtable so as to have a clean separation between
display-dependent code and common base code. That's a "soft" separation;
we don't really need dedicated objects.
https://bugzilla.gnome.org/show_bug.cgi?id=722248
Support for X11 "synchronous" mode was never implemented, and would
only have been useful for debugging. Drop it altogether; it's not
going to be useful in practice.
https://bugzilla.gnome.org/show_bug.cgi?id=733985
Rendering with GLX in vaapisink is kind of useless nowadays, including
OpenGL related fancy effects. Plain VA/GLX interfaces are also getting
deprecated in favor of EGL, or more direct buffer sharing with actual
GL textures.
Should testing of interop with GLX be needed, one could still use the
modern cluttersink or glimagesink elements.
https://bugzilla.gnome.org/show_bug.cgi?id=733984
Allow dynamic changes to the window, e.g. performed by the user, and
make sure to refresh its contents, while preserving aspect ratio.
In practice, Expose and ConfigureNotify events are tracked in X11
display mode by default. This occurs in a separate event thread, and
this is similar to what xvimagesink does. Any of those events will
trigger a reconfiguration of the window "soft" size, then of the
render-rect when necessary, and finally _expose() the result.
The default of handle_events=true can be changed programmatically via
gst_x_overlay_handle_events().
Thanks to Fabrice Bellet for rebasing the patch.
https://bugzilla.gnome.org/show_bug.cgi?id=711478
[dropped XInitThreads(), cleaned up the code a little]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
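A rough sketch of the event thread loop; update_window_size() and
render_last_buffer() are illustrative placeholders for the actual
vaapisink helpers:

  static void
  handle_x11_events (GstVaapiSink * sink, Display * dpy)
  {
    XEvent e;

    while (XPending (dpy)) {
      XNextEvent (dpy, &e);
      switch (e.type) {
        case ConfigureNotify:
          /* window was resized by the user: update the "soft" size */
          update_window_size (sink, e.xconfigure.width, e.xconfigure.height);
          /* fall through */
        case Expose:
          render_last_buffer (sink);    /* i.e. _expose() the result */
          break;
        default:
          break;
      }
    }
  }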
The gst_vaapidecode_decode_loop() function is called within a separate
task to fetch and output all frames that were decoded so far. So, if
the decoder_loop_status is forcibly set to EOS when _finish() is called,
then we are bound to exit the task without submitting the pending
frames.
If the downstream element errored out, then gst_pad_push() would
propagate the error upwards, so _finish() still gets correctly cut off
early in that case.
This is a regression from 6003596.
https://bugzilla.gnome.org/show_bug.cgi?id=733897
Fixes a hang/race on shutdown where _decode_loop() had already completed
its execution and _finish() was waiting on a GCond for decode_loop()
to complete. Also fixes the possible race where _finish() is called
but _decode_loop() endlessly returns before signalling completion
iff the decoder instance returns GST_FLOW_OK.
Found with: ... ! vaapidecode ! {glimagesink,cluttersink}
https://bugzilla.gnome.org/show_bug.cgi?id=733897
[factored out GST_VIDEO_DECODER_STREAM_UNLOCK() call]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Now that we always track the currently active video buffer, it is
not necessary to automatically increase its reference count since this is
implicitly performed in ::show_frame() through the get_input_buffer()
helper from GstVaapiPluginBase class.
This is a regression from a26df80.
Rework the logic behind the configuration of an adequate bufferpool,
especially when OpenGL meta or additional capsfeatures are needed.
Besides, for GStreamer >= 1.4, the first capsfeatures that gets matched,
and that is not system memory, is now selected by default.
Make sure to propagate memory:VASurface capsfeature to srcpad caps
only for GStreamer >= 1.5 as the plug-in elements in GStreamer 1.4
core currently miss additional patches available in 1.5-git (1.6).
This is a temporary workaround.
If a multiview stream is decoded, multiple view components are submitted
as-is downstream. It is the responsibility of the sink element to display
the required view components. By default, always select the frame buffer
that matches the view-id of the very first frame to be displayed.
However, introduce a "view-id" property to allow the selection of a
specific view component of interest to display.
Always record the VA surface that is currently being rendered, no matter
whether we are using textured blit or overlay. That's because on some
occasions we need to refresh or resize the displayed contents based on
new events, e.g. a user-resized window.
Besides, it's simpler to track the last video buffer in GstVaapiSink than
through the base sink "last-sample".
Add a "display-name" property to vaapisink so that the end user can
select the desired output. Keep "display-name" in line with the existing
"display" (GstVaapiDisplayXXX type).
So, for X11 or GLX, the "display-name" is the usual display name as we
know for XOpenDisplay(); for Wayland, the "display-name" is the name used
for wl_display_connect(); and for DRM, the "display-name" is actually the
DRI device name.
https://bugzilla.gnome.org/show_bug.cgi?id=722247
Always add VideoAlignment bufferpool option if the downstream element
expects its own pool to be used but does not offer it through a proper
propose_allocation() implementation, for instance, and the ALLOCATION
query does not expose the availability of the Video Meta API.
This fixes propagation of video buffer stride information to Firefox.
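A hedged sketch of the fallback in decide_allocation():

  static void
  configure_pool_alignment (GstBufferPool * pool, GstQuery * query)
  {
    GstStructure *config = gst_buffer_pool_get_config (pool);

    /* if the ALLOCATION query does not expose GstVideoMeta support,
     * request the VideoAlignment option so strides are padded
     * predictably for the downstream element */
    if (!gst_query_find_allocation_meta (query, GST_VIDEO_META_API_TYPE, NULL))
      gst_buffer_pool_config_add_option (config,
          GST_BUFFER_POOL_OPTION_VIDEO_ALIGNMENT);

    gst_buffer_pool_set_config (pool, config);
  }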
Make sure to always prefer native internal formats for the VA surfaces
that get allocated. Also disable "direct-rendering" mode in this case.
This is needed to make sure that anything that gets out of the
decoder, or anything that gets into the encoder, is in native format
for the hardware, and thus the driver doesn't need to perform implicit
conversions in there. Interop with SW elements is still available with
fast implementations of VA imaging APIs.
Forbid sharing of GstMemory instances, and rather make a copy. This
effectively copies the GstMemory structure and enclosed metadata, but
not the VA surface contents itself. It should, though.
This fixes preroll and makes sure to not download garbage for the first
frame when a SW rendering sink is used.
Use an image pool to hold VA images to be used for downloads/uploads
of contents for the associated surface.
This is an optimization for size. So, instead of creating as many VA
images as there are buffers (then VA surfaces) allocated, we only
maintain a minimal set of live VA images, thus preserving memory
resources.
Disable read-write mappings if "direct-rendering" is not supported.
Since the ordering of read and write operations is not specified,
this would require always downloading the VA surface on _map(), then
committing the temporary VA image back to the VA surface on _unmap().
Some SW decoding plug-in elements still use R/W mappings though.
https://bugzilla.gnome.org/show_bug.cgi?id=733242
Fix error messages introduced in the previous commit for the _map()
implementation. Also use the new get_image_data() helper function
to determine the base pixels data buffer from a GstVaapiImage when
updating the video info structure from it.
Allow raw pixels of the whole frame to be mapped read-only, i.e. in
cases where the buffer pool is allocated without the VideoMeta API and
thus individual planes cannot be mapped.
This is initial support for Firefox >= 30.
https://bugzilla.gnome.org/show_bug.cgi?id=731886
While creating the vaapi video allocator, make sure the associated
surface pool has the correct format instead of defaulting to the NV12
video format even when there is no direct-rendering support.
https://bugzilla.gnome.org/show_bug.cgi?id=732691
Make sure to always update the VA surface pointer whenever the proxy
changes. This used to only work when the VA surface is written to, in
interop with SW element ("upload" feature), and this now fixes cases
when the VA surface is needed for reading, in interop with SW element
("download" feature).
Always expose I420 format by default when the VA surface could be
mapped for interoperability with non-hardware-accelerated elements.
However, the default behaviour remains the auto-plugging of vaapi
elements, down to the sink.
Side effect: "direct-rendering" mode is also disabled most of the
time, as a plain memcpy() from uncached, speculative write-combining
memory is not going to be efficient enough.
Fix support for VA surface download capability in vaapidecode element
for GStreamer >= 1.2. This is a fix to supporting libva-vdpau-driver,
but also the libva-intel-driver while performing hardware accelerated
conversions from the native VA surface format (NV12) to the desired
output VA image format.
For instance, this fixes pipelines involving vaapidecode ! xvimagesink.
https://bugzilla.gnome.org/show_bug.cgi?id=733243
When playbin/decodebin builds the pipeline, it puts decoders and sinks
into different bins and forwards the queries from bin to bin. So in
the initial steps the pipeline is built iteratively by playbin and
looks like this:
[filesrc]
[filesrc] -> [typefind]
[filesrc] -> [typefind] -> [demuxer]
[filesrc] -> [typefind] -> [demuxer] -> [decoder]
At this point the decoder is asked for its SRC caps and it will make a
choice based on what gst_pad_peer_query_caps() returns. The problem is
that the caps returned at that point include caps features like ANY,
essentially because playbin can plug in additional elements like
videoscale, videoconvert or deinterlace.
This patch adds another call to
gst_vaapi_find_preferred_caps_feature() when the decoder decides its
allocation, to make sure we ask the downstream elements once the
entire pipeline has been built.
https://bugzilla.gnome.org/show_bug.cgi?id=731645
A compiler change showed me that tmp_rect went out of scope before
it was used. Move it to the beginning of the function instead.
https://bugzilla.gnome.org/show_bug.cgi?id=726363
Signed-off-by: Simon Farnsworth <simon.farnsworth@onelan.co.uk>
[added guards for GStreamer 0.10 builds]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
We can avoid scanning for start codes again if the bitstream is fed
in NALU chunks. Currently, we always scan for start codes, and keep
track of remaining bits in a GstAdapter, even if, in practice, we
are likely receiving one GstBuffer per NAL unit. i.e. h264parse with
"nal" alignment.
https://bugzilla.gnome.org/show_bug.cgi?id=723284
[use gst_adapter_available_fast() to determine the top buffer size]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
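A minimal sketch of the shortcut, with an illustrative helper name:

  /* if the top buffer in the adapter already holds a whole NAL unit
   * (e.g. h264parse with alignment=nal), take exactly that buffer
   * instead of rescanning for start codes */
  static GstBuffer *
  take_nalu_if_aligned (GstAdapter * adapter)
  {
    gsize size = gst_adapter_available_fast (adapter);

    if (size == 0)
      return NULL;
    return gst_adapter_take_buffer (adapter, size);
  }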
Avoid reaching an assert if dynamic framerates (0/1) are used. One
way to solve this problem is to just force field_duration to zero.
However, this means that, in presence of interlaced streams, the
very first field will never be displayed if precise presentation
timestamps are honoured.
https://bugzilla.gnome.org/show_bug.cgi?id=729604
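For context, the field duration is normally derived from the framerate,
so a 0/1 framerate needs a guard; this is only an illustration of that
computation, not the actual fix, which as noted above avoids simply
forcing the value to zero:

  gint fps_n = GST_VIDEO_INFO_FPS_N (&vi);
  gint fps_d = GST_VIDEO_INFO_FPS_D (&vi);
  GstClockTime field_duration = GST_CLOCK_TIME_NONE;

  /* with fps_n == 0 (dynamic framerate) the unguarded scale would
   * divide by zero / trip the assert */
  if (fps_n > 0 && fps_d > 0)
    field_duration = gst_util_uint64_scale (GST_SECOND, fps_d, fps_n * 2);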
ensure_srcpad_buffer_pool() tries to avoid unnecessarily deleting and
recreating filter_pool. Unfortunately, this also meant it didn't create
it if it did not exist.
Fix it to always create the buffer pool if it does not exist.
https://bugzilla.gnome.org/show_bug.cgi?id=723834
Signed-off-by: Simon Farnsworth <simon.farnsworth@onelan.co.uk>
Reset deinterlacer state, i.e. past reference frames used for advanced
deinterlacing, when there is some discontinuity detected in the course
of processing source buffers.
This fixes support for advanced deinterlacing when a seek occurred.
https://bugzilla.gnome.org/show_bug.cgi?id=720375
[fixed type of pts_diff variable, fetch previous buffer PTS from the
history buffer, reduce heuristic for detecting discontinuity]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Apply video cropping regions stored in GstVideoCropMeta, or in older
GstVaapiSurfaceProxy representation, to VPP pipelines. In non-VPP modes,
the crop meta are already propagated to the output buffers.
https://bugzilla.gnome.org/show_bug.cgi?id=720730
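A hedged sketch of picking up the crop meta for the VPP source
rectangle; GstVaapiRectangle is the gstreamer-vaapi type, and the
surrounding code is illustrative:

  GstVideoCropMeta *crop_meta = gst_buffer_get_video_crop_meta (inbuf);

  if (crop_meta) {
    GstVaapiRectangle crop_rect;

    crop_rect.x = crop_meta->x;
    crop_rect.y = crop_meta->y;
    crop_rect.width = crop_meta->width;
    crop_rect.height = crop_meta->height;
    /* ... pass crop_rect as the source rectangle to the VPP filter ... */
  }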
deinterlace-mode didn't behave in the way you'd expect if you have
past experience of the deinterlace element. There were two bugs:
1. "auto" mode wouldn't deinterlace "interleaved" buffers, only "mixed".
2. "force" mode wouldn't deinterlace "mixed" buffers flagged as progressive.
Fix these up, and add assertions and error messages to detect cases that
aren't handled.
https://bugzilla.gnome.org/show_bug.cgi?id=726361
Signed-off-by: Simon Farnsworth <simon.farnsworth@onelan.co.uk>
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
gst_video_info_set_format() does not preserve video info properties. In
order to keep important information in the caps such as interlace mode,
framerate, pixel aspect ratio, ... we need to manually copy back those
properties after setting the new video format.
https://bugzilla.gnome.org/show_bug.cgi?id=722276
It can happen that there is a pool provided that does not advertise
the vaapi video meta. We should unref that pool before using our own.
Discovered with vaapidecode ! {glimagesink,cluttersink}
https://bugzilla.gnome.org/show_bug.cgi?id=724957
[fixed compilation by adding the missing semi-colon]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Parse any pending data until a complete frame is obtained. This is a
memory optimization to avoid expansion of video packets stuffed into
the GstAdapter, and a fix to the EOS condition to detect that there is
actually pending data that needs to be decoded, and subsequently output.
https://bugzilla.gnome.org/show_bug.cgi?id=731831
Force early initialization of the GstVaapiDisplay to make sure that
the sink element's display object is presented first to upstream
elements, as it will correctly feature the display type requested by
the user.
Otherwise, we might end up in situations where a VA/X11 display is
initialized in vaapidecode, then we try VA/DRM display in vaapisink
(as requested by the "display" property), but this would cause a failure
because we cannot acquire a DRM display that was previously acquired
through another backend (e.g. VA/X11).
When a new display is settled through GstElement::set_context() (>= 1.2),
or GstVideoContext::set_context() (<= 1.0), then we shall also update the
associated display type.
The built-in video parser elements are built into a single DSO named
libgstvaapi_parse.so. The various video parsers can be accessed as
vaapiparse_CODEC.
For now, this only includes a modified version of h264parse so as to
support H.264 MVC encoded streams.
Add initial support for Subset SPS, Prefix NAL and Slice Extension NAL
for non-base-view streams encoding, and the usual SPS, PPS and Slice
NALs for base-view encoding.
The H.264 Stereo High profile encoding mode will be turned on when the
"num-views" parameter is set to 2. The source (raw) YUV frames will be
considered as Left/Right views, alternately.
Each of the two views has its own frames reordering pool and reference
frames list management system. Inter-view references are not supported
yet, so the views are encoded independently from each other.
Signed-off-by: Li Xiaowei <xiaowei.a.li@intel.com>
[limited to Stereo High profile per the definition of MAX_NUM_VIEWS]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Fix generation of source tarballs when certain conditionals are not
met, e.g. always include all buildable codecparsers sources in the
distribution tarball, fix plug-in element sources set to include X11
and encoder bits.
This maps GstVideoColorimetry information in vaapisink's sinkpad caps
to GST_VAAPI_COLOR_STANDARD_* flags, if per-buffer information was not
available.
https://bugzilla.gnome.org/show_bug.cgi?id=722255
[factored out code, added SMPTE240M, handle per-buffer flags]
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>