If the stream has a sequence_display_extension, then attach the
display_horizontal_size/display_vertical_size dimensions as the cropping
rectangle width/height to the GstVaapiPicture.
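For illustration, a minimal sketch of the idea; the extension field names
follow the codecparsers library, while the surrounding decoder variables
(display_ext, picture) are assumptions:

    /* Sketch: display_ext points to the parsed sequence_display_extension */
    if (display_ext) {
        GstVaapiRectangle crop_rect;

        crop_rect.x      = 0;
        crop_rect.y      = 0;
        crop_rect.width  = display_ext->display_horizontal_size;
        crop_rect.height = display_ext->display_vertical_size;
        gst_vaapi_picture_set_crop_rect (picture, &crop_rect);
    }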
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
If the Advanced profile has display_extension fields, then set the display
width/height dimensions as the cropping rectangle on the GstVaapiPicture.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
If the encoded stream has the frame_cropping_flag set, then associate
the cropping rectangle with the GstVaapiPicture.
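A simplified sketch of that computation, for the common 4:2:0, frame-coded
case (so both crop units are 2); the SPS field names follow GstH264SPS from
the codecparsers library, the rest (sps, picture) is assumed context:

    if (sps->frame_cropping_flag) {
        /* 4:2:0 and frame_mbs_only_flag == 1 assumed here, so
         * CropUnitX = SubWidthC = 2 and CropUnitY = SubHeightC = 2 */
        const guint crop_unit_x = 2, crop_unit_y = 2;
        const guint pic_width  = (sps->pic_width_in_mbs_minus1 + 1) * 16;
        const guint pic_height =
            (sps->pic_height_in_map_units_minus1 + 1) * 16 *
            (2 - sps->frame_mbs_only_flag);
        GstVaapiRectangle crop_rect;

        crop_rect.x      = sps->frame_crop_left_offset * crop_unit_x;
        crop_rect.y      = sps->frame_crop_top_offset * crop_unit_y;
        crop_rect.width  = pic_width - (sps->frame_crop_left_offset +
            sps->frame_crop_right_offset) * crop_unit_x;
        crop_rect.height = pic_height - (sps->frame_crop_top_offset +
            sps->frame_crop_bottom_offset) * crop_unit_y;
        gst_vaapi_picture_set_crop_rect (picture, &crop_rect);
    }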
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Make it possible to associate an empty cropping rectangle with the surface
proxy, thus resetting any cropping rectangle that was previously set.
This allows for returning plain NULL when no cropping rectangle was
initially set on the surface proxy, or when it was reset to defaults.
Add helper macros to retrieve the VA surface information like size
(width, height) or chroma type. This is a micro-optimization to avoid
useless function calls and NULL pointer re-checks in internal routines.
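The shape of such macros is roughly as follows (hypothetical field layout;
the point is to read cached values directly once the caller has already
validated the surface):

    /* Internal use only: the surface was validated by the caller, so
     * avoid gst_vaapi_surface_get_size()/get_chroma_type() re-checks */
    #define GST_VAAPI_SURFACE_WIDTH(surface) \
        (GST_VAAPI_SURFACE (surface)->priv->width)
    #define GST_VAAPI_SURFACE_HEIGHT(surface) \
        (GST_VAAPI_SURFACE (surface)->priv->height)
    #define GST_VAAPI_SURFACE_CHROMA_TYPE(surface) \
        (GST_VAAPI_SURFACE (surface)->priv->chroma_type)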
Add gst_vaapi_picture_set_crop_rect() helper function to copy the video
cropping information from raw bitstreams to each picture being decoded.
Also add a helper function to the surface proxy to propagate that
information outside of libgstvaapi, e.g. to plug-in elements or standalone
applications.
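From outside libgstvaapi, the information can then be read back along these
lines (a sketch; assumes a getter named gst_vaapi_surface_proxy_get_crop_rect()):

    /* In a plug-in element or application, once a frame is decoded */
    const GstVaapiRectangle * const crop_rect =
        gst_vaapi_surface_proxy_get_crop_rect (proxy);
    if (crop_rect) {
        /* e.g. propagate crop_rect->x/y/width/height to downstream
         * caps or to the renderer */
    }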
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
The MPEG-2 standard specifies (6.3.7) that all quantisation matrices
shall be reset to their default values when a Sequence_Header() is
decoded.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Break the circular references between GstVaapiContext and its children
GstVaapiSurfaces. Since each VA surface held an extra reference to the
context, which in turn holds a reference to its VA surfaces, none of
them was ever released.
How does this impact support for subpictures?
The only situation where the parent context needs to disappear is when
it is replaced with another one, e.g. because of a resolution change in
the video stream, or on a normal destroy. In this case, it does not
really matter to apply subpictures to the peer surfaces since they are
either gone, or those left in the pipeline can bear a re-instantiation
of the subpictures.
So, parent_context is set to NULL when the parent context is destroyed.
Other VA surfaces can still get subpictures attached to them, but
individually, not as a whole. i.e. subpictures for surface S1 will be
created from the active composition buffers and associated with S1,
subpictures for S2 will be created from the next active composition
buffers, etc. We don't try to cache the subpictures in those cases
(surfaces pending until EOS is reached, or pending until new surfaces
matching the new VA context get used instead).
Make sure to create the GLX context once the window object has completed
its creation. Since gl_resize() relies on the newly created window size,
we cannot simply overload the GstVaapiWindowClass::create() hook. So, we
just call gst_vaapi_window_glx_ensure_context() once the window object
is created in the gst_vaapi_window_glx_new*() functions.
Make it possible to add an extra filter to most of the display cache
lookup functions so that the GstVaapiDisplay instance can really match
a compatible, existing display by type, instead of relying on extra
string tags (e.g. "X11:" prefix, etc.).
Drop obsolete GST_VAAPI_IS_xxx() helper macros: since we are no longer
deriving from GObject, those were only checking whether the argument was
NULL or not. This is now irrelevant, and even confusing to some extent,
because we no longer have type checking.
Note: this shifts more of the type-checking burden onto code review, but
libgstvaapi is rather small, so this is manageable.
Make GstVaapiID a gsize instead of guessing an underlying integer type
large enough to hold all bits of a pointer. Also drop GST_VAAPI_ID_NONE
since it is plain zero and is no longer passed as varargs.
Port GstVaapiDecoder and GstVaapiDecoder{MPEG2,MPEG4,JPEG,H264,VC1} to
GstVaapiMiniObject. Add gst_vaapi_decoder_set_codec_state_changed_func()
helper function to let the user install a callback that is triggered
whenever the codec state (e.g. caps) changes.
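Typical usage might look as follows (the callback signature is an
assumption based on the helper's naming):

    static void
    codec_state_changed_cb (GstVaapiDecoder *decoder,
        const GstVideoCodecState *codec_state, gpointer user_data)
    {
        /* e.g. renegotiate downstream caps from the new codec state */
    }

    /* at decoder setup time */
    gst_vaapi_decoder_set_codec_state_changed_func (decoder,
        codec_state_changed_cb, NULL);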
Port GstVaapiVideoPool, GstVaapiSurfacePool and GstVaapiImagePool to
GstVaapiMiniObject. Drop gst_vaapi_video_pool_get_caps() since it had
not been used for a long time. Make object allocators static, i.e.
local to the shared library.
Drop support for user-defined data since this capability was not used
so far and GstVaapiMiniObject represents the smallest reference counted
object type. Add missing GST_VAAPI_MINI_OBJECT_CLASS() helper macro.
Besides, since GstVaapiMiniObject is a libgstvaapi internal object, it
is also possible to further simplify its layout, i.e. merge
GstVaapiMiniObjectBase into GstVaapiMiniObject.
Propagate the picture size from the bitstream to the GstVaapiDecoder,
and to any subsequent user who installed a handler for the notify::caps
signal. This fixes decoding of TS streams when the demuxer failed to
extract the required information.
Add gst_vaapi_decoder_get_frame_with_timeout() helper function that will
wait for a frame to be decoded, until the specified timeout in microseconds,
prior to returning to the caller.
This fixes a performance regression from 851cc0, whereby the vaapidecode
loop executed on the srcpad task was called too often, thus starving all
CPU resources.
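A sketch of how a caller would use the new helper (prototype assumed from
the description; status code names follow the existing GstVaapiDecoderStatus
enum):

    /* Wait up to 10 ms (timeout is in microseconds) for a decoded frame */
    GstVideoCodecFrame *frame;
    GstVaapiDecoderStatus status;

    status = gst_vaapi_decoder_get_frame_with_timeout (decoder, &frame, 10000);
    if (status == GST_VAAPI_DECODER_STATUS_SUCCESS) {
        /* render/push the frame, then release our reference */
        gst_video_codec_frame_unref (frame);
    }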
Rework GstVideoDecoder::handle_frame() to decode the current frame,
while possibly waiting for a free surface, and separately submit all
decoded frames from a task. This makes it possible to pop and render
decoded frames as soon as possible.
Add support for interlaced streams with GStreamer 1.0 too. Basically,
this enables vaapipostproc, though it is not auto-plugged yet. We also
make sure to reply to CAPS queries, and happily handle CAPS events.
Fix support for interlaced contents with GStreamer 0.10. In particular,
propagate GstVaapiSurfaceProxy frame flags to GstVideoCodecFrame flags
correctly.
This is a regression from commit 87e5717.
Rename GstVaapiDecoderFrame to GstVaapiParserFrame because this data
structure was only useful for parsing; a proper GstVaapiDecoderFrame
instance will be created instead.
Fix regression from 0.4-branch whereby GstVaapiSurfaceProxy no longer
held any information about the expected presentation timestamp, frame
duration or additional flags like interlaced or top-field-first.
Use new GstVaapiSurfaceProxy internal helper functions to propagate the
necessary GstVideoCodecFrame flags to vaapidecode (GStreamer 0.10).
Also make GstVaapiDecoder push_frame() operate similarly to drop_frame(),
i.e. increase the GstVideoCodecFrame reference count in push_frame()
rather than in gst_vaapi_picture_output().
Add more attributes for raw decoding modes, i.e. directly through the
libgstvaapi helper library. In particular, add presentation timestamp,
duration and a couple of flags (interlaced, TFF, RFF, one-field).
Ensure libgstvaapi-glx*.so builds against libdl since dlsym() is used
to resolve glXGetProcAddress() from GLX libraries. This fixes the build
on Fedora 17.
https://bugzilla.gnome.org/show_bug.cgi?id=698046
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Fix previous commit whereby gst_vaapi_decoder_get_codec_state() was
supposed to make GstVaapiDecoder own the returned GstVideoCodecState
object: only the comment was updated, not the actual code.
Make gst_vaapi_decoder_get_codec_state() return the original codec state,
i.e. make the GstVaapiDecoder object own the returned state so that
callers that want an extra reference to it simply gst_video_codec_state_ref()
it before use. This aligns the behaviour with what we had before with
gst_vaapi_decoder_get_caps().
This is an ABI-incompatible change; the library major version was bumped
from the previous release (0.5.2).
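Under the new semantics, a caller that needs to keep the state around would
do something like this (a sketch):

    GstVideoCodecState * const state =
        gst_vaapi_decoder_get_codec_state (decoder);
    if (state) {
        /* the decoder owns the returned state: take our own reference
         * if it must outlive this scope */
        GstVideoCodecState * const ref = gst_video_codec_state_ref (state);
        /* ... use ref->caps, ref->info ... */
        gst_video_codec_state_unref (ref);
    }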
Fix the name of the plug-in element reported to gst-inspect-1.0, i.e. we
need an explicit definition for GStreamer >= 1.0 because GST_PLUGIN_DEFINE
incorrectly uses #name for creating the plug-in name, instead of allowing
macro expansion (and further expansion of macros) through e.g. G_STRINGIFY().
Fix make dist to allow builds for either GStreamer 0.10 or 1.0, i.e. make
sure to include all source files in either case while generating source
tarballs.
Introduce a new configure option --with-gstreamer-api that determines
the desired GStreamer API to use. By default, GStreamer 1.0 is selected.
Also integrate more compatibility glue into gstcompat.h and plugins.
This integrates support for GStreamer API >= 1.0 only in the libgstvaapi
core decoding library. The changes are kept rather minimal here so that
the library retains as few dependencies as possible on core GStreamer
functionality.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Drop the following functions that are now obsolete:
- gst_vaapi_context_get_surface()
- gst_vaapi_context_put_surface()
- gst_vaapi_context_find_surface_by_id()
- gst_vaapi_surface_proxy_new()
- gst_vaapi_surface_proxy_get_context()
- gst_vaapi_surface_proxy_set_context()
- gst_vaapi_surface_proxy_set_surface()
This is an API change.
Now that the surface pool is reference counted in the surface proxy wrapper,
we can safely ignore surface size checks in gst_vaapi_decoder_ensure_context().
Besides, this check is already performed in gst_vaapi_context_reset_full().
Introduce gst_vaapi_surface_proxy_new_from_pool() to allocate a new surface
proxy from the context surface pool. This change also makes sure to retain
the parent surface pool in the proxy.
Besides, it was also totally useless to attach/detach the parent context
to/from a VA surface each time we acquire/release it. Since the context
owns all associated VA surfaces, we can mark this once and for all.
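Call sites in the decoder then reduce to something like this (a sketch;
the pool accessor and the error code are assumptions):

    /* Acquire a surface for the current picture: the proxy retains the
     * pool, so releasing the proxy returns the surface to the right
     * pool even if the VA context was replaced in the meantime */
    proxy = gst_vaapi_surface_proxy_new_from_pool (
        GST_VAAPI_SURFACE_POOL (context->surface_pool));
    if (!proxy)
        return GST_VAAPI_DECODER_STATUS_ERROR_ALLOCATION_FAILED;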
Move GstVaapiVideoMeta from core libgstvaapi decoding library to the
actual plugin elements. That's only useful there. Also inline reference
counting code from GstVaapiMiniObject.
Make sure the libgstvaapi core decoding library doesn't pull in unneeded
dependencies. So, move GstVaapiVideoConverterGLX out to the plugins instead.
Besides, even if the vaapisink element is not used, we are bound to have
a correctly populated GstSurfaceBuffer from vaapidecode.
Also clean up the file along the way.
In decode_codec_data(), force initialization of format to zero so that
we can catch cases where the codec-data has neither "format" nor
"wmvversion" fields, thus making it possible to fail gracefully in this
case.
Add a new GstVaapiDecoder::decode_codec_data() hook to actually decode
codec-data in the decoder sub-class. Provide a common shared helper
function to do the actual work, delegating further to the sub-class.
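The delegation pattern boils down to the following (a sketch with assumed
names; the real hook prototype and field access may differ):

    static GstVaapiDecoderStatus
    gst_vaapi_decoder_decode_codec_data (GstVaapiDecoder *decoder)
    {
        const GstVaapiDecoderClass * const klass =
            GST_VAAPI_DECODER_GET_CLASS (decoder);
        GstBuffer * const codec_data = decoder->codec_state->codec_data;

        /* nothing to do if there is no codec-data, or if the
         * sub-class does not implement the hook */
        if (!codec_data || !klass->decode_codec_data)
            return GST_VAAPI_DECODER_STATUS_SUCCESS;
        return klass->decode_codec_data (decoder, codec_data);
    }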
Drop the GstVaapiDecoderUnit buffer field (GstBuffer) since it is totally
useless nowadays: creating sub-buffers doesn't bring any value and
actually means more memory allocations. The JPEG and MPEG-4:2 decoders
are the exception; they can't do without it.
Use gst_element_class_set_static_metadata() from GStreamer 1.0, which
basically is the same as gst_element_class_set_details_simple() in
GStreamer 0.10 context.
Use the newer gst_video_overlay_rectangle_get_pixels_unscaled_raw()
helper function with GStreamer 0.10 compatible semantics, or one that
approaches the current meaning. Basically, this is also just about
moving the helper to gstcompat.h.
Force luma_log2_weight_denom and chroma_log2_weight_denom to zero if
no pred_weight_table() was parsed.
This is a workaround for the VA intel-driver on Ivy Bridge.
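A sketch of the workaround; the presence test mirrors the pred_weight_table()
condition from the H.264 syntax, with field names from GstH264SliceHdr and
GstH264PPS in the codecparsers library:

    /* pred_weight_table() was not parsed for this slice, so make the
     * inferred denominators explicit for the driver */
    if (!((pps->weighted_pred_flag && GST_H264_IS_P_SLICE (slice_hdr)) ||
          (pps->weighted_bipred_idc == 1 && GST_H264_IS_B_SLICE (slice_hdr)))) {
        slice_hdr->pred_weight_table.luma_log2_weight_denom = 0;
        slice_hdr->pred_weight_table.chroma_log2_weight_denom = 0;
    }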
g_async_queue_timeout_pop() appeared in glib 2.31.18. For older versions,
implement it on top of g_async_queue_timed_pop(), with a GTimeVal holding
the final time to wait for new data to arrive in the queue.
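The compatibility shim is essentially the following (timeout in
microseconds, matching the real g_async_queue_timeout_pop() signature):

    #if !GLIB_CHECK_VERSION(2,31,18)
    static inline gpointer
    g_async_queue_timeout_pop (GAsyncQueue *queue, guint64 timeout)
    {
        GTimeVal end_time;

        /* final time = now + timeout microseconds */
        g_get_current_time (&end_time);
        g_time_val_add (&end_time, timeout);
        return g_async_queue_timed_pop (queue, &end_time);
    }
    #endif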
There shall be only one place to call decode_current_picture(), and that
is the end_frame() hook. The EOS unit is processed after end_frame(), so
we cannot have a valid picture to decode/output at this point.
Improve robustness when some expected packets were not received yet, or
were not correctly decoded. For example, don't try to decode a picture
if there was no valid sequence or picture header.
Fix gst_vaapi_decoder_get_surface() to only return frames with a valid
surface proxy, i.e. with a valid VA surface. This means that any frame
marked as decode-only is simply skipped.
If the decoder was not able to decode a frame because insufficient
information was available, e.g. missing sequence or picture header,
then allow the frame to be gracefully dropped without generating
any error.
It is also possible that a frame is not meant to be displayed but only
used as a reference, so dropping that frame is also a valid operation,
since the GstVideoDecoder base class holds extra references to that
GstVideoCodecFrame that need to be released.
The Wayland API is not fully thread-safe and client applications shall
perform locking themselves on key functions. Besides, make sure to
release the lock if the _render() function fails.
Introduce gst_vaapi_window_wayland_sync() helper function to wait for
the completion of the redraw request. Use it in _render() function to
actually block until the previous draw request is completed.
The redraw callback needs to be attached to the surface prior to the
commit. Otherwise, the callback notifies the next surface repaint,
which is not the desired behaviour. i.e. we want to be notified for
the surface we have just filled.
Another issue was that redraw_pending was reset before the actual
completion of the frame redraw callback, thus causing concurrency issues,
e.g. the callback could have been called again, but with a NULL buffer.
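The corrected _render() sequence looks roughly like this (core Wayland
client API; the priv fields and the listener are assumptions):

    /* request the frame callback for THIS surface, before commit */
    priv->redraw_pending = TRUE;
    callback = wl_surface_frame (priv->surface);
    wl_callback_add_listener (callback, &frame_callback_listener, window);

    wl_surface_attach (priv->surface, buffer, 0, 0);
    wl_surface_damage (priv->surface, 0, 0, width, height);
    wl_surface_commit (priv->surface);
    wl_display_flush (priv->display);

    /* sync: only the frame callback itself clears redraw_pending */
    while (priv->redraw_pending)
        wl_display_dispatch (priv->display);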
When the Wayland display is shared, we still have to create our own local
shell and compositor objects, since they are not propagated from the cache.
Likewise, we also need to determine the display size, or vaapisink would
fail to account for the display aspect ratio and would try to create a 0x0
window.
Fix build with newer VC-1 codecparser where dqsbedge was renamed to
dqbedge, and now represents either DQSBEDGE or DQDBEDGE depending on
the actual value of DQPROFILE.
Fix size of encapsulated BDUs since GstVC1BDU.size actually represents
the size of the BDU data, starting from offset, i.e. after any start
code is parsed.
This fixes a buffer overflow during the unescaping process.
The AVI demuxer (avidemux) does not set a proper "format" attribute
to the generated caps. So, try to recover the video codec format from
the "wmvversion" property instead.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Don't create temporary GstBuffers for all decoder units, even if they
are lightweight "sub-buffers", since it is not really necessary to keep
the buffer data around.
Implement GstVaapiDecoder.start_frame() and end_frame() semantics so as
to create the new VA context earlier and submit VA pictures to the HW
for decoding as soon as possible, i.e. don't wait for the next frame
to start decoding the previous one.
Use GstVaapiDpb interface instead of maintaining our own prev and next
picture pointers. While doing so, try to derive a sensible POC value.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Avoid usage of goto. Simplify the decode_step() process to first accumulate
all pending buffers into the GstAdapter, and then parse and decode units
from that input adapter. Stop the process once a frame is fully decoded or
an error occurs.
Make sure we always have a free surface left to use for decoding the
current frame. This means that decode_step() has to return once a frame
gets decoded. If the current adapter contains more buffers with valid
frames, they will get parsed and decoded on subsequent iterations.
Keep only one DPB interface and rename gst_vaapi_dpb2_get_references()
to gst_vaapi_dpb_get_neighbours() so as to retrieve the pictures in the
DPB around the specified picture POC.
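Decoders then call it along these lines (a sketch; the exact prototype is
assumed):

    /* retrieve the reference pictures around the current picture,
     * in POC order, e.g. for reconstructing B-picture references */
    GstVaapiPicture *prev_picture, *next_picture;

    gst_vaapi_dpb_get_neighbours (dpb, picture,
        &prev_picture, &next_picture);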
Move GstVaapiDpbMpeg2 API to a more generic version that could also be
useful to other decoders that require 2 reference pictures, e.g. VC-1.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Fix support for global-alpha subpictures. The previous changes brought
the ability to check for GstVideoOverlayRectangle changes by comparing
the underlying pixel buffer pointers. If the sequence number and pixel
data did not change, this is an indication that only the global-alpha
value changed. Now, try to update the underlying VA subpicture
global-alpha value.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Don't re-upload the VA subpicture if only the render rectangle changed.
Rather, deassociate the subpicture and re-associate it with the new
render rectangle.
A GstVideoOverlayRectangle is created whenever the underlying pixel data
changes. However, when global-alpha is supported, it is possible to re-use
the same GstVideoOverlayRectangle with only a change to the global-alpha
value. This process causes a change of sequence number, so we can no longer
rely on that check alone.
Still, if the sequence numbers did not change, then there was no change in
global-alpha either. So, we need a way to compare the underlying GstBuffer
pointers. There is no API to retrieve the original pixel buffer from
a GstVideoOverlayRectangle, so we use the following heuristics:
1. Use gst_video_overlay_rectangle_get_pixels_unscaled_argb() with the same
format flags from which the GstVideoOverlayRectangle was created. This
will work if there was no prior consumer of the GstVideoOverlayRectangle
with alternate (non-"native") format flags.
2. In overlay_rectangle_has_changed_pixels(), use the same
gst_video_overlay_rectangle_get_pixels_unscaled_argb() function, but
with flags that match the subpicture. This is needed to cope with
platforms that don't support global-alpha in HW: the gst-video layer
then takes care of it, either by fixing things up with a possibly new
GstBuffer, and hence new pixel data, or in-place by caching the
currently applied global-alpha value. So we have to determine how the
rectangle was previously used, based on which flags were previously
used to retrieve the ARGB pixel buffer (see the sketch after this list).
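A sketch of the pointer comparison in heuristic 2 (assuming the GStreamer
1.0 signature of gst_video_overlay_rectangle_get_pixels_unscaled_argb(),
which takes the rectangle plus flags and returns a GstBuffer):

    static gboolean
    overlay_rectangle_has_changed_pixels (GstVideoOverlayRectangle *rect,
        GstBuffer *old_pixels, GstVideoOverlayFormatFlags flags)
    {
        /* flags must match those used when the subpicture was created */
        GstBuffer * const pixels =
            gst_video_overlay_rectangle_get_pixels_unscaled_argb (rect, flags);

        /* same underlying buffer => same pixel data */
        return pixels != old_pixels;
    }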
We previously assumed that an overlay composition changed if the number
of overlay rectangles in it changed, or if a rectangle was updated and
thus its seqnum was updated.
Now, we can also cope with cases where the GstVideoOverlayComposition
grew by one or a few more overlay rectangles while the initial overlay
rectangles are kept as is.
Create the GPtrArray once in the _init() function and destroy it only
in the _finalize() function. Then use overlay_clear() to remove all
subpicture associations for intermediate updates; don't recreate the
GPtrArray.
Make GstVaapiOverlayRectangle a reference counted object. Also make
sure that overlay_rectangle_new() actually creates and associates the
VA subpicture.
Handle global-alpha from GstVideoOverlayComposition API. Likewise,
the same code path could also work for premultiplied-alpha but this
was not tested.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Add the necessary helpers in GstVaapiDisplay to determine whether subpictures
with global alpha are supported or not. Also add accessors in GstVaapiSubpicture
to address this feature.
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Add premultiplied-alpha and global-alpha feature flags, along with converters
between VA-API and gstreamer-vaapi definitions. Another round of helpers is
also necessary for GstVideoOverlayComposition API.
Use GPOINTER_TO_SIZE() instead of GPOINTER_TO_UINT() when manipulating
pointers. The latter is meant for 32-bit quantities only, not for
uintptr_t-like sizes; only a gsize can hold all bits of a pointer.
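A small illustration of the difference on an LP64 platform:

    /* on LP64, pointers are 64 bits wide but guint stays 32 bits */
    gpointer ptr  = GSIZE_TO_POINTER (((gsize) 1) << 32);
    gsize    good = GPOINTER_TO_SIZE (ptr);  /* all bits preserved */
    guint    bad  = GPOINTER_TO_UINT (ptr);  /* high bits silently lost */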
Thanks to Ouping Zhang for spotting this error.
Heuristic: if the second start code is available, check whether it
marks the start of a new frame, e.g. because it is a sequence or picture
header. This doesn't save much, since we already cache the results.
Accelerate the scan for start codes by skipping up to 3 bytes per iteration.
A start code prefix is defined by the following bytes: 00 00 01. Thus,
for any group of 3 bytes (xx yy zz), we have the following possible cases:
1. If zz > 1, no start code prefix can begin within this group, so skip 3 bytes;
2. Otherwise, if yy != 0, a prefix could at best begin at zz, so skip 2 bytes;
3. Otherwise, if xx != 0 or zz != 1, this is not a match yet, so skip 1 byte;
4. Otherwise, xx yy zz == 00 00 01: we have a match!
This algorithm requires peeking at bytes from the adapter. This increases
the amount of bytes copied to a temporary buffer, but the process is much
faster than scanning all the bytes with shifts/masks. So, overall, this is
a win.
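In code, the skip logic amounts to the following (a sketch operating on a
peeked window of bytes):

    static gint
    scan_for_start_code (const guchar *data, guint size)
    {
        guint i = 0;

        while (i + 3 <= size) {
            if (data[i + 2] > 1)                        /* case 1 */
                i += 3;
            else if (data[i + 1] != 0)                  /* case 2 */
                i += 2;
            else if (data[i] != 0 || data[i + 2] != 1)  /* case 3 */
                i += 1;
            else
                return i;  /* case 4: 00 00 01 at offset i */
        }
        return -1;  /* no start code prefix found */
    }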
Move parsing back to decoding step, but keep functions separate for now.
This is needed for future optimizations that may introduce some meta data
for parsed info attached to codec frames.
Optimize pre-allocation of decoder units, thus avoiding unnecessary
memory reallocations. The heuristic used is that we could have around
one slice unit per macroblock line.
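With the GArray storage described below, the pre-allocation is essentially
(names assumed):

    /* one slice unit per macroblock line (16 pixels tall) */
    const guint num_units = (priv->height + 15) / 16;

    frame->units = g_array_sized_new (FALSE, FALSE,
        sizeof (GstVaapiDecoderUnit), num_units);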
Use a GArray to hold the decoder units in a frame, instead of a
singly-linked list. This makes 'append' calls faster, though not by much.
At least, this makes things clearer.
Allocate decoder unit earlier in the main parse() function and don't
delegate this task to derived classes. The ultimate purpose is to get
rid of dynamic allocation of decoder units.
The SPS, PPS and slice headers are not fully zero-initialized in the
codecparsers/ library. Rather, the standard upstream behaviour is to
initialize only certain syntax elements with inferred values when they
are not present in the bitstream.
At the gstreamer-vaapi decoder level, we need to further initialize
certain syntax elements with sensible default values, so as not to
complicate VA drivers that just pass those verbatim to the HW, while
also avoiding a memset() of the whole decoder unit.
Signed-off-by: Sreerenj Balachandran <sreerenj.balachandran@intel.com>
Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne@intel.com>
Make GstVaapiVideoBuffer a simple wrapper for the video meta. This buffer
is no longer necessary, except for compatibility with GStreamer 0.10 APIs
or users expecting a GstSurfaceBuffer, like Clutter.
Use PTS value computed by the decoder, which could also be derived from
the GstVideoCodecFrame PTS. This makes it possible to fix up the PTS if
the original one was miscomputed or only represented a DTS instead.
Create a new VA context if the encoded surface size changes, because we
need to keep the underlying surface pool alive until the last surface is
released. Otherwise, either of the following could happen: (i) releasing
a VA surface to a nonexistent pool, or (ii) releasing a VA surface to an
existing surface pool, but one with a different size.
Avoid creating a GstBuffer for the slice data. Rather, directly use the
codec frame input buffer data. This is possible because the codec frame
is valid until end_frame(), where we submit the VA buffers for decoding.
Anyway, the slice data buffer is copied into the VA buffer when the
latter is created.