When we need to push out all the previously received events, concatenate all
the events from the previous frames (instead of leaking the old ones)
Improve debugging a little
Conflicts:
gst-libs/gst/video/gstvideodecoder.c
Frames receive a refcount when added to the frames list so release that refcount
in gst_video_decoder_do_finish_frame(). Also release the ref on the frame
because gst_video_decoder_do_finish_frame() takes ownership of the passed frame.
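A rough sketch of the resulting ownership handling; the 'frames' list here
stands in for the decoder's internal pending-frame list and the helper name is
illustrative, not the actual internals:

  #include <gst/video/gstvideodecoder.h>

  /* sketch: once a frame has been handled, drop both refs */
  static void
  release_finished_frame (GList ** frames, GstVideoCodecFrame * frame)
  {
    *frames = g_list_remove (*frames, frame);
    /* drop the ref that was added when the frame was put on the list */
    gst_video_codec_frame_unref (frame);
    /* drop the ref we own because the frame was passed in with ownership */
    gst_video_codec_frame_unref (frame);
  }
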
This allows subclasses to override it, as is necessary for e.g. the
video-crop meta. It is now necessary that after decide_allocation()
there is always an allocator and a configured buffer pool inside the
query.
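A hedged sketch of what such a subclass override could look like under this
contract, assuming a GstVideoDecoder subclass for illustration: chain up, then
tweak the pool that is guaranteed to be in the query (my_decoder_decide_allocation
is hypothetical; parent_class as set up by G_DEFINE_TYPE; adding the video meta
option is only an example of what a crop-meta implementation would need):

  #include <gst/video/video.h>
  #include <gst/video/gstvideodecoder.h>
  #include <gst/video/gstvideopool.h>

  static gboolean
  my_decoder_decide_allocation (GstVideoDecoder * decoder, GstQuery * query)
  {
    GstBufferPool *pool = NULL;
    GstStructure *config;
    guint size, min, max;

    if (!GST_VIDEO_DECODER_CLASS (parent_class)->decide_allocation (decoder,
            query))
      return FALSE;

    /* there is always at least one configured pool in the query now */
    gst_query_parse_nth_allocation_pool (query, 0, &pool, &size, &min, &max);

    config = gst_buffer_pool_get_config (pool);
    gst_buffer_pool_config_add_option (config, GST_BUFFER_POOL_OPTION_VIDEO_META);
    gst_buffer_pool_set_config (pool, config);

    gst_query_set_nth_allocation_pool (query, 0, pool, size, min, max);
    gst_object_unref (pool);

    return TRUE;
  }
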
Video base classes and the theora plugin still need to be ported again
Conflicts:
docs/libs/gst-plugins-base-libs-docs.sgml
docs/libs/gst-plugins-base-libs-sections.txt
docs/libs/gst-plugins-base-libs.types
ext/theora/gsttheoradec.c
ext/theora/gsttheoradec.h
ext/theora/gsttheoraenc.c
ext/theora/gsttheoraenc.h
gst-libs/gst/video/Makefile.am
gst-libs/gst/video/video.c
gst-libs/gst/video/video.h
gst/playback/gsturidecodebin.c
tests/check/libs/video.c
tests/check/pipelines/theoraenc.c
win32/common/libgstvideo.def
Some container formats (like AVI) set DTS on the buffers instead of
PTS.
We detect this by:
* detecting if input timestamps are non-increasing
* detecting whether the frames come out in the same order they were
provided (if not, the implementation is reordering frames).
If the decoder reorders frames, but the input buffer timestamps were not
reordered, the buffers have DTS and not PTS as their timestamps.
In that case we set the PTS of the outgoing frames to the input timestamps,
in the same order as they were given to the decoder.
This fixes the issue for any decoder using this base class (yay).
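A simplified sketch of the detection, with illustrative state fields rather
than the base class's actual members:

  #include <gst/gst.h>

  typedef struct
  {
    GstClockTime last_in_ts;    /* timestamp of the previous input buffer */
    gboolean ts_monotonic;      /* input timestamps never went backwards  */
    gboolean reordered_output;  /* frames came out in a different order   */
  } TimestampState;

  /* called for every input buffer: remember whether timestamps ever decrease */
  static void
  track_input_timestamp (TimestampState * s, GstClockTime ts)
  {
    if (GST_CLOCK_TIME_IS_VALID (s->last_in_ts) && GST_CLOCK_TIME_IS_VALID (ts)
        && ts < s->last_in_ts)
      s->ts_monotonic = FALSE;
    s->last_in_ts = ts;
  }

  /* called when a frame comes out: if the subclass reordered frames while the
   * input timestamps stayed monotonic, the input carried DTS, so reuse the
   * input timestamps in input order as the outgoing PTS */
  static GstClockTime
  pick_output_pts (TimestampState * s, GstClockTime next_input_ts,
      GstClockTime decoded_pts)
  {
    if (s->reordered_output && s->ts_monotonic)
      return next_input_ts;
    return decoded_pts;
  }
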
Rename the frame_flags to flags. Because these are flags on the frame object,
the redundant frame_ prefix is not needed.
Change the order of the metadata constructor so that the flags come before the
format and dimension arguments.
There's a new GstVideoFrameFlags enum now that contains the frame
specific flags only. GstVideoFlags does not contain the TFF/RFF/ONEFIELD
flags anymore because these are strictly frame specific.
Also add a fallback to parse these flags from the GstBufferFlags in
gst_video_frame_map() if there's no GstVideoMeta attached to the buffer.
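The fallback amounts to roughly this mapping from buffer flags to frame flags
(flag names as in the current library; they may differ slightly from the code
at the time):

  #include <gst/video/video.h>

  /* sketch: derive per-frame interlace flags from the buffer flags when the
   * buffer carries no GstVideoMeta */
  static GstVideoFrameFlags
  frame_flags_from_buffer (GstBuffer * buffer)
  {
    guint flags = GST_VIDEO_FRAME_FLAG_NONE;

    if (GST_BUFFER_FLAG_IS_SET (buffer, GST_VIDEO_BUFFER_FLAG_TFF))
      flags |= GST_VIDEO_FRAME_FLAG_TFF;
    if (GST_BUFFER_FLAG_IS_SET (buffer, GST_VIDEO_BUFFER_FLAG_RFF))
      flags |= GST_VIDEO_FRAME_FLAG_RFF;
    if (GST_BUFFER_FLAG_IS_SET (buffer, GST_VIDEO_BUFFER_FLAG_ONEFIELD))
      flags |= GST_VIDEO_FRAME_FLAG_ONEFIELD;

    return flags;
  }
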
When setting its config, the pool increases the ref count of the allocator, but
at finalize the ref count is also increased rather than decreased.
This one-liner patch changes the gst_allocator_ref() to gst_allocator_unref()
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=674011
gst_buffer_set_qdata() will leak the structure passed to it
when called incorrectly (e.g. on a non-metadata-writable buffer).
This is expected, but we must avoid doing that when running under valgrind.
Try to avoid floating point maths for each pixel to be blended in
the inner loop, and try to avoid the multiplication entirely for the
most common case of the global alpha being 1. Could probably be
refactored a bit more.
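A sketch of the kind of integer inner loop this is aiming for, assuming ARGB
pixels with the alpha byte first and the global alpha converted to a 0..256
factor once outside the loop:

  #include <glib.h>

  /* sketch: scale the global alpha to an integer factor once, then use a
   * multiply and shift per pixel instead of floating point maths */
  static void
  apply_global_alpha_fast (guint8 * argb, guint n_pixels, gdouble global_alpha)
  {
    guint i, factor = (guint) (global_alpha * 256.0 + 0.5);

    if (factor >= 256)          /* most common case: global alpha is 1.0 */
      return;

    for (i = 0; i < n_pixels; i++) {
      argb[0] = (argb[0] * factor) >> 8;
      argb += 4;
    }
  }
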
extract_alpha and apply_global_alpha always return TRUE really,
so just do away with the return value. Convert a g_return_if_fail()
into a g_assert(), since this is only to check internal consistency
and not a guard for public API. Add some locking.
https://bugzilla.gnome.org/show_bug.cgi?id=668483
If we are asked to (un)premultiply, we need to create the new rectangle
with the right flags, so we can find it properly on subsequent cache
lookups (also because it's wrong otherwise).
https://bugzilla.gnome.org/show_bug.cgi?id=668483
We need to copy the pixels before messing with them, not least
because the buffer creation code below assumes it's ok to take
ownership.
Fixes crash caused by double-free.
https://bugzilla.gnome.org/show_bug.cgi?id=668483
We require the video meta to be activated before we can implement buffer
alignment. Users of the bufferpool should do this manually based on the
results of the allocation query.
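For example, a user of the pool could enable the video meta and then request
stride alignment like this (option and helper names as in the current library,
which may differ slightly from the code of the time):

  #include <gst/video/video.h>
  #include <gst/video/gstvideopool.h>

  /* sketch: enable the video meta on the pool and then ask for alignment,
   * which only works with the meta active */
  static void
  configure_pool_alignment (GstBufferPool * pool, GstCaps * caps, guint size)
  {
    GstStructure *config = gst_buffer_pool_get_config (pool);
    GstVideoAlignment align;

    gst_buffer_pool_config_set_params (config, caps, size, 2, 0);
    gst_buffer_pool_config_add_option (config, GST_BUFFER_POOL_OPTION_VIDEO_META);
    gst_buffer_pool_config_add_option (config,
        GST_BUFFER_POOL_OPTION_VIDEO_ALIGNMENT);

    gst_video_alignment_reset (&align);
    align.stride_align[0] = 15;         /* request 16-byte aligned strides */
    gst_buffer_pool_config_set_video_alignment (config, &align);

    gst_buffer_pool_set_config (pool, config);
  }
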
Install default map/unmap functions on the metadata and really call those
functions instead of always calling a default implementation.
Rework some bits so that we don't have to mess with the GstMapInfo information
(adding the offset), instead pass the adjusted data pointer from the map function.
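Callers then go through the meta's map function and get the adjusted data
pointer back directly; a minimal sketch:

  #include <gst/video/video.h>
  #include <gst/video/gstvideometa.h>

  /* sketch: map one plane through the GstVideoMeta; the map function hands
   * back the already-adjusted data pointer and stride for that plane */
  static void
  print_plane_info (GstBuffer * buffer, guint plane)
  {
    GstVideoMeta *meta = gst_buffer_get_video_meta (buffer);
    GstMapInfo info;
    gpointer data;
    gint stride;

    if (meta && gst_video_meta_map (meta, plane, &info, &data, &stride,
            GST_MAP_READ)) {
      g_print ("plane %u: data %p, stride %d\n", plane, data, stride);
      gst_video_meta_unmap (meta, plane, &info);
    }
  }
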
This is necessary in order to match what the caps strings in
video.h contain for 16-bit RGB formats and also to match how
gst_video_format_parse_caps expects them.
https://bugzilla.gnome.org/show_bug.cgi?id=667681
Fix building of the libgstvideo module on Android by adding the
missing and needed $(DEFAULT_INCLUDES) to CFLAGS for the
androgenizer call on gst-libs/gst/video/Makefile.am
Before this change, building was failing due to gst-plugins-base/
and gst-plugins-base/gst-libs/gst/video being left out of the
include path.
Rename the offset field in GstVideoFormatInfo to poffset to avoid confusion with
the offset of the plane in the buffer. The poffset is the offset in the plane
where the first byte of the component data can be found.
Properly implement the COMP_OFFSET calculations.
Fix YV12 and YVU9: simply use the same offsets as the regular I420 and YUV9
variants, since we already use the plane info to reorder components.
Improve the unit test.
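To illustrate the poffset semantics described above (macro names as in the
current library):

  #include <gst/video/video.h>

  /* illustration: YUY2 packs Y0 U Y1 V into a single plane, so the U
   * component lives in plane 0 with a poffset of 1 byte */
  static void
  print_poffsets (void)
  {
    const GstVideoFormatInfo *info =
        gst_video_format_get_info (GST_VIDEO_FORMAT_YUY2);
    guint c;

    for (c = 0; c < GST_VIDEO_FORMAT_INFO_N_COMPONENTS (info); c++)
      g_print ("component %u: plane %u, poffset %u\n", c,
          GST_VIDEO_FORMAT_INFO_PLANE (info, c),
          GST_VIDEO_FORMAT_INFO_POFFSET (info, c));
  }
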
Flesh out the video filter base class. Make it parse the input and output caps
and turn them into GstVideoInfo. Map buffers as video frames and pass them to
the transform functions.
This allows us to also implement the propose_allocation and decide_allocation
vmethods.
Implement the transform size method as well.
Update subclasses with the new improvements.
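With this in place a subclass only has to provide the per-frame transform; a
minimal (identity) transform_frame sketch, element boilerplate omitted and
names hypothetical:

  #include <gst/video/video.h>
  #include <gst/video/gstvideofilter.h>

  /* sketch: the base class already parsed the caps into GstVideoInfo and
   * mapped input and output as GstVideoFrames */
  static GstFlowReturn
  my_filter_transform_frame (GstVideoFilter * filter,
      GstVideoFrame * inframe, GstVideoFrame * outframe)
  {
    /* identity "transform" for illustration */
    if (!gst_video_frame_copy (outframe, inframe))
      return GST_FLOW_ERROR;
    return GST_FLOW_OK;
  }

  /* in class_init:
   *   GST_VIDEO_FILTER_CLASS (klass)->transform_frame = my_filter_transform_frame;
   */
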
Remove interlaced boolean from caps and replace with an interlace-mode enum.
Document this new property in the video caps document. With the enum we can
put fields into a separate video meta.
Add an enum for this interlace-mode to the VideoInfo.
Update the buffer flags.
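For example, caps that used to carry interlaced=true now spell out the mode
explicitly (sketch using the current raw video caps names):

  #include <gst/gst.h>

  /* sketch: caps carry an explicit interlace-mode string instead of the old
   * "interlaced" boolean */
  static GstCaps *
  make_interlaced_caps (void)
  {
    return gst_caps_new_simple ("video/x-raw",
        "format", G_TYPE_STRING, "I420",
        "width", G_TYPE_INT, 720, "height", G_TYPE_INT, 576,
        "interlace-mode", G_TYPE_STRING, "interleaved", NULL);
  }
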
Slight change in semantics for convenience. Shouldn't cause any
problems since this function is usually only used on pre-filtered
caps and not random caps, and it's hard to imagine a situation
where someone would want to rely on the previous behaviour.
Basic API to attach overlay rectangles to buffers,
or blend them directly onto raw video buffers.
To be used primarily for things like subtitles or
logo overlays, not meant to replace videomixer.
Allows us to associate subtitle overlays with
non-raw video surface buffers, so that subtitles
are not lost and can instead be rendered later
when those surfaces are displayed or converted,
whilst re-using all the existing overlay plugins
and not having to teach them about our special
video surfaces. Could also have been made part
of the surface buffer abstraction of course, but
a secondary goal was to consolidate the blending
code for raw video into libgstvideo, and this
kind of API allows us to do both in a way that's
minimally invasive to existing elements, and at
the same time is fairly intuitive.
More features and extensions like the ability to
pass the source data or text/markup directly will
be added later.
https://bugzilla.gnome.org/show_bug.cgi?id=665080
API: gst_video_buffer_get_overlay_composition()
API: gst_video_buffer_set_overlay_composition()
API: gst_video_overlay_composition_new()
API: gst_video_overlay_composition_add_rectangle()
API: gst_video_overlay_composition_n_rectangles()
API: gst_video_overlay_composition_get_rectangle()
API: gst_video_overlay_composition_make_writable()
API: gst_video_overlay_composition_copy()
API: gst_video_overlay_composition_ref()
API: gst_video_overlay_composition_unref()
API: gst_video_overlay_composition_blend()
API: gst_video_overlay_rectangle_new_argb()
API: gst_video_overlay_rectangle_get_pixels_argb()
API: gst_video_overlay_rectangle_get_pixels_unscaled_argb()
API: gst_video_overlay_rectangle_get_render_rectangle()
API: gst_video_overlay_rectangle_set_render_rectangle()
API: gst_video_overlay_rectangle_copy()
API: gst_video_overlay_rectangle_ref()
API: gst_video_overlay_rectangle_unref()
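A hedged usage sketch of the API listed above, attaching a single ARGB
rectangle to a video buffer; the argument lists are approximated for this era
of the library and the pixel buffer setup is elided:

  #include <gst/video/video-overlay-composition.h>

  /* sketch: 'pixels' must hold width x height ARGB pixels with the given
   * stride; the render position (10,10) is arbitrary */
  static void
  attach_overlay (GstBuffer * video_buf, GstBuffer * pixels,
      guint width, guint height, guint stride)
  {
    GstVideoOverlayRectangle *rect;
    GstVideoOverlayComposition *comp;

    rect = gst_video_overlay_rectangle_new_argb (pixels, width, height, stride,
        10, 10, width, height, GST_VIDEO_OVERLAY_FORMAT_FLAG_NONE);
    comp = gst_video_overlay_composition_new (rect);
    gst_video_overlay_rectangle_unref (rect);

    /* attach for later rendering; alternatively blend straight into raw
     * video with gst_video_overlay_composition_blend() */
    gst_video_buffer_set_overlay_composition (video_buf, comp);
    gst_video_overlay_composition_unref (comp);
  }
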
Add private replacements for deprecated functions such as
g_mutex_new(), g_mutex_free(), g_cond_new() etc., mostly
to avoid the deprecation warnings. We'll change these
over to the new API once we depend on glib >= 2.32.
Replace g_thread_create() with g_thread_try_new().
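The private replacements are thin wrappers of roughly this shape (a sketch,
not the exact compat code; with newer GLib a mutex is initialised in place
instead of being allocated):

  #include <glib.h>

  /* sketch of a private replacement for the deprecated g_mutex_new() */
  static GMutex *
  priv_gst_mutex_new (void)
  {
  #if GLIB_CHECK_VERSION (2, 32, 0)
    GMutex *mutex = g_slice_new (GMutex);
    g_mutex_init (mutex);
    return mutex;
  #else
    return g_mutex_new ();
  #endif
  }
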
Make appsink return a GstSample. Remove the pull_buffer_list method because it
is not very useful anymore.
Pass GstSample to the conversion function.
Update playbin2 and examples
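Pulling data from appsink then looks roughly like this (error handling
trimmed):

  #include <gst/gst.h>
  #include <gst/app/gstappsink.h>

  /* sketch: a GstSample bundles the buffer with its caps */
  static void
  pull_one_sample (GstAppSink * appsink)
  {
    GstSample *sample = gst_app_sink_pull_sample (appsink);

    if (sample != NULL) {
      GstBuffer *buffer = gst_sample_get_buffer (sample);
      GstCaps *caps = gst_sample_get_caps (sample);

      g_print ("buffer of %" G_GSIZE_FORMAT " bytes, caps %" GST_PTR_FORMAT "\n",
          gst_buffer_get_size (buffer), caps);
      gst_sample_unref (sample);
    }
  }
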
Make out args to gst_video_event_parse_{downstream|upstream}_force_key_unit
optional, update libgstvideo.def and fix docs a bit.
API: gst_video_event_new_upstream_force_key_unit
API: gst_video_event_new_downstream_force_key_unit
API: gst_video_event_is_force_key_unit
API: gst_video_event_parse_upstream_force_key_unit
API: gst_video_event_parse_downstream_force_key_unit
https://bugzilla.gnome.org/show_bug.cgi?id=607742
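With the out args optional, a handler that only cares about one field can
pass NULL for the rest; for example (assuming a downstream event):

  #include <gst/video/video.h>

  /* sketch: only extract all_headers from a downstream force-key-unit event */
  static void
  handle_force_key_unit (GstEvent * event)
  {
    gboolean all_headers = FALSE;

    if (!gst_video_event_is_force_key_unit (event))
      return;

    gst_video_event_parse_downstream_force_key_unit (event,
        NULL, NULL, NULL, &all_headers, NULL);
    g_print ("force key unit, all_headers=%d\n", all_headers);
  }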