Rename the offset field in GstVideoFormatInfo to poffset to avoid confusion with
the offset of the plane in the buffer. The poffset is the offset in the plane
where the first byte of the component data can be found.
Properly implement the COMP_OFFSET calculations.
Fix YV12 and YVU9: simply use the same offsets as the regular I420 and YUV9
variants; the plane info is already used to reorder the components.
Improve the unit test.
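As a rough illustration of how the two offsets combine (a sketch only; the
macro and field names are taken from the video-format.h API as it exists today
and may not match this exact snapshot):

    #include <gst/video/video.h>

    /* Sketch: resolve a pointer to the first byte of a component.
     * info->offset[plane] locates the plane in the mapped buffer data,
     * poffset locates the component inside that plane. */
    static guint8 *
    component_ptr (guint8 * data, const GstVideoInfo * info, gint comp)
    {
      const GstVideoFormatInfo *finfo = info->finfo;
      gint plane = GST_VIDEO_FORMAT_INFO_PLANE (finfo, comp);

      return data + info->offset[plane] +
          GST_VIDEO_FORMAT_INFO_POFFSET (finfo, comp);
    }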
Remove the interlaced boolean from caps and replace it with an interlace-mode
enum. Document this new property in the video caps document. With the enum we
can also express modes that put the fields into separate video metas.
Add an enum for this interlace-mode to GstVideoInfo.
Update the buffer flags.
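For illustration, caps carrying the new field might look like this (a sketch;
the exact set of mode strings such as "progressive" and "interleaved" is
assumed here):

    #include <gst/gst.h>

    /* The interlaced boolean is gone; the caps carry an
     * interlace-mode string instead. */
    GstCaps *caps = gst_caps_new_simple ("video/x-raw",
        "format", G_TYPE_STRING, "I420",
        "width", G_TYPE_INT, 1280,
        "height", G_TYPE_INT, 720,
        "interlace-mode", G_TYPE_STRING, "interleaved",
        NULL);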
Make appsink return a GstSample. Remove the pull_buffer_list method because it
is not very useful anymore.
Pass GstSample to the conversion function.
Update playbin2 and the examples.
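A minimal sketch of consuming data through the new appsink API (error handling
omitted):

    #include <gst/app/gstappsink.h>

    static void
    handle_sample (GstAppSink * appsink)
    {
      GstSample *sample = gst_app_sink_pull_sample (appsink);

      if (sample != NULL) {
        GstBuffer *buffer = gst_sample_get_buffer (sample);
        GstCaps *caps = gst_sample_get_caps (sample);
        gchar *s = gst_caps_to_string (caps);

        /* the sample carries the buffer together with the caps
         * that describe it */
        g_print ("buffer of %" G_GSIZE_FORMAT " bytes, caps %s\n",
            gst_buffer_get_size (buffer), s);

        g_free (s);
        gst_sample_unref (sample);
      }
    }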
Make the out args to gst_video_event_parse_{downstream|upstream}_force_key_unit
optional, update libgstvideo.def, and fix the docs a bit.
API: gst_video_event_new_upstream_force_key_unit
API: gst_video_event_new_downstream_force_key_unit
API: gst_video_event_is_force_key_unit
API: gst_video_event_parse_upstream_force_key_unit
API: gst_video_event_parse_downstream_force_key_unit
https://bugzilla.gnome.org/show_bug.cgi?id=607742
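A small sketch of the new event API, leaving the out args that are not needed
as NULL now that they are optional:

    #include <gst/video/video.h>

    /* Ask upstream for a new keyframe. */
    static void
    request_key_unit (GstPad * pad)
    {
      GstEvent *event;

      event = gst_video_event_new_upstream_force_key_unit (
          GST_CLOCK_TIME_NONE, TRUE, 1);
      gst_pad_push_event (pad, event);
    }

    /* Only the running-time is of interest here; the other out args
     * can simply be NULL. */
    static void
    inspect_event (GstEvent * event)
    {
      GstClockTime running_time;

      if (gst_video_event_is_force_key_unit (event) &&
          gst_video_event_parse_downstream_force_key_unit (event,
              NULL, NULL, &running_time, NULL, NULL))
        g_print ("force-key-unit at %" GST_TIME_FORMAT "\n",
            GST_TIME_ARGS (running_time));
    }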
Rename @view_id to @id.
Add an id to the video metadata. Add a method to get the metadata from a buffer
with the given id.
Make a method to map a frame with a certain id. This maps only the frame whose
video metadata carries the given id. The generic frame id can be used when a
buffer carries multiple video frames, such as in multiview mode, but maybe also
when dealing with interlaced video that stores the fields in separate buffers.
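A sketch of how a single view could be picked out of a multiview buffer by id
(function names as they ended up in the library are assumed):

    #include <gst/video/video.h>

    static void
    map_view (GstBuffer * buffer, GstVideoInfo * info, gint view_id)
    {
      GstVideoMeta *meta;
      GstVideoFrame frame;

      /* fetch the metadata that carries the requested id */
      meta = gst_buffer_get_video_meta_id (buffer, view_id);
      if (meta == NULL)
        return;

      /* map only the frame described by that metadata */
      if (gst_video_frame_map_id (&frame, info, buffer, view_id,
              GST_MAP_READ)) {
        /* frame.data[] and frame.info now refer to this view only */
        gst_video_frame_unmap (&frame);
      }
    }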
Make enums for the chroma siting for easier use in the videoinfo.
Make enums for the color range, color matrix, transfer function and the
color primaries. Add these values to the video info structure in a Colorimetry
structure. These values define the exact colors and are needed to perform
correct colorspace conversion. Use a couple of predefined colorimetry specs
because in practice only a few combinations are in use.
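As an illustration (a sketch assuming the enum and field names of the final
API), setting one of the predefined specs on a GstVideoInfo could look like:

    #include <gst/video/video.h>

    static void
    use_bt709 (GstVideoInfo * info)
    {
      /* parse one of the predefined colorimetry specs into the
       * range/matrix/transfer/primaries fields */
      gst_video_colorimetry_from_string (&info->colorimetry,
          GST_VIDEO_COLORIMETRY_BT709);

      /* or set the individual enum values directly */
      info->colorimetry.matrix = GST_VIDEO_COLOR_MATRIX_BT709;
      info->colorimetry.range = GST_VIDEO_COLOR_RANGE_16_235;
    }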
Add view_id to the video frames to identify the view this frame represents in
multiview video.
Remove old gst_video_parse_caps_framerate, use the videoinfo for this.
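The replacement is simply to parse the caps into a GstVideoInfo and read the
framerate from there, for example:

    #include <gst/video/video.h>

    static gboolean
    get_framerate (GstCaps * caps, gint * fps_n, gint * fps_d)
    {
      GstVideoInfo info;

      if (!gst_video_info_from_caps (&info, caps))
        return FALSE;

      *fps_n = GST_VIDEO_INFO_FPS_N (&info);
      *fps_d = GST_VIDEO_INFO_FPS_D (&info);
      return TRUE;
    }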
Port elements to new colorimetry info.
Remove deprecated colorspace property from videotestsrc.
Rework the audio caps similarly to the video caps. Remove the
width/depth/endianness/signed fields and replace them with a simple string
format and the media type audio/x-raw.
Create a GstAudioInfo and some helper methods to parse caps.
Remove duplicate code from the ringbuffer and replace with audio info.
Use AudioInfo in the base audio filter class.
Port elements to new API.
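A minimal sketch of parsing the new-style caps into a GstAudioInfo (macro names
assumed to match the final API):

    #include <gst/audio/audio.h>

    static gboolean
    print_audio_info (GstCaps * caps)
    {
      GstAudioInfo info;

      /* caps are now audio/x-raw with a format string such as S16LE */
      if (!gst_audio_info_from_caps (&info, caps))
        return FALSE;

      g_print ("format %s, rate %d, channels %d\n",
          gst_audio_format_to_string (GST_AUDIO_INFO_FORMAT (&info)),
          GST_AUDIO_INFO_RATE (&info),
          GST_AUDIO_INFO_CHANNELS (&info));
      return TRUE;
    }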
Make a new GstVideoFormatInfo structure that contains the specific information
related to a format, such as the number of planes, components, subsampling,
pixel stride, etc. The result is that we are now able to introduce the concept
of components again in the API.
Use tables to specify the formats and their properties.
Use macros to get information about the video format description.
Move code to set strides, offsets and size into one function.
Remove methods that are not handled with the structures.
Add methods to retrieve pointers and strides to the components in the video.
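For illustration, with the format description in place the components of a
mapped frame can be walked like this (a sketch using the accessor macro names
of the resulting API):

    #include <gst/video/video.h>

    static void
    dump_components (GstVideoFrame * frame)
    {
      guint i;

      for (i = 0; i < GST_VIDEO_FRAME_N_COMPONENTS (frame); i++) {
        g_print ("component %u: data %p, stride %d, %dx%d\n", i,
            (gpointer) GST_VIDEO_FRAME_COMP_DATA (frame, i),
            GST_VIDEO_FRAME_COMP_STRIDE (frame, i),
            GST_VIDEO_FRAME_COMP_WIDTH (frame, i),
            GST_VIDEO_FRAME_COMP_HEIGHT (frame, i));
      }
    }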
Remove the GstVideoPlane structure and move the fields directly into the
GstVideoInfo structure. This makes things a little easier to read and also makes
it more likely that we can pass the stride array to external libraries.
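A short sketch of the resulting layout handling: strides, offsets and size are
computed in one place, and the plain arrays in GstVideoInfo can be handed to
external code directly (function and field names as in the final API are
assumed).

    #include <gst/video/video.h>

    static void
    dump_layout (void)
    {
      GstVideoInfo info;
      guint i;

      gst_video_info_set_format (&info, GST_VIDEO_FORMAT_I420, 320, 240);

      /* info.offset[] and info.stride[] are plain C arrays now */
      for (i = 0; i < GST_VIDEO_INFO_N_PLANES (&info); i++)
        g_print ("plane %u: offset %" G_GSIZE_FORMAT ", stride %d\n",
            i, info.offset[i], info.stride[i]);
    }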