Rename @view_id to @id.
Add an id to the video metadata. Add a method to get the metadata from a buffer
with the given id.
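A rough usage sketch of the per-id metadata lookup; the helper name gst_buffer_get_video_meta_id and the GstVideoMeta fields are assumed from the GStreamer 1.x API and are not spelled out in this log:

    #include <gst/video/video.h>

    /* Sketch: look up the video metadata describing the frame with the
     * given id; NULL is returned when no metadata with that id is attached. */
    static void
    inspect_frame_meta (GstBuffer * buffer, gint id)
    {
      GstVideoMeta *meta = gst_buffer_get_video_meta_id (buffer, id);

      if (meta != NULL)
        g_print ("frame %d: %s %ux%u\n", meta->id,
            gst_video_format_to_string (meta->format),
            meta->width, meta->height);
    }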
Make a method to map a frame with a certain id; this maps only the frame whose
video metadata carries that id. The generic frame id can be used when a buffer
carries multiple video frames, such as in multiview mode, but possibly also
when dealing with interlaced video that stores the fields in separate buffers.
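A minimal sketch of mapping a single view out of such a buffer, assuming the gst_video_frame_map_id() entry point as it appears in GStreamer 1.x:

    #include <gst/video/video.h>

    /* Sketch: map only the frame with the given id (e.g. one view of a
     * multiview buffer) and access its first plane. */
    static gboolean
    map_view (GstBuffer * buffer, GstVideoInfo * info, gint id)
    {
      GstVideoFrame frame;

      if (!gst_video_frame_map_id (&frame, info, buffer, id, GST_MAP_READ))
        return FALSE;

      g_print ("view %d: plane 0 stride %d\n", frame.id,
          GST_VIDEO_FRAME_PLANE_STRIDE (&frame, 0));

      gst_video_frame_unmap (&frame);
      return TRUE;
    }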
Add enums for the chroma siting for easier use in the video info.
Add enums for the color range, color matrix, transfer function and the
color primaries. Add these values to the video info structure in a Colorimetry
structure. These values define the exact colors and are needed to perform
correct colorspace conversion. Provide a few predefined colorimetry specs
because in practice only a few combinations are in use.
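A hedged illustration of filling in the new colorimetry fields; the GstVideoColorimetry structure and the enum names are assumed to be the ones that ended up in GStreamer 1.x:

    #include <gst/video/video.h>

    /* Sketch: fill in BT.709 colorimetry for an HD video info; the
     * predefined "bt709" spec can also be parsed from its string form. */
    static void
    setup_colorimetry (GstVideoInfo * info)
    {
      info->colorimetry.range = GST_VIDEO_COLOR_RANGE_16_235;
      info->colorimetry.matrix = GST_VIDEO_COLOR_MATRIX_BT709;
      info->colorimetry.transfer = GST_VIDEO_TRANSFER_BT709;
      info->colorimetry.primaries = GST_VIDEO_COLOR_PRIMARIES_BT709;

      /* equivalently, use one of the predefined specs by name */
      gst_video_colorimetry_from_string (&info->colorimetry, "bt709");

      info->chroma_site = GST_VIDEO_CHROMA_SITE_MPEG2;
    }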
Add view_id to the video frames to identify the view this frame represents in
multiview video.
Remove the old gst_video_parse_caps_framerate; use the video info for this.
Port elements to new colorimetry info.
Remove deprecated colorspace property from videotestsrc.
Rework the audio caps along the same lines as the video caps. Remove the
width/depth/endianness/signed fields and replace them with a simple format
string and the media type audio/x-raw.
Create a GstAudioInfo structure and some helper methods to parse caps (a usage
sketch follows below).
Remove duplicate code from the ringbuffer and replace with audio info.
Use AudioInfo in the base audio filter class.
Port elements to new API.
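A sketch of the audio side under the same caveat, assuming the GstAudioInfo helpers as found in GStreamer 1.x:

    #include <gst/audio/audio.h>

    /* Sketch: parse audio/x-raw caps into a GstAudioInfo instead of reading
     * the old width/depth/endianness/signed fields by hand. */
    static gboolean
    parse_audio_caps (const GstCaps * caps)
    {
      GstAudioInfo info;

      if (!gst_audio_info_from_caps (&info, caps))
        return FALSE;

      g_print ("format %s, %d Hz, %d channels, %d bytes per frame\n",
          gst_audio_format_to_string (GST_AUDIO_INFO_FORMAT (&info)),
          GST_AUDIO_INFO_RATE (&info), GST_AUDIO_INFO_CHANNELS (&info),
          GST_AUDIO_INFO_BPF (&info));

      return TRUE;
    }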
Make a new GstVideoFormatInfo structure that contains the information specific
to a format, such as the number of planes, components, subsampling, pixel
stride, etc. The result is that we are now able to introduce the concept of
components again in the API.
Use tables to specify the formats and their properties.
Use macros to get information about the video format description.
Move code to set strides, offsets and size into one function.
Remove methods that are not handled with the structures.
Add methods to retrieve pointers and strides to the components in the video.
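A sketch of how the format description tables and the component accessors could be used; the gst_video_format_get_info() function and the GST_VIDEO_FORMAT_INFO_*/GST_VIDEO_FRAME_COMP_* macros are assumed from GStreamer 1.x:

    #include <gst/video/video.h>

    /* Sketch: query the per-format tables and walk the components of a
     * mapped frame. */
    static void
    dump_components (GstVideoFrame * frame)
    {
      const GstVideoFormatInfo *finfo =
          gst_video_format_get_info (GST_VIDEO_FRAME_FORMAT (frame));
      guint i;

      g_print ("%s: %u planes, %u components\n",
          GST_VIDEO_FORMAT_INFO_NAME (finfo),
          GST_VIDEO_FORMAT_INFO_N_PLANES (finfo),
          GST_VIDEO_FORMAT_INFO_N_COMPONENTS (finfo));

      for (i = 0; i < GST_VIDEO_FORMAT_INFO_N_COMPONENTS (finfo); i++) {
        guint8 *data = GST_VIDEO_FRAME_COMP_DATA (frame, i);
        gint stride = GST_VIDEO_FRAME_COMP_STRIDE (frame, i);

        g_print ("  component %u: data %p, stride %d\n", i,
            (gpointer) data, stride);
      }
    }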
Remove the GstVideoPlane structure and move the fields directly into the
GstVideoInfo structure. This makes things a little easier to read and also makes
it more likely that we can pass the stride array to external libraries.
Update docs.
Add method to get number of components.
Implement method to calculate defaults from format and dimensions.
Improve caps parsing.
Implement GstVideoInfo to caps conversion.
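A minimal sketch of computing the defaults and converting back to caps, assuming gst_video_info_set_format() and gst_video_info_to_caps() as they exist in GStreamer 1.x:

    #include <gst/video/video.h>

    /* Sketch: fill a GstVideoInfo with the default strides, offsets and size
     * for I420 at 640x480, then turn it back into caps. */
    static GstCaps *
    make_i420_caps (void)
    {
      GstVideoInfo info;

      gst_video_info_init (&info);
      gst_video_info_set_format (&info, GST_VIDEO_FORMAT_I420, 640, 480);

      g_print ("size %" G_GSIZE_FORMAT ", stride[0] %d\n",
          GST_VIDEO_INFO_SIZE (&info), GST_VIDEO_INFO_PLANE_STRIDE (&info, 0));

      return gst_video_info_to_caps (&info);
    }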
Add GstVideoFlags similar to the flags on the metadata. The idea is to replace
the metadata flags with the GstVideoFlags.
Move VideoPlane to video.h; it contains the information for a plane.
Add GstVideoInfo structure that holds the current configuration of a video
format.
Add methods to parse caps into GstVideoInfo.
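And the caps parsing direction, again assuming the GStreamer 1.x gst_video_info_from_caps():

    #include <gst/video/video.h>

    /* Sketch: parse negotiated caps into a GstVideoInfo and read the
     * parsed fields. */
    static gboolean
    parse_video_caps (const GstCaps * caps)
    {
      GstVideoInfo info;

      if (!gst_video_info_from_caps (&info, caps))
        return FALSE;

      g_print ("%s %dx%d @ %d/%d fps\n",
          gst_video_format_to_string (GST_VIDEO_INFO_FORMAT (&info)),
          GST_VIDEO_INFO_WIDTH (&info), GST_VIDEO_INFO_HEIGHT (&info),
          GST_VIDEO_INFO_FPS_N (&info), GST_VIDEO_INFO_FPS_D (&info));

      return TRUE;
    }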