Make enums for the chroma siting so it is easier to use in the video info.
Make enums for the color range, color matrix, transfer function and the
color primaries. Add these values to the video info structure in a Colorimetry
structure. These values define the exact colors and are needed to perform
correct colorspace conversion. Use a couple of predefined colorimetry specs
because in practice only a few combinations are in use.
Add view_id to the video frames to identify the view this frame represents in
multiview video.
Remove the old gst_video_parse_caps_framerate, use the video info for this.
Port elements to new colorimetry info.
Remove deprecated colorspace property from videotestsrc.
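As a rough illustration of how an element can read the new colorimetry info,
here is a minimal sketch assuming the API names as they ended up in the 1.0
video library (gst_video_info_from_caps, gst_video_colorimetry_matches,
GST_VIDEO_COLORIMETRY_BT709); the exact names in this series may differ:

  #include <gst/video/video.h>

  /* Sketch: inspect the colorimetry and chroma siting carried in a
   * GstVideoInfo; names follow the 1.0 video library. */
  static void
  dump_colorimetry (const GstCaps * caps)
  {
    GstVideoInfo info;

    if (!gst_video_info_from_caps (&info, caps))
      return;

    g_print ("range %d, matrix %d, transfer %d, primaries %d, chroma-site %d\n",
        info.colorimetry.range, info.colorimetry.matrix,
        info.colorimetry.transfer, info.colorimetry.primaries,
        info.chroma_site);

    /* the predefined specs can be matched by name, e.g. the HD "bt709" one */
    if (gst_video_colorimetry_matches (&info.colorimetry,
            GST_VIDEO_COLORIMETRY_BT709))
      g_print ("stream uses BT.709 colorimetry\n");
  }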
If ints are 64 bits, 32-bit values get promoted in varargs anyway,
and we don't care about 16-bit ints.
This makes the code a lot more readable, and still gets us nice
hexadecimal 32-bit serialnos.
https://bugzilla.gnome.org/show_bug.cgi?id=656775
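To spell out the promotion argument: C's default argument promotions widen
values narrower than int when they pass through varargs, so a 32-bit value
arrives intact even where int is 64 bits. A made-up illustration:

  #include <glib.h>

  /* A 32-bit serialno handed straight to a varargs formatter; default
   * argument promotion covers platforms with wider ints, and %08x gives
   * the fixed-width hexadecimal form. The function name is made up. */
  static gchar *
  format_serialno (guint32 serialno)
  {
    return g_strdup_printf ("%08x", serialno);
  }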
Rework the audio caps similarly to the video caps. Remove the
width/depth/endianness/signed fields and replace them with a simple format
string and the audio/x-raw media type.
Create a GstAudioInfo and some helper methods to parse caps.
Remove duplicate code from the ringbuffer and replace it with the audio info.
Use GstAudioInfo in the base audio filter class.
Port elements to new API.
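A minimal sketch of parsing the new-style caps with the helper, assuming the
GstAudioInfo API as it ended up in the 1.0 audio library:

  #include <gst/audio/audio.h>

  /* Sketch: parse audio/x-raw caps into a GstAudioInfo instead of reading
   * width/depth/endianness/signed fields by hand. */
  static gboolean
  setup_from_caps (const GstCaps * caps)
  {
    GstAudioInfo info;

    gst_audio_info_init (&info);
    if (!gst_audio_info_from_caps (&info, caps))
      return FALSE;

    g_print ("format %s, rate %d, channels %d, bytes per frame %d\n",
        info.finfo->name, GST_AUDIO_INFO_RATE (&info),
        GST_AUDIO_INFO_CHANNELS (&info), GST_AUDIO_INFO_BPF (&info));

    return TRUE;
  }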
Make a new GstVideoFormatInfo structure that contains the specific information
related to a format such as the number of planes, components, subsampling,
pixel stride etc. The result is that we are now able to introduce the concept of
components again in the API.
Use tables to specify the formats and their properties.
Use macros to get information about the video format description.
Move code to set strides, offsets and size into one function.
Remove methods that are now handled with the structures.
Add methods to retrieve the pointers and strides of the components in the video.
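A sketch of what the format description tables expose, assuming the macro
names from the 1.0 video library (GST_VIDEO_FORMAT_INFO_N_PLANES and friends):

  #include <gst/video/video.h>

  /* Sketch: query the per-format tables through the description macros. */
  static void
  describe_format (GstVideoFormat format)
  {
    const GstVideoFormatInfo *finfo = gst_video_format_get_info (format);
    guint i;

    g_print ("%s: %u planes, %u components\n",
        GST_VIDEO_FORMAT_INFO_NAME (finfo),
        GST_VIDEO_FORMAT_INFO_N_PLANES (finfo),
        GST_VIDEO_FORMAT_INFO_N_COMPONENTS (finfo));

    for (i = 0; i < GST_VIDEO_FORMAT_INFO_N_COMPONENTS (finfo); i++)
      g_print ("  component %u: subsampling %dx%d, pixel stride %d\n", i,
          1 << GST_VIDEO_FORMAT_INFO_W_SUB (finfo, i),
          1 << GST_VIDEO_FORMAT_INFO_H_SUB (finfo, i),
          GST_VIDEO_FORMAT_INFO_PSTRIDE (finfo, i));
  }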
Remove the GstVideoPlane structure and move the fields directly into the
GstVideoInfo structure. This makes things a little easier to read and also
makes it easier to pass the stride array directly to external libraries.
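A sketch of the flattened layout in use; external_lib_process is a
hypothetical consumer of a plane/stride array, the point being only that
GstVideoInfo now carries strides and offsets as plain arrays:

  #include <gst/video/video.h>

  /* Hypothetical external consumer of planes plus a stride array. */
  extern void external_lib_process (guint8 * planes[], const gint * strides,
      gint width, gint height);

  static void
  process_raw_frame (const GstVideoInfo * info, guint8 * data)
  {
    guint8 *planes[GST_VIDEO_MAX_PLANES];
    guint i;

    /* strides and offsets now live directly in GstVideoInfo */
    for (i = 0; i < GST_VIDEO_INFO_N_PLANES (info); i++)
      planes[i] = data + GST_VIDEO_INFO_PLANE_OFFSET (info, i);

    external_lib_process (planes, info->stride,
        GST_VIDEO_INFO_WIDTH (info), GST_VIDEO_INFO_HEIGHT (info));
  }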
This decreases the number of buffers held on each pad by one, eliminating
next_buffer. It simplifies the logic by relying solely on CollectPads to
let us know when a pad is in EOS. As a side benefit, the CollectPads-related
code is now structured more like that of other CollectPads users.
The previous code would occasionally mark the wrong pad as EOS, causing the
element to get into a state where all the streams were finished but EOS had
not been pushed on the source pad.
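The pattern relied on here, roughly: instead of keeping a next_buffer per
pad, the collected callback asks CollectPads for each pad's queued buffer,
and since that callback only runs when every pad either has data or is in
EOS, a NULL peek means the pad is finished. A sketch using the public
CollectPads API:

  #include <gst/base/gstcollectpads.h>

  /* Sketch: decide whether every sink pad has drained, letting CollectPads
   * itself track per-pad EOS. Meant to run from the collected callback. */
  static gboolean
  all_pads_eos (GstCollectPads * pads)
  {
    GSList *walk;

    for (walk = pads->data; walk != NULL; walk = walk->next) {
      GstCollectData *cdata = walk->data;
      GstBuffer *buf = gst_collect_pads_peek (pads, cdata);

      if (buf != NULL) {
        /* this pad still has a buffer queued */
        gst_buffer_unref (buf);
        return FALSE;
      }
      /* NULL inside the collected callback means this pad is in EOS */
    }
    /* all pads drained: the muxer can push EOS on its source pad */
    return TRUE;
  }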