If both quality and bitrate are set, libtheora will try to meet
both constraints: it prefers emitting a smaller number of frames
at the requested quality over emitting the full number of frames
that would fall short of it. This causes a slideshow effect when
the bitrate is low and the quality is high, and the default
quality in theoraenc is high (48/63).
So only set quality when it is requested, and leave it unset
otherwise.
https://bugzilla.gnome.org/show_bug.cgi?id=658443
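A minimal sketch of that idea, with a hypothetical settings struct rather
than the real theoraenc element code: remember whether the user ever
touched the quality property and only forward a value to libtheora in
that case, so a bitrate-only configuration is not also held to a quality
floor.

    #include <glib.h>
    #include <theora/theoraenc.h>

    typedef struct {
      int quality;            /* 0..63, only meaningful when quality_set */
      gboolean quality_set;   /* did the user explicitly request a quality? */
      int bitrate;            /* bits per second, 0 means "not set" */
    } EncSettings;            /* hypothetical, for illustration only */

    static void
    set_quality_property (EncSettings *s, int value)
    {
      s->quality = value;
      s->quality_set = TRUE;  /* remember the explicit request */
    }

    static void
    apply_settings (const EncSettings *s, th_info *info)
    {
      info->target_bitrate = s->bitrate;
      if (s->quality_set)
        info->quality = s->quality;
      /* otherwise leave info->quality alone so only the bitrate
       * constrains the encoder */
    }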
Make enums for the chroma siting for easier use in the videoinfo.
Make enums for the color range, color matrix, transfer function and the
color primaries. Add these values to the video info structure in a Colorimetry
structure. These values define the exact colors and are needed to perform
correct colorspace conversion. Use a couple of predefined colorimetry specs
because in practice only a few combinations are in use.
Add view_id to the video frames to identify the view this frame represents in
multiview video.
Remove old gst_video_parse_caps_framerate, use the videoinfo for this.
Port elements to new colorimetry info.
Remove deprecated colorspace property from videotestsrc.
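A rough sketch of what such a colorimetry description amounts to; the
type names, enum values and preset combinations below are illustrative,
not the exact API that was added.

    typedef enum {
      COLOR_RANGE_UNKNOWN,
      COLOR_RANGE_0_255,      /* full range */
      COLOR_RANGE_16_235      /* broadcast/limited range */
    } ColorRange;

    typedef enum { COLOR_MATRIX_UNKNOWN, COLOR_MATRIX_BT601,
                   COLOR_MATRIX_BT709 } ColorMatrix;
    typedef enum { TRANSFER_UNKNOWN, TRANSFER_BT709,
                   TRANSFER_SRGB } TransferFunction;
    typedef enum { PRIMARIES_UNKNOWN, PRIMARIES_BT470BG,
                   PRIMARIES_BT709 } ColorPrimaries;

    /* together these four values pin down the exact colors of a frame
     * and are what a colorspace converter needs to be correct */
    typedef struct {
      ColorRange range;
      ColorMatrix matrix;
      TransferFunction transfer;
      ColorPrimaries primaries;
    } Colorimetry;

    /* a few predefined combinations cover most material in practice,
     * e.g. one spec for SD and one for HD video */
    static const Colorimetry COLORIMETRY_SD = {
      COLOR_RANGE_16_235, COLOR_MATRIX_BT601, TRANSFER_BT709, PRIMARIES_BT470BG
    };
    static const Colorimetry COLORIMETRY_HD = {
      COLOR_RANGE_16_235, COLOR_MATRIX_BT709, TRANSFER_BT709, PRIMARIES_BT709
    };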
Make a new GstVideoFormatInfo structure that contains the specific information
related to a format such as the number of planes, components, subsampling,
pixel stride etc. The result is that we are now able to introduce the concept of
components again in the API.
Use tables to specify the formats and their properties.
Use macros to get information about the video format description.
Move code to set strides, offsets and size into one function.
Remove methods that are now handled with the structures.
Add methods to retrieve pointers and strides to the components in the video.
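An illustrative sketch of such a table and the accessor macros; the
field names and the I420/AYUV entries are examples, not the actual
GstVideoFormatInfo layout.

    typedef struct {
      const char *name;
      int n_planes;
      int n_components;
      int w_sub[4];           /* per-component horizontal subsampling shift */
      int h_sub[4];           /* per-component vertical subsampling shift */
      int pixel_stride[4];    /* bytes between two pixels on the same line */
    } FormatInfo;

    static const FormatInfo format_table[] = {
      /* I420: 3 planes, full-size luma, 2x2 subsampled chroma, 1 byte/pixel */
      { "I420", 3, 3, { 0, 1, 1 }, { 0, 1, 1 }, { 1, 1, 1 } },
      /* AYUV: packed single plane, 4 components, 4 bytes per pixel */
      { "AYUV", 1, 4, { 0, 0, 0, 0 }, { 0, 0, 0, 0 }, { 4, 4, 4, 4 } },
    };

    /* accessor macros in the spirit of the ones described above */
    #define FORMAT_INFO_N_PLANES(fi)        ((fi)->n_planes)
    #define FORMAT_INFO_W_SUB(fi, c)        ((fi)->w_sub[(c)])
    #define FORMAT_INFO_PIXEL_STRIDE(fi, c) ((fi)->pixel_stride[(c)])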
Remove the GstVideoPlane structure and move the fields directly into the
GstVideoInfo structure. This makes things a little easier to read and also makes
it more likely that we can pass the stride array to external libraries.
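A small sketch of the benefit, with a hypothetical info layout and
external function: because offsets and strides are plain arrays, the
stride array can be handed to an external API without repacking
per-plane structures.

    #include <stdint.h>

    typedef struct {
      int n_planes;
      int offset[4];          /* byte offset of each plane in the frame data */
      int stride[4];          /* bytes per line for each plane */
    } VideoInfo;              /* illustrative, not the real GstVideoInfo */

    /* hypothetical external API taking per-plane pointers and strides */
    void external_lib_process (uint8_t *planes[4], const int strides[4]);

    static void
    hand_frame_to_library (const VideoInfo *info, uint8_t *data)
    {
      uint8_t *planes[4] = { 0 };
      int i;

      for (i = 0; i < info->n_planes; i++)
        planes[i] = data + info->offset[i];

      /* the stride array is passed through as-is */
      external_lib_process (planes, info->stride);
    }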
The speed-level property, which allows callers to trade off encoding
quality for speed in the libtheora API, has version-dependent
maximum and default values. Instead of hardcoding the acceptable
range for the theoraenc element's presentation of this setting,
we query the library directly at class initialization time and
set the maximum and default values from that. If the query fails,
we fall back to the previous default setting.
To keep the values reported by gst-inspect (which I'm told uses
the spec values from the class) in sync with those available on an
instantiated element, we remove the setting of enc->speed_level
from the initializer and instead add G_PARAM_CONSTRUCT to
the property spec flags, asking g_object to set this property
when theoraenc objects are constructed.
NB in theory the maximum speed-level could depend on the actual
video caps. If later versions of libtheoraenc do this, a second
call will need to be made from theora_enc_reset to update the
property, since this function is mostly useful for realtime
adjustment of performance while the pipeline is running.
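A sketch of the class-initialization query, assuming libtheora 1.x; the
dummy encoder parameters, the fallback numbers and the choice to default
to the maximum are illustrative.

    #include <theora/theoraenc.h>

    /* previous hard-coded values, used as fallback (numbers for illustration) */
    #define FALLBACK_MAX_SPEED_LEVEL 3
    #define FALLBACK_DEF_SPEED_LEVEL 1

    static void
    query_speed_level_bounds (int *max_speed_level, int *default_speed_level)
    {
      th_info info;
      th_enc_ctx *ctx;
      int max = 0;

      /* allocate a throwaway encoder with dummy but valid parameters;
       * frame dimensions must be multiples of 16 */
      th_info_init (&info);
      info.frame_width = 16;
      info.frame_height = 16;
      info.pic_width = 16;
      info.pic_height = 16;
      info.fps_numerator = 25;
      info.fps_denominator = 1;

      ctx = th_encode_alloc (&info);

      if (ctx != NULL &&
          th_encode_ctl (ctx, TH_ENCCTL_GET_SPLEVEL_MAX,
              &max, sizeof (max)) == 0) {
        *max_speed_level = max;
        *default_speed_level = max;
      } else {
        /* query failed: keep the previous hard-coded values */
        *max_speed_level = FALLBACK_MAX_SPEED_LEVEL;
        *default_speed_level = FALLBACK_DEF_SPEED_LEVEL;
      }

      if (ctx != NULL)
        th_encode_free (ctx);
      th_info_clear (&info);
    }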
libtheora has two encoding modes: CBR, where it tries to hit a target
bitrate, and VBR, where it tries to achieve a target quality.
Internally, if the target bitrate is set to anything other than 0, the
encoding mode is CBR.
This means that the GStreamer element can leave the video_quality
setting alone as long as the user is tweaking the bitrate. This has the
nice side-effect that if the user explicitly sets the bitrate to 0
(which is actually the default), the quality value doesn't get reset,
so one doesn't end up encoding VBR at quality-level 0.
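A sketch of the mode selection as seen from the libtheora side; the
dimensions and frame rate below are placeholders.

    #include <theora/theoraenc.h>

    th_enc_ctx *
    make_encoder (int bitrate_bps, int quality_0_63)
    {
      th_info info;
      th_enc_ctx *ctx;

      th_info_init (&info);
      info.frame_width = 320;    /* must be a multiple of 16 */
      info.frame_height = 240;
      info.pic_width = 320;
      info.pic_height = 240;
      info.fps_numerator = 25;
      info.fps_denominator = 1;

      info.target_bitrate = bitrate_bps;  /* anything other than 0 selects CBR */
      info.quality = quality_0_63;        /* the VBR target when bitrate is 0 */

      ctx = th_encode_alloc (&info);
      th_info_clear (&info);
      return ctx;
    }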
Remove "This property requires libtheora version >= 1.1" qualifiers
from property descriptions. They aren't needed any longer now that
we require libtheora >= 1.1.
Since this is just a debugging feature and libtheora will usually not be
compiled with that option enabled, we should maybe just hide these
properties (they won't work anyway) and avoid confusing warnings.
Also rename the properties to make them less cryptic.
https://bugzilla.gnome.org/show_bug.cgi?id=628488
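One possible way to act on the suggestion above: probe a freshly
allocated encoder and only expose the properties when the debug option
is actually available. This is only a sketch and assumes the telemetry
controls report TH_EIMPL when libtheora was built without them.

    #include <theora/theoraenc.h>

    /* returns non-zero if the telemetry/debug controls work in this
     * libtheora build; ctx is an already-allocated encoder */
    static int
    telemetry_available (th_enc_ctx *ctx)
    {
      int val = 0;

      /* TH_EIMPL means libtheora was built without the debug option */
      return th_encode_ctl (ctx, TH_ENCCTL_SET_TELEMETRY_MBMODE,
          &val, sizeof (val)) != TH_EIMPL;
    }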
Do some additional checks on the granulepos timestamp before using it for
calculating the duration, because oggdemux generates wrong granulepos
values now. Fixes seeking somewhat again.
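A sketch of the kind of sanity check meant here; the previous-frame
bookkeeping and the decision to simply skip the duration calculation
are illustrative.

    #include <glib.h>
    #include <theora/theoradec.h>

    /* only trust the granulepos for deriving a duration when it is
     * present and the frame index it maps to is not going backwards */
    static gboolean
    granulepos_usable (th_dec_ctx *dec, ogg_int64_t granulepos,
        ogg_int64_t prev_frame)
    {
      ogg_int64_t frame;

      if (granulepos < 0)       /* no granulepos on this packet */
        return FALSE;

      frame = th_granule_frame (dec, granulepos);
      if (frame < 0 || frame < prev_frame)
        return FALSE;           /* bogus or non-monotonic value */

      return TRUE;
    }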
Previously, the code always rounded to even sizes. Now it only ensures
that pic_x and pic_y are multiples of 2 if the output format requires
it.
Also includes fixes to take pic_x/pic_y into account properly when copying
the buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=594729
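A sketch of the rounding rule, with the format check reduced to a
hypothetical boolean: only formats with 2x2 subsampled chroma (such as
I420) need even pic_x/pic_y, everything else keeps the exact offsets
from the stream.

    #include <glib.h>

    static void
    clamp_picture_offsets (int *pic_x, int *pic_y, gboolean chroma_is_subsampled)
    {
      if (chroma_is_subsampled) {
        /* round down to a multiple of 2 so the chroma planes stay aligned */
        *pic_x &= ~1;
        *pic_y &= ~1;
      }
      /* otherwise leave pic_x/pic_y untouched */
    }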