Use YUV instead of RGB textures, then convert using the new Apple-specific
shader in GstGLColorConvert. Also use GLMemory directly instead of the GL
upload meta, avoiding the extra texture copy we had before.
Set reorder_queue_frame_delay from the DPB size (in frames). Still not
optimal, as the DPB size is larger than the maximum B-frame forward-prediction
length, but I don't know how to compute the latter without parsing every
group of pictures.
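For reference, a minimal sketch of where the DPB size in frames comes from,
per H.264 spec A.3.1, with MaxDpbMbs looked up from the level (table A-1);
the function name and GLib types are illustrative, not the plugin's actual
code:

    #include <glib.h>

    /* H.264 A.3.1: MaxDpbFrames =
     *   Min (MaxDpbMbs / (PicWidthInMbs * FrameHeightInMbs), 16) */
    static guint
    h264_max_dpb_frames (guint max_dpb_mbs, guint pic_width_in_mbs,
        guint frame_height_in_mbs)
    {
      return MIN (max_dpb_mbs / (pic_width_in_mbs * frame_height_in_mbs), 16);
    }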
The decoder outputs frames in DTS order, even with the
kVTDecodeFrame_EnableTemporalProcessing flag. We keep an internal queue of
the decoded frames and push them in PTS order.
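A minimal sketch of such a reorder queue, assuming GLib's GQueue; the helper
names and the shape of the state are illustrative rather than the plugin's
actual code:

    #include <gst/gst.h>

    /* Frames arrive in DTS order; insert them PTS-sorted and only release
     * the head once the queue is deeper than the reorder delay, so no
     * earlier-PTS frame can still show up. */
    static gint
    sort_by_pts (gconstpointer a, gconstpointer b, gpointer user_data)
    {
      GstClockTime pts_a = GST_BUFFER_PTS ((GstBuffer *) a);
      GstClockTime pts_b = GST_BUFFER_PTS ((GstBuffer *) b);

      if (pts_a == pts_b)
        return 0;
      return (pts_a < pts_b) ? -1 : 1;
    }

    static GstFlowReturn
    push_in_pts_order (GstPad * srcpad, GQueue * queue, GstBuffer * decoded,
        guint reorder_delay)
    {
      GstFlowReturn ret = GST_FLOW_OK;

      g_queue_insert_sorted (queue, decoded, sort_by_pts, NULL);

      while (ret == GST_FLOW_OK && g_queue_get_length (queue) > reorder_delay)
        ret = gst_pad_push (srcpad, g_queue_pop_head (queue));

      return ret;
    }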
FigVideoFormatDescriptionCreateWithSampleDescriptionExtensionAtom
is an undocumented private function whose signature might change, as it
already has in the past. Replace it with CMVideoFormatDescriptionCreate and
the (also undocumented) Extensions dictionary.
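For illustration, roughly how the replacement looks; the
"SampleDescriptionExtensionAtoms" key reflects my reading of the undocumented
extensions dictionary, and the helper name and error handling are assumptions:

    #include <CoreMedia/CoreMedia.h>

    /* Build an H.264 format description from codec_data (the avcC atom),
     * passing it through the (undocumented) extensions dictionary. */
    static CMVideoFormatDescriptionRef
    create_h264_format_description (const uint8_t * codec_data, size_t size,
        int32_t width, int32_t height)
    {
      CMVideoFormatDescriptionRef fmt_desc = NULL;
      CFDataRef avcc;
      CFMutableDictionaryRef atoms, extensions;
      OSStatus status;

      avcc = CFDataCreate (NULL, codec_data, size);

      /* {"SampleDescriptionExtensionAtoms": {"avcC": <codec_data>}} */
      atoms = CFDictionaryCreateMutable (NULL, 1,
          &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
      CFDictionarySetValue (atoms, CFSTR ("avcC"), avcc);

      extensions = CFDictionaryCreateMutable (NULL, 1,
          &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
      CFDictionarySetValue (extensions,
          CFSTR ("SampleDescriptionExtensionAtoms"), atoms);

      status = CMVideoFormatDescriptionCreate (NULL, kCMVideoCodecType_H264,
          width, height, extensions, &fmt_desc);

      CFRelease (avcc);
      CFRelease (atoms);
      CFRelease (extensions);

      return (status == noErr) ? fmt_desc : NULL;
    }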
Public frameworks don't need the API to be loaded dynamically; we use the
framework directly instead. The exception is VideoToolbox, which went public
in the 10.8 SDK but is still private in older versions of the SDK and on iOS.
This allows building the plugin against SDKs where it's not a public
framework.
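A minimal sketch of the dynamic path, assuming dlopen/dlsym against the
framework binary; the exact framework paths and fallback order are
assumptions:

    #include <dlfcn.h>

    /* Resolve a VideoToolbox symbol at runtime so the plugin can be built
     * against SDKs where the framework is private. */
    static void *
    vt_get_symbol (const char * name)
    {
      static void *handle = NULL;

      if (handle == NULL) {
        handle = dlopen ("/System/Library/Frameworks/"
            "VideoToolbox.framework/VideoToolbox", RTLD_LAZY);
        if (handle == NULL)
          handle = dlopen ("/System/Library/PrivateFrameworks/"
              "VideoToolbox.framework/VideoToolbox", RTLD_LAZY);
      }

      return (handle != NULL) ? dlsym (handle, name) : NULL;
    }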
These callbacks may fire from any thread, hence we should only enqueue
buffers and let the streaming thread take care of the rest once the blocking
encode or decode operation has finished.
The codec that called us might be holding locks on shared resources, so we
must never push downstream from within its buffer callback.
Note that a GstBufferList is not used here because we need to preserve the
buffer metadata held by our GstBuffer subclasses.
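A sketch of the pattern, assuming the public VTDecompressionOutputCallback
signature; the element type, queue field and wrapping helper are illustrative
(gst_buffer_new_wrapped_cv_image is hypothetical):

    #include <gst/gst.h>
    #include <VideoToolbox/VideoToolbox.h>

    typedef struct {
      GstElement element;
      GstPad *srcpad;
      GQueue queue;     /* decoded buffers, guarded by the object lock */
    } GstVTDec;

    /* hypothetical helper wrapping a CVImageBuffer into a GstBuffer */
    GstBuffer *gst_buffer_new_wrapped_cv_image (CVImageBufferRef img,
        CMTime pts);

    static void
    gst_vtdec_output_cb (void *decoder_refcon, void *frame_refcon,
        OSStatus status, VTDecodeInfoFlags info, CVImageBufferRef image,
        CMTime pts, CMTime duration)
    {
      GstVTDec *self = decoder_refcon;

      if (status != noErr || image == NULL)
        return;

      /* Enqueue only: the codec may hold its own locks while invoking us,
       * so pushing downstream here could deadlock. */
      GST_OBJECT_LOCK (self);
      g_queue_push_tail (&self->queue,
          gst_buffer_new_wrapped_cv_image (image, pts));
      GST_OBJECT_UNLOCK (self);
    }

    /* Streaming thread, after the blocking decode call has returned: */
    static GstFlowReturn
    gst_vtdec_drain (GstVTDec * self)
    {
      GstFlowReturn ret = GST_FLOW_OK;
      GstBuffer *buf;

      while (ret == GST_FLOW_OK) {
        GST_OBJECT_LOCK (self);
        buf = g_queue_pop_head (&self->queue);
        GST_OBJECT_UNLOCK (self);

        if (buf == NULL)
          break;
        ret = gst_pad_push (self->srcpad, buf);
      }
      return ret;
    }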
Profiling H.264 encode and decode revealed that conversions between packed
and planar formats were happening behind the scenes. Hence we now choose
I420 instead of YUY2.
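For illustration, how the planar format can be requested via a decompression
session's destination image buffer attributes; the helper name is an
assumption, while the CoreVideo constants are public:

    #include <CoreVideo/CoreVideo.h>

    /* Ask for planar 4:2:0 output (I420-style) rather than packed YUY2,
     * so no packed<->planar conversion happens behind the scenes. */
    static CFDictionaryRef
    make_destination_attributes (void)
    {
      SInt32 fmt = kCVPixelFormatType_420YpCbCr8Planar;
      CFNumberRef num = CFNumberCreate (NULL, kCFNumberSInt32Type, &fmt);
      const void *keys[] = { kCVPixelBufferPixelFormatTypeKey };
      const void *values[] = { num };
      CFDictionaryRef attrs = CFDictionaryCreate (NULL, keys, values, 1,
          &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);

      CFRelease (num);
      return attrs;
    }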
Also rename the relevant API so we mirror the public API more closely, and
switch to CoreFoundation CFTypeRef-style typedefs. We still support the old
private CoreMedia in order not to break OS X support.
This means that vtenc and vtdec are now compatible with iOS 4.x and, in
theory, also with future versions of OS X where this API may become public
like it has on iOS.
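What CFTypeRef-style typedefs look like, sketched with illustrative struct
tags (not copied from the actual headers):

    /* Opaque reference types named after the public API's conventions. */
    typedef struct OpaqueVTCompressionSession *VTCompressionSessionRef;
    typedef struct OpaqueVTDecompressionSession *VTDecompressionSessionRef;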
Provides the following elements:
qtkitvideosrc: OS X video source relying on the QTKit API. Comes with
hard-coded caps as the API does not provide any way of querying for
formats supported by the hardware. Hasn't been tested a lot, but seems
to work.
miovideosrc: OS X video source which uses the undocumented/private
CoreMediaIOServices API, which is also the one used by iChat.
Present on the latest version of Leopard and all versions of Snow Leopard.
Has been tested extensively with built-in cameras and TANDBERG's
PrecisionHD USB camera.
vtenc, vtdec: Generic codec wrappers which make use of the undocumented/
private VideoToolbox API on OS X and iOS. The list of codecs is currently
hard-coded to H.264 for vtenc, and H.264 + JPEG for vtdec. It can easily be
expanded by adding new entries to the lists, but I haven't yet had time to
do that. We should probably also implement probing, as the available codecs
depend on the OS and its version, and there doesn't seem to be any way to
enumerate them.
vth264decbin, vth264encbin: Wrapper bins to make it easier to use
vtdec_h264/vtenc_h264 in live scenarios.
iphonecamerasrc: iPhone camera source relying on the undocumented/private
Celestial API. Tested on iOS 3.1 running on an iPhone 3GS. Stops working
after a few minutes, presumably because of a resource leak. Needs some
love.
Note that the iOS parts haven't yet been ported to iOS 4.x.