iOS has special stride requirements that we don't know yet, so for now
copy input buffers into buffers allocated by iOS.
Later we should check the stride, and probably provide a buffer pool for
these buffers so upstream can write into them directly.
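As an illustration of the interim copy path, here is a minimal sketch of
copying one tightly packed source plane into an iOS-allocated
CVPixelBuffer; copy_plane and its arguments are hypothetical, and the
caller is assumed to hold CVPixelBufferLockBaseAddress() on the buffer:

    #include <glib.h>
    #include <string.h>
    #include <CoreVideo/CoreVideo.h>

    /* Copy one tightly packed source plane into a plane of an
     * iOS-allocated CVPixelBuffer, honouring whatever stride iOS chose.
     * The caller must hold CVPixelBufferLockBaseAddress() on pixbuf. */
    static void
    copy_plane (CVPixelBufferRef pixbuf, size_t plane, const guint8 * src,
        size_t src_stride)
    {
      guint8 *dst = CVPixelBufferGetBaseAddressOfPlane (pixbuf, plane);
      size_t dst_stride = CVPixelBufferGetBytesPerRowOfPlane (pixbuf, plane);
      size_t height = CVPixelBufferGetHeightOfPlane (pixbuf, plane);
      size_t row;

      /* The destination rows may be padded, so copy row by row rather
       * than with a single memcpy over the whole plane. */
      for (row = 0; row < height; row++)
        memcpy (dst + row * dst_stride, src + row * src_stride,
            MIN (src_stride, dst_stride));
    }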
gst_pad_get_pad_template_caps() returns a reference that the caller owns,
so wrapping it in gst_caps_copy() and never releasing the original
results in a reference leak. Also, the caps are pushed as an event
downstream, but this does not consume them, so they must still be
unreferenced.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=734534
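A minimal sketch of the fixed ownership handling under GStreamer 1.x
rules (push_template_caps is a hypothetical helper):

    #include <gst/gst.h>

    static gboolean
    push_template_caps (GstPad * srcpad)
    {
      /* Returns a reference we own; no gst_caps_copy() is needed. */
      GstCaps *caps = gst_pad_get_pad_template_caps (srcpad);
      gboolean ret;

      /* gst_event_new_caps() takes its own reference, so pushing the
       * event does not consume ours ... */
      ret = gst_pad_push_event (srcpad, gst_event_new_caps (caps));

      /* ... hence we must still drop it to avoid the leak. */
      gst_caps_unref (caps);

      return ret;
    }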
The pixel buffer release callback is only called if the void *
dataPtr given to CVPixelBufferCreateWithPlanarBytes is not NULL.
According to the documentation, dataPtr is supposed to be a
"plane description block", but no specific type is given.
https://bugzilla.gnome.org/show_bug.cgi?id=711847
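A sketch of the resulting pattern; MyFrame and wrap_frame are
hypothetical, but the callback prototype and the argument order of
CVPixelBufferCreateWithPlanarBytes() follow the CoreVideo headers:

    #include <CoreVideo/CoreVideo.h>

    /* Hypothetical per-frame context owning the plane memory. */
    typedef struct { void *planes[3]; } MyFrame;

    static void
    release_cb (void *releaseRefCon, const void *dataPtr, size_t dataSize,
        size_t numberOfPlanes, const void *planeAddresses[])
    {
      /* Only invoked if a non-NULL dataPtr was supplied at creation. */
      MyFrame *frame = releaseRefCon;
      /* ... free frame->planes[] and frame itself here ... */
    }

    static CVPixelBufferRef
    wrap_frame (MyFrame * frame, size_t width, size_t height,
        size_t widths[3], size_t heights[3], size_t strides[3])
    {
      CVPixelBufferRef pixbuf = NULL;

      /* Pass the frame itself as dataPtr: any non-NULL pointer serves
       * as the "plane description block" and ensures release_cb fires. */
      CVPixelBufferCreateWithPlanarBytes (NULL, width, height,
          kCVPixelFormatType_420YpCbCr8Planar,
          frame /* dataPtr, must not be NULL */, 0, 3,
          frame->planes, widths, heights, strides,
          release_cb, frame /* releaseRefCon */, NULL, &pixbuf);

      return pixbuf;
    }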
Handle stride alignment through the video meta API. The code is based
on the corevideobuffer implementation.
If the video meta API is not supported and the underlying buffer
contains padding, the core media buffer is copied into a system memory
buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=727885
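A sketch of the meta attachment, loosely modelled on the corevideobuffer
code; attach_video_meta is a hypothetical helper and the planes are
assumed to be laid out contiguously:

    #include <gst/video/video.h>
    #include <CoreVideo/CoreVideo.h>

    /* Describe CoreVideo's actual plane strides via GstVideoMeta so
     * downstream can handle row padding without a copy. */
    static void
    attach_video_meta (GstBuffer * buf, CVPixelBufferRef pixbuf,
        GstVideoFormat format, guint width, guint height)
    {
      gsize offset[GST_VIDEO_MAX_PLANES] = { 0, };
      gint stride[GST_VIDEO_MAX_PLANES] = { 0, };
      guint n_planes = CVPixelBufferGetPlaneCount (pixbuf);
      gsize size = 0;
      guint i;

      for (i = 0; i < n_planes; i++) {
        stride[i] = CVPixelBufferGetBytesPerRowOfPlane (pixbuf, i);
        offset[i] = size;
        size += stride[i] * CVPixelBufferGetHeightOfPlane (pixbuf, i);
      }

      gst_buffer_add_video_meta_full (buf, GST_VIDEO_FRAME_FLAG_NONE,
          format, width, height, n_planes, offset, stride);
    }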
Public frameworks don't need the API to be loaded dynamically; we use
the framework directly instead.
The exception is VideoToolbox, which went public in the 10.8 SDK but is
still private in older versions of the SDK and on iOS. This allows
building the plugin against SDKs where it's not a public framework.
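For the private case, the dynamic path boils down to dlopen()/dlsym();
a hypothetical sketch, with the pre-10.8 private framework path as an
assumption:

    #include <dlfcn.h>

    /* Resolve a VideoToolbox entry point at runtime when the target SDK
     * does not ship it as a public framework. */
    static void *
    vt_get_symbol (const char *name)
    {
      static void *module = NULL;

      if (module == NULL)
        module = dlopen ("/System/Library/PrivateFrameworks/"
            "VideoToolbox.framework/VideoToolbox", RTLD_LAZY);

      /* The caller casts the result to the correct function prototype. */
      return (module != NULL) ? dlsym (module, name) : NULL;
    }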
These callbacks may fire from any thread, hence we should only enqueue
buffers and let the streaming thread take care of the rest as soon as
the blocking encode or decode operation has finished.
The codec that called us might be holding locks to shared resources, so
we should never push downstream from within its buffer callback.
Note that a GstBufferList is not used here because we need to preserve
the buffer metadata held by our GstBuffer subclasses.
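A minimal sketch of that split, assuming a GAsyncQueue and a
hypothetical MyDec state struct; buffers are queued individually so the
metadata on our GstBuffer subclasses survives:

    #include <gst/gst.h>

    typedef struct
    {
      GAsyncQueue *out_queue;       /* holds GstBuffer * */
      GstPad *srcpad;
    } MyDec;

    /* Runs on an arbitrary codec thread: only enqueue, never push. */
    static void
    output_cb (MyDec * self, GstBuffer * buf)
    {
      g_async_queue_push (self->out_queue, buf);
    }

    /* Runs on the streaming thread once the blocking encode/decode
     * call has returned. */
    static GstFlowReturn
    drain_queue (MyDec * self)
    {
      GstFlowReturn ret = GST_FLOW_OK;
      GstBuffer *buf;

      while ((buf = g_async_queue_try_pop (self->out_queue)) != NULL) {
        ret = gst_pad_push (self->srcpad, buf);
        if (ret != GST_FLOW_OK)
          break;
      }
      return ret;
    }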
Profiling of H.264 encode and decode revealed that conversions
between packed and planar formats were happening behind the scenes.
Hence we now choose I420 instead of YUY2.
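For illustration, a pad template preferring planar I420, assuming
0.10-era raw video caps syntax:

    #include <gst/gst.h>

    /* Advertise planar I420 so the codec does not repack to or from
     * packed YUY2 behind the scenes. */
    static GstStaticPadTemplate sink_template =
        GST_STATIC_PAD_TEMPLATE ("sink", GST_PAD_SINK, GST_PAD_ALWAYS,
            GST_STATIC_CAPS ("video/x-raw-yuv, "
                "format = (fourcc) I420, "
                "width = (int) [ 1, MAX ], "
                "height = (int) [ 1, MAX ], "
                "framerate = (fraction) [ 0/1, MAX ]"));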
Also rename the relevant API so that we mirror the public API more
closely, and switch to CoreFoundation CFTypeRef-style typedefs. We still
support the old private CoreMedia API in order not to break OS X support.
This means that vtenc and vtdec are now compatible with iOS 4.x, and in
theory also with future versions of OS X, where this API may become
public as it has on iOS.
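The typedef style in question matches how the public headers declare
these types, for example:

    /* Opaque structs exposed only through CFTypeRef-style Ref pointers,
     * managed with plain CFRetain()/CFRelease(). */
    typedef struct OpaqueVTCompressionSession *VTCompressionSessionRef;
    typedef struct OpaqueVTDecompressionSession *VTDecompressionSessionRef;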
Provides the following elements (a usage sketch follows the list):
qtkitvideosrc: OS X video source relying on the QTKit API. Comes with
hard-coded caps, as the API does not provide any way of querying the
formats supported by the hardware. Hasn't been tested much, but seems
to work.
miovideosrc: OS X video source using the undocumented/private
CoreMediaIOServices API, which is also the one used by iChat.
Present on the latest version of Leopard and all versions of Snow
Leopard. Has been tested extensively with built-in cameras and
TANDBERG's PrecisionHD USB camera.
vtenc, vtdec: Generic codec wrappers using the undocumented/private
VideoToolbox API on OS X and iOS. The list of codecs is currently
hard-coded to H.264 for vtenc, and H.264 + JPEG for vtdec. It can easily
be expanded by adding new entries to the lists, but we haven't yet had
time to do that. We should probably also implement probing, as the
available codecs depend on the OS and its version, and there doesn't
seem to be any way to enumerate them.
vth264decbin, vth264encbin: Wrapper bins to make it easier to use
vtdec_h264/vtenc_h264 in live scenarios.
iphonecamerasrc: iPhone camera source relying on the undocumented/private
Celestial API. Tested on iOS 3.1 running on an iPhone 3GS. Stops working
after a few minutes, presumably because of a resource leak. Needs some
love.
Note that the iOS parts haven't yet been ported to iOS 4.x.
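A hypothetical smoke test wiring some of these elements together; the
pipeline string is illustrative (osxvideosink is assumed as the display
sink), and real use would need error handling and caps negotiation
checks:

    #include <gst/gst.h>

    int
    main (int argc, char **argv)
    {
      GstElement *pipeline;
      GError *error = NULL;

      gst_init (&argc, &argv);

      /* Capture, round-trip through the VideoToolbox wrappers, display. */
      pipeline = gst_parse_launch ("qtkitvideosrc ! vtenc_h264 ! "
          "vtdec_h264 ! osxvideosink", &error);
      if (pipeline == NULL) {
        g_printerr ("failed to build pipeline: %s\n", error->message);
        return 1;
      }

      gst_element_set_state (pipeline, GST_STATE_PLAYING);
      g_main_loop_run (g_main_loop_new (NULL, FALSE));
      return 0;
    }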