Remove the android/ top dir
Fix the Makefile.am files to be androgenized
To build GStreamer for Android we now use androgenizer, which generates the needed Android.mk files.
Androgenizer can be found here: http://git.collabora.co.uk/?p=user/derek/androgenizer.git
We need to call gst_poll_wait before calling gst_poll_* status
functions on that new descriptor, so restart the loop. That way _wait
will have been called on all elements of self->poll, whether they have
just been added or not.
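A minimal sketch of the restart pattern (control flow and helpers are
illustrative; the gst_poll calls are the real GStreamer API):

  /* self->poll is a GstPoll; self->new_fd is a freshly discovered fd */
  while (gst_poll_wait (self->poll, GST_CLOCK_TIME_NONE) >= 0) {
    if (have_new_descriptor (self)) {            /* illustrative helper */
      gst_poll_add_fd (self->poll, &self->new_fd);
      gst_poll_fd_ctl_read (self->poll, &self->new_fd, TRUE);
      continue;  /* restart so _wait runs before any gst_poll_fd_* query */
    }
    handle_ready_descriptors (self);             /* illustrative helper */
  }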
Previously the different decoders would discard erroneous GstFlowReturns
coming from downstream. Now we properly return these further upstream so
that we error out properly on e.g. negotiation problems.
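A hedged sketch of the change (pad and variable names illustrative):

  /* Before: the return value of the push was ignored. */
  ret = gst_pad_push (decoder->srcpad, outbuf);
  /* After: propagate anything that is not GST_FLOW_OK, so upstream
   * can error out, e.g. on negotiation problems. */
  if (ret != GST_FLOW_OK)
    return ret;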
Fixes the following compile errors:
cc1: warnings being treated as errors
camswclient.c: In function 'cam_sw_client_open':
camswclient.c:81: warning: implicit declaration of function 'strncpy'
camswclient.c:81: warning: incompatible implicit declaration of built-in function 'strncpy'
camswclient.c:89: warning: implicit declaration of function 'strerror'
camswclient.c:89: warning: nested extern declaration of 'strerror'
camswclient.c:89: warning: format '%s' expects type 'char *', but argument 9 has type 'int'
camswclient.c: In function 'send_ca_pmt':
camswclient.c:129: warning: implicit declaration of function 'memcpy'
camswclient.c:129: warning: incompatible implicit declaration of built-in function 'memcpy'
gstdvbsrc.c:48:19: error: error.h: No such file or directory
Signed-off-by: Rob Clark <rob@ti.com>
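The implicit-declaration warnings above point at a missing <string.h>;
a minimal sketch of that part of the fix (the error.h failure in
gstdvbsrc.c needs a separate, platform-specific change):

  /* camswclient.c: pick up strncpy(), memcpy() and strerror() */
  #include <string.h>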
These callbacks may fire from any thread, hence we should only enqueue
buffers and let the streaming thread take care of the rest as soon as
the blocking encode or decode operation has finished.
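A hedged sketch of the enqueue-only pattern, using a GAsyncQueue as the
illustrative hand-off:

  /* Callback context: may fire on any thread, so only enqueue. */
  static void
  on_output_ready (void *user_data, GstBuffer *buf)  /* illustrative */
  {
    GAsyncQueue *queue = user_data;
    g_async_queue_push (queue, buf);
  }

  /* Streaming thread: pop once the blocking encode/decode returns,
   * then push downstream from here and nowhere else. */
  buf = g_async_queue_pop (self->queue);
  ret = gst_pad_push (self->srcpad, buf);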
At least as far as miovideosrc is concerned. Turns out that CoreVideo's
CVPixelBufferGetIOSurface is not present in Leopard's version of CoreVideo.
We solve this by making it possible for symbols to be marked as optional.
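A rough sketch of the optional-symbol mechanism (structure and names
illustrative; dlsym is the real lookup):

  #include <dlfcn.h>

  typedef struct {
    const char *name;
    void **ptr;
    gboolean is_optional;
  } SymbolSpec;  /* illustrative */

  static gboolean
  resolve_symbols (void *module, const SymbolSpec *specs, guint n)
  {
    guint i;

    for (i = 0; i < n; i++) {
      *specs[i].ptr = dlsym (module, specs[i].name);
      if (*specs[i].ptr == NULL && !specs[i].is_optional)
        return FALSE;  /* a required symbol is missing: hard failure */
      /* optional symbols, like CVPixelBufferGetIOSurface on Leopard,
       * may stay NULL; callers must check before use */
    }
    return TRUE;
  }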
QTCaptureSession::addInput and QTCaptureSession::addOutput call
NSObject::performSelectorOnMainThread internally, so they need the
mainRunLoop to run at least for a while to complete.
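A hedged sketch of pumping the run loop until the operation finishes
(flag and timeout are illustrative; CFRunLoopRunInMode is the real
CoreFoundation call):

  /* must run on the main thread so the queued
   * performSelectorOnMainThread work gets dispatched */
  while (!self->operation_completed)  /* illustrative flag */
    CFRunLoopRunInMode (kCFRunLoopDefaultMode, 0.1, true);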
We cannot call any CMBufferQueue functions while holding the lock that
our callback also depends on. So now we make use of CMBufferQueue's
trigger API in order to get notified when the queue has data.
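A rough sketch of installing such a trigger, assuming CoreMedia's
CMBufferQueue trigger API (the private framework used here may differ
in detail):

  static void
  queue_has_data (void *refcon, CMBufferQueueTriggerToken token)
  {
    MyContext *ctx = refcon;    /* illustrative context type */

    /* only wake the streaming thread here; never dequeue or push,
     * since locks held by the caller must not be contended */
    g_mutex_lock (ctx->mutex);
    g_cond_signal (ctx->cond);
    g_mutex_unlock (ctx->mutex);
  }

  CMBufferQueueTriggerToken token;
  CMBufferQueueInstallTrigger (ctx->queue, queue_has_data, ctx,
      kCMBufferQueueTrigger_WhenDataBecomesReady, kCMTimeInvalid, &token);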
The codec that called us might be holding locks to shared resources, so
we should never push downstream from within its buffer callback.
Note that a GstBufferList is not used here because we need to preserve
the buffer metadata held by our GstBuffer subclasses.
Profiling of H.264 encode and decode revealed that conversions
between packed and planar formats were happening behind the scenes.
Hence we now choose I420 instead of YUY2.
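In 0.10 caps terms the planar choice reads roughly as follows
(width/height illustrative, other fields omitted):

  video/x-raw-yuv, format=(fourcc)I420, width=(int)640, height=(int)480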
We should keep a strong reference to the device, but we don't need to
manage the reference count of the elements of an NSMutableArray, as it
takes care of that for us.
This element makes use of the documented AVFoundation framework made
available starting with iOS 4.0, which means we can finally capture
video using a public API.
Also rename the relevant API so we mirror the public API more closely,
and switch to CoreFoundation CFTypeRef-style typedefs. We still support
the old private CoreMedia so as not to break OS X support.
This means that vtenc and vtdec are now compatible with iOS 4.x, and in
theory also future versions of OS X, where this API may become public as
it has on iOS.
GetOverlappedResult() might never return with some drivers. Time out
after 1000 ms. We cannot really fix this without either:
1) Controlling the streaming thread so we can do CancelIo() from that
thread.
2) Switching to IO completion ports.
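A minimal sketch of the timeout (Win32 overlapped I/O; handle and
variable names illustrative):

  #include <windows.h>

  DWORD bytes_returned = 0;
  if (WaitForSingleObject (overlapped.hEvent, 1000) == WAIT_OBJECT_0) {
    /* completed within 1000 ms: collect the result without blocking */
    GetOverlappedResult (device_handle, &overlapped, &bytes_returned, FALSE);
  } else {
    /* timed out: treat the request as failed; CancelIo() only works
     * from the thread that issued the request, which we don't control */
  }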
Turns out that the reference implementation does this, hence we need to
mirror this behaviour. This typically happens with hardware that takes
some time to initialize.
The most important part here is special-casing "device busy" so the
application is able to provide better feedback when another application
is using the device.
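A hedged sketch of the special case (the EBUSY check is illustrative;
RESOURCE/BUSY is the GStreamer error category that lets applications
tell this apart):

  #include <errno.h>

  if (fd < 0 && errno == EBUSY) {  /* illustrative failure check */
    GST_ELEMENT_ERROR (self, RESOURCE, BUSY,
        ("Device is already in use by another application"), (NULL));
    return FALSE;
  }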
* Make the driver write directly into each GstBuffer to avoid memcpy().
* Don't memset() the buffer before reusing it.
* Recycle memory by keeping two spare buffers. Two because the sink
downstream may keep a ref to the previous buffer.
Note that we align buffers on the highest possible byte boundary (4096)
so we don't have to take into account what kind of alignment the driver
requires.
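A minimal sketch of the aligned allocation, assuming a POSIX allocator:

  #include <stdlib.h>

  /* 4096-byte alignment covers any alignment the driver could require */
  void *data = NULL;
  if (posix_memalign (&data, 4096, size) != 0)
    return NULL;  /* illustrative error path */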
Provides the following elements:
qtkitvideosrc: OS X video source relying on the QTKit API. Comes with
hard-coded caps as the API does not provide any way of querying for
formats supported by the hardware. Hasn't been tested a lot, but seems
to work.
miovideosrc: OS X video source which uses the undocumented/private
CoreMediaIOServices API, which is also the one used by iChat.
Present on the latest version of Leopard and all versions of Snow Leopard.
Has been tested extensively with built-in cameras and TANDBERG's
PrecisionHD USB camera.
vtenc, vtdec: Generic codec wrappers which make use of the undocumented/
private VideoToolbox API on OS X and iOS. The list of codecs is currently
hard-coded to H.264 for vtenc, and H.264 + JPEG for vtdec. It can easily
be expanded by adding new entries to the lists, but we haven't yet had
time to do that. We should probably also implement probing, as the
available codecs depend on the OS and its version, and there doesn't seem
to be any way to enumerate them.
vth264decbin, vth264encbin: Wrapper bins to make it easier to use
vtdec_h264/vtenc_h264 in live scenarios.
iphonecamerasrc: iPhone camera source relying on the undocumented/private
Celestial API. Tested on iOS 3.1 running on an iPhone 3GS. Stops working
after a few minutes, presumably because of a resource leak. Needs some
love.
Note that the iOS parts haven't yet been ported to iOS 4.x.
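A hedged usage sketch for the OS X source (pipeline illustrative;
osxvideosink is assumed from gst-plugins-good):

  gst-launch-0.10 qtkitvideosrc ! queue ! osxvideosink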
Timestamps are now chosen in the following order:
upstream -> parsed by decoder -> calculated from timestamp offset
We also check the timestamps supplied from upstream/decoder to see if
they are at least increasing.
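A minimal sketch of the selection order (variable names illustrative;
the GST_CLOCK_TIME macros are the real API):

  if (GST_CLOCK_TIME_IS_VALID (upstream_ts))
    ts = upstream_ts;                       /* 1. from upstream */
  else if (GST_CLOCK_TIME_IS_VALID (parsed_ts))
    ts = parsed_ts;                         /* 2. parsed by the decoder */
  else
    ts = base_ts + picture_nr * duration;   /* 3. from timestamp offset */

  /* sanity check: only trust upstream/decoder timestamps if they are
   * at least increasing */
  if (GST_CLOCK_TIME_IS_VALID (prev_ts) && ts < prev_ts)
    ts = base_ts + picture_nr * duration;   /* illustrative fallback */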
This way we'll reuse the GstVdp[Video|Output]Buffers if they're of the same
size and chroma-type/rgba-format.
Also remove gst_vdp_output_src_pad_negotiate and set a "setcaps" function on
GstVdpOutputSrcPad instead, leaving negotiation to GstVdpVideoPostProcess.
Instead we do as GstVdpVideoSrcPad does and use the "templ" property of
GstPad, which enables us to change the signature of
gst_vdp_output_src_pad_new to match gst_pad_new_from_template.
We now no longer try to get the GstVdpDevice from downstream, since in
practice it didn't give us anything and it complicates the code a lot.
Nevertheless, if device distribution should be done, there are probably
much better ways to do it.
Instead we now simply acquire the device in vdpauvideopostprocess when
we're going into PAUSED.
* Inherit from GstVideoSink
* Implement GstNavigation interface
* Proper COM initialization for threaded environments (see the sketch
  after this list)
* Fix Window resource leak
* Add EVR support for better video scaling on Windows Vista and above
* Only apply PAR scaling when the keep_aspect_ratio property is set, to
  stay consistent with the other Linux sinks
* Prevent an infinite loop with the wndproc chain
* Fix debugging messages to use the object instance
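For the COM initialization point above, a minimal sketch of the
per-thread setup (the apartment model chosen here is illustrative):

  #include <objbase.h>

  HRESULT hr = CoInitializeEx (NULL, COINIT_MULTITHREADED);
  if (SUCCEEDED (hr)) {
    /* DirectShow/EVR interfaces may be used on this thread now */
    CoUninitialize ();  /* must pair with the successful init */
  }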