UVC devices are never interlaced, and doing VIDIOC_TRY_FMT on them
causes expensive and slow USB IO, so don't probe them for interlaced formats.
This shaves 2 seconds off the startup time of cheese with a Logitech
Webcam Pro 9000.
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=677722
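The commit text does not show how a UVC device is recognized; a minimal sketch, assuming the uvcvideo driver name reported by VIDIOC_QUERYCAP is used as the marker (the actual check in gstv4l2object.c may differ):

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    /* Return non-zero when the device is handled by the uvcvideo driver.
     * UVC devices never deliver interlaced video, so the interlaced
     * VIDIOC_TRY_FMT probes can be skipped for them. */
    static int
    device_is_uvc (int fd)
    {
      struct v4l2_capability cap;

      memset (&cap, 0, sizeof (cap));
      if (ioctl (fd, VIDIOC_QUERYCAP, &cap) < 0)
        return 0;

      return strcmp ((const char *) cap.driver, "uvcvideo") == 0;
    }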
Block gst_osx_video_sink_run_cocoa_loop until the loop thread has started and
finished initializing NSApp. Fixes occasional warnings/crashes caused by two
threads entering NSApp before finishLaunching had completed.
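A sketch of the blocking pattern described, written with GLib primitives; the names are illustrative, not the actual osxvideosink code:

    #include <glib.h>

    /* Illustrative only: block the caller until the loop thread has
     * signalled that NSApp initialization is done. */
    static GMutex loop_lock;
    static GCond loop_cond;
    static gboolean loop_initialized = FALSE;

    /* Called by the loop thread once NSApp setup has finished. */
    static void
    signal_loop_initialized (void)
    {
      g_mutex_lock (&loop_lock);
      loop_initialized = TRUE;
      g_cond_signal (&loop_cond);
      g_mutex_unlock (&loop_lock);
    }

    /* Called by the thread that spawned the loop thread. */
    static void
    wait_for_loop_initialized (void)
    {
      g_mutex_lock (&loop_lock);
      while (!loop_initialized)
        g_cond_wait (&loop_cond, &loop_lock);
      g_mutex_unlock (&loop_lock);
    }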
When we are using a dedicated thread to run the main run loop we
must make sure that all selectors are performed on this same thread.
For instance, if performSelectorOnMainThread is called from the real
main thread, it will not go through the message queue and will be
executed from the real main thread. By forcing the target thread,
we ensure that all functions will be called either from the real
main thread when the main run loop is running or from our thread
spinning the main loop.
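The same principle, expressed with GLib rather than Cocoa purely as an analogy: work is always routed to the context that the dedicated loop thread is iterating, no matter which thread asks for it.

    #include <glib.h>

    /* Analogy only (GLib, not Cocoa): every caller pushes its work
     * through the one GMainContext iterated by the dedicated loop
     * thread, so the callback runs on that thread. */
    static gboolean
    do_work (gpointer user_data)
    {
      g_print ("running on the loop thread\n");
      return G_SOURCE_REMOVE;
    }

    static void
    dispatch_to_loop_thread (GMainContext *loop_context)
    {
      /* Even when called from the "real" main thread, the callback is
       * dispatched to whichever thread is iterating loop_context. */
      g_main_context_invoke (loop_context, do_work, NULL);
    }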
Add a little hack to run the Cocoa main runloop from a separate thread _when_
the main runloop is not being run (which means that the app doesn't use Cocoa).
Runloops are thread-specific, so the hack boils down to getting the runloop for
the main thread and setting it as the runloop for our dedicated thread.
input.std is of type v4l2_std_id which is defined as 64-bit unsigned integer.
Casting to uint means the higher bits, which are used for the private video
standards of the TI video capture/display driver for example, are lost.
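A small sketch of the type issue; the surrounding function is illustrative, not the exact code:

    #include <linux/videodev2.h>

    /* v4l2_std_id is a 64-bit unsigned integer, so truncating it to a
     * 32-bit unsigned int silently drops the high bits that drivers such
     * as the TI capture/display driver use for private standards. */
    static v4l2_std_id
    read_std (const struct v4l2_input *input)
    {
      /* unsigned int std = input->std;    WRONG: truncates to 32 bits */
      v4l2_std_id std = input->std;     /* correct: keeps all 64 bits */

      return std;
    }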
Sample the pipeline clock and device clock closer to each other to reduce jitter.
Don't subtract the frame duration from the timestamp when we can use the device
timestamps.
Assume a delay of 1 frame in read-write mode.
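A sketch of the first point only, taking the two clock samples back to back; it assumes the device timestamps are on the monotonic clock, and the names are illustrative:

    #include <gst/gst.h>

    /* Sample the pipeline clock and the monotonic system clock as close
     * together as possible, so device (CLOCK_MONOTONIC) timestamps can
     * be translated into pipeline time with minimal extra jitter. */
    static void
    sample_clocks (GstElement * element, GstClockTime * pipeline_now,
        GstClockTime * system_now)
    {
      GstClock *clock = gst_element_get_clock (element);

      *system_now = g_get_monotonic_time () * 1000;  /* us -> ns */
      *pipeline_now = clock ? gst_clock_get_time (clock) : GST_CLOCK_TIME_NONE;

      if (clock)
        gst_object_unref (clock);
    }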
Query the amount of available buffers when doing set_config(). This allows us to
configure the parent bufferpool with the number of buffers to preallocate.
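A minimal sketch of the idea, assuming VIDIOC_REQBUFS is used to learn how many buffers the driver actually provides before configuring the parent pool:

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    /* Ask the driver for 'count' MMAP buffers; the driver may adjust the
     * number, and the adjusted value is what the parent GstBufferPool
     * should be configured to preallocate. */
    static int
    query_buffer_count (int fd, unsigned int count)
    {
      struct v4l2_requestbuffers breq;

      memset (&breq, 0, sizeof (breq));
      breq.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
      breq.memory = V4L2_MEMORY_MMAP;
      breq.count = count;

      if (ioctl (fd, VIDIOC_REQBUFS, &breq) < 0)
        return -1;

      return (int) breq.count;   /* actual number of buffers available */
    }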
Keep track of the provided allocator and use it when we need to allocate a
buffer in RW mode.
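A sketch of caching the allocator handed in via the pool configuration and using it later for RW-mode allocations; the structure and function names are illustrative:

    #include <gst/gst.h>

    typedef struct
    {
      GstAllocator *allocator;        /* allocator provided in set_config */
      GstAllocationParams params;
    } PoolCtx;                        /* illustrative stand-in for the pool */

    /* In set_config(): remember the allocator handed to the pool. */
    static void
    remember_allocator (PoolCtx * ctx, GstStructure * config)
    {
      GstAllocator *allocator = NULL;
      GstAllocationParams params;

      gst_allocation_params_init (&params);
      gst_buffer_pool_config_get_allocator (config, &allocator, &params);

      if (ctx->allocator)
        gst_object_unref (ctx->allocator);
      ctx->allocator = allocator ? gst_object_ref (allocator) : NULL;
      ctx->params = params;
    }

    /* In RW mode: allocate with the remembered allocator (NULL falls
     * back to the default allocator). */
    static GstBuffer *
    alloc_rw_buffer (PoolCtx * ctx, gsize size)
    {
      return gst_buffer_new_allocate (ctx->allocator, size, &ctx->params);
    }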
When we cannot allocate the requested max_buffers amount of buffers, make
sure we keep 2 buffers around in the pool and copy them into an output buffer.
This makes sure that we always have a buffer to capture into. We also need to
detect those copied buffers and unref them when they return to the pool.
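One way to realize the copy-and-mark behaviour described above, sketched with qdata on the copied buffer; the quark and function names are illustrative, not the actual gstv4l2bufferpool code:

    #include <gst/gst.h>

    /* Illustrative quark used to mark buffers that are copies of a pooled
     * capture buffer rather than pool members themselves. */
    static GQuark
    copied_buffer_quark (void)
    {
      return g_quark_from_static_string ("example-v4l2-copied-buffer");
    }

    /* When the pool is running low, hand out a copy so the original can
     * be queued back to the driver immediately. */
    static GstBuffer *
    copy_for_output (GstBuffer * pooled)
    {
      GstBuffer *copy = gst_buffer_copy (pooled);

      gst_mini_object_set_qdata (GST_MINI_OBJECT_CAST (copy),
          copied_buffer_quark (), GINT_TO_POINTER (1), NULL);
      return copy;
    }

    /* On return to the pool: copies are not pool members, so drop them. */
    static gboolean
    is_copied_buffer (GstBuffer * buf)
    {
      return gst_mini_object_get_qdata (GST_MINI_OBJECT_CAST (buf),
          copied_buffer_quark ()) != NULL;
    }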
Only free the queued buffers that we keep track of in our buffer array. For rw
io-mode, we do allocate buffers but we don't keep track of them in the buffer
array.
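A sketch of a free path limited to the tracked array; the field names and bound are illustrative:

    #include <gst/gst.h>

    #define MAX_TRACKED_BUFFERS 32  /* illustrative bound */

    typedef struct
    {
      GstBuffer *buffers[MAX_TRACKED_BUFFERS]; /* queued/tracked buffers */
      unsigned int num_allocated;              /* entries actually in use */
    } TrackedBuffers;                          /* illustrative stand-in */

    /* Free only what the array tracks; rw io-mode buffers were never
     * entered into the array, so they are not touched here. */
    static void
    free_tracked_buffers (TrackedBuffers * t)
    {
      unsigned int i;

      for (i = 0; i < t->num_allocated; i++) {
        if (t->buffers[i]) {
          gst_buffer_unref (t->buffers[i]);
          t->buffers[i] = NULL;
        }
      }
      t->num_allocated = 0;
    }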
This is not enough to properly support H264 cameras, but it will
allow an H264 stream to be generated by v4l2src using the default
settings of the camera. If used with the pre-set-format signal, the
H264 encoder can be fully configured.
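A hedged usage sketch of the default-settings case: force H264 caps on v4l2src so the camera's current encoder configuration is used. The elements after v4l2src and the capture duration are illustrative; configuring the encoder through the pre-set-format signal is not shown here.

    #include <gst/gst.h>

    /* Minimal sketch: request an H264 stream from v4l2src by placing
     * H264 caps after it.  Downstream elements are illustrative. */
    int
    main (int argc, char **argv)
    {
      GstElement *pipeline;
      GError *error = NULL;

      gst_init (&argc, &argv);

      pipeline = gst_parse_launch (
          "v4l2src device=/dev/video0 ! video/x-h264 ! h264parse ! fakesink",
          &error);
      if (pipeline == NULL) {
        g_printerr ("failed to build pipeline: %s\n", error->message);
        g_clear_error (&error);
        return 1;
      }

      gst_element_set_state (pipeline, GST_STATE_PLAYING);
      g_usleep (5 * G_USEC_PER_SEC);   /* capture for a few seconds */
      gst_element_set_state (pipeline, GST_STATE_NULL);
      gst_object_unref (pipeline);
      return 0;
    }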
Conflicts:
sys/v4l2/gstv4l2object.c