When v4l2_ioctl() with VIDIOC_ENUMINPUT fails on some devices,
kernels before 3.3.0 return EINVAL, while newer kernels return ENOTTY.
This patch makes those devices work on kernel 3.3+.
Related kernel commit:
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=commit;h=07d106d0a33d6063d2061305903deb02489eba20
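For illustration, a minimal sketch of the errno handling this implies
(assuming libv4l2's v4l2_ioctl() on an already-open fd; not the literal patch):

    #include <errno.h>
    #include <string.h>
    #include <libv4l2.h>
    #include <linux/videodev2.h>

    /* Probe input 0; treat ENOTTY (kernel >= 3.3) the same as EINVAL (older
     * kernels) so devices without input enumeration keep working. */
    static int
    probe_first_input (int fd)
    {
      struct v4l2_input input;

      memset (&input, 0, sizeof (input));
      input.index = 0;

      if (v4l2_ioctl (fd, VIDIOC_ENUMINPUT, &input) < 0) {
        if (errno == EINVAL || errno == ENOTTY)
          return 0;   /* no inputs to enumerate: not fatal */
        return -1;    /* a real error */
      }
      return 1;       /* at least one input exists */
    }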
Signed-off-by: Huacai Chen <chenhc@lemote.com>
Signed-off-by: Rui Wang <wangr@lemote.com>
Signed-off-by: Jie Chen <chenj@lemote.com>
This was apparently unused and was removed in the kernel in this commit:
From 2b719d7baf490e24ce7d817c6337b7c87fda84c1 Mon Sep 17 00:00:00 2001
From: Sakari Ailus <sakari.ailus@iki.fi>
Date: Wed, 2 May 2012 09:40:03 -0300
Subject: [PATCH] [media] v4l: drop v4l2_buffer.input and V4L2_BUF_FLAG_INPUT
Remove the input field in struct v4l2_buffer and the V4L2_BUF_FLAG_INPUT flag
which indicates that the former is valid. No driver currently uses the flag.
https://bugzilla.gnome.org/show_bug.cgi?id=681491
Conflicts:
sys/v4l2/gstv4l2bufferpool.c
UVC devices are never interlaced, and doing VIDIOC_TRY_FMT on them
causes expensive and slow USB IO, so don't probe them for interlaced.
This shaves 2 seconds off the startup time of cheese with a Logitech
Webcam Pro 9000.
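A rough sketch of the idea, assuming the v4l2_capability struct was filled
by an earlier VIDIOC_QUERYCAP (names are illustrative):

    #include <glib.h>
    #include <linux/videodev2.h>

    /* UVC devices never deliver interlaced video, so skip the slow
     * VIDIOC_TRY_FMT based interlace probe for them. */
    static gboolean
    should_probe_interlaced (const struct v4l2_capability * vcap)
    {
      /* the driver name comes from VIDIOC_QUERYCAP */
      return !g_str_equal ((const gchar *) vcap->driver, "uvcvideo");
    }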
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=677722
input.std is of type v4l2_std_id, which is defined as a 64-bit unsigned integer.
Casting it to uint means the higher bits, which are used for the private video
standards of the TI video capture/display driver for example, are lost.
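A small illustration of the truncation (made-up value; v4l2_std_id is a __u64
in the kernel headers):

    #include <glib.h>
    #include <linux/videodev2.h>

    static void
    illustrate_truncation (void)
    {
      /* a driver-private standard living in the upper 32 bits... */
      v4l2_std_id std = (v4l2_std_id) 1 << 32;

      /* ...is silently dropped when squeezed into a 32-bit uint */
      guint truncated = (guint) std;        /* truncated == 0 */
      guint64 preserved = (guint64) std;    /* keeps all 64 bits */

      g_assert (truncated == 0 && preserved == std);
    }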
Sample the pipeline clock and device clock closer to each other to reduce jitter.
Don't subtract the frame duration from the timestamp when we can use the device
timestamps.
Assume a delay of 1 frame in read-write mode.
Query the amount of available buffers when doing set_config(). This allows us to
configure the parent bufferpool with the number of buffers to preallocate.
Keep track of the provided allocator and use it when we need to allocate a
buffer in RW mode.
When we can not allocate the requested max_buffers amount of buffers, make
sure we keep 2 buffers around in the pool and copy them into an output buffer.
This makes sure that we always have a buffer to capture into. We also need to
detect those copied buffers and unref them when they return to the pool.
Only free the queued buffers that we keep track of in our buffer array. For rw
io-mode we do allocate buffers, but we don't keep track of them in the buffer
array.
This is not enough to properly support H264 cameras, but it will
allow an H264 stream to be generated by v4l2src using the default
settings of the camera. If used with the pre-set-format signal, the
H264 encoder can be fully configured.
Conflicts:
sys/v4l2/gstv4l2object.c
In order to support UVC H264 encoding cameras, an H264 Probe&Commit
must happen before the normal v4l2 set-format. This new signal is
meant to allow an external application or bin to do it.
It also serves to expose the file descriptor used by v4l2src in case
some custom ioctls need to be called.
Conflicts:
sys/v4l2/Makefile.am
sys/v4l2/gstv4l2src.c
sys/v4l2/v4l2src_calls.c
The tuner marshal and enumtypes are autogenerated, and they need
to be created before the compilation of gstv4l2tuner.c.
This patch adds the automake instruction that ensures those files
are autogenerated prior to compilation.
The base class may have set the DISCONT flag on the first buffer pushed
out. We need to clear that when recycling buffers back into the buffer
pool, otherwise we constantly push out buffers with the discont flag
set, which might upset downstream elements, esp. for compressed
formats like mpeg-ts.
Make sure we always call munmap() with the same size we called mmap()
with before.
Current v4l2src uses the same structure for VIDIOC_QUERYBUF, VIDIOC_QBUF
and v4l2_munmap calls. The problem is that the video buffer size (length)
may vary for compressed or emulated bufs. VIDIOC_QBUF will change it if
we pass the pointer of a v4l2_buffer. This is why we should avoid using the
same variable for mmap and video buffers.
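A hedged sketch of keeping the mmap size separate (single buffer, error
handling trimmed; not the literal change):

    #include <string.h>
    #include <sys/mman.h>
    #include <libv4l2.h>
    #include <linux/videodev2.h>

    struct mapped_buffer {
      void   *start;
      size_t  mmap_length;   /* the size we mapped with, kept for munmap() */
    };

    static int
    map_buffer (int fd, unsigned int idx, struct mapped_buffer *mb)
    {
      struct v4l2_buffer vbuf;

      memset (&vbuf, 0, sizeof (vbuf));
      vbuf.index = idx;
      vbuf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
      vbuf.memory = V4L2_MEMORY_MMAP;

      if (v4l2_ioctl (fd, VIDIOC_QUERYBUF, &vbuf) < 0)
        return -1;

      /* remember the length used for mmap(); QBUF/DQBUF may later rewrite
       * vbuf.length for compressed/emulated formats */
      mb->mmap_length = vbuf.length;
      mb->start = v4l2_mmap (NULL, mb->mmap_length, PROT_READ | PROT_WRITE,
          MAP_SHARED, fd, vbuf.m.offset);
      return (mb->start == MAP_FAILED) ? -1 : 0;
    }

    static void
    unmap_buffer (struct mapped_buffer *mb)
    {
      /* unmap with the stored size, not with a possibly-changed v4l2_buffer */
      v4l2_munmap (mb->start, mb->mmap_length);
    }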
https://bugzilla.gnome.org/show_bug.cgi?id=671126
Don't post a (fatal) error message on the bus just because we
failed to query some control. Fixes issue with built-in
Suyin Corp webcam for HP notebook (usbid 064e:e28a) on
OpenSuse 12.1, where querying red/blue balance fails.
https://bugzilla.gnome.org/show_bug.cgi?id=670197
Add private replacements for deprecated functions such as
g_mutex_new(), g_mutex_free(), g_cond_new() etc., mostly
to avoid the deprecation warnings. We'll change these
over to the new API once we depend on glib >= 2.32.
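A rough sketch of the kind of private wrapper meant here (names are
illustrative, not the ones used in the tree):

    #include <glib.h>

    #if GLIB_CHECK_VERSION (2, 32, 0)
    /* g_mutex_new()/g_mutex_free() are deprecated since GLib 2.32 */
    static inline GMutex *
    priv_mutex_new (void)
    {
      GMutex *mutex = g_slice_new (GMutex);
      g_mutex_init (mutex);
      return mutex;
    }

    static inline void
    priv_mutex_free (GMutex * mutex)
    {
      g_mutex_clear (mutex);
      g_slice_free (GMutex, mutex);
    }
    #else
    #define priv_mutex_new()    g_mutex_new ()
    #define priv_mutex_free(m)  g_mutex_free (m)
    #endif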
We don't currently support setting the pixel-aspect-ratio from V4L2. So
simply set it to be 1/1 in the caps to prevent negotiation failures when
fixating to weird values (e.g. when the downstream caps has
pixel-aspect-ratio = [ MIN, MAX ] )
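A minimal sketch of pinning the PAR while building the probed caps (assuming
a GstStructure at hand):

    #include <gst/gst.h>

    static void
    fixup_par (GstStructure * structure)
    {
      /* we don't support reading the PAR from V4L2 yet, so pin it to 1/1 to
       * avoid fixating to odd values when downstream offers [ MIN, MAX ] */
      gst_structure_set (structure, "pixel-aspect-ratio", GST_TYPE_FRACTION,
          1, 1, NULL);
    }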
https://bugzilla.gnome.org/show_bug.cgi?id=663580
We used to skip frame rate setup if the camera was already setup
with the requested frame rate. This breaks some cameras though,
causing them to not output data (several models of Thinkpad cameras
have this problem at least).
So, don't skip.
https://bugzilla.gnome.org/show_bug.cgi?id=638300
Some drivers are buggy and will change the current format when
processing VIDIOC_TRY_FMT. Save and restore the current format
to ensure the format is kept unchanged.
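A hedged sketch of the save/restore dance (error handling trimmed, field
values are placeholders):

    #include <stdint.h>
    #include <string.h>
    #include <libv4l2.h>
    #include <linux/videodev2.h>

    static void
    try_format_without_clobbering (int fd, uint32_t width, uint32_t height,
        uint32_t pixelformat)
    {
      struct v4l2_format prev, probe;

      /* save whatever the driver currently has configured */
      memset (&prev, 0, sizeof (prev));
      prev.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
      if (v4l2_ioctl (fd, VIDIOC_G_FMT, &prev) < 0)
        return;

      /* probe the candidate format; buggy drivers apply it as a side effect */
      probe = prev;
      probe.fmt.pix.width = width;
      probe.fmt.pix.height = height;
      probe.fmt.pix.pixelformat = pixelformat;
      v4l2_ioctl (fd, VIDIOC_TRY_FMT, &probe);

      /* put the original format back so the probe stays side-effect free */
      v4l2_ioctl (fd, VIDIOC_S_FMT, &prev);
    }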
https://bugzilla.gnome.org/show_bug.cgi?id=649067
Use the fraction compare utility function to compare the framerates, not the
handcrafted one. The handcrafted one is buggy as it doesn't take into
account rounding error. For example comparing a framerate of 20/1 on a
camera configured as 30/1 fps would yield true: 1 == (1 * 20)/30 and not
re-configure the camera. Fixes #656104
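A sketch of the exact comparison, assuming gst_util_fraction_compare(); note
that the V4L2 timeperframe is a period, i.e. the inverse of the framerate:

    #include <gst/gst.h>
    #include <linux/videodev2.h>

    static gboolean
    framerate_matches (const struct v4l2_streamparm * parm, gint fps_n,
        gint fps_d)
    {
      const struct v4l2_fract *tpf = &parm->parm.capture.timeperframe;

      /* compare period against period exactly; no integer division, so no
       * false positives from rounding */
      return gst_util_fraction_compare (tpf->numerator, tpf->denominator,
          fps_d, fps_n) == 0;
    }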
When nobody is using our pool, activate it ourselves.
Avoid leaking the buffer array.
Set default pool configuration with caps.
Don't keep current_caps, core does that for us now.
Use the more specialized type for the bufferpool.
Use the size from the driver as the size of the image to read.
Don't configure the pool when created. This will be done in the setup_allocation
method later or by upstream for sinks.
Remove unused properties and variables. Bufferpool sizes are now configured in
the bufferpool by the elements in the pipeline. We might want to influence the
pool size later somehow.
Prefer to always use the default bufferpool queue for the _acquire function
because it properly supports unblocking when setting inactive etc. As a result,
we need to dequeue buffers and put them back in the bufferpool queue when we
have queued all buffers in the sink.
Rename some variables to more meaningful names to avoid a problem with
freeing the wrong amount of buffers.
Remove the old method, use the new _process method for the sink.
Inform the parent bufferpool class about the settings too. This is needed to let
it know about the max-buffers.
Allocate the negotiated max-buffers and initially mmap min-buffers. The idea is
that the bufferpool will allocate more when needed.
Improve debugging.
Only poll in capture mode; it does not seem to work in playback mode on this
beagleboard.
Add different transport methods to the bufferpool (MMAP and READ/WRITE)
Do more parsing of the bufferpool config.
Start and stop streaming based on the bufferpool state.
Make separate methods for getting a buffer from the pool and filling it with
data. This allows us to fill buffers from other pools too. Either use copy or
read to fill up the target buffers.
Add property to force a transfer mode in v4l2src.
Increase default number of buffers to 4.
Negotiate bufferpool and its properties in v4l2src.
Pass the caps and the default video size to the bufferpool config.
Don't activate the bufferpool, this will be done by the object that decides to
use the bufferpool.
Improve debugging and error reporting.
When we have all buffers queued for playback and we need a new empty buffer,
dequeue one and return it.
Set the right size for sink buffers.
Improve counting of queued buffers.
Extend from GstBufferPool.
Handle the lifetime of the pool buffers correctly with the start/stop vmethods.
Map acquire and release directly to QBUF and DQBUF. We still expose an explicit
qbuf for the v4l2sink for now.
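A heavily simplified sketch of that mapping (the real pool wraps this in
GstBufferPool vmethods and does proper error handling):

    #include <string.h>
    #include <libv4l2.h>
    #include <linux/videodev2.h>

    /* acquire == DQBUF: take the next buffer the driver has finished with */
    static int
    pool_dqbuf (int fd, struct v4l2_buffer *vbuf)
    {
      memset (vbuf, 0, sizeof (*vbuf));
      vbuf->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
      vbuf->memory = V4L2_MEMORY_MMAP;
      return v4l2_ioctl (fd, VIDIOC_DQBUF, vbuf);
    }

    /* release == QBUF: hand the buffer back so the driver can fill it again */
    static int
    pool_qbuf (int fd, struct v4l2_buffer *vbuf)
    {
      return v4l2_ioctl (fd, VIDIOC_QBUF, vbuf);
    }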
Move the details of how to capture to the device object. Remove the
v4l2src_calls.[ch] files because they are empty now.
Provide two simple methods to get and return a buffer to the device.
Also do a slow copy when the buffer is not from our pool.
Rename start and stop methods to open and close because that is what they do.
After setting the format on the device object, setup the bufferpools. Move this
code from the v4l2src_calls.c file, it is shared between source and sink.
Make new device start and stop method that merges various bits of common code
spread over several files.
We want to keep the default strides in the videoinfo. Keep the stride of the
video frames separate so that we can use both to copy a video frame and do
correct stride conversion.
Use GstVideoInfo to store the parsed caps.
Remove outsize from the caps parsing code, it's wrong because it does not use
the stride given by the driver.
Move the configuration of the framerate to where we set the other format
parameters.
Remove hack to check if the device is active.
Store streamparm in the device info.
Use some macros to access the current device configuration.
Remove some duplicate fields in src and sink and use the device configuration
instead.
Pass the caps to the set_format function and make _set_format parse the caps.
Also keep the parsed values in the v4l2object so that we can refer to them when
we want.
Keep track of the currently configured format and setting in the
v4l2object.
Pass the v4l2object to the bufferpool constructor so that the bufferpool can
know everything about the currently configured settings. This also allows us
to remove some awkward code.
Create a new pool in setcaps and stop/destroy the old one.
Remove buffer_alloc functions.
Check that we have v4l2 metadata in show_frame and fall back to memcpy into a
buffer from our pool if we don't receive one of our own buffers.
Various cleanups: avoid useless casts, move error handling outside of the main
code flow.
Negotiate to a reasonable resolution instead of the max resolution.
Based on a patch by Guennadi Liakhovetski.
v2: updates because I forgot to add GstTuner interface to v4l2sink
v3: update to add all possible values to norm enum
Commit 6c8268dbfd broke recording
from interlaced v4l2 source (e.g. typical tv capture card) since
V4L2_FIELD_SEQ_TB (with fields stored separately) does not map
to currently defined interlaced format (fields stored interleaved).
Besides this mismatch, hardware might quite likely not support or
appreciate this field value, since querying supported formats mapped
_INTERLACED field formats to interlaced=true caps (so the latter should
not be mapped to field value that is not known to be supported).
Older kernels don't have these, and there's no easy way to check for the
existence of enums that doesn't involve a configure check, so just define
these if the V4L2_CAP_VIDEO_OUTPUT_OVERLAY define is not there, which was
added in the same commit as the TB/BT enum. Fixes compilation on CentOS 5.
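A rough sketch of such fallback definitions (values match the kernel's
enum v4l2_field; treat as illustration, not the exact hunk):

    #include <linux/videodev2.h>

    /* Old kernel headers lack the interleaved TB/BT field enums. They were
     * added in the same kernel commit as V4L2_CAP_VIDEO_OUTPUT_OVERLAY, so
     * use that define as the configure-free feature test. */
    #ifndef V4L2_CAP_VIDEO_OUTPUT_OVERLAY
    #define V4L2_FIELD_INTERLACED_TB 8
    #define V4L2_FIELD_INTERLACED_BT 9
    #endif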
https://bugzilla.gnome.org/show_bug.cgi?id=639339
These macros will expand to NOOPs given the right defines. Also,
g_return_if_fail() and friends are meant to be used to catch programming
errors (like invalid input to functions), not runtime error handling.
Looks like this got enabled by accident when adding it to v4l2sink,
so undo this for now. Not sure it makes much sense in a GStreamer
context with current hardware.
This reverts commit 9e1d419d07.
Reverting this since it adds unreviewed and bad API to v4l2src
(property of type enum, with seemingly random and unsorted values).
Output devices should use get/set output, and in either case we should
not print a warning message if the ioctl fails when the device does not
claim to support the tuner interface.
If xoverlay is available, v4l2sink should create a window for the overlay to
display in.
The window automatically tries to make itself as large as possible.
This works well on a small screen, but perhaps should first attempt to use
the size of the video that is played (no scaling).
Special case check for sub-buffers: In certain cases, places like
GstBaseTransform, which might check that the buffer is writable before copying
metadata, timestamp, and such, will find that the buffer has more than one
reference to it. In these cases, they will create a sub-buffer with an offset=0
and length equal to the original buffer size.
This could happen in two scenarios: (1) a tee in the pipeline, and (2) because
the refcnt is incremented in gst_mini_object_free() before the finalize function
is called, and decremented after it returns. But returning this buffer to the
buffer pool in the finalize function could wake up a thread blocked in
_buffer_alloc(), which could run and get a buffer w/ refcnt==2 before the thread
originally unref'ing the buffer returns from the finalize function and decrements
the refcnt back to 1!
This is related to issue #545501
The size of the buffer would be zero'd out in gst_v4l2_buffer_finalize()
after the buffer is qbuf'd or pushed onto the queue of available buffers,
leaving a race condition where the thread waiting for the buffer could awake
and set back a valid size before the finalizing thread zeros out the length.
This would result in the newly allocated buffer having a length of zero.
When v4l2sink goes to PAUSED->READY it only stops streaming, so the state
should be set to STATE_PENDING_STREAMON in case the element transitions
back to PLAYING.
We'd prefer to throttle the decoder if we run out of buffers, to keep a bound
on memory usage. Also, for OMAP4 it is a requirement of the decoder to not
alternate between memory alloced by the display driver and malloc'd userspace
memory.
note: this really only affects v4l2sink since gst_v4l2_buffer_pool_get() is
only called once per buffer in the v4l2src case (in
gst_v4l2src_buffer_pool_activate())
Most v4l2 drivers will get upset when you queue the same buffer twice in a
row without first dequeueing it.
Rendering of pre-roll buffers can be re-introduced later, but will require
tracking the state of the buffer, and avoiding to re-QBUF if the buffer has
already been passed to the driver.
When the decoder is using pad_alloc(), v4l2sink would behave badly if
the number of buffers ('queue-size' property) was not high enough to
account for all the buffers needed by the decoder, and other elements
(such as queues) between the decoder and v4l2sink. This patch
slightly increases the default number of buffers, and changes v4l2sink
to drop frames rather than return an error in case the number of
buffers is not high enough.
with i686-apple-darwin10-gcc-4.2.1:
gstv4l2object.c: In function 'gst_v4l2_object_get_nearest_size':
gstv4l2object.c:1988: warning: format '%u' expects type 'unsigned int', but argument 12 has type 'gint *'
gstv4l2object.c:1988: warning: format '%u' expects type 'unsigned int', but argument 13 has type 'gint *'
It's perfectly ok for a video output device to not have overlay capabilities.
This patch removes the need to get/set the overlay parameters if the user
does not explicitly request one of the overlay properties.
MPEG doesn't have a static size per frame, so don't pretend it has one
and fail when capturing because it doesn't match. Instead mark the size
as unknown and let the read frame grabbing method use a reasonable fallback
value (assuming that's only for actual streaming formats)
Fixes bug #628349.
The format list should be sorted from high ranks to low ranks. In the GSList
sorting function this means the compare needs to return a positive value if
format a has a lower rank than format b.
Among other things this fixes v4l2src to prefer non-emulated formats
to emulated formats when built against libv4l.
In the case where we go through the READY_TO_NULL state change, the buffers in
the pool still hold an open dup'ed file descriptor to the device, therefore the
device release function will not be called and the device will probably answer
with -EBUSY when we reopen it in the next NULL_TO_READY transition.
Signed-off-by: Michael Grzeschik <m.grzeschik@pengutronix.de>
See bug #622500 and #612244.
This allows set_caps to succeed if caps change in a way that
would not modify the format we're getting from the hardware.
Otherwise if not in NULL state, setting caps would fail
with EBUSY.
With this change, in some cases it's OK to go PLAYING->READY->PLAYING
rather than PLAYING->NULL->PLAYING to avoid a time-consuming close
and reopen of the device.
Fixes #621723
Fixes #621723 (partially)
set_caps can fail if the video device is running; in that case
setting its format leads to EBUSY.
If set_caps fails then we will not have set up the buffer pool
(it will be NULL) which leads to a crash when we try to pull
buffers. If we fail the negotiate on set_caps failure, then we
won't go to playing state and won't crash.
This is a small improvement. Of course, a nicer fix would
be to make set_caps work in the case where the format is
unchanged. If the format has changed, failing is
probably correct because we need to close the device
(go to NULL state) in order to set caps.
The compiler wants a cast here even though the type is already
typedefed as 64-bit integer (presumably because glib has typedefed
guint64 to unsigned long here).
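An illustration of the sort of cast meant, assuming the value is a
v4l2_std_id being logged through GLib's 64-bit format macros:

    #include <glib.h>
    #include <linux/videodev2.h>

    /* v4l2_std_id is __u64 (unsigned long long on most ABIs), while glib may
     * typedef guint64 as plain unsigned long, so an explicit cast keeps the
     * format string and the argument in agreement on every platform. */
    static void
    print_norm (v4l2_std_id std)
    {
      g_print ("current norm is 0x%016" G_GINT64_MODIFIER "x\n", (guint64) std);
    }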
Don't leak a string every time get_uri() is called and a device
has been set. There's a limited number of devices, so just
intern the string instead of doing more elaborate housekeeping
and storing it in the instance struct or so.
Old videodev2.h kernel headers used ioctl stuff without
including ioctl.h, making compilation fail on older systems.
Note: Including ioctl.h here is only a workaround for old kernel
headers, should be removed once everybody has new enough headers.
Fixes bug #597867.
For cameras/drivers that don't support e.g. VIDIOC_G_PARM we'd end up without
a framerate and would try to divide by 0, causing run-time warnings and all
frames to be timestamped with 0, which makes sinks that sync against the clock
drop them, causing 'hangs' (observed with the pwc driver and a Logitech QuickCam
Pro 4000). So if we do not know the framerate, simply don't adjust the
timestamps. Fixes #591451.
Clear format list and probed caps when going to NULL so if a new device
is set we'll probe the formats again instead of using previously
detected ones. Fixes bug #591747.
It seems to cause strange occasional high latencies (almost 200ms) when dequeuing buffers from _buffer_alloc(). It is simpler and seems to work much better to dqbuf from the same thread that is queuing the next buffer.
This also does the following changes:
(1) pull the bufferpool code out into gstv4l2bufferpool.c, and make it a
bit more generic so it can be used both for v4l2src and v4l2sink
(2) move some of the device probing/configuration/caps stuff into
gstv4l2object.c so it does not have to be duplicated between
v4l2src and v4l2sink
Fixes bug #590280.
The v4l2 driver for USB webcams on OpenSolaris does not support select()
calls. Detect when select() fails, and skip polling the device afterward,
which restores the pre 0.10.14 behaviour on OpenSolaris.
Signed-off-by: Jan Schmidt <thaytan@noraisin.net>
Use GstPoll to wait for the fd of the video device to become readable before
trying to capture a frame. This speeds up stopping v4l2src a lot as it no
longer has to wait for the next frame, especially when capturing with low
framerates or when the video device just never generates a frame (which seems to
be a common issue for uvcvideo devices).
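A reduced sketch of the GstPoll wait (setup via gst_poll_new() and
gst_poll_add_fd() omitted):

    #include <gst/gst.h>

    /* Wait until the capture fd becomes readable (a frame is ready) or until
     * gst_poll_set_flushing() is called from another thread to unblock us. */
    static gboolean
    wait_for_frame (GstPoll * poll, GstPollFD * pollfd)
    {
      gint ret;

      gst_poll_fd_ctl_read (poll, pollfd, TRUE);

      ret = gst_poll_wait (poll, GST_CLOCK_TIME_NONE);
      if (ret < 0)
        return FALSE;           /* flushing or error: stop without blocking */

      return gst_poll_fd_can_read (poll, pollfd);
    }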
Fixes bug #563574.
sort_by_frame_size is declared static and only used inside
an ifdef, so use the same ifdef to define the function. Fixes #572185
Signed-off-by: David Schleef <ds@schleef.org>