As cameras tend to have a quite specific set of capabilities (specific
framerates for each resolution), getting the peer caps filtered by our
probed caps can cause a big increase in the caps size, which slows things
down quite a bit.
Since for negotiation v4l2 iterates through the peer caps to find the
first intersection with the probed caps, computing the fully expanded
intersection of capabilities is not useful.
Using the same testcase as for bug #702632, adding this patch on top of
the patches suggested there speeds up getting the initial frame from
around 14-15 seconds to around 3-4 seconds.
https://bugzilla.gnome.org/show_bug.cgi?id=702638
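A rough sketch of that negotiation pattern (not the actual patch; pick_first_intersection is a hypothetical helper): walk the peer caps in order and stop at the first structure that intersects with the probed caps, instead of intersecting everything.

    #include <gst/gst.h>

    /* Illustrative only: return the intersection of the first peer structure
     * that matches the probed caps, rather than the full intersection. */
    static GstCaps *
    pick_first_intersection (GstCaps * peer_caps, GstCaps * probed_caps)
    {
      guint i;

      for (i = 0; i < gst_caps_get_size (peer_caps); i++) {
        GstStructure *s = gst_caps_get_structure (peer_caps, i);
        GstCaps *candidate = gst_caps_new_full (gst_structure_copy (s), NULL);
        GstCaps *match = gst_caps_intersect (candidate, probed_caps);

        gst_caps_unref (candidate);
        if (!gst_caps_is_empty (match))
          return match;           /* first usable candidate wins */
        gst_caps_unref (match);
      }
      return gst_caps_new_empty ();
    }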
This can happen if other parts of the pipeline are reconfigured.
Stopping streaming, even for a short amount of time, can be quite visible,
so it should be avoided if possible.
https://bugzilla.gnome.org/show_bug.cgi?id=700503
V4L2 has added a new IOCTL to export a buffer as a dmabuf.
This patch allows using this new IOCTL if it has been defined in videodev2.h.
I introduce a new IO mode (GST_V4L2_IO_DMABUF) to enable this way of working.
https://bugzilla.gnome.org/show_bug.cgi?id=693826
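For illustration, a minimal sketch of the export path, assuming VIDIOC_EXPBUF and struct v4l2_exportbuffer are available in videodev2.h (export_dmabuf_fd is a hypothetical helper, not the function added by the patch):

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    /* Export a previously requested capture buffer as a dmabuf fd;
     * error handling trimmed for brevity. */
    static int
    export_dmabuf_fd (int video_fd, unsigned int index)
    {
    #ifdef VIDIOC_EXPBUF
      struct v4l2_exportbuffer expbuf;

      memset (&expbuf, 0, sizeof (expbuf));
      expbuf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
      expbuf.index = index;       /* buffer previously set up with REQBUFS */

      if (ioctl (video_fd, VIDIOC_EXPBUF, &expbuf) < 0)
        return -1;

      return expbuf.fd;           /* dmabuf fd that can be passed downstream */
    #else
      return -1;                  /* kernel headers too old, fall back */
    #endif
    }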
With the port to GStreamer 1.0 the prepare-format signal stopped being
emitted. Start emitting it again for use in uvch264src. While there,
change the emission to include the caps, for extra flexibility, instead of
fourcc, width and height.
https://bugzilla.gnome.org/show_bug.cgi?id=692042
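As a hedged sketch, an application or wrapper bin might hook the signal roughly like this; the (fd, caps) callback signature follows the description above and should be verified against the element:

    #include <gst/gst.h>

    /* Called before v4l2src does its normal set-format; fd is the device
     * file descriptor, caps the format about to be set. */
    static void
    on_prepare_format (GstElement * v4l2src, gint fd, GstCaps * caps,
        gpointer user_data)
    {
      gchar *str = gst_caps_to_string (caps);

      /* Custom ioctls (e.g. a UVC H264 Probe & Commit) could be issued on
       * fd here, before the format is applied. */
      g_print ("prepare-format: fd=%d caps=%s\n", fd, str);
      g_free (str);
    }

    static void
    setup_prepare_format (GstElement * v4l2src)
    {
      g_signal_connect (v4l2src, "prepare-format",
          G_CALLBACK (on_prepare_format), NULL);
    }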
Sample the pipeline clock and device clock closer to each other to reduce jitter.
Don't subtract the frame duration from the timestamp when we can use the device
timestamps.
Assume a delay of 1 frame in read-write mode.
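Conceptually, the timestamp logic amounts to the sketch below (variable and function names are illustrative, not the ones used in the v4l2 code):

    #include <gst/gst.h>

    static GstClockTime
    compute_pts (GstClockTime running_time,    /* pipeline clock, sampled now        */
        GstClockTime device_now,               /* device clock, sampled right after  */
        GstClockTime buffer_device_ts,         /* v4l2 timestamp of the buffer       */
        GstClockTime frame_duration, gboolean have_device_ts)
    {
      if (have_device_ts) {
        /* Age of the buffer according to the device clock; no extra frame
         * duration is subtracted in this case. */
        GstClockTime age = device_now - buffer_device_ts;
        return running_time > age ? running_time - age : 0;
      }

      /* Without device timestamps, assume a fixed delay of one frame
       * (e.g. in read/write mode). */
      return running_time > frame_duration ? running_time - frame_duration : 0;
    }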
Query the amount of available buffers when doing set_config(). This allows us to
configure the parent bufferpool with the number of buffers to preallocate.
Keep track of the provided allocator and use it when we need to allocate a
buffer in RW mode.
When we cannot allocate the requested max_buffers amount of buffers, make
sure we keep 2 buffers around in the pool and copy them into an output buffer.
This makes sure that we always have a buffer to capture into. We also need to
detect those copied buffers and unref them when they return to the pool.
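A partial sketch of the set_config() idea, assuming a hypothetical GstBufferPool subclass MyV4l2Pool that carries the device fd (GObject boilerplate and error handling omitted): the driver reports how many buffers it actually granted via VIDIOC_REQBUFS, and that count is passed on to the parent pool.

    #include <gst/gst.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    /* MyV4l2Pool, MY_V4L2_POOL and my_v4l2_pool_parent_class are assumed to
     * come from the usual G_DEFINE_TYPE boilerplate; they are illustrative,
     * not the real gstv4l2bufferpool types. */
    static gboolean
    my_v4l2_pool_set_config (GstBufferPool * pool, GstStructure * config)
    {
      MyV4l2Pool *self = MY_V4L2_POOL (pool);
      GstCaps *caps;
      guint size, min_buffers, max_buffers;
      struct v4l2_requestbuffers breq = { 0, };

      gst_buffer_pool_config_get_params (config, &caps, &size,
          &min_buffers, &max_buffers);

      breq.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
      breq.memory = V4L2_MEMORY_MMAP;
      breq.count = max_buffers;

      /* The driver may grant fewer buffers than requested. */
      if (ioctl (self->video_fd, VIDIOC_REQBUFS, &breq) >= 0)
        max_buffers = breq.count;

      /* Let the parent bufferpool preallocate the granted amount. */
      gst_buffer_pool_config_set_params (config, caps, size,
          min_buffers, max_buffers);

      return GST_BUFFER_POOL_CLASS (my_v4l2_pool_parent_class)->set_config (pool,
          config);
    }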
In order to support UVC H264 encoding cameras, an H264 Probe&Commit
must happen before the normal v4l2 set-format. This new signal is
meant to allow an external application or bin to do it.
It also serves to expose the file descriptor used by v4l2src in case
some custom ioctls need to be called.
Conflicts:
sys/v4l2/Makefile.am
sys/v4l2/gstv4l2src.c
sys/v4l2/v4l2src_calls.c
When nobody is using our pool, activate it ourselves.
Avoid leaking the buffer array.
Set default pool configuration with caps.
Don't keep current_caps, core does that for us now.
Use the more specialized type for the bufferpool.
Use the size from the driver as the size of the image to read.
Don't configure the pool when created. This will be done in the setup_allocation
method later or by upstream for sinks.
Remove unused properties and variables. Bufferpool sizes are now configured in
the bufferpool by the elements in the pipeline. We might want to influence the
pool size later somehow.
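The "activate it ourselves" and "default configuration with caps" items above roughly correspond to the standard GstBufferPool calls (a sketch only; names and parameter values are illustrative):

    #include <gst/gst.h>

    static gboolean
    activate_own_pool (GstBufferPool * pool, GstCaps * caps, guint size,
        guint min_buffers, guint max_buffers)
    {
      GstStructure *config = gst_buffer_pool_get_config (pool);

      /* The default configuration carries the negotiated caps and buffer size. */
      gst_buffer_pool_config_set_params (config, caps, size,
          min_buffers, max_buffers);

      if (!gst_buffer_pool_set_config (pool, config))
        return FALSE;

      /* If no downstream element took over the pool, activate it ourselves
       * so buffers can be acquired for capture. */
      return gst_buffer_pool_set_active (pool, TRUE);
    }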