Calculation of num_buffers (the max latency in buffers) was
upside-down. If we can allocate, then our maximum latency matches the
pool's maximum number of buffers. Also renamed it to max_latency. Finally,
introduced a min_latency for clarity.
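A minimal sketch of the corrected logic; can_allocate, min_buffers and max_buffers
are illustrative names, not the actual fields:

    guint min_latency, max_latency;

    if (can_allocate) {
      /* the driver can allocate up to max_buffers on demand, so that is
       * the worst-case latency in buffers */
      max_latency = max_buffers;
    } else {
      /* otherwise we are bound by what was actually allocated */
      max_latency = min_buffers;
    }

    /* the device always keeps at least min_buffers queued */
    min_latency = min_buffers;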
This moves away from copying information around and stores everything inside
the GstVideoInfo structure. The alignment exposed by the v4l2 API
is now handled using proper offsets.
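A rough sketch of the idea using the stock GstVideoInfo helpers; the alignment
values would come from the driver, everything here is illustrative:

    GstVideoInfo info;
    GstVideoAlignment align;

    gst_video_info_from_caps (&info, caps);

    /* record the driver's padding requirements in the video info instead
     * of keeping separate copied fields around */
    gst_video_alignment_reset (&align);
    /* ... fill align from the v4l2 format ... */
    gst_video_info_align (&info, &align);

    /* per-plane layout, including driver alignment, now comes straight
     * from the offsets and strides stored in the GstVideoInfo */
    gsize offset = GST_VIDEO_INFO_PLANE_OFFSET (&info, 0);
    gint stride = GST_VIDEO_INFO_PLANE_STRIDE (&info, 0);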
Returning a buffer from another pool has unwanted side effects that lead to leaks and
prevent deactivating the pool. Instead, we change the _process() API so it can
replace the internal buffer with the buffer from the downstream pool. This implied
moving from the _fill() to the _create() method in the src.
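A sketch of the resulting flow on the src side; the pool field name and the
_process() taking a GstBuffer ** are assumptions for illustration:

    static GstFlowReturn
    gst_v4l2src_create (GstPushSrc * src, GstBuffer ** buf)
    {
      GstV4l2Src *v4l2src = GST_V4L2SRC (src);
      GstFlowReturn ret;

      /* acquire a buffer from our own pool ... */
      ret = gst_buffer_pool_acquire_buffer (v4l2src->pool, buf, NULL);
      if (ret != GST_FLOW_OK)
        return ret;

      /* ... and let _process() swap it for a buffer from the downstream
       * pool when needed, which the old fill() vmethod could not do */
      return gst_v4l2_buffer_pool_process (GST_V4L2_BUFFER_POOL (v4l2src->pool), buf);
    }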
For formats like mpegts, width and height are rarely in the negotiated caps. This
patch fixes a failure when setting the format, and prevents introducing width, height,
framerate and format into the caps when fixating.
https://bugzilla.gnome.org/show_bug.cgi?id=725860
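A sketch of the fixation side, using the stock GstStructure helpers (the target
values are only placeholders):

    GstStructure *s = gst_caps_get_structure (caps, 0);

    /* only fixate fields that are actually present, so container formats
     * like mpegts do not get width/height/framerate forced on them */
    if (gst_structure_has_field (s, "width"))
      gst_structure_fixate_field_nearest_int (s, "width", 320);
    if (gst_structure_has_field (s, "height"))
      gst_structure_fixate_field_nearest_int (s, "height", 240);
    if (gst_structure_has_field (s, "framerate"))
      gst_structure_fixate_field_nearest_fraction (s, "framerate", 30, 1);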
As cameras tend to have a quite specific set of capabilities (specific
framerates for each resolution), getting the peer caps filtered by our
probed caps can cause a big increase in the caps size, which slows
things down quite a bit.
Since for negotiation v4l2 iterates through the caps of the peer to find the
first intersection with the probed caps, getting the fully expanded
intersection of capabilities is not useful.
Using the same testcase as for bug #702632, adding this patch on top of
the patches suggested there speeds up getting the initial frame from
around 14-15 seconds to around 3-4 seconds.
https://bugzilla.gnome.org/show_bug.cgi?id=702638
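Roughly, negotiation can query the peer unfiltered and stop at the first usable
intersection instead of computing the full filtered result; names below are
illustrative:

    GstCaps *peer_caps = gst_pad_peer_query_caps (GST_BASE_SRC_PAD (src), NULL);
    GstCaps *result = NULL;
    guint i;

    for (i = 0; i < gst_caps_get_size (peer_caps); i++) {
      /* intersect one peer structure at a time with the probed caps */
      GstCaps *candidate = gst_caps_copy_nth (peer_caps, i);
      GstCaps *isect = gst_caps_intersect (candidate, probed_caps);

      gst_caps_unref (candidate);
      if (!gst_caps_is_empty (isect)) {
        /* first match wins, no need to expand everything */
        result = isect;
        break;
      }
      gst_caps_unref (isect);
    }
    gst_caps_unref (peer_caps);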
This can happen if other parts of the pipeline are reconfigured.
Stopping streaming even for a short amount of time can be quite visible, so it
should be avoided if possible.
https://bugzilla.gnome.org/show_bug.cgi?id=700503
v4l has added a new IOCTL to export a buffer by using dmabuf.
This patch allows using this new IOCTL if it has been defined in videodev2.h.
It introduces a new IO mode (GST_V4L2_IO_DMABUF) to enable this way of working.
https://bugzilla.gnome.org/show_bug.cgi?id=693826
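At the ioctl level the export looks roughly like this, guarded on VIDIOC_EXPBUF
being available in videodev2.h (assumes <linux/videodev2.h>, <sys/ioctl.h> and
<string.h>; the index and fd names are illustrative):

    #ifdef VIDIOC_EXPBUF
      struct v4l2_exportbuffer expbuf;

      memset (&expbuf, 0, sizeof (expbuf));
      expbuf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
      expbuf.index = index;      /* index of the mmap'ed buffer to export */

      if (ioctl (video_fd, VIDIOC_EXPBUF, &expbuf) < 0)
        goto expbuf_failed;

      /* expbuf.fd is now a dmabuf file descriptor that can be wrapped in a
       * GstMemory with the dmabuf allocator and pushed downstream */
    #endif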
With the port to GStreamer 1.0 the prepare-format signal stopped being
emitted. Start emitting this again for use in uvch264src. While there,
change the emission to include the caps for extra flexibility instead of
fourcc, width and height.
https://bugzilla.gnome.org/show_bug.cgi?id=692042
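The emission now looks roughly like this, passing the caps instead of the
separate fourcc/width/height arguments (the signal id and field names are
illustrative):

    /* before: g_signal_emit (v4l2src, signals[SIGNAL_PREPARE_FORMAT], 0,
     *                        fd, fourcc, width, height);                 */
    g_signal_emit (v4l2src, signals[SIGNAL_PREPARE_FORMAT], 0,
        v4l2src->v4l2object->video_fd, caps);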
Sample the pipeline clock and device clock closer to each other to reduce jitter.
Don't subtract the frame duration from the timestamp when we can use the device
timestamps.
Assume a delay of 1 frame in read-write mode.
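A sketch of the timestamping idea, assuming a monotonic device clock and
illustrative variable names:

    GstClock *clock = GST_ELEMENT_CLOCK (src);
    GstClockTime base_time = GST_ELEMENT_CAST (src)->base_time;
    struct timespec now;
    GstClockTime pipeline_now, device_now, device_ts, timestamp;

    /* sample both clocks back to back so the delta between them stays small */
    pipeline_now = gst_clock_get_time (clock);
    clock_gettime (CLOCK_MONOTONIC, &now);
    device_now = GST_TIMESPEC_TO_TIME (now);

    /* v4l2 reports the capture time on the device clock */
    device_ts = GST_TIMEVAL_TO_TIME (vbuf.timestamp);

    /* translate to running time; no frame-duration subtraction needed when
     * the device timestamp can be used directly */
    timestamp = (pipeline_now - base_time) - (device_now - device_ts);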
Query the amount of available buffers when doing set_config(). This allows us to
configure the parent bufferpool with the number of buffers to preallocate.
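A sketch of that query, assuming VIDIOC_REQBUFS is used to find out how many
buffers the driver will actually hand out (the fd and MMAP memory type are
illustrative):

    struct v4l2_requestbuffers breq;

    memset (&breq, 0, sizeof (breq));
    breq.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    breq.memory = V4L2_MEMORY_MMAP;
    breq.count = max_buffers;

    if (ioctl (video_fd, VIDIOC_REQBUFS, &breq) < 0)
      goto reqbufs_failed;

    /* breq.count is what the driver actually gave us; hand that to the
     * parent pool so it preallocates the right number of buffers */
    gst_buffer_pool_config_set_params (config, caps, size, breq.count, breq.count);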
Keep track of the provided allocator and use it when we need to allocate a
buffer in RW mode.
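Something along these lines, using the standard bufferpool config helpers (the
pool fields are illustrative):

    GstAllocator *allocator = NULL;
    GstAllocationParams params;

    gst_buffer_pool_config_get_allocator (config, &allocator, &params);

    /* remember the provided allocator for later */
    if (pool->allocator)
      gst_object_unref (pool->allocator);
    pool->allocator = allocator ? gst_object_ref (allocator) : NULL;
    pool->params = params;

    /* ... later, when a buffer is needed in RW mode ... */
    buffer = gst_buffer_new_allocate (pool->allocator, size, &pool->params);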
When we cannot allocate the requested max_buffers amount of buffers, make
sure we keep 2 buffers around in the pool and copy them into an output buffer.
This makes sure that we always have a buffer to capture into. We also need to
detect those copied buffers and unref them when they return to the pool.
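A sketch of how the copies could be tagged and detected; the quark name is
purely illustrative:

    #define COPIED_QUARK g_quark_from_static_string ("v4l2-copied-buffer")

    /* capture into one of the two reserved pool buffers, then push a copy */
    GstBuffer *outbuf = gst_buffer_copy (pool_buf);
    gst_mini_object_set_qdata (GST_MINI_OBJECT_CAST (outbuf), COPIED_QUARK,
        GINT_TO_POINTER (TRUE), NULL);

    /* when a buffer comes back to the pool, drop the copies instead of
     * queueing them for capture */
    if (gst_mini_object_get_qdata (GST_MINI_OBJECT_CAST (buf), COPIED_QUARK))
      gst_buffer_unref (buf);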
In order to support UVC H264 encoding cameras, an H264 Probe&Commit
must happen before the normal v4l2 set-format. This new signal is
meant to allow an external application or bin to do it.
It also serves to expose the file descriptor used by v4l2src in case
some custom ioctls need to be called.
Conflicts:
sys/v4l2/Makefile.am
sys/v4l2/gstv4l2src.c
sys/v4l2/v4l2src_calls.c
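A sketch of how an application or wrapper bin might hook the prepare-format
signal described above; the callback arguments follow the caps-based signature
and the names are illustrative:

    static void
    on_prepare_format (GstElement * v4l2src, gint fd, GstCaps * caps,
        gpointer user_data)
    {
      /* run the UVC H264 Probe & Commit, or any other custom ioctls,
       * on fd before v4l2src performs its own format setup */
    }

    /* when building the pipeline: */
    g_signal_connect (v4l2src, "prepare-format",
        G_CALLBACK (on_prepare_format), NULL);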