When the v4l2 device is an output device, the application shall set the
colorspace. So map the GStreamer colorimetry info to a V4L2 colorspace
and set it in set_format. When we have no colorimetry information, try
to guess it from the pixel format and video size.
https://bugzilla.gnome.org/show_bug.cgi?id=737579
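A minimal sketch of the mapping and the guess (the function name and the
exact guessing rules are illustrative, not the actual gstv4l2object.c code):

    #include <gst/video/video.h>
    #include <linux/videodev2.h>

    /* Map GStreamer colorimetry to a V4L2 colorspace; when the caps carry
     * no colorimetry, fall back to common conventions: sRGB for RGB
     * formats, BT.601 for SD sizes, BT.709 for larger sizes. */
    static enum v4l2_colorspace
    guess_v4l2_colorspace (const GstVideoInfo * info)
    {
      switch (info->colorimetry.matrix) {
        case GST_VIDEO_COLOR_MATRIX_BT601:
          return V4L2_COLORSPACE_SMPTE170M;
        case GST_VIDEO_COLOR_MATRIX_BT709:
          return V4L2_COLORSPACE_REC709;
        default:
          break;                /* unknown: guess below */
      }

      if (GST_VIDEO_INFO_IS_RGB (info))
        return V4L2_COLORSPACE_SRGB;

      /* SD-sized video is conventionally BT.601, larger is BT.709. */
      return (info->width <= 720) ? V4L2_COLORSPACE_SMPTE170M
          : V4L2_COLORSPACE_REC709;
    }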
This will prevent the converter from being picked automatically in case
someone implements dynamic converter selection support. I'd like this to
be ranked only for known devices, as it's hard to be sure a device is a
converter suited for general purposes. Re-negotiation is also needed
before we can rank it.
https://bugzilla.gnome.org/show_bug.cgi?id=733607
Even though the UVC driver goes to a great deal of effort to prevent bad
timestamps from being sent to userspace, there still exists UVC hardware
so buggy that the timestamps end up nearly random. This code detects and
ignores timestamps from such drivers, making these cameras usable.
This has been tested on both invalid and valid cameras, making sure it
does not trigger for valid ones.
https://bugzilla.gnome.org/show_bug.cgi?id=732910
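An illustrative sketch of the detection idea (the helper name and the 10
second threshold are assumptions for this sketch, not the actual values
used by the code):

    #include <glib.h>
    #include <linux/videodev2.h>

    /* Compare the driver-supplied capture timestamp against the current
     * system time.  A sane UVC timestamp is close to "now"; the buggy
     * hardware produces essentially random values, so a few wildly-off
     * samples in a row are enough to stop trusting the driver clock. */
    static gboolean
    timestamp_looks_valid (const struct v4l2_buffer * buf)
    {
      gint64 ts = (gint64) buf->timestamp.tv_sec * G_USEC_PER_SEC
          + buf->timestamp.tv_usec;
      gint64 now = g_get_real_time ();  /* microseconds, gettimeofday epoch */

      /* Allow a generous window for queuing delay (threshold is an
       * assumption for this sketch). */
      return ABS (ts - now) < 10 * G_USEC_PER_SEC;
    }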
There are still around 18 drivers not yet ported to videobuf2. These
drivers don't support freeing buffers through REQBUFS(0), hence for these
the memory type probing fails. In order to get back our previous
behaviour in their presence, implement a workaround that assumes MMAP is
supported. Note that an allocator is only created for devices with
STREAMING support in the device capabilities, in which case one of MMAP,
USERPTR and DMABUF is required. DMABUF came afterward, so it is not an
option, and in practice none of these drivers supports only USERPTR.
https://bugzilla.gnome.org/show_bug.cgi?id=735660
Also-by: Hans de Goede <hdegoede@redhat.com>
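A simplified sketch of the probe and the MMAP fallback (the helper name
is illustrative):

    #include <errno.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <glib.h>
    #include <linux/videodev2.h>

    /* Ask for zero buffers of a given memory type.  videobuf2 drivers
     * accept REQBUFS(0) and thereby confirm the type is supported; old
     * videobuf1 drivers fail it, in which case we fall back to assuming
     * MMAP, which all of them support in practice. */
    static gboolean
    probe_memory_type (int fd, enum v4l2_buf_type type,
        enum v4l2_memory memory)
    {
      struct v4l2_requestbuffers req;

      memset (&req, 0, sizeof (req));
      req.type = type;
      req.memory = memory;
      req.count = 0;

      if (ioctl (fd, VIDIOC_REQBUFS, &req) < 0) {
        if (errno == EINVAL && memory == V4L2_MEMORY_MMAP)
          return TRUE;          /* likely a videobuf1 driver: assume MMAP */
        return FALSE;
      }
      return TRUE;
    }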
Since we can get the minimum number of buffers needed by an output
device to work, use it to set min_latency, which will determine how many
buffers are queued.
https://bugzilla.gnome.org/show_bug.cgi?id=736072
Most V4L2 ioctls like try_fmt will adjust input fields to match what the
hardware can do rather than returning -EINVAL. As documented here:
http://linuxtv.org/downloads/v4l-dvb-apis/vidioc-g-fmt.html
EINVAL is only returned if the buffer type field is invalid or not supported.
So upon requesting V4L2_FIELD_NONE, devices which can only do interlaced
mode will change the field value to e.g. V4L2_FIELD_BOTTOM, as only returning
half the lines is the closest they can do to progressive mode.
In essence this means that we've failed to get a (usable) progressive mode
and should fall back to interlaced mode.
This commit adds a check for having gotten a usable field value after the first
try_fmt, to force fallback to interlaced mode even if the try_fmt succeeded,
thereby fixing get_nearest_size failing on these devices.
https://bugzilla.gnome.org/show_bug.cgi?id=735660
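A minimal sketch of that fallback, assuming a plain file descriptor
(try_fmt_with_fallback is a made-up helper, not the actual gstv4l2object.c
code):

    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    /* Ask for progressive first; if the driver "succeeds" but hands back
     * an adjusted field value, treat that as a failure to get progressive
     * and retry with interlaced. */
    static int
    try_fmt_with_fallback (int fd, struct v4l2_format * fmt)
    {
      struct v4l2_format saved = *fmt;  /* TRY_FMT may adjust fields even on error */

      fmt->fmt.pix.field = V4L2_FIELD_NONE;
      if (ioctl (fd, VIDIOC_TRY_FMT, fmt) == 0
          && fmt->fmt.pix.field == V4L2_FIELD_NONE)
        return 0;               /* got a real progressive mode */

      /* The driver can't do progressive: restore the request and try
       * interlaced instead. */
      *fmt = saved;
      fmt->fmt.pix.field = V4L2_FIELD_INTERLACED;
      return ioctl (fd, VIDIOC_TRY_FMT, fmt);
    }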
The format fields may have been modified by the ioctl even if it failed,
so they need to be re-initialized. This also makes the S_FMT fallback path
try progressive first, making it consistent with the preferred TRY_FMT path.
https://bugzilla.gnome.org/show_bug.cgi?id=735660
If the minimum number of required buffers exceeds the V4L2 capacity,
don't share our pool downstream. This allows supporting very high
latency, like with x264enc default encoding settings.
https://bugzilla.gnome.org/show_bug.cgi?id=732288
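A sketch of the decision (the helper is hypothetical; the limit is the
kernel's VIDEO_MAX_FRAME, i.e. 32 buffers per queue):

    #include <glib.h>
    #include <linux/videodev2.h>

    /* V4L2 queues cannot hold more than VIDEO_MAX_FRAME (32) buffers, so
     * when downstream needs more than that, keep the V4L2 pool private and
     * let a plain system-memory pool absorb the latency instead. */
    static gboolean
    can_share_v4l2_pool (guint min_buffers_required)
    {
      return min_buffers_required <= VIDEO_MAX_FRAME;
    }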
Some v4l2 devices may require a minimum number of buffers different from
the default values. Rather than blindly proposing a pool with min-buffers
set to the default value, ask the device using the control ioctl.
https://bugzilla.gnome.org/show_bug.cgi?id=733750
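A minimal sketch of the query (the helper is hypothetical; the controls
themselves are standard V4L2):

    #include <string.h>
    #include <sys/ioctl.h>
    #include <glib.h>
    #include <linux/videodev2.h>

    /* The kernel exposes the driver's requirement through the read-only
     * V4L2_CID_MIN_BUFFERS_FOR_CAPTURE / _FOR_OUTPUT controls.  A failed
     * ioctl simply means the driver has no particular requirement. */
    static int
    get_min_buffers (int fd, gboolean is_output)
    {
      struct v4l2_control ctrl;

      memset (&ctrl, 0, sizeof (ctrl));
      ctrl.id = is_output ? V4L2_CID_MIN_BUFFERS_FOR_OUTPUT
          : V4L2_CID_MIN_BUFFERS_FOR_CAPTURE;

      if (ioctl (fd, VIDIOC_G_CTRL, &ctrl) < 0)
        return -1;              /* control not implemented: use defaults */

      return ctrl.value;
    }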
These checks are no longer required with the recent changes to the
bufferpool. This should allow changing the configuration, paving the way
for renegotiation support.
https://bugzilla.gnome.org/show_bug.cgi?id=728268
We cannot allocate a new buffer in acquire, otherwise the base class is
not aware of it, gets confused and crashes on finalize. Instead, do the
copy in _process().
Fixes regression, see https://bugzilla.gnome.org/show_bug.cgi?id=732912
Parenting V4l2Memory to DmabufMemory conflicted with a recent
optimization in DmabufMemory to avoid dup(), and didn't work with memory
sharing. Instead, use a qdata and its destroy notify.
https://bugzilla.gnome.org/show_bug.cgi?id=730441
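A minimal sketch of the qdata attachment (quark name, helper names and
the release callback are illustrative):

    #include <gst/gst.h>

    /* Hang the V4l2Memory off the exported dmabuf memory as qdata; the
     * destroy notify fires when the dmabuf memory is freed, at which point
     * the V4L2 buffer can go back to the queue.  No parenting is involved,
     * so DmabufMemory's dup() avoidance and memory sharing keep working. */
    static GQuark
    v4l2_memory_quark (void)
    {
      static GQuark quark = 0;

      if (quark == 0)
        quark = g_quark_from_static_string ("GstV4l2Memory");
      return quark;
    }

    static void
    attach_v4l2_memory (GstMemory * dmabuf_mem, gpointer v4l2_mem,
        GDestroyNotify release)
    {
      gst_mini_object_set_qdata (GST_MINI_OBJECT_CAST (dmabuf_mem),
          v4l2_memory_quark (), v4l2_mem, release);
    }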
Before, we would hit the assertion "'gst_buffer_pool_is_active (bpool)' failed"
if the internal pool was not used to push buffers downstream, hence never
given to the base class.
https://bugzilla.gnome.org/show_bug.cgi?id=732912
If a special stride is needed and downstream doesn't support VideoMeta,
the pool might be NULL, in order to let the base class create a generic
pool. This would lead to an assertion on Exynos with:
gst-launch-1.0 -v filesrc location=mov ! qtdemux ! h264parse ! \
v4l2video8dec ! fakesink
https://bugzilla.gnome.org/show_bug.cgi?id=732707
In kernels before 3.17, polling during a queue underrun would unblock right
away and trigger POLLERR. As we are not handling POLLERR, we would end up
blocking in the DQBUF call, which isn't unblocked correctly when going to
the NULL state. A deadlock at start, caused by a locking error in libv4l2,
was also seen before this patch. Instead, wait until the queue is no longer
empty before polling.
https://bugzilla.gnome.org/show_bug.cgi?id=731015
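An illustrative sketch of the wait (the struct and names are hypothetical,
not the actual gstv4l2bufferpool.c types):

    #include <glib.h>

    typedef struct
    {
      GMutex lock;
      GCond cond;
      guint num_queued;         /* buffers currently queued in the driver */
      gboolean flushing;
    } BufferQueue;

    /* Block until at least one buffer is queued before polling the device.
     * Pre-3.17 kernels return POLLERR immediately on an empty queue, which
     * we don't handle, so simply never poll an empty queue. */
    static gboolean
    wait_for_queued_buffer (BufferQueue * q)
    {
      g_mutex_lock (&q->lock);
      while (q->num_queued == 0 && !q->flushing)
        g_cond_wait (&q->cond, &q->lock);
      g_mutex_unlock (&q->lock);

      return !q->flushing;      /* FALSE: unblocked by going to NULL state */
    }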
The code enumerating STEPWISE framesizes would start from (min_w, min_h)
and then add (step_w, step_h) to get the next framesize. However, it
should really allow any width from min_w to max_w in increments of
step_w, and likewise for heights. Secondly, we would add each individual
stepped frame size to the probed caps as a separate structure, which
would lead to hundreds if not thousands of structs ending up in the
probed caps. Use integer ranges with steps instead.
This was particularly noticeable with the Raspberry Pi Cam.
https://bugzilla.gnome.org/show_bug.cgi?id=724521
https://bugzilla.gnome.org/show_bug.cgi?id=732458
https://bugzilla.gnome.org/show_bug.cgi?id=726521
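A minimal sketch of the range-based approach (the helper name and rounding
macros are illustrative; the real code handles more corner cases):

    #include <gst/gst.h>
    #include <linux/videodev2.h>

    #define ROUND_UP(v, a)   (((v) + (a) - 1) / (a) * (a))
    #define ROUND_DOWN(v, a) ((v) / (a) * (a))

    /* Publish one structure with stepped integer ranges instead of one
     * structure per (width, height) combination.  GStreamer requires the
     * range bounds to be multiples of the step, hence the rounding. */
    static void
    add_stepwise_sizes (GstStructure * s,
        const struct v4l2_frmsize_stepwise * sw)
    {
      GValue w = G_VALUE_INIT, h = G_VALUE_INIT;

      g_value_init (&w, GST_TYPE_INT_RANGE);
      gst_value_set_int_range_step (&w,
          ROUND_UP (sw->min_width, sw->step_width),
          ROUND_DOWN (sw->max_width, sw->step_width), sw->step_width);

      g_value_init (&h, GST_TYPE_INT_RANGE);
      gst_value_set_int_range_step (&h,
          ROUND_UP (sw->min_height, sw->step_height),
          ROUND_DOWN (sw->max_height, sw->step_height), sw->step_height);

      /* gst_structure_take_value() takes ownership of the GValues. */
      gst_structure_take_value (s, "width", &w);
      gst_structure_take_value (s, "height", &h);
    }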
This workaround from 2011 was causing 25 S_FMT ioctls to be sent to my
UVC webcam from under gst_v4l2_object_get_caps as it probes all the
formats. In total, this adds up to about 5 seconds of execution time, or
a 10 second delay while starting up cheese. The workaround dates from a
time when TRY_FMT might make changes to hardware settings, so S_FMT was
used to restore the original config:
https://bugzilla.gnome.org/show_bug.cgi?id=649067
The driver bug is now assumed fixed. Remove the workaround to fix the
long startup delay.
https://bugzilla.gnome.org/show_bug.cgi?id=732326
This is required because during preroll we pass the first buffer twice,
hence it is already queued. It is also useful to allow filters to replay
previously rendered buffers. This will require 1 more buffer in the sink
if last-sample is enabled, since the last sample will not be the same as
the currently queued buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=722303
For output devices, we were queuing all the buffers and then dequeuing
one. This means we only had 1 buffer for the pipeline, no matter the
size of the queue. Instead, start dequeuing when min_latency is reached.
Eventually, min_latency should also be affected by the
MIN_BUFFERS_FOR_OUTPUT control (used by encoders).
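An illustrative sketch of that threshold, with hypothetical
queue_buffer()/dequeue_buffer() helpers wrapping QBUF/DQBUF:

    #include <gst/gst.h>

    /* Hypothetical helpers wrapping VIDIOC_QBUF / VIDIOC_DQBUF. */
    static GstFlowReturn queue_buffer (GstBufferPool * pool, GstBuffer * buf);
    static GstFlowReturn dequeue_buffer (GstBufferPool * pool);

    /* Keep queuing on the output device until min_latency buffers are in
     * flight; only then pair each QBUF with a DQBUF, so the driver always
     * holds min_latency buffers instead of just one. */
    static GstFlowReturn
    process_output_buffer (GstBufferPool * pool, GstBuffer * buf,
        guint * num_queued, guint min_latency)
    {
      GstFlowReturn ret = queue_buffer (pool, buf);

      if (ret != GST_FLOW_OK)
        return ret;

      if (++(*num_queued) <= min_latency)
        return GST_FLOW_OK;     /* keep filling the queue */

      (*num_queued)--;
      return dequeue_buffer (pool);
    }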
The calculation of num_buffers (the max latency in buffers) was upside
down. If we can allocate, then our maximum latency matches the pool's
maximum number of buffers. Also renamed it to max_latency, and introduced
a min_latency for clarity.
Fix the choice of min/max, don't override the min/max with our own pool's
selected size, correct the other_pool is_active check, start from the
other_pool config when configuring the other pool, and finally validate
the configuration.
The decoding thread returning flushing isn't an error; we should simply
try starting the task again. If it's actually flushing, it will stop
again by itself.