Offsets are relative to the buffer and there is no guarantee that subtracting
them will give us the plane size. So let the bufferpool do the math, as it is
more aware of the video info than the allocator, and pass a size array to the
allocator's import function.
Pointed out by Nicolas Dufresne <nicolas.dufresne@collabora.com>
https://bugzilla.gnome.org/show_bug.cgi?id=738013
When there are no allocation parameters in the query, enable the copy
threshold. When the pool reaches that critical level, it will start
copying buffers. If the driver supports CREATE_BUFS, that will be used
instead.
When we hit emulated formats, disable CREATE_BUFS since libv4l2
copes very badly with it. Also clear the allocator flags so we
never try to allocate more buffers. This fixes a failure when the copy
threshold is reached, as we were calling CREATE_BUFS, which led to
libv4l2 instability.
In the fraction 1/2, 1 is the numerator and 2 is the denominator.
The arguments of gst_value_set_fraction() are value, numerator
and denominator.
Also, gst_value_set_fraction() fails if the denominator is 0, for obvious
reasons.
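For illustration, a minimal sketch of the corrected usage (the "framerate" field and the helper name are just examples):

#include <gst/gst.h>

/* Sketch: set a framerate field on a caps structure.
 * gst_value_set_fraction() takes (value, numerator, denominator),
 * and the denominator must never be 0. */
static void
set_framerate (GstStructure * s, gint num, gint denom)
{
  GValue fps = G_VALUE_INIT;

  g_value_init (&fps, GST_TYPE_FRACTION);
  gst_value_set_fraction (&fps, num, denom);    /* e.g. 30/1 */
  gst_structure_take_value (s, "framerate", &fps);
}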
When importing buffers from a downstream pool, we need to deactivate
that pool to ensure it will be usable again later. Relying on the
refcount to reach zero does not work, since elements like xvimagesink
keep a reference to their proposed pool.
When memory that has been shared using gst_memory_share() is freed,
the backing memory (or the DMABUF FD) should not be freed, since these
memories have a parent. This also removes the extra _v4l2mem_free function
and avoids calling close() twice on the DMABUF FD.
https://bugzilla.gnome.org/show_bug.cgi?id=744573
Replace the sink_query with the new getcaps() virtual method and use the proxy
helper with the probed caps. This allows upstream elements to take decisions
based on what is supported downstream.
The v4l2loopback driver has a nasty bug: if the queue is larger
than 2 buffers, it returns a random index on dqbuf. So far we assumed
that the index was always right, which would lead to memory being
unreffed twice and eventually a crash.
As the buffer array is of fixed and small size, it's safer to simply
use this static size to clean up the buffers. This is also more
consistent with the rest of the code. The associated method is no longer
required and can be dropped.
This partly reverts to the old 1.2 behavior. Instead of keeping a
reference to the queued output buffers, we simply release them but
don't forward them to GstBufferPool. This way, the buffer pool doesn't
need to be flushed to be stopped.
https://bugzilla.gnome.org/show_bug.cgi?id=742074
A failing streamoff prevents the allocator from being disposed, hence
leads to a device FD leak. There are no known cases where streamoff
may fail and we'd still be streaming; streamoff is known
to fail when a device is being unplugged (in which case errno
19/ENODEV is set).
https://bugzilla.gnome.org/show_bug.cgi?id=732734
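As a rough sketch of the tolerant teardown path (the function name and logging are hypothetical):

#include <errno.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <glib.h>

/* Sketch: don't treat VIDIOC_STREAMOFF failure (e.g. ENODEV after an
 * unplug) as fatal, so teardown continues and the device FD is released. */
static void
example_streamoff (gint fd, enum v4l2_buf_type type)
{
  if (ioctl (fd, VIDIOC_STREAMOFF, &type) < 0)
    g_warning ("VIDIOC_STREAMOFF failed: %s (continuing teardown)",
        g_strerror (errno));
}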
It looks like libv4l2 support for CREATE_BUFS is incomplete. That,
combined with existing bugs, may lead to crashes in GStreamer. These
checks make it robust by:
- Checking that the CREATE_BUFS index isn't one already in use
- Checking that the index returned by QUERYBUF matches the requested
  index
Right now we try to be clever by detecting whether the device format has
changed or not, and skip setting the format in that case. This is valid
behaviour with V4L2, but it's also very error prone. The rationale
for not setting it every time is speed, though I can't
measure any noticeable gain on any HW I own. Also, until recently,
we were doing get/set on the format for each format we were
probing, making it nearly impossible for the format to match.
This also fixes a bug where we were skipping the frame-rate setting if
the format didn't change.
https://bugzilla.gnome.org/show_bug.cgi?id=740636
If v4l2_buffer.field is V4L2_FIELD_INTERLACED, we set the corresponding
GstVideoBuffer flags depending on the video standard.
According to the V4L2 specification, M/NTSC transmits the bottom field
first, all other standards the top field first.
https://bugzilla.gnome.org/show_bug.cgi?id=737603
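A minimal sketch of the flag mapping (the helper name is hypothetical):

#include <gst/video/video.h>
#include <linux/videodev2.h>

/* Sketch: mark an interlaced capture buffer and pick the field order
 * from the standard (M/NTSC is bottom-field-first, others top-field-first). */
static void
example_set_field_flags (GstBuffer * buf, v4l2_std_id std)
{
  GST_BUFFER_FLAG_SET (buf, GST_VIDEO_BUFFER_FLAG_INTERLACED);
  if (!(std & V4L2_STD_NTSC))
    GST_BUFFER_FLAG_SET (buf, GST_VIDEO_BUFFER_FLAG_TFF);
}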
When libv4l2 emulates RW mode on top of MMAP devices, the queues are
only initialized on the first read. The problem is that poll() will fail
if called before the queues are initialized and streaming. Work around
this by doing a zero-size read when the pool is started in that I/O mode.
https://bugzilla.gnome.org/show_bug.cgi?id=740633
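A sketch of the workaround (assuming plain read(); the actual code may go through the libv4l2 wrapper):

#include <unistd.h>
#include <errno.h>
#include <glib.h>

/* Sketch: a zero-length read forces libv4l2's RW emulation to set up its
 * queues and start streaming, so a later poll() does not fail. */
static void
example_prime_rw_emulation (gint fd)
{
  if (read (fd, NULL, 0) < 0)
    g_warning ("zero-length read failed: %s", g_strerror (errno));
}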
This patch fixes CREATE_BUFS support for capture devices. Initially we
would only try to allocate more buffers when the copy threshold
was reached; when the threshold was not set (not needed), that would never
happen. Another problem is that on the capture side, acquire returns a
filled buffer, hence needs to poll. We need to set a special flag to
force allocation to happen.
https://bugzilla.gnome.org/show_bug.cgi?id=741134
This allows skipping buffers flagged with ERROR that have no payload.
This is typical behaviour when a recoverable error occurred during
capture in the driver, but no valid data was ever written into the
buffer. This patch also translates V4L2_BUF_FLAG_ERROR into
GST_BUFFER_FLAG_CORRUPTED. Hence decoding errors produced
by a decoder due to missing frames will now be correctly marked. Finally,
this fixes a buffer leak when EOS is reached.
https://bugzilla.gnome.org/show_bug.cgi?id=740040
If the v4l2 queue supports dmabuf, select this buffer pool mode
and update the query with the allocator.
This patch only concerns exporting dmabuf, not importing dmabuf
FDs from downstream elements.
https://bugzilla.gnome.org/show_bug.cgi?id=699382
Improve buffer validation by making sure each memory is the right
one and that each memory is writable. This fixes tearing issues in
case downstream uses gst_buffer_make_writable() or another type
of GstBuffer copy where the memory is only reffed.
https://bugzilla.gnome.org/show_bug.cgi?id=739754
Rather than try and guess interlace support as part of checking supported
sizes, look for interlace support specifically in its own function.
As a cleanup, use V4L2_FIELD_ANY when probing sizes, which should result in
the driver doing the right thing.
With my capture setup, this gets me the following sample caps:
For 1080i resolution:
video/x-raw, format=(string)YUY2, width=(int)1920, height=(int)1080, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)interleaved, framerate=(fraction){ 25/1, 30/1 }
For 720p resolution:
video/x-raw, format=(string)YUY2, width=(int)1280, height=(int)720, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, framerate=(fraction){ 50/1, 60/1 }
For 576i/p resolution (both possible at the point of query):
video/x-raw, format=(string)YUY2, width=(int)720, height=(int)576, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string){ progressive, interleaved }, framerate=(fraction){ 25/1, 50/1 }
This, in turn, makes 576i work correctly; with the old code,
the caps would be interlace-mode=progressive for interlaced video.
https://bugzilla.gnome.org/show_bug.cgi?id=726194
On streamon failure, the queued buffer is not released from the
bufferpool class's point of view, because it is queued to the driver and
the flush logic is not performed since we are not in streaming state.
This causes the v4l2 bufferpool to always report that the stop method failed
and to leak v4l2 objects and buffers.
This commit solves this by performing the flush logic in the error case, i.e.
flushing the allocator and restoring the queued buffers' state to non-queued.
https://bugzilla.gnome.org/show_bug.cgi?id=738102
This will prevent deadlocks, but will also properly flush the pool and allocator
when going to READY state. It should also fix issues reported on the mailing
list when seeking is performed.
https://bugzilla.gnome.org/show_bug.cgi?id=738152
When the v4l2 device is an output device, the application shall set the
colorspace. So map the GStreamer colorimetry info to a V4L2 colorspace and
set it in set_format. In case we have no colorimetry information, we try to
guess it according to the pixel format and video size.
https://bugzilla.gnome.org/show_bug.cgi?id=737579
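A simplified sketch of such a mapping (the exact table used by the patch is richer; the size-based fallback rule here is only an assumption):

#include <gst/video/video.h>
#include <linux/videodev2.h>

/* Sketch: derive a V4L2 colorspace from GStreamer colorimetry, guessing
 * from the video size when no colorimetry is known. */
static enum v4l2_colorspace
example_map_colorspace (const GstVideoInfo * info)
{
  switch (info->colorimetry.primaries) {
    case GST_VIDEO_COLOR_PRIMARIES_BT709:
      return V4L2_COLORSPACE_REC709;
    case GST_VIDEO_COLOR_PRIMARIES_SMPTE170M:
      return V4L2_COLORSPACE_SMPTE170M;
    default:
      /* unknown: assume HD is Rec.709 and SD is SMPTE 170M */
      return (info->width >= 1280) ? V4L2_COLORSPACE_REC709
          : V4L2_COLORSPACE_SMPTE170M;
  }
}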
This will prevent the converter from being picked automatically in case
someone implements dynamic converter selection support. I'd like this
to be ranked only for known devices, as it's hard to be sure a device is
a converter suited for general purposes. Re-negotiation is also needed
before we can rank it.
https://bugzilla.gnome.org/show_bug.cgi?id=733607
Even though the UVC driver goes to a great deal of effort to prevent bad
timestamps from being sent to userspace, there still exists UVC hardware
so buggy that the timestamps end up nearly random. This code detects
and ignores timestamps from these drivers, making these cameras usable.
This has been tested on both invalid and valid cameras, making sure it
does not trigger for valid cameras.
https://bugzilla.gnome.org/show_bug.cgi?id=732910
There are still around 18 drivers not yet ported to videobuf2. These drivers
don't support freeing buffers through REQBUFS(0), hence for these the
memory type probing fails. In order to get back our previous behaviour in
their presence, we implement a workaround that assumes MMAP is
supported. Note that an allocator is only created for devices with
STREAMING support in the device capabilities; in that case one of MMAP,
USERPTR and DMABUF is required. DMABUF came afterward, so it is
not an option, and in practice none of these drivers will only do USERPTR.
https://bugzilla.gnome.org/show_bug.cgi?id=735660
Also-by: Hans de Goede <hdegoede@redhat.com>
Since we can get the minimum number of buffers needed by an output
device to work, use it to set min_latency which will determine how many
buffers are queued.
https://bugzilla.gnome.org/show_bug.cgi?id=736072
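Roughly, the query looks like this (the helper name is hypothetical):

#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <glib.h>

/* Sketch: ask the driver how many buffers the output queue needs before
 * it can operate; 0 means the control is not supported. */
static guint
example_get_min_output_buffers (gint fd)
{
  struct v4l2_control ctl = { 0, };

  ctl.id = V4L2_CID_MIN_BUFFERS_FOR_OUTPUT;
  if (ioctl (fd, VIDIOC_G_CTRL, &ctl) < 0)
    return 0;

  return ctl.value;
}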
Most V4L2 ioctls like try_fmt will adjust input fields to match what the
hardware can do rather than returning -EINVAL. As documented here:
http://linuxtv.org/downloads/v4l-dvb-apis/vidioc-g-fmt.html
EINVAL is only returned if the buffer type field is invalid or not supported.
So upon requesting V4L2_FIELD_NONE, devices which can only do interlaced
mode will change the field value to e.g. V4L2_FIELD_BOTTOM, as only returning
half the lines is the closest they can do to progressive modes.
In essence this means that we've failed to get a (usable) progressive mode
and should fall back to interlaced mode.
This commit adds a check for having gotten a usable field value after the first
try_fmt, to force the fallback to interlaced mode even if the try_fmt succeeded,
thereby fixing get_nearest_size failing on these devices.
https://bugzilla.gnome.org/show_bug.cgi?id=735660
They may have been modified by the ioctl even if it failed. This also makes
the S_FMT fallback path try progressive first, making it consistent with the
preferred TRY_FMT path.
https://bugzilla.gnome.org/show_bug.cgi?id=735660
If the minimum number of required buffers exceeds the V4L2 capacity, don't
share our pool downstream. This allows supporting very high latency, as with
x264enc's default encoding settings.
https://bugzilla.gnome.org/show_bug.cgi?id=732288
Some v4l2 devices may require a minimum number of buffers different from the
default values. Rather than blindly proposing a pool with min-buffers set to
the default value, ask the device using the control ioctl.
https://bugzilla.gnome.org/show_bug.cgi?id=733750
These checks are no longer required with the recent changes to the bufferpool.
This should allow changing the configuration, hence paving the way for
renegotiation support.
https://bugzilla.gnome.org/show_bug.cgi?id=728268
We cannot allocate a new buffer in acquire, otherwise the base class
is not aware of it and gets confused, which leads to a crash on finalize.
Instead, copy in _process().
Fixes regression, see https://bugzilla.gnome.org/show_bug.cgi?id=732912
Parenting V4l2Memory to DmabufMemory was in conflict with recent
optimization in DmabufMemory to avoid dup(), and didn't work with
memory sharing. Instead, use a qdata and its destroy notify.
https://bugzilla.gnome.org/show_bug.cgi?id=730441
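A sketch of the qdata approach (the quark name is made up):

#include <gst/gst.h>

/* Sketch: tie the V4L2 memory's lifetime to the exported dmabuf memory
 * with qdata, so it is released when the dmabuf memory is freed. */
static void
example_attach_v4l2_memory (GstMemory * dmabuf_mem, GstMemory * v4l2_mem)
{
  GQuark quark = g_quark_from_static_string ("GstV4l2Memory");

  gst_mini_object_set_qdata (GST_MINI_OBJECT (dmabuf_mem), quark,
      v4l2_mem, (GDestroyNotify) gst_memory_unref);
}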
Before, we would hit the assertion "'gst_buffer_pool_is_active (bpool)' failed"
if the internal pool was not used to push buffers downstream, hence never
given to the base class.
https://bugzilla.gnome.org/show_bug.cgi?id=732912
If a special stride is needed and downstream doesn't support VideoMeta,
the pool might be NULL in order to let the base class create a generic
pool. This would lead to an assertion on Exynos with:
gst-launch-1.0 -v filesrc location=mov ! qtdemux ! h264parse ! \
v4l2video8dec ! fakesink
https://bugzilla.gnome.org/show_bug.cgi?id=732707
In kernels before 3.17, polling during a queue underrun would unblock right
away and trigger POLLERR. As we are not handling POLLERR, we would end up
blocking in the DQBUF call, which won't be unblocked correctly when going
to NULL state. A deadlock at start caused by a locking error in libv4l2 was
also seen before this patch. Instead, we wait until the queue is no longer
empty before polling.
https://bugzilla.gnome.org/show_bug.cgi?id=731015
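In pseudo form, the streaming thread then does something like the following before each poll() (the type and field names are invented for the sketch):

#include <glib.h>

/* Sketch: block the streaming thread until at least one buffer is queued
 * to the driver, so poll() is never entered on an empty queue (which
 * raised POLLERR on kernels before 3.17). */
typedef struct {
  GMutex lock;
  GCond cond;          /* signalled whenever a buffer is queued */
  guint num_queued;
  gboolean flushing;
} ExamplePool;

static void
example_wait_for_queued_buffer (ExamplePool * pool)
{
  g_mutex_lock (&pool->lock);
  while (pool->num_queued == 0 && !pool->flushing)
    g_cond_wait (&pool->cond, &pool->lock);
  g_mutex_unlock (&pool->lock);
}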
The code enumerating STEPWISE framesizes would start from
(min_w, min_h) and then add (step_w, step_h) to get the
next framesize. However, it should really allow any width
from min_w to max_w with step_w and same for heights.
Secondly, we would add and probe each individual stepped
frame size to the caps as separate structure, which would
lead to hundreds if not thousands of structs ending up in
the probed caps. Use integer ranges with steps instead.
This was particularly noticeable with the Raspberry Pi Cam.
https://bugzilla.gnome.org/show_bug.cgi?id=724521
https://bugzilla.gnome.org/show_bug.cgi?id=732458
https://bugzilla.gnome.org/show_bug.cgi?id=726521
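A sketch of how a STEPWISE size can be expressed as stepped ranges (the helper name is hypothetical; real code also has to round the bounds to multiples of the step):

#include <gst/gst.h>

/* Sketch: publish one stepped integer range per dimension instead of one
 * caps structure per (width, height) pair.  start/end must already be
 * multiples of the step. */
static void
example_add_stepwise_size (GstStructure * s,
    gint min_w, gint max_w, gint step_w,
    gint min_h, gint max_h, gint step_h)
{
  GValue w = G_VALUE_INIT, h = G_VALUE_INIT;

  g_value_init (&w, GST_TYPE_INT_RANGE);
  gst_value_set_int_range_step (&w, min_w, max_w, step_w);
  g_value_init (&h, GST_TYPE_INT_RANGE);
  gst_value_set_int_range_step (&h, min_h, max_h, step_h);

  gst_structure_take_value (s, "width", &w);
  gst_structure_take_value (s, "height", &h);
}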
This workaround from 2011 was causing 25 S_FMT ioctls to be sent
to my UVC webcam from under gst_v4l2_object_get_caps as it probes
all the formats. In total, this adds up to about 5 seconds of
execution time, or a 10 second delay while starting up cheese.
These ioctls come from a workaround from 2011 where TRY_FMT might
make changes to hardware settings, so S_FMT was used to restore
the original config:
https://bugzilla.gnome.org/show_bug.cgi?id=649067
The driver bug is now assumed fixed. Remove the workaround to fix the
long startup delay.
https://bugzilla.gnome.org/show_bug.cgi?id=732326
This is required, as during preroll we pass the first buffer twice, hence it is
already queued. It is also useful to allow filters to replay a previously
rendered buffer. This will require one more buffer in the sink if last-sample
is enabled, since the last sample will not be the same as the currently queued
buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=722303
For output devices, we were queuing all the buffers and then dequeuing
one. This means we only had 1 buffer for the pipeline, no matter
the size of the queue. Instead, start dequeuing when min_latency is reached.
Eventually, the min_latency should also be affected by the
MIN_BUFFERS_FOR_OUTPUT control (used by encoders).
The calculation of num_buffers (the max latency in buffers) was
upside down. If we can allocate, then our maximum latency matches the
pool's maximum number of buffers. Also renamed it to max_latency. Finally,
introduced a min_latency for clarity.
Fix the choice of min/max, don't override the min/max with our own pool's
selected size, correct the other_pool is_active check, start from the
other_pool config when configuring the other pool, and finally validate the
configuration.
The decoding thread returning flushing isn't an error; we should simply
try starting the task again. If it's actually flushing, it will stop again by itself.
gstv4l2bufferpool.c:608:18: error: implicit conversion from enumeration type
'enum _GstV4l2BufferPoolAcquireFlags' to different enumeration type
'GstBufferPoolAcquireFlags' [-Werror,-Wenum-conversion]
params.flags = GST_V4L2_POOL_ACQUIRE_FLAG_RESURECT;
~ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We need to handle the case where a flush occurs while the streaming
thread is being brought up. In this case, the flushing state of the poll
object is cleared. To solve this, we simply set the capture poll to flushing
again; this way we know the thread will exit. The decoder stream lock
is used to synchronize with handle_frame.
M2M devices were sharing the same properties as src and sink. Most of
these made no sense. This patch reduces the number of properties and
makes io-mode clearer by having capture-io-mode and output-io-mode. This
also accidentally fixed a bug in the gstv4l2transform io-mode code, where the
capture io-mode could not be set.
https://bugzilla.gnome.org/show_bug.cgi?id=729591
If the driver needs more buffers than requested by the config,
update the pool min/max values. The minimum value for the pool
could be provided either by the driver or by the pool. This is
best effort for drivers that don't support the
V4L2_CID_MIN_BUFFERS_FOR_CAPTURE control.
https://bugzilla.gnome.org/show_bug.cgi?id=730200
This allows calling start streaming later for capture devices. Currently it
breaks in dmabuf-import because downstream is holding a buffer that will only
be released after stream-start.
https://bugzilla.gnome.org/show_bug.cgi?id=730207
When extrapolating the offsets, we need to use the extrapolated
stride rather than the base stride. This should fix support for formats
with more than two planes (I420, Y42B, etc.).
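Conceptually, the offsets are then built from the extrapolated strides, roughly as in this sketch (the plane/component mapping is simplified for planar YUV):

#include <gst/video/video.h>

/* Sketch: accumulate per-plane offsets from the already-extrapolated
 * strides instead of the single stride reported by the driver. */
static void
example_extrapolate_offsets (GstVideoInfo * info,
    const gint stride[GST_VIDEO_MAX_PLANES])
{
  gsize offset = 0;
  guint p;

  for (p = 0; p < GST_VIDEO_INFO_N_PLANES (info); p++) {
    info->offset[p] = offset;
    offset += (gsize) stride[p] * GST_VIDEO_INFO_COMP_HEIGHT (info, p);
  }
}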
When doing frame operations, we need to use the default VideoInfo
and let the frame API read the video meta in order to get the stride
and offset right. Currently we were using the specialized VideoInfo,
which reflects how the HW is set up.
Simplify the framerate field if possible, so we don't end up with
e.g. framerate = (fraction) { 30/1 }. Maybe the helper function
should be moved to core, but we can do this later.
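The helper boils down to something like this sketch (the name is hypothetical):

#include <gst/gst.h>

/* Sketch: collapse a single-entry list such as { 30/1 } into a plain
 * fraction value on the structure. */
static void
example_simplify_framerate (GstStructure * s)
{
  const GValue *v = gst_structure_get_value (s, "framerate");

  if (v != NULL && GST_VALUE_HOLDS_LIST (v)
      && gst_value_list_get_size (v) == 1)
    gst_structure_set_value (s, "framerate", gst_value_list_get_value (v, 0));
}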
This moves away from copying information and stores everything inside
the GstVideoInfo structure. The alignment exposed by the v4l2 API
is now handled using proper offsets.
Returning a buffer from the other_pool has unwanted side effects that lead to
leaks and prevent deactivating the pool. Instead, we change the _process() API
so it can replace the internal buffer with the buffer from the downstream pool.
This implied moving from the _fill() to the _create() method in the src.
Buffer refcounting is a bit hard because of the duality between CAPTURE and
OUTPUT mode. In the long term, we should consider having two separate pools
instead of this mess. At least the state should be better kept this way.
All enums that have REQBUFS and CREATE_BUFS were missing the S, which was
confusing since they are supposed to match the associated ioctl names. This
also fixes the as-yet unused CAN_REQUEST flag check.
Because of a bug in videobuf2, dqbuf may leave the DONE flag set,
which would imply that the buffer is queued. As this has been broken
for 4 years, simply guarantee the integrity of the state flags when doing
qbuf/dqbuf.
See https://patchwork.linuxtv.org/patch/23641/
Improve decide_allocation so it properly configures both the local and
downstream buffer pools. Also read back the pool config if it was changed due
to driver limitations.
Pre-configuring the pool is error prone, since it may hide a configuration
failure and end up with a pool that is not configured the way it should be
(e.g. no video meta, wrong queue size, etc.).
Catch short allocations after saving the format. This is not a catch-all, but
should catch most of the misbehaving drivers when doing S_FMT/G_FMT and avoid
potential crashes.
The goal of this allocator is mainly to allow tracking the memory.
Currently, when a buffer's memory has been modified, the buffer and its
memory are disposed of and lost until the stream is restarted.
Some well-known decoders wrongly set num_planes to 0 in their format instead of
1. In this case we would end up with no size when deciding the buffer allocation.
In order to correctly set the pool min/max, we need to probe for the
CREATE_BUFS ioctl. This can be done as soon as the format has been negotiated,
using a count of 0.
Now that we might be copying output buffers (e.g. downstream doesn't support
video meta but we need it), we need to move the EOS handling inside the process
method.
As soon as the alpha component can be set, we can expose the RGB32 and BGR32
formats as ARGB and BGRA, as long as we can deterministically set the alpha
padding value.
In certain cases we cannot live without video meta and/or crop meta
being enabled in our internal buffer pool. Ensure this is always the case,
regardless of whether the allocation query is supported.
Upon error, the pools might not have been allocated yet, hence we should not
try and flush them (even though we still want to make sure the processing thread
is fully stopped).
The buffer pool was wrongly guessing the number of planes rather
than reading the value from obj->n_v4l2_planes. This was causing
the format YU12 (I420) to fail the check.
The complex mechanism that tried to choose the right thing did not work.
Instead, simply probe the non-contiguous format first and then the contiguous
one. This is in fact very low overhead, as there is a relatively small number
of pixel formats supported by each device.
Certain decoders have been found not to choose a format automatically. Running
v4l2videodec on these would assert. This patch makes it fail cleanly
instead.
If caps are set again, we risk returning from set_format with an
input_state pointing to dead memory. Clearing the pointer after unref fixes
this issue.
Upon certain downstream errors, stop() is called without a flush(). This means
that the streaming thread may still be running even though unlock has been
called. Now we call flush to reset the decoder state if we are processing.
Simplify sub-class instantiation by defining an abstract type and using subtype
class and instance init callbacks. This also fixes a bug where the template
pads got initialized too late.
https://bugzilla.gnome.org/show_bug.cgi?id=727925
For formats like mpegts, width and height are rarely in the negotiated caps.
This patch fixes a failure when setting the format, and prevents introducing
width, height, framerate and format into the caps when fixating.
https://bugzilla.gnome.org/show_bug.cgi?id=725860
It seems that GStreamer's mpegts elements (tsdemux, tsparse) require caps
`video/mpegts,systemstream=true`. As far as I can see the significance
of systemstream is to indicate that this is a container format rather than
an elementary stream. As this is the case (and I can't understand how it
could not be the case with mpegts) I add systemstream=true to v4l2src's
caps.
This allows v4l2src to be linked with tsdemux for playback from my
Hauppauge HD-PVR with the pipeline:
v4l2src ! queue ! tsdemux ! video/x-h264 ! decodebin ! xvimagesink
In combination with the next commit this fixes using Hauppauge HD-PVR with
GStreamer 1.0+.
Adds type compatibility with other OSes like BSD. This uses type-mapping macros
to avoid conflicts with already-defined types. We reuse glib types, as these are
already available on the supported platforms. This is GCC-only because of the
__le32 type, which uses the bitwise attribute.
https://bugzilla.gnome.org/show_bug.cgi?id=726453
Over the years, the amount of ifdefs has grown and we are not even sure the
old code paths still compile. Each time we need to update the v4l2 framework to
add a new feature, we break compilation on older kernels. With the exception of
two controls in the video orientation control, this patch gets rid of all the
ifdefs by including the latest version of videodev2.h inside GStreamer.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=723446
V4L2 kernel drivers allow configuration of the hardware settings via a
mechanism called controls. These can be referred to by name such as
"Brightness" and "White Balance Temperature". The user-space command line
client for setting these controls (v4l2-ctl) normalises these names such
that they only contain lower case alphanumeric characters and the
underscore '_'. e.g:
Kernel                        v4l2-ctl
----------------------------------------------------
Brightness                    brightness
White Balance Temperature     white_balance_temperature
Focus (absolute)              focus_absolute
GStreamer seems to want to follow this pattern but failed for controls with
more than one consecutive non-alphanum character. e.g. GStreamer would
produce "focus__absolute_" rather than "focus_absolute".
This commit fixes that issue. Backwards compatibility is preserved by
normalising all control names before comparison.
https://bugzilla.gnome.org/show_bug.cgi?id=725632
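A sketch of the normalisation rule (the helper name is hypothetical):

#include <glib.h>

/* Sketch: lower-case the control name and collapse every run of
 * non-alphanumeric characters into a single underscore, so
 * "Focus (absolute)" becomes "focus_absolute". */
static gchar *
example_normalise_control_name (const gchar * name)
{
  GString *out = g_string_new (NULL);
  const gchar *p;

  for (p = name; *p != '\0'; p++) {
    if (g_ascii_isalnum (*p))
      g_string_append_c (out, g_ascii_tolower (*p));
    else if (out->len > 0 && out->str[out->len - 1] != '_')
      g_string_append_c (out, '_');
  }
  /* drop a trailing underscore left by trailing punctuation */
  if (out->len > 0 && out->str[out->len - 1] == '_')
    g_string_truncate (out, out->len - 1);

  return g_string_free (out, FALSE);
}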
For each video device, probe its input/output capabilities;
if they match the video decoder requirements, register a new element.
Signed-off-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
https://bugzilla.gnome.org/show_bug.cgi?id=722128
We correctly indicate the field ordering on interlaced buffers, but fail to
flag them as containing interlaced video, which we need to do here because
we signal interlace-mode=mixed in our caps. This means that downstream
elements (like vaapipostproc from gstreamer-vaapi) don't recognise these
buffers as in need of deinterlacing.
Fix this by setting the interlaced flag on all interlaced buffers.
Signed-off-by: Simon Farnsworth <simon.farnsworth@onelan.co.uk>
https://bugzilla.gnome.org/show_bug.cgi?id=724899