Removes the FIXME/Question in the buffer pool and adds a ref to the
element in the GstAllocator too. This ref is strictly required to keep
the GstV4l2Object structure around.
The libv4l2 library has been preventing a lot of interesting use cases,
like CREATE_BUFS, DMABuf and usage of TRY_FMT. As libv4l2 is totally
inactive and unmaintained, we decided to disable it. As a convenience,
we added a run-time environment variable that lets you enable it for
testing:
GST_V4L2_USE_LIBV4L2=1
This of course only works if you have enabled libv4l2 at build time.
While not documented, gst_video_colorimetry_matches() only accepts
well-known names. Looking at the code and unit tests, this seems to be
on purpose, so fix this by parsing the strings and comparing the
colorimetry structures.
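A minimal sketch of the idea using the libgstvideo helpers (not the
exact patch):

  #include <gst/video/video.h>

  /* Compare two colorimetry strings by parsing them into structures,
   * so that equivalent but non-canonical names still match. */
  static gboolean
  colorimetry_strings_match (const gchar * a, const gchar * b)
  {
    GstVideoColorimetry ca, cb;

    if (!gst_video_colorimetry_from_string (&ca, a))
      return FALSE;
    if (!gst_video_colorimetry_from_string (&cb, b))
      return FALSE;

    return gst_video_colorimetry_is_equal (&ca, &cb);
  }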
The subclass' negotiate() function will call set_format(); if that
fails, the pool will not be created and we end up with an assertion:
GStreamer-CRITICAL **: gst_buffer_pool_set_active: assertion 'GST_IS_BUFFER_POOL (pool)' failed
In this commit, we enable the skip_try_fmt_probes quirk in order to
speed up startup, which is known to be disastrously slow with certain
USB cameras.
This has the side effect that we needed to rewrite the entire
negotiation process so that we iterate over the possible caps
until we find one that works.
The new negotiation method consists of extracting a preferred structure
from the peer caps and using it to fixate and sort the caps. To
reflect the old behaviour, we sort all resolutions strictly bigger
than the preferred one first, with the closest one first. The rest is
appended, keeping the same order. We then normalize the caps in case
some list of interlace-mode or colorimetry was left. We finally iterate
over all fixed caps and try them. 99% of the time, the first or the
second one should work, with the result that a single S_FMT is issued.
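The ordering rule amounts to something like the following distance
check (an illustrative sketch, not the actual code; the helper name is
made up):

  /* Structures at least as large as the preferred resolution are
   * sorted closest-first; everything else keeps its original order. */
  static gboolean
  bigger_than_preferred (const GstStructure * s, gint pref_w, gint pref_h,
      guint * dist)
  {
    gint w = 0, h = 0;

    gst_structure_get_int (s, "width", &w);
    gst_structure_get_int (s, "height", &h);

    if (w < pref_w || h < pref_h)
      return FALSE;

    *dist = (w - pref_w) + (h - pref_h);
    return TRUE;
  }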
From there, it will be relatively easy to introduce new negotiation
algorithms. The current algorithm is made for optimal image quality
with a scaling sink that sets its window resolution as the preference.
This is the case for:
v4l2src ! videoconvert ! videoscale ! ximagesink
Other strategies would be needed to optimize for non-scaling sinks like
ximagesink or kmssink when the driver does not scale.
https://bugzilla.gnome.org/show_bug.cgi?id=785156
When the skip_try_fmt_probes quirk is set, the V4L2 object will not
probe for interlace-mode and colorimetry, to avoid relying on TRY_FMT.
This quirk will be used by v4l2src to avoid disastrous startup times
with slow USB webcams.
When this quirk is enabled, the caller will have to iterate over the
negotiated caps as they may contain unsupported formats. If the peer
didn't choose a specific interlace-mode or colorimetry, the value
chosen by the driver is set into the caps. For this reason, when this
mode is enabled, gst_v4l2_object_set_format() requires writable
caps.
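From the element's side, the iteration then looks roughly like this
(a sketch with a hypothetical try_set_format() wrapper around
gst_v4l2_object_set_format()):

  static gboolean try_set_format (GstV4l2Object * obj, GstCaps * caps);

  /* With skip_try_fmt_probes enabled the caps may still contain
   * unsupported combinations, so each fixed candidate is tried in turn
   * with writable caps: the driver may complete missing interlace-mode
   * and colorimetry fields in place. */
  static GstCaps *
  try_candidates (GstV4l2Object * obj, GstCaps * candidates)
  {
    guint i;

    for (i = 0; i < gst_caps_get_size (candidates); i++) {
      GstCaps *caps = gst_caps_copy_nth (candidates, i);

      caps = gst_caps_make_writable (caps);
      if (try_set_format (obj, caps))
        return caps;

      gst_caps_unref (caps);
    }

    return NULL;
  }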
https://bugzilla.gnome.org/show_bug.cgi?id=785156
According to the spec, TRY_FMT cannot return EBUSY, though it can
return EINVAL if it was not possible to update the format to
something supported.
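For reference, the error handling boils down to something like this
(sketch using only the V4L2 uAPI):

  #include <glib.h>
  #include <errno.h>
  #include <sys/ioctl.h>
  #include <linux/videodev2.h>

  /* VIDIOC_TRY_FMT never reports a busy device; EINVAL means the
   * format could not be adjusted to anything the driver supports. */
  static gboolean
  try_fmt (gint fd, struct v4l2_format * fmt, gboolean * unsupported)
  {
    *unsupported = FALSE;

    if (ioctl (fd, VIDIOC_TRY_FMT, fmt) < 0) {
      if (errno == EINVAL)
        *unsupported = TRUE;    /* try another candidate */
      return FALSE;
    }

    return TRUE;                /* fmt may have been adjusted in place */
  }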
https://bugzilla.gnome.org/show_bug.cgi?id=785156
This is in preparation for removing the slow TRY_FMT probes for
colorimetry. As we won't have tried that colorimetry, we cannot
assume the driver will accept it.
https://bugzilla.gnome.org/show_bug.cgi?id=785156
This is in preparation for removing the slow TRY_FMT probes for
interlacing. As we won't have tried that interlace-mode already,
we need to validate that the driver isn't refusing it.
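A sketch of the kind of post-S_FMT check this prepares for (simplified
to the single-planar case):

  #include <glib.h>
  #include <linux/videodev2.h>

  /* After S_FMT the driver may silently pick another field order; if
   * we didn't probe it with TRY_FMT beforehand, verify what came back. */
  static gboolean
  field_matches (const struct v4l2_format * fmt, gboolean want_interlaced)
  {
    if (want_interlaced)
      return fmt->fmt.pix.field == V4L2_FIELD_INTERLACED;

    return fmt->fmt.pix.field == V4L2_FIELD_NONE;
  }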
https://bugzilla.gnome.org/show_bug.cgi?id=785156
Add a couple of useful debug traces. They happened to be useful to
debug/investigate a 4K video playback issue with v4l2, so let's make
these changes permanent.
Signed-off-by: Nicolas Dechesne <nicolas.dechesne@linaro.org>
https://bugzilla.gnome.org/show_bug.cgi?id=785109
Since 1.6, the transfer function for BT2020 has been changed from BT709
to BT2020_12. It's the same function, but with more precision. As a side
effect, the V4L2 colorspace didn't match the GStreamer colorspace. When
GStreamer ended up making a guess, it would not match anything supported
by V4L2 anymore. Fix this by using BT2020_12 for the BT2020 colorspace
and BT2020 transfer function, in replacement of BT709, whenever a 4K
resolution is detected.
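The mapping amounts to something like this (sketch; the 4K threshold
and helper name are illustrative):

  #include <gst/video/video.h>
  #include <linux/videodev2.h>

  /* Since 1.6 GStreamer uses BT2020_12 (same curve as BT709, more
   * precision) for BT2020 content, so guess that rather than BT709
   * for the BT2020 colorspace and for 4K material. */
  static GstVideoTransferFunction
  guess_transfer (enum v4l2_colorspace csp, gint width, gint height)
  {
    if (csp == V4L2_COLORSPACE_BT2020)
      return GST_VIDEO_TRANSFER_BT2020_12;

    if (width >= 3840 && height >= 2160)   /* illustrative 4K check */
      return GST_VIDEO_TRANSFER_BT2020_12;

    return GST_VIDEO_TRANSFER_BT709;
  }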
The pixel aspect ratio is documented not to change unless the TV
standard is changed. This means that it will be uniform across all
possible formats and resolutions.
https://bugzilla.gnome.org/show_bug.cgi?id=784674
First step of a larger cleanup: all functions from v4l2_calls are in
fact methods on GstV4l2Object, and this split makes the code really
confusing. This also removes macros that are no longer used.
Before that, each m2m node would be wrapped as a single, multi-format
decoder element. As a unique name was needed, we were using the device
name, which changes between reboots. This led to unpredictable element
names. In this patch, we generate an element per codec, using the
v4l2<codec>dec name. If there are multiple decoders for the same format,
the following elements will be named v4l2<node><codec>dec.
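A sketch of the naming rule (the helper is hypothetical):

  /* Build a predictable element name from the codec, falling back to a
   * per-node prefix ("video0", "video10", ...) when several decoders
   * handle the same codec. */
  static gchar *
  build_element_name (const gchar * codec, const gchar * node, gboolean first)
  {
    if (first)
      return g_strdup_printf ("v4l2%sdec", codec);        /* v4l2h264dec */

    return g_strdup_printf ("v4l2%s%sdec", node, codec);  /* v4l2video0h264dec */
  }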
https://bugzilla.gnome.org/show_bug.cgi?id=784908
When resurrecting a buffer, the subsequent free call can result
in the group-released handler being called again, which causes
a recursive loop. This patch blocks the signal handler during
the time that it executes, ensuring that the loop will not occur.
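The blocking follows the usual GLib pattern, roughly (sketch; the
handler-id field name is illustrative):

  /* Block our own group-released handler while we run, so that freeing
   * the resurrected buffer can't re-enter it and recurse. */
  static void
  group_released_cb (GstV4l2Allocator * allocator, gpointer user_data)
  {
    GstV4l2BufferPool *pool = user_data;

    g_signal_handler_block (allocator, pool->group_released_handler);

    /* ... resurrect a buffer, which may trigger a free ... */

    g_signal_handler_unblock (allocator, pool->group_released_handler);
  }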
https://bugzilla.gnome.org/show_bug.cgi?id=759292
Increasing this number fixes a buffer starvation problem I'm hitting
with a "v4l2src ! kmssink" pipeline.
kmssink requests 2 buffers as it keeps a reference on the last rendered
one, so we were allocating 3 buffers for the pipeline.
Once the first 2 buffers have been pushed we ended up with:
- one buffer queued in v4l2
- one being pushed
- one kept as last rendered
If this 3rd buffer is only released after v4l2 has used the first one to
capture, we end up with a buffer starvation problem, as no buffer is
currently queued in v4l2 for capture.
Fix this by adding one extra buffer to the pipeline so that while one
buffer is being pushed downstream another can already be queued to
capture the next frame.
We were already adding 3 buffers if downstream didn't reply to the
allocation query. I reduced this number to 2 to compensate for the extra
buffer which is now always added.
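The accounting then looks roughly like this (sketch; variable names are
illustrative):

  /* Sketch of the buffer accounting described above. */
  static guint
  compute_min_buffers (guint min_buffers, gboolean downstream_replied)
  {
    /* one extra buffer so another frame can already be queued for
     * capture while the previous one is being pushed downstream */
    min_buffers += 1;

    /* downstream didn't answer the allocation query: add buffers on
     * its behalf (2 instead of the previous 3, since the extra buffer
     * above is now always added) */
    if (!downstream_replied)
      min_buffers += 2;

    return min_buffers;
  }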
https://bugzilla.gnome.org/show_bug.cgi?id=783049
This patch fixes a memory leak that is caused if the dmabuf file
descriptor dup fails. Previously, _cleanup_failed_alloc() would
not unref the memory because mems_allocated had not yet been
incremented.
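Conceptually the fix looks like this (sketch with simplified names;
only mems_allocated comes from the actual code):

  #include <unistd.h>

  /* Count the memory as allocated before dup()ing the dmabuf fd, so
   * the shared cleanup path unrefs it when the dup() fails. */
  static gboolean
  export_dmabuf (GstV4l2MemoryGroup * group, gint expfd)
  {
    gint dmafd;

    group->mems_allocated++;    /* do this first, see above */

    dmafd = dup (expfd);
    if (dmafd < 0)
      return FALSE;             /* caller runs _cleanup_failed_alloc() */

    /* ... wrap dmafd in a GstMemory and store it in the group ... */
    return TRUE;
  }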
https://bugzilla.gnome.org/show_bug.cgi?id=784302
When upstream does not use the v4l2videoenc pool, we need to activate
that internal pool. Though, we relied on the driver to provide the
minimum number of required buffers, which the Qualcomm Venus driver
doesn't currently provide.
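A sketch of the kind of fallback this implies (the control comes from
the V4L2 uAPI; the fallback value of 2 is illustrative):

  #include <glib.h>
  #include <sys/ioctl.h>
  #include <linux/videodev2.h>

  /* Ask the driver how many buffers the OUTPUT queue needs; Venus
   * doesn't implement this control, so fall back to a sane default. */
  static guint
  get_min_output_buffers (gint fd)
  {
    struct v4l2_control ctl = { .id = V4L2_CID_MIN_BUFFERS_FOR_OUTPUT };

    if (ioctl (fd, VIDIOC_G_CTRL, &ctl) < 0 || ctl.value <= 0)
      return 2;                 /* illustrative fallback */

    return ctl.value;
  }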
https://bugzilla.gnome.org/show_bug.cgi?id=783361
This implements H264 encoding support using the generic V4L2 interface.
It is reported to work with the Samsung MFC driver, the i.MX6 CODA
driver and the Qualcomm mainline Venus driver. Other platforms should be
supported as none of this work is platform specific.
The implementation consists of a GstV4l2VideoEnc base class, which
implements the core streaming functionality. This base class is
subclassed by the GstV4l2H264Enc class, which implements the caps
negotiation specific to H264 profiles and levels. This implementation
supports hardware with multiple H264 encoders. Though, to make it
simpler to use, the first discovered H264 encoder will be named
v4l2h264enc. Other encoders found during discovery will have a unique
name like v4l2video0h264enc.
This is the combined work of multiple developers over the last 3
years. Thanks to all of the contributors:
Ayaka <ayaka@soulik.info>
Frédéric Sureau <frederic.sureau@vodalys.com>
Jean-Michel Hautbois <jean-michel.hautbois@veo-labs.com>
Nicolas Dufresne <nicolas.dufresne@collabora.com>
Pablo Anton <pablo.anton@vodalys-labs.com>
https://bugzilla.gnome.org/show_bug.cgi?id=728438
This is needed for the V4L2_OUTPUT interface, and is harmless for
V4L2_CAPTURE interfaces. This will fix timestamps in cases like:
v4l2src io-mode=dmabuf ! v4l2videoNenc output-io-mode=dmabuf-import ! ...
The same applies to userptr.
https://bugzilla.gnome.org/show_bug.cgi?id=781119
Running `gst-validate-launcher -t validate.file.playback.change_state_intensive.vorbis_vp8_1_webm`
on odroid XU4 (s5p-mfc v4l2 driver) often leads to:
ERROR:../subprojects/gst-plugins-good/sys/v4l2/gstv4l2videodec.c:215:gst_v4l2_video_dec_stop: assertion failed: (g_atomic_int_get (&self->processing) == FALSE)
This happens when the following race occurs:
- T0: Main thread
- T1: Upstream streaming thread
- T2: v4l2dec processing thread
[The decoder is in PAUSED state]
T0: The validate scenario runs `Executing (36/40) set-state: state=null repeat=40`
T1: The decoder handles a frame
T2: A decoded frame is pushed downstream
T2: Downstream returns FLUSHING as it is already flushing, changing state
T2: The decoder stops its processing thread and sets `->processing = FALSE`
T1: The decoder handles another frame
T1: `->processing` is FALSE so the decoder restarts its streaming thread
T0: In v4l2dec->stop() the processing thread is stopped
NOTE: At this point the processing thread loop never started.
T0: assertion failed: (g_atomic_int_get (&self->processing) == FALSE)
Here I am removing the whole ->processing logic to base it all on the
GstTask state to avoid duplicating the knowledge.
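The check then becomes a query on the pad task itself, roughly (sketch;
assumes gst_pad_get_task_state(), available since GStreamer 1.12):

  #include <gst/video/gstvideodecoder.h>

  /* Instead of tracking a separate ->processing boolean, derive the
   * state from the source pad's streaming task itself. */
  static gboolean
  processing_is_running (GstVideoDecoder * decoder)
  {
    return gst_pad_get_task_state (decoder->srcpad) == GST_TASK_STARTED;
  }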
https://bugzilla.gnome.org/show_bug.cgi?id=778830
Without a specified framerate from the sink, the decoder frame interval
should be set using the framerate of the encoded video stream.
Therefore, the v4l2object should be able to change the framerate on the
output if the V4L2 device accepts it.
This is also necessary for mem2mem encoders so that their bitrate
calculation code may work correctly and they may report the correct
frame duration on the capture queue.
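On the V4L2 side this maps to VIDIOC_S_PARM on the OUTPUT queue,
roughly (sketch):

  #include <glib.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <linux/videodev2.h>

  /* Set the frame interval (the inverse of the framerate) on an OUTPUT
   * queue so mem2mem encoders can derive bitrate and frame duration. */
  static gboolean
  set_output_framerate (gint fd, guint fps_n, guint fps_d)
  {
    struct v4l2_streamparm parm;

    memset (&parm, 0, sizeof (parm));
    parm.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
    parm.parm.output.timeperframe.numerator = fps_d;
    parm.parm.output.timeperframe.denominator = fps_n;

    return ioctl (fd, VIDIOC_S_PARM, &parm) >= 0;
  }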
https://bugzilla.gnome.org/show_bug.cgi?id=779466
If the duration of the v4l2object is GST_CLOCK_TIME_NONE, because the
sink did not specify a framerate in the caps and the driver accepts the
framerate, the decoder element uses GST_CLOCK_TIME_NONE to calculate and
set the element latency.
While this is a bug in the capture driver, the decoder element should
not use the invalid duration to calculate a latency, but print a warning
instead.
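In practice that means guarding the latency computation, roughly
(sketch; the buffer-count factor is illustrative):

  #include <gst/video/gstvideodecoder.h>

  /* Only compute a latency when the frame duration is known; a
   * GST_CLOCK_TIME_NONE duration would poison the calculation. */
  static void
  maybe_set_latency (GstVideoDecoder * decoder, GstClockTime duration,
      guint num_buffers)
  {
    if (!GST_CLOCK_TIME_IS_VALID (duration)) {
      GST_WARNING_OBJECT (decoder, "Duration invalid, not setting latency");
      return;
    }

    gst_video_decoder_set_latency (decoder, num_buffers * duration,
        num_buffers * duration);
  }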
https://bugzilla.gnome.org/show_bug.cgi?id=779466
The correct behaviour of anything stuck in the ->render() function
between ->unlock() and ->unlock_stop() is to call
gst_base_sink_wait_preroll() and only return an error if this returns an
error; otherwise, it must continue where it left off!
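A sketch of that rule, using only the GstBaseSink API:

  #include <gst/base/gstbasesink.h>

  /* Anything stuck in ->render() between ->unlock() and
   * ->unlock_stop() must wait for preroll and, unless that waiting
   * fails, resume where it left off rather than erroring out. */
  static GstFlowReturn
  wait_if_unlocked (GstBaseSink * sink, gboolean unlocked)
  {
    if (!unlocked)
      return GST_FLOW_OK;

    /* returns GST_FLOW_FLUSHING on shutdown, GST_FLOW_OK to continue */
    return gst_base_sink_wait_preroll (sink);
  }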
https://bugzilla.gnome.org/show_bug.cgi?id=774945
Update the image size according to the amount of data we are going to
read/write. This works around bugs in drivers where the sizeimage provided
by TRY/S_FMT represents the buffer length (maximum size) rather than the
expected bytesused (buffer size).
https://bugzilla.gnome.org/show_bug.cgi?id=775564
Set the MAY_BE_LEAKED flag on the static caps returned by the
gst_v4l2_object_get_*_caps() functions. Also made these functions
thread-safe by using the g_once_init_[enter|leave] functions.
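The pattern is the standard one-shot initialisation, roughly (sketch;
the caps string is illustrative):

  #include <gst/gst.h>

  /* Build the static caps once, mark them as intentionally "leaked" so
   * the leak tracer ignores them, and return a new ref on every call. */
  static GstCaps *
  get_static_caps (void)
  {
    static GstCaps *caps = NULL;

    if (g_once_init_enter (&caps)) {
      GstCaps *c = gst_caps_from_string ("video/x-raw");  /* illustrative */

      GST_MINI_OBJECT_FLAG_SET (c, GST_MINI_OBJECT_FLAG_MAY_BE_LEAKED);
      g_once_init_leave (&caps, (gsize) c);
    }

    return gst_caps_ref (caps);
  }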
https://bugzilla.gnome.org/show_bug.cgi?id=778453
In gst_v4l2_allocator_qbuf(), the memory is referenced after the
buffer is queued. Once queued (VIDIOC_QBUF), the buffer might be handled
by the V4L2 driver (e.g. decoded) and dequeued (gst_v4l2_allocator_dqbuf),
through a different thread, before the memory is referenced (gst_memory_ref).
In this case, in gst_v4l2_allocator_dqbuf(), the memory is unreferenced
(gst_memory_unref) before having been referenced: the memory refcount
reaches 0, and the memory is freed.
So, to avoid this crossing case, in gst_v4l2_allocator_qbuf(), the
memory shall be referenced before the buffer is queued.
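The ordering fix boils down to this (sketch; group field names are
simplified):

  #include <sys/ioctl.h>
  #include <linux/videodev2.h>

  /* Take the references *before* VIDIOC_QBUF: once the buffer is
   * queued, another thread can dequeue it and unref the memory,
   * dropping the refcount to zero if we haven't referenced it yet. */
  static gboolean
  qbuf_group (gint fd, GstV4l2MemoryGroup * group)
  {
    gint i;

    for (i = 0; i < group->n_mem; i++)
      gst_memory_ref (group->mem[i]);

    if (ioctl (fd, VIDIOC_QBUF, &group->buffer) < 0) {
      for (i = 0; i < group->n_mem; i++)
        gst_memory_unref (group->mem[i]);       /* undo on failure */
      return FALSE;
    }

    return TRUE;
  }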
https://bugzilla.gnome.org/show_bug.cgi?id=777399