Whenever we import from a downstream pool (userptr or dmabuf-import), we
should copy over the flags and timestamp, otherwise downstream will not
get proper synchronization or will not be able to notice frames that are
corrupted.
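A minimal sketch of the idea, assuming hypothetical importbuf/outbuf
variables (the actual code in the buffer pool may differ):

  /* carry flags (e.g. GST_BUFFER_FLAG_CORRUPTED) and timestamps from the
   * imported downstream buffer over to the buffer we actually use */
  gst_buffer_copy_into (outbuf, importbuf,
      GST_BUFFER_COPY_FLAGS | GST_BUFFER_COPY_TIMESTAMPS, 0, -1);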
https://bugzilla.gnome.org/show_bug.cgi?id=785680
Removes the FIXME/Question in the buffer pool and adds a ref to the
element in the GstAllocator too. This ref is strictly required to keep
the GstV4l2Object structure around.
The libv4l2 library prevents a lot of interesting use cases, like
CREATE_BUFS, DMABuf and usage of TRY_FMT. As libv4l2 is totally
inactive and unmaintained, we decided to disable it. As a convenience,
we added a run-time environment variable that lets you enable it for testing:
GST_V4L2_USE_LIBV4L2=1
This of course only works if libv4l2 was enabled at build time.
While not documented, gst_video_colorimetry_matches() only accepts well
known names. Looking at the code and unit test, this seems to be on
purpose, so fix this by parsing the string and comparing the colorimetry
structures.
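A sketch of the comparison, assuming an illustrative GstVideoInfo named
info and a string named colorimetry_str; gst_video_colorimetry_from_string()
and the GstVideoColorimetry fields are public API:

  GstVideoColorimetry cinfo;

  if (gst_video_colorimetry_from_string (&cinfo, colorimetry_str) &&
      cinfo.range == info.colorimetry.range &&
      cinfo.matrix == info.colorimetry.matrix &&
      cinfo.transfer == info.colorimetry.transfer &&
      cinfo.primaries == info.colorimetry.primaries) {
    /* colorimetries match even though the string isn't a well-known name */
  }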
The subclass's negotiate() function calls set_format(); if that fails, the
pool will not be created. We ended up with an assertion:
GStreamer-CRITICAL **: gst_buffer_pool_set_active: assertion 'GST_IS_BUFFER_POOL (pool)' failed
In this commit, we enabled the skip_try_fmt_probes quirk in order to speed
up startup, which is known to be disastrously slow with certain USB
cameras.
This has the side effect that we needed to rewrite the entire
negotiation process in a way that we iterate over the possible caps
until we find one that works.
The new negotiation method consists of extracting a preferred structure
from the peer caps and using it to fixate and sort the caps. To
reflect the old behaviour, we sort all resolutions strictly bigger
than the preferred one, with the closest one first. The rest is appended,
keeping the same order. We then normalize the caps in case there was
some list of interlace-mode or colorimetry left. We finally iterate
over all fixed caps and try them. 99% of the time, the first or the
second one should work, with the result that a single S_FMT is issued.
From there, it will be relatively easy to introduce new negotiation
algorithms. The current algorithm is made for optimal image quality
with a scaling sink that sets its window resolution as preference.
This is the case for:
v4l2src ! videoconvert ! videoscale ! ximagesink
Other strategies would be needed to optimize for non-scaling sinks like
ximagesink or kmssink when the driver does not scale.
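A rough sketch of the iteration described above, with illustrative
variable names, declarations and error handling omitted, and the
gst_v4l2_object_set_format() call signature simplified:

  /* caps have already been fixated, sorted against the preferred
   * structure and normalized; try each candidate until S_FMT succeeds */
  for (i = 0; i < gst_caps_get_size (caps); i++) {
    GstCaps *candidate = gst_caps_copy_nth (caps, i);
    gboolean ok = gst_v4l2_object_set_format (obj, candidate, &error);

    gst_caps_unref (candidate);
    if (ok)
      break;  /* usually the first or second candidate works */
  }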
https://bugzilla.gnome.org/show_bug.cgi?id=785156
When the skip_try_fmt_probes quirk is set, the V4L2 object will not probe
for interlace-mode and colorimetry, to avoid relying on TRY_FMT. This quirk
will be used by v4l2src to avoid disastrous startup times with slow
USB webcams.
When this quirk is enabled, the caller will have to iterate over the
negotiated caps as they may contain unsupported formats. If the peer
didn't choose a specific interlace-mode or colorimetry, the value
chosen by the driver is set into the caps. For this reason, when this
mode is enabled, gst_v4l2_object_set_format() will require writable
caps.
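A sketch of how an element uses the quirk; the field name is as given
above, the surrounding code is illustrative and the
gst_v4l2_object_set_format() signature is simplified:

  /* opt out of the slow TRY_FMT probes, we will iterate the caps ourselves */
  obj->skip_try_fmt_probes = TRUE;

  /* the caps must be writable: the driver-chosen interlace-mode and
   * colorimetry may be written back into them */
  caps = gst_caps_make_writable (caps);
  if (!gst_v4l2_object_set_format (obj, caps, &err)) {
    /* this candidate is not supported, try the next one */
  }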
https://bugzilla.gnome.org/show_bug.cgi?id=785156
According to the spec, TRY_FMT cannot return EBUSY, though it can
return EINVAL if it was not possible to update the format to
something supported.
https://bugzilla.gnome.org/show_bug.cgi?id=785156
This is in preparation for removing slow TRY_FMT probes for
colorimetry. As we won't have tried that colorimetry we cannot
assume the driver will accept it.
https://bugzilla.gnome.org/show_bug.cgi?id=785156
This is in preparation for removing the slow TRY_FMT probes for
interlacing. As we won't have tried that interlace-mode already,
we need to validate that the driver isn't refusing it.
https://bugzilla.gnome.org/show_bug.cgi?id=785156
Add a couple of useful debug traces. They happened to be useful to
debug/investigate a 4K video playback issue with v4l2, so let's make these
changes permanent.
Signed-off-by: Nicolas Dechesne <nicolas.dechesne@linaro.org>
https://bugzilla.gnome.org/show_bug.cgi?id=785109
Since 1.6, the transfer function for BT2020 has been changed from BT709
to BT2020_12. It's the same function, but with more precision. As a side
effect, the V4L2 colorspace didn't match the GStreamer colorspace. When
GStreamer ended up making a guess, it would not match anything supported
by V4L2 anymore. Fix this by using BT2020_12 for the BT2020 colorspace and
the BT2020 transfer function in replacement of BT709 whenever a 4K
resolution is detected.
The pixel aspect ratio is documented to not change unless the TV
standard is changed. So this means it will be uniform across all
possible formats and resolutions.
https://bugzilla.gnome.org/show_bug.cgi?id=784674
First step of a larger cleanup: all functions from v4l2_calls are in fact
methods on GstV4l2Object, and this split makes the code really confusing.
This also removes macros that are no longer used.
Before that, each m2m node would be wrapped as a single, multi-format
decoder element. As a unique name was needed, we were using the device
name, which changes between reboots. This led to unpredictable element
names. In this patch, we generate an element per codec, using the
v4l2<codec>dec name. If there are multiple decoders for the same format,
the following elements will be named v4l2<node><codec>dec.
https://bugzilla.gnome.org/show_bug.cgi?id=784908
When resurrecting a buffer, the subsequent free call can result
in the group-released handler being called again, which causes
a recursive loop. This patch blocks the signal handler during
the time that it executes, ensuring that the loop will not occur.
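A sketch of the blocking pattern with GLib's signal API; the allocator
and handler-id field names are illustrative:

  /* prevent re-entering ourselves while the free below triggers another
   * group-released emission */
  g_signal_handler_block (pool->vallocator, pool->group_released_handler);

  /* ... resurrect / free the group, which may emit group-released ... */

  g_signal_handler_unblock (pool->vallocator, pool->group_released_handler);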
https://bugzilla.gnome.org/show_bug.cgi?id=759292
Increasing this number fixes a buffer starvation problem I'm hitting
with a "v4l2src ! kmssink" pipeline.
kmssink requests 2 buffers as it keeps a reference on the last rendered
one. So we were allocating 3 buffers for the pipeline.
Once the first 2 buffers have been pushed we ended up with:
- one buffer queued in v4l2
- one being pushed
- one kept as last rendered
If this 3rd buffer is released after v4l2 has used the first one to
capture, we end up with a buffer starvation problem as no buffer is
currently queued in v4l2 for capture.
Fix this by adding one extra buffer to the pipeline, so that when one
buffer is being pushed downstream another can already be queued to
capture the next frame.
We were already adding 3 buffers if downstream didn't reply to the
allocation query. I reduced this number to 2 to compensate for the extra
buffer which is now always added.
https://bugzilla.gnome.org/show_bug.cgi?id=783049
This patch fixes a memory leak that is caused if the dmabuf file
descriptor dup fails. Previously, _cleanup_failed_alloc() would
not unref the memory because mems_allocated had not yet been
incremented.
https://bugzilla.gnome.org/show_bug.cgi?id=784302
Even though hooked up to the build system, it's clear that no one
has ever built or used this with GStreamer 1.x. It wants to link
against libgstinterfaces, which no longer exists. And uses 0.10-style
raw audio caps. And the last meaningful change was done in 2009.
Let's just remove it.
When upstream does not use the v4l2videoenc pool, we need to activate
that internal pool. Though, we relied on the driver to provide a minimum
number of required buffers, which the Qualcomm Venus driver currently
doesn't.
https://bugzilla.gnome.org/show_bug.cgi?id=783361
This implements H264 encoding support using the generic V4L2 interface. It is
reported to work with the Samsung MFC driver, the i.MX6 CODA driver and the
Qualcomm mainline Venus driver. Other platforms should be supported as
none of this work is platform specific.
The implementation consists of a GstV4l2VideoEnc base class, which
implements the core streaming functionality. It is subclassed by the
GstV4l2H264Enc class, which implements the caps negotiation specific to
H264 profiles and levels. This implementation supports hardware with multiple
H264 encoders. Though, to make it simpler to use, the first discovered H264
encoder will be named v4l2h264enc. Other encoders found during discovery will
have unique names like v4l2video0h264enc.
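An illustrative encode pipeline (which elements and devices are available
obviously depends on the platform):
gst-launch-1.0 v4l2src ! videoconvert ! v4l2h264enc ! h264parse ! matroskamux ! filesink location=out.mkv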
This work is the combined work of multiple developers over the last 3
years. Thanks to all of the contributors:
Ayaka <ayaka@soulik.info>
Frédéric Sureau <frederic.sureau@vodalys.com>
Jean-Michel Hautbois <jean-michel.hautbois@veo-labs.com>
Nicolas Dufresne <nicolas.dufresne@collabora.com>
Pablo Anton <pablo.anton@vodalys-labs.com>
https://bugzilla.gnome.org/show_bug.cgi?id=728438
Fixes a negotiation error seen when trying to play back a .MOV file with
a mono AAC audio stream decoded by avdec_aac, which doesn't set the
channel-mask field, while the sink was requiring channel-mask=0x3.
We were unnecessarily looping/goto-ing repeatedly when we had exactly
as much data as the free space, and also when the free space was
too small. This, as it turns out, is a very common scenario with
Directsound on Windows.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=773681
We have to do polling here because the event notification API that
Directsound exposes cannot be used with live playback since all events
must be registered in advance with the capture buffer, you cannot
add/remove them once playback has begun. Directsoundsrc had the same
problem.
See also: https://bugzilla.gnome.org/show_bug.cgi?id=781249
This is needed for the V4L2_OUTPUT interface, and is harmless for
V4L2_CAPTURE interfaces. This will fix timestamps in cases like:
v4l2src io-mode=dmabuf ! v4l2videoNenc output-io-mode=dmabuf-import ! ...
The same applies to userptr.
https://bugzilla.gnome.org/show_bug.cgi?id=781119
Running `gst-validate-launcher -t validate.file.playback.change_state_intensive.vorbis_vp8_1_webm`
on odroid XU4 (s5p-mfc v4l2 driver) often leads to:
ERROR:../subprojects/gst-plugins-good/sys/v4l2/gstv4l2videodec.c:215:gst_v4l2_video_dec_stop: assertion failed: (g_atomic_int_get (&self->processing) == FALSE)
This happens when the following race happens:
- T0: Main thread
- T1: Upstream streaming thread
- T2: v4l2dec processing thread
[The decoder is in PAUSED state]
T0. The validate scenario runs `Executing (36/40) set-state: state=null repeat=40`
T1- The decoder handles a frame
T2- A decoded frame is pushed downstream
T2- Downstream returns FLUSHING as it is already flushing / changing state
T2- The decoder stops its processing thread and sets `->processing = FALSE`
T1- The decoder handles another frame
T1- `->processing` is FALSE so the decoder restarts its processing thread
T0- In the v4l2dec ->stop() the processing thread is stopped
NOTE: At this point the processing thread loop never started.
T0- assertion failed: (g_atomic_int_get (&self->processing) == FALSE)
Here I am removing the whole ->processing logic and basing it all on the
GstTask state instead, to avoid duplicating that knowledge.
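A sketch of the idea; a check like gst_pad_get_task_state() on the
decoder source pad replaces the hand-rolled flag (names illustrative):

  /* instead of a separate ->processing boolean, ask the task itself */
  if (gst_pad_get_task_state (decoder->srcpad) == GST_TASK_STARTED)
    return GST_FLOW_OK;  /* the processing thread is already running */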
https://bugzilla.gnome.org/show_bug.cgi?id=778830
Without a specified framerate from the sink, the decoder frame interval
should be set using the framerate of the encoded video stream.
Therefore, the v4l2object should be able to change the framerate on the
output if the V4L2 device accepts it.
This is also necessary for mem2mem encoders so that their bitrate
calculation code may work correctly and they may report the correct
frame duration on the capture queue.
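A minimal sketch of setting the frame interval on the output queue with
the standard V4L2 API (fd, fps_n and fps_d are illustrative; field names
per videodev2.h):

  struct v4l2_streamparm parm = { 0 };

  parm.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
  /* the frame interval is the inverse of the framerate */
  parm.parm.output.timeperframe.numerator = fps_d;
  parm.parm.output.timeperframe.denominator = fps_n;

  if (ioctl (fd, VIDIOC_S_PARM, &parm) < 0)
    GST_WARNING ("driver did not accept the frame interval");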
https://bugzilla.gnome.org/show_bug.cgi?id=779466
If the duration of the v4l2object is GST_CLOCK_TIME_NONE, because the
sink did not specify a framerate in the caps and the driver accepts the
framerate, the decoder element uses GST_CLOCK_TIME_NONE to calculate and
set the element latency.
While this is a bug of the capture driver, the decoder element should
not use the invalid duration to calculate a latency, but print a warning
instead.
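A sketch of the guard described above (variable names illustrative):

  if (GST_CLOCK_TIME_IS_VALID (duration)) {
    latency = min_buffers * duration;
    gst_video_decoder_set_latency (decoder, latency, latency);
  } else {
    GST_WARNING_OBJECT (decoder,
        "Invalid frame duration, not setting latency");
  }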
https://bugzilla.gnome.org/show_bug.cgi?id=779466
The correct behaviour of anything stuck in the ->render() function
between ->unlock() and ->unlock_stop() is to call
gst_base_sink_wait_preroll() and only return an error if that returns an
error; otherwise, it must continue where it left off!
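A sketch of the expected pattern inside ->render() (simplified; the
retry label stands for whatever the sink was doing before being
unlocked):

  GstFlowReturn ret;

  ret = gst_base_sink_wait_preroll (bsink);
  if (ret != GST_FLOW_OK)
    return ret;   /* flushing or a real error, propagate it */
  goto retry;     /* we were merely paused, continue where we left off */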
https://bugzilla.gnome.org/show_bug.cgi?id=774945
Update the image size according to the amount of data we are going to
read/write. This works around bugs in drivers where the sizeimage provided
by TRY/S_FMT represents the buffer length (maximum size) rather than the
expected bytesused (buffer size).
https://bugzilla.gnome.org/show_bug.cgi?id=775564
Set the MAY_BE_LEAKED flag on the static caps returned by the
gst_v4l2_object_get_*_caps() functions. Also made these functions thread
safe by using the g_once_init_[enter|leave] functions.
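A minimal example of the pattern (not the exact code):

  static GstCaps *caps = NULL;

  if (g_once_init_enter (&caps)) {
    GstCaps *tmp = gst_caps_from_string ("video/x-raw");

    /* caps live for the lifetime of the process, don't report them as leaks */
    GST_MINI_OBJECT_FLAG_SET (tmp, GST_MINI_OBJECT_FLAG_MAY_BE_LEAKED);
    g_once_init_leave (&caps, tmp);
  }
  return gst_caps_ref (caps);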
https://bugzilla.gnome.org/show_bug.cgi?id=778453
In gst_v4l2_allocator_qbuf(), the memory is referenced after the
buffer is queued. Once queued (VIDIOC_QBUF), the buffer might be handled
by the V4L2 driver (e.g. decoded) and dequeued (gst_v4l2_allocator_dqbuf),
through a different thread, before the memory is referenced (gst_memory_ref).
In this case, in gst_v4l2_allocator_dqbuf(), the memory is unreferenced
(gst_memory_unref) before having been referenced: the memory refcount
reaches 0, and the memory is freed.
So, to avoid this crossing case, in gst_v4l2_allocator_qbuf(), the
memory shall be referenced before the buffer is queued.
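A sketch of the corrected ordering (member names illustrative):

  /* take the references *before* VIDIOC_QBUF: once the buffer is queued,
   * another thread may dequeue and unref the memories at any time */
  for (i = 0; i < group->n_mem; i++)
    gst_memory_ref (group->mem[i]);

  if (ioctl (video_fd, VIDIOC_QBUF, &group->buffer) < 0)
    goto queue_failed;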
https://bugzilla.gnome.org/show_bug.cgi?id=777399
Unlike former definitions of LOG_CAPS, the current implementation simply
expands to GST_DEBUG_OBJECT. The LOG_CAPS macro is rarely used and most
uses duplicate already existing GST_DEBUG_OBJECT lines. Therefore, the
caps are often printed twice which unnecessarily clutters the debug log.
Replace the remaining useful LOG_CAPS calls with GST_DEBUG_OBJECT, drop
the redundant ones, and delete the definition of LOG_CAPS.
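The replacement is a plain debug statement, e.g.:

  GST_DEBUG_OBJECT (self, "caps: %" GST_PTR_FORMAT, caps);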
https://bugzilla.gnome.org/show_bug.cgi?id=776899
The buffer memory type provided to the VIDIOC_CREATE_BUFS ioctl shall
be set with the value ("memory") given as input parameter of the
gst_v4l2_allocator_probe() function.
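A sketch of the probe with the caller-supplied memory type (standard V4L2
structures; surrounding details simplified and illustrative):

  struct v4l2_create_buffers bcreate = { 0 };

  bcreate.memory = memory;            /* the type passed in by the caller */
  bcreate.format.type = queue_type;   /* illustrative */
  bcreate.count = 0;                  /* a count of 0 only probes support */

  if (ioctl (video_fd, VIDIOC_CREATE_BUFS, &bcreate) < 0)
    return FALSE;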
https://bugzilla.gnome.org/show_bug.cgi?id=777327
After commit 1ea9735a I see this error while using the webcam
integrated in my laptop:
GStreamer-CRITICAL **: gst_value_list_get_size: assertion 'GST_VALUE_HOLDS_LIST (value)' failed
The issue is that gst_v4l2src_value_simplify() was doing its job of
generating a single value rather than the original list. That is why,
when getting the list size, a critical warning was raised.
This patch first checks whether the value was simplified and, if so,
uses it directly; otherwise, if it is still a list, it checks the list
size.
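A sketch of the intended order of checks (the exact signature of
gst_v4l2src_value_simplify() is simplified here and may differ):

  if (gst_v4l2src_value_simplify (val)) {
    /* the value was reduced to a single value, use it directly */
  } else if (GST_VALUE_HOLDS_LIST (val)) {
    guint n = gst_value_list_get_size (val);
    /* only query the size when the value really is a list */
  }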
https://bugzilla.gnome.org/show_bug.cgi?id=776106
If for some reason we fail to probe formats (all try_fmt calls fail, for
example), this is not a critical error, but we end up with an empty list
of interlace modes. This causes all subsequent negotiation to fail.
With this patch, the interlace-mode setting is skipped if we failed to
detect any.
https://bugzilla.gnome.org/show_bug.cgi?id=775702
These can be called from different threads and both manipulate the
pool->buffers array. Lock them properly and let flush_stop move the
array contents into a temporary array on the stack to avoid having
to call release_buffer under the object lock.
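A sketch of the flush_stop side (VIDEO_MAX_FRAME is assumed to be the
size of the pool's buffers array; names illustrative):

  GstBuffer *copy[VIDEO_MAX_FRAME];
  guint i;

  GST_OBJECT_LOCK (pool);
  memcpy (copy, pool->buffers, sizeof (copy));
  memset (pool->buffers, 0, sizeof (pool->buffers));
  GST_OBJECT_UNLOCK (pool);

  for (i = 0; i < VIDEO_MAX_FRAME; i++) {
    if (copy[i] == NULL)
      continue;
    /* release_buffer() can now be called without holding the object lock */
  }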
https://bugzilla.gnome.org/show_bug.cgi?id=775015
If the pool is inactive, it is guaranteed to also be flushing, so the
following check will return GST_FLOW_FLUSHING anyway.
This can happen if a v4l2src is blocking on DQBUF in create and is sent
an EOS event on another thread. In that case the pool is set to
flushing/inactive without locking, the v4l2src is unblocked, and may
call pool_process with a valid buffer on the already inactive pool.
https://bugzilla.gnome.org/show_bug.cgi?id=775014
I've seen problems where the `bytesused` field of `v4l2_buffer` would be
a silly number causing the later call to:
gst_memory_resize (group->mem[i], 0, group->planes[i].bytesused);
to result in this error being printed:
(pulsevideo:11): GStreamer-CRITICAL **: gst_memory_resize: assertion 'size + mem->offset + offset <= mem->maxsize' failed
besides causing who-knows what other problems.
We make the assumption that this buffer has still been dequeued correctly,
so just clamp it to a valid size so that downstream elements won't end up
in undefined behaviour.
The invalid `v4l2_buffer` I saw from my capture device was:
buffer = {
index = 0,
type = 1,
bytesused = 534748928, // <- Invalid
flags = 8260, // V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC | V4L2_BUF_FLAG_ERROR | V4L2_BUF_FLAG_DONE
field = 01330, // <- Invalid
timestamp = {
tv_sec = 0,
tv_usec = 0
},
timecode = {
type = 0,
flags = 0,
frames = 0 '\000',
seconds = 0 '\000',
minutes = 0 '\000',
hours = 0 '\000',
userbits = "\000\000\000"
},
sequence = 0,
memory = 2,
m = {
offset = 3537219584,
userptr = 140706665836544, // Could be nonsense, not sure
planes = 0x7ff8d2d5b000,
fd = -757747712
},
length = 2764800,
reserved2 = 0,
reserved = 0
}
This is from gdb with my own annotations added.
This was with gst-plugins-good 1.8.1, a Magewell XI100DUSB-HDMI video
capture device and kernel 3.13 using a dodgy HDMI cable which is great at
breaking HDMI capture devices. I'm using io-mode=userptr and have built
gst-plugins-good without libv4l.
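A sketch of the clamp, using the same names as the gst_memory_resize()
call quoted above (exact code may differ):

  /* don't trust bytesused blindly: clamp it to the size of the memory we
   * allocated so that gst_memory_resize() cannot assert */
  if (group->planes[i].bytesused > group->mem[i]->maxsize) {
    GST_WARNING ("driver reported a bogus bytesused, clamping it");
    group->planes[i].bytesused = group->mem[i]->maxsize;
  }
  gst_memory_resize (group->mem[i], 0, group->planes[i].bytesused);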
https://bugzilla.gnome.org/show_bug.cgi?id=769765
QuickTime.h is no longer available on OS X 10.12 (Sierra),
and both the header and the framework seem unnecessary
for compilation - at least as of 10.11 (El Capitan).
https://bugzilla.gnome.org/show_bug.cgi?id=770526
https://github.com/mesonbuild/meson
With contributions from:
Tim-Philipp Müller <tim@centricular.com>
Jussi Pakkanen <jpakkane@gmail.com> (original port)
Highlights of the features provided are:
* Faster builds on Linux (~40-50% faster)
* The ability to build with MSVC on Windows
* Generate Visual Studio project files
* Generate XCode project files
* Much faster builds on Windows (on-par with Linux)
* Seriously fast configure and building on embedded
... and many more. For more details see:
http://blog.nirbheek.in/2016/05/gstreamer-and-meson-new-hope.html
http://blog.nirbheek.in/2016/07/building-and-developing-gstreamer-using.html
Building with Meson should work on both Linux and Windows, but may
need a few more tweaks on other operating systems.
Both work with autotools but they definitely don't mean the same thing, cause
problems with other build systems, and are bad form. Existence should always be
checked with #ifdef or #if defined.
D3DX has been deprecated for the last 4 years and latest versions of
Windows no longer ship headers for it. This is fine as long as you're
building with Cerbero's Wine-based DirectX headers, but sucks if you
want to build against the actual Windows SDK.
We were just using it to get error strings anyway, so just use the
generic error string API.
The type detection would lead to an assertion as it would try
to create a device without having found any type for it. It
also didn't detect MPLANE devices properly.
The monitor sets the object->element object as a GstObject. This
works for debug traces, but will assert for ELEMENT_ERROR. This
was the only case where that could happen. Add a check for that.
After switching to using V4L2_CAP_DEVICE_CAPS we lost support for
multiplanar device types. After some research, it looks like
vcap.capabilities treated the multiplanar flag of output and capture
devices equally, but not the new device_caps.
https://bugzilla.gnome.org/show_bug.cgi?id=768195
A typo in gst_v4l2_probe_and_register() caused a build error when building
with --enable-v4l2-probe. Fixing it.
gstv4l2.c: In function 'gst_v4l2_probe_and_register':
gstv4l2.c:150:25: error: 'struct v4l2_capability' has no member named 'capabilitites'
device_caps = vcap.capabilitites;
The same physical device can export multiple devices. In
this case, the capabilities field now contains a union of
all caps available from all exported V4L2 devices alongside
a V4L2_CAP_DEVICE_CAPS flag that should be used to decide
what capabilities to consider. In our case, we need the
ones from the exported device we are using.
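A sketch of the selection using the standard V4L2 definitions:

  guint32 device_caps;

  if (vcap.capabilities & V4L2_CAP_DEVICE_CAPS)
    device_caps = vcap.device_caps;   /* caps of this exported device only */
  else
    device_caps = vcap.capabilities;  /* older kernels: single set of caps */

  if (!(device_caps & (V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_VIDEO_CAPTURE_MPLANE |
              V4L2_CAP_VIDEO_OUTPUT | V4L2_CAP_VIDEO_OUTPUT_MPLANE)))
    return FALSE;   /* not a video capture/output device we can use */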
https://bugzilla.gnome.org/show_bug.cgi?id=768195
gst_v4l2_clear_error() doesn't work like g_clear_error(): it
doesn't NULLify the pointer. So set the freed debug string to NULL
so it doesn't get freed again if gst_v4l2_clear_error() is
called twice on the error.
CID 1362901
Instead of completely getting rid of the input buffer, copy
the metadata, the flags and the timestamp into an empty buffer.
This way the decoder base class can copy that information again
to the output buffer.
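A sketch of the copy; GST_BUFFER_COPY_METADATA covers flags, timestamps
and metas (inbuf/outbuf are illustrative names):

  /* keep the information the decoder base class needs to propagate to the
   * output buffer, even though the input data itself is dropped */
  gst_buffer_copy_into (outbuf, inbuf, GST_BUFFER_COPY_METADATA, 0, -1);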
https://bugzilla.gnome.org/show_bug.cgi?id=758424