For each video device, probe its input/output capabilities and,
if they match the video decoder requirements, register a new element.
Signed-off-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
https://bugzilla.gnome.org/show_bug.cgi?id=722128
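A rough sketch of what such a probe can look like at the V4L2 level (illustrative only; the helper name and the exact capability checks are assumptions, and the real code registers the element through the GStreamer plugin machinery):

    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    /* Hypothetical helper: returns non-zero if the device looks like an M2M
     * decoder, i.e. it exposes both an OUTPUT (bitstream in) and a CAPTURE
     * (decoded frames out) queue. */
    static int
    device_looks_like_decoder (const char *path)
    {
      struct v4l2_capability cap;
      unsigned caps_flags;
      int ok = 0;
      int fd = open (path, O_RDWR);

      if (fd < 0)
        return 0;

      if (ioctl (fd, VIDIOC_QUERYCAP, &cap) == 0) {
        /* Prefer device_caps when the driver provides per-node caps. */
        caps_flags = (cap.capabilities & V4L2_CAP_DEVICE_CAPS) ?
            cap.device_caps : cap.capabilities;
        ok = (caps_flags & V4L2_CAP_VIDEO_M2M) ||
            ((caps_flags & V4L2_CAP_VIDEO_CAPTURE) &&
             (caps_flags & V4L2_CAP_VIDEO_OUTPUT));
      }

      close (fd);
      return ok;
    }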
Don't enforce having width, height and framerate in template caps for encoded
formats. These don't always need to be exposed and may break negotiation for
decoders and decoding sinks. If needed, these fields will be automatically added
when the probed caps are known.
https://bugzilla.gnome.org/show_bug.cgi?id=720568
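For illustration, the difference in the template caps might look roughly like this (a sketch, not the elements' actual template caps):

    #include <gst/gst.h>

    /* Sketch: raw video keeps the size/framerate ranges in the template,
     * while an encoded format is listed bare; those fields are added later
     * once the probed caps are known. */
    static GstCaps *
    make_template_caps (void)
    {
      GstCaps *caps = gst_caps_from_string (
          "video/x-raw, "
          "width = (int) [ 1, 32768 ], height = (int) [ 1, 32768 ], "
          "framerate = (fraction) [ 0/1, 2147483647/1 ]");

      /* Encoded format: no width/height/framerate enforced here. */
      gst_caps_append (caps, gst_caps_from_string ("video/x-h264"));

      return caps;
    }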
If there is nothing that seems to force a certain framerate on the output device, it is
preferable to simply not set that field. This allows negotiation with tsdemux in a
decoder pipeline, for example.
https://bugzilla.gnome.org/show_bug.cgi?id=720568
Also add a FIXME in gst_v4l2_object_setup_format
to note that the whole function has to be improved
in order to support ENCODED formats.
That requires an encoder device, which we do not
have right now.
https://bugzilla.gnome.org/show_bug.cgi?id=720568
This method allows setting up the object from the currently configured format on the
device. This is useful for M2M elements, where the input data decides the format that
will be set on the capture side.
https://bugzilla.gnome.org/show_bug.cgi?id=720568
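The underlying idea, sketched with plain V4L2 calls (the gst-side wrapper is omitted and the function name is made up for this example):

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    /* Ask the driver what it currently produces on the capture queue of an
     * M2M device and use that to set up the element, instead of negotiating
     * a format ourselves. */
    static int
    read_current_capture_format (int fd, struct v4l2_format *fmt)
    {
      memset (fmt, 0, sizeof (*fmt));
      fmt->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

      /* VIDIOC_G_FMT returns the format the device has decided on, e.g.
       * after the bitstream headers were parsed on the OUTPUT side. */
      return ioctl (fd, VIDIOC_G_FMT, fmt);
    }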
The number of planes and the stride do not represent a capability change. The same caps
can have a stride different from the default GstVideoInfo, and the number of planes will
never change for a given format.
https://bugzilla.gnome.org/show_bug.cgi?id=720568
It makes gst_v4l2_object_set_format() slightly simpler and will make that
logic reusable. Note that gst_v4l2_object_has_mplane() will always return the
same value for one device. There is no need to check against the caps as this
has already been done by _open.
https://bugzilla.gnome.org/show_bug.cgi?id=720568
We set the dimensions just in case but don't validate them
afterwards. For some codecs the dimensions are *not* in the
bitstream, IIRC VC1 in ASF mode for example.
https://bugzilla.gnome.org/show_bug.cgi?id=720568
This API has been in the Linux kernel since version 2.6.39
and is present in all 3.x versions.
The commit that adds the API in the master branch of the
Linux kernel source is:
f8f3914cf9
v4l2 doc: "Some devices require data for each input
or output video frame to be placed in discontiguous
memory buffers"
There are newer structures 'struct v4l2_pix_format_mplane'
and 'struct v4l2_plane'.
So the pixel format is not set up with the same API when using
multi-planar.
Also for gst-v4l2, one of the differences is that in GstV4l2Meta
there is now one mem pointer for each mapped plane.
When not using multi-planar, this commit takes care of keeping
the same code path as previously, so that the two cases are
in two different blocks selected with V4L2_TYPE_IS_MULTIPLANAR.
Fixes bug https://bugzilla.gnome.org/show_bug.cgi?id=712754
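Roughly, the two code paths differ like this (a simplified sketch; error handling and the GstV4l2Meta bookkeeping are left out):

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    static int
    set_pixel_format (int fd, int type, unsigned pixfmt, unsigned w, unsigned h)
    {
      struct v4l2_format fmt;

      memset (&fmt, 0, sizeof (fmt));
      fmt.type = type;

      if (V4L2_TYPE_IS_MULTIPLANAR (type)) {
        /* Multi-planar path: struct v4l2_pix_format_mplane, with one
         * struct v4l2_plane (and later one mmap) per plane. */
        fmt.fmt.pix_mp.pixelformat = pixfmt;
        fmt.fmt.pix_mp.width = w;
        fmt.fmt.pix_mp.height = h;
      } else {
        /* Classic single-planar path, unchanged from before. */
        fmt.fmt.pix.pixelformat = pixfmt;
        fmt.fmt.pix.width = w;
        fmt.fmt.pix.height = h;
      }

      return ioctl (fd, VIDIOC_S_FMT, &fmt);
    }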
gst_video_info_from_caps() always extracts width, height, interlace mode and
framerate now. It is no longer necessary to do it again for encoded
formats.
https://bugzilla.gnome.org/show_bug.cgi?id=703399
Instead of just assuming an aspect ratio of 1/1, use VIDIOC_CROPCAP to ask
the device.
This also adds a pixel-aspect-ratio property to override the value from the
driver and a force-aspect-ratio property to ignore it.
https://bugzilla.gnome.org/show_bug.cgi?id=700285
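A sketch of the query (illustrative; the real code also honours the new properties mentioned above):

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    /* Ask the driver for its pixel aspect fraction; fall back to 1/1 if the
     * ioctl is not supported or returns nonsense. */
    static void
    query_pixel_aspect (int fd, unsigned *par_n, unsigned *par_d)
    {
      struct v4l2_cropcap cropcap;

      memset (&cropcap, 0, sizeof (cropcap));
      cropcap.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

      if (ioctl (fd, VIDIOC_CROPCAP, &cropcap) == 0 &&
          cropcap.pixelaspect.numerator != 0 &&
          cropcap.pixelaspect.denominator != 0) {
        /* Note: V4L2 defines pixelaspect as y/x, so mapping it onto
         * GStreamer's pixel-aspect-ratio may need the fraction inverted;
         * that detail is left out of this sketch. */
        *par_n = cropcap.pixelaspect.numerator;
        *par_d = cropcap.pixelaspect.denominator;
      } else {
        *par_n = 1;
        *par_d = 1;
      }
    }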
This makes it possible to set any controls that can be set with
VIDIOC_S_CTRL.
The controls are set when the property is set (if the device is open)
and when the device is opened.
https://bugzilla.gnome.org/show_bug.cgi?id=698837
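Under the hood each control boils down to a VIDIOC_S_CTRL call, roughly like this (sketch; the property's GstStructure parsing is not shown):

    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    /* Set one control by numeric id, e.g. V4L2_CID_BRIGHTNESS. */
    static int
    set_control (int fd, unsigned int id, int value)
    {
      struct v4l2_control ctrl;

      ctrl.id = id;
      ctrl.value = value;

      return ioctl (fd, VIDIOC_S_CTRL, &ctrl);
    }

    /* Usage: set_control (fd, V4L2_CID_BRIGHTNESS, 128); */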
This can happen if other parts of the pipeline are reconfigured.
Stopping streaming even for a short amount of time can be quite visible, so it
should be avoided if possible.
https://bugzilla.gnome.org/show_bug.cgi?id=700503
In the past gst_video_info_from_caps() only supported video/x-raw. Now it also
supports other video/* and image/* formats.
With this patch the format won't be GST_VIDEO_FORMAT_UNKNOWN and
gst_v4l2_buffer_pool_set_config() handles strides correctly.
https://bugzilla.gnome.org/show_bug.cgi?id=699570
The existence of a GstVideoFormatInfo does not guarantee that
the buffer contains video frames, so the format must be checked.
Also, for encoded buffers the length is variable and must be set.
https://bugzilla.gnome.org/show_bug.cgi?id=698949
If TRY_FMT is not implemented, gst_v4l2_object_get_nearest_size will
use S_FMT and will change the device's operation mode. To save the
old device mode we need to set the type field or else it will fail
to save the previous format.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=685209
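In other words, the save/restore around the S_FMT fallback needs the buffer type filled in before G_FMT will succeed. A sketch, assuming a capture queue:

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    static void
    try_size_with_s_fmt (int fd, struct v4l2_format *try_fmt)
    {
      struct v4l2_format saved;

      /* Without setting .type, VIDIOC_G_FMT fails and there is nothing
       * to restore afterwards. */
      memset (&saved, 0, sizeof (saved));
      saved.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

      if (ioctl (fd, VIDIOC_G_FMT, &saved) < 0)
        return;

      /* S_FMT changes the device's current format... */
      ioctl (fd, VIDIOC_S_FMT, try_fmt);

      /* ...so put the previous one back. */
      ioctl (fd, VIDIOC_S_FMT, &saved);
    }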
v4l has added a new ioctl to export a buffer by using dmabuf.
This patch allows using this new ioctl if it has been defined in videodev2.h.
I introduce a new IO mode (GST_V4L2_IO_DMABUF) to enable this way of working.
https://bugzilla.gnome.org/show_bug.cgi?id=693826
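The ioctl in question is VIDIOC_EXPBUF; a minimal sketch of exporting one MMAP buffer as a dmabuf fd, guarded the same way as in the patch (only compiled when videodev2.h provides it):

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    #ifdef VIDIOC_EXPBUF
    /* Export buffer `index` of the capture queue as a dmabuf file descriptor.
     * Returns the fd, or -1 on error. */
    static int
    export_dmabuf (int fd, unsigned int index)
    {
      struct v4l2_exportbuffer expbuf;

      memset (&expbuf, 0, sizeof (expbuf));
      expbuf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
      expbuf.index = index;

      if (ioctl (fd, VIDIOC_EXPBUF, &expbuf) < 0)
        return -1;

      return expbuf.fd;
    }
    #endif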
UVC devices are never interlaced, and doing VIDIOC_TRY_FMT on them
causes expensive and slow USB IO, so don't probe them for interlaced formats.
This shaves 2 seconds off the startup time of cheese with a Logitech
Webcam Pro 9000.
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=677722
This is not enough to properly support H264 cameras, but it will
allow an H264 stream to be generated by v4l2src using the default
settings of the camera. If used with the pre-set-format signal, the
H264 encoder can be fully configured.
Conflicts:
sys/v4l2/gstv4l2object.c
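As an illustration, something along these lines can already record from such a camera with its current defaults (hypothetical pipeline; element availability depends on the installed plugins):

    #include <gst/gst.h>

    /* Sketch: request H264 directly from v4l2src; gst_init() must have been
     * called before this. */
    static GstElement *
    make_h264_capture_pipeline (void)
    {
      return gst_parse_launch (
          "v4l2src ! video/x-h264 ! h264parse ! matroskamux ! "
          "filesink location=capture.mkv", NULL);
    }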
We don't currently support setting the pixel-aspect-ratio from V4L2, so
simply set it to 1/1 in the caps to prevent negotiation failures when
fixating to weird values (e.g. when the downstream caps have
pixel-aspect-ratio = [ MIN, MAX ]).
https://bugzilla.gnome.org/show_bug.cgi?id=663580
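A sketch of what that amounts to (illustrative helper, not the actual code):

    #include <gst/gst.h>

    /* Pin the pixel-aspect-ratio to 1/1 before the caps are used for
     * negotiation, so fixation cannot pick an arbitrary value. */
    static void
    force_square_pixels (GstCaps *caps)
    {
      guint i;

      for (i = 0; i < gst_caps_get_size (caps); i++)
        gst_structure_set (gst_caps_get_structure (caps, i),
            "pixel-aspect-ratio", GST_TYPE_FRACTION, 1, 1, NULL);
    }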
Some drivers are buggy and will change the current format when
processing VIDIOC_TRY_FMT. Save and restore the current format
to ensure the format is kept unchanged.
https://bugzilla.gnome.org/show_bug.cgi?id=649067
When nobody is using our pool, activate it ourselves.
Avoid leaking the buffer array.
Set default pool configuration with caps.
Don't keep current_caps, core does that for us now.
Use the more specialized type for the bufferpool.
Use the size from the driver as the size of the image to read.
Don't configure the pool when created. This will be done in the setup_allocation
method later or by upstream for sinks.
Remove unused properties and variables. Bufferpool sizes are now configured in
the bufferpool by the elements in the pipeline. We might want to influence the
pool size later somehow.
Remove old method, use new _process method for the sink.
Inform the parent bufferpool class about the settings too. This is needed to let
it know about the max-buffers.
Allocate the negotiated max-buffers and initially mmap min-buffers. The idea is
that the bufferpool will allocate more when needed.
Improve debugging.
Only poll in capture mode, it does not seem to work in playback mode on this
beagleboard.
Add different transport methods to the bufferpool (MMAP and READ/WRITE)
Do more parsing of the bufferpool config.
Start and stop streaming based on the bufferpool state.
Make separate methods for getting a buffer from the pool and filling it with
data. This allows us to fill buffers from other pools too. Either use copy or
read to fill up the target buffers.
Add property to force a transfer mode in v4l2src.
Increase default number of buffers to 4.
Negotiate bufferpool and its properties in v4l2src.
Pass the caps and the default video size to the bufferpool config.
Don't activate the bufferpool, this will be done by the object that decides to
use the bufferpool.
Improve debugging and error reporting.
Extend from GstBufferPool.
Handle the lifetime of the pool buffers correctly with the start/stop vmethods.
Map acquire and release directly to QBUF and DQBUF (see the sketch after these
notes). We still expose an explicit qbuf for the v4l2sink for now.
Move the details of how to capture to the device object. Remove the
v4l2src_calls.[ch] files because they are empty now.
Provide two simple methods to get and return a buffer to the device.
Also do a slow copy when the buffer is not from our pool.
Rename start and stop methods to open and close because that is what they do.
After setting the format on the device object, setup the bufferpools. Move this
code from the v4l2src_calls.c file, it is shared between source and sink.
Make new device start and stop method that merges various bits of common code
spread over several files.
We want to keep the default strides in the videoinfo. Keep the stride of the
video frames separate so that we can use both to copy a video frame and do
correct stride conversion.
Use GstVideoInfo to store the parsed caps.
Remove outsize from the caps parsing code, it's wrong because it does not use
the stride given by the driver.
Move the configuration of the framerate to where we set the other format
parameters.
Remove hack to check if the device is active.
Store streamparm in the device info.
Use some macros to access the current device configuration.
Remove some duplicate fields in src and sink and use the device configuration
instead.
Pass the caps to the set_format function and make _set_format parse the caps.
Also keep the parsed values in the v4l2object so that we can refer to them when
we want.
Keep track of the currently configured format and setting in the
v4l2object.
Pass the v4l2object to the bufferpool constructor so that the bufferpool can
know everything about the currently configured settings. This also allows us
to remove some awkward code.
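A bare-bones illustration of the acquire/release to QBUF/DQBUF mapping mentioned above (plain V4L2, without the GstBufferPool plumbing; capture queue and MMAP memory are assumed):

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    /* release_buffer(): hand an MMAP buffer back to the driver. */
    static int
    queue_buffer (int fd, unsigned int index)
    {
      struct v4l2_buffer buf;

      memset (&buf, 0, sizeof (buf));
      buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
      buf.memory = V4L2_MEMORY_MMAP;
      buf.index = index;

      return ioctl (fd, VIDIOC_QBUF, &buf);
    }

    /* acquire_buffer(): wait for the driver to fill a buffer, return its index. */
    static int
    dequeue_buffer (int fd)
    {
      struct v4l2_buffer buf;

      memset (&buf, 0, sizeof (buf));
      buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
      buf.memory = V4L2_MEMORY_MMAP;

      if (ioctl (fd, VIDIOC_DQBUF, &buf) < 0)
        return -1;

      return buf.index;
    }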
Based on a patch by Guennadi Liakhovetski.
v2: updates because I forgot to add GstTuner interface to v4l2sink
v3: update to add all possible values to norm enum
Commit 6c8268dbfd broke recording
from interlaced v4l2 source (e.g. typical tv capture card) since
V4L2_FIELD_SEQ_TB (with fields stored separately) does not map
to the currently defined interlaced format (fields stored interleaved).
Besides this mismatch, hardware might quite likely not support or
appreciate this field value, since querying supported formats mapped
_INTERLACED field formats to interlaced=true caps (so the latter should
not be mapped to a field value that is not known to be supported).
Older kernels don't have these, and there's no easy way to check for the
existence of enums that doesn't involve a configure check, so just define
these if the V4L2_CAP_VIDEO_OUTPUT_OVERLAY define is not there, which was
added in the same commit as the TB/BT enum. Fixes compilation on CentOS 5.
https://bugzilla.gnome.org/show_bug.cgi?id=639339
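The workaround amounts to something like this (values as in the upstream videodev2.h; shown only for illustration):

    #include <linux/videodev2.h>

    /* Older kernel headers (e.g. CentOS 5) predate these; the TB/BT field
     * enums were added in the same kernel commit as this capability flag,
     * so its absence is used as the indicator. */
    #ifndef V4L2_CAP_VIDEO_OUTPUT_OVERLAY
    #define V4L2_FIELD_INTERLACED_TB 8
    #define V4L2_FIELD_INTERLACED_BT 9
    #endif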
This reverts commit 9e1d419d07.
Reverting this since it adds unreviewed and bad API to v4l2src
(property of type enum, with seemingly random and unsorted values).
with i686-apple-darwin10-gcc-4.2.1:
gstv4l2object.c: In function 'gst_v4l2_object_get_nearest_size':
gstv4l2object.c:1988: warning: format '%u' expects type 'unsigned int', but argument 12 has type 'gint *'
gstv4l2object.c:1988: warning: format '%u' expects type 'unsigned int', but argument 13 has type 'gint *'
MPEG doesn't have a static size per frame, so don't pretend it has one
and fail when capturing because it doesn't match. Instead mark the size
as unknown and let the read frame grabbing method use a reasonable fallback
value (assuming that's only for actual streaming formats)
Fixes bug #628349.
The format list should be sorted from high ranks to low ranks. In the GSList
sorting function this means the compare needs to return a positive value if
format a has a lower rank than format b.
Among other things this fixes v4l2src to prefer non-emulated formats
to emulated formats when built against libv4l.
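For illustration, the compare function then looks something like this (the rank lookup helper is hypothetical; the real code compares the internal format ranking):

    #include <glib.h>

    /* Hypothetical: returns a numeric rank for a format, higher is better. */
    extern gint get_format_rank (gconstpointer format);

    /* g_slist_sort() puts entries with smaller compare results first, so to
     * get high-rank formats at the head we return a positive value when
     * `a` ranks lower than `b`. */
    static gint
    format_cmp_rank (gconstpointer a, gconstpointer b)
    {
      return get_format_rank (b) - get_format_rank (a);
    }

    /* Usage: formats = g_slist_sort (formats, format_cmp_rank); */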