Downstream likely won't accept video/x-raw and the caps query
will return EMPTY caps. Instead, create a copy of the caps that
has all structure names replaced by 'image/jpeg'.
Simple pipeline that shows the problem:
gst-launch-1.0 videotestsrc num-buffers=1 ! "video/x-raw, \
width=(int)640, height=(int)480" ! videoscale ! jpegenc ! \
"image/jpeg, width=(int)800, height=(int)600" ! filesink \
location=/tmp/image.jpg
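A minimal sketch of that caps transformation (the helper name is illustrative,
not the actual jpegenc function; the GstCaps calls are the real API):

  /* Make a writable copy of the raw caps and rename every structure to
   * image/jpeg so downstream does not reject them outright. */
  static GstCaps *
  rename_structures_to_jpeg (const GstCaps * raw_caps)
  {
    GstCaps *jpeg_caps = gst_caps_copy (raw_caps);
    guint i, n = gst_caps_get_size (jpeg_caps);

    for (i = 0; i < n; i++)
      gst_structure_set_name (gst_caps_get_structure (jpeg_caps, i),
          "image/jpeg");

    return jpeg_caps;
  }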
This fixes a not-negotiated error, at least on MOV files with two-channel
'twos' audio and 'dvcp' video. The playbin pipeline and the gst-launch
sample from the qtdemux.c file use audioconvert, and the latter requires
the interleaved format.
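A hedged sketch of the kind of caps adjustment this implies (the exact place
in the code differs; 'srccaps' is an illustrative variable):

  /* Advertise interleaved layout on the output caps so a downstream
   * audioconvert can negotiate with them. */
  gst_caps_set_simple (srccaps,
      "layout", G_TYPE_STRING, "interleaved", NULL);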
https://bugzilla.gnome.org/show_bug.cgi?id=675326
When all pads go to EOS immediately, we are not negotiated and our collected
function is called (without any available data). Handle this case gracefully.
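A hedged sketch of the graceful handling (the callback shape and struct
fields are illustrative, not the exact interleave.c code):

  static GstFlowReturn
  interleave_collected (GstCollectPads * pads, gpointer user_data)
  {
    GstInterleave *self = user_data;

    /* All sink pads went EOS right away: nothing was ever queued and
     * output caps were never negotiated.  Push EOS downstream instead
     * of failing with not-negotiated. */
    if (!self->negotiated && gst_collect_pads_available (pads) == 0) {
      gst_pad_push_event (self->srcpad, gst_event_new_eos ());
      return GST_FLOW_EOS;
    }

    /* ... normal interleaving path ... */
    return GST_FLOW_OK;
  }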
Conflicts:
gst/interleave/interleave.c
When we explicitly set the mute property to FALSE, connect to pulseaudio with
the PA_STREAM_START_UNMUTED flag set; otherwise pulseaudio will reuse its
previously stored value (which might start the stream muted).
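A hedged sketch of the connect path (the mute-tracking variables are
illustrative; the flags and pa_stream_connect_playback() are the real
PulseAudio API):

  pa_stream_flags_t flags = PA_STREAM_INTERPOLATE_TIMING |
      PA_STREAM_AUTO_TIMING_UPDATE | PA_STREAM_ADJUST_LATENCY;

  /* Only force the initial mute state when the application explicitly
   * set the mute property; otherwise let pulseaudio restore whatever it
   * stored for this stream last time. */
  if (mute_is_set)
    flags |= mute ? PA_STREAM_START_MUTED : PA_STREAM_START_UNMUTED;

  pa_stream_connect_playback (stream, NULL /* default device */,
      &buffer_attr, flags, NULL, NULL);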
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=672401
Sample the pipeline clock and the device clock closer to each other to reduce jitter.
Don't subtract the frame duration from the timestamp when we can use the device
timestamps.
Assume a delay of 1 frame in read-write mode.
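A hedged sketch of the sampling order (clock handling in the real v4l2src
code is more involved; 'element' stands in for the source):

  /* Read the device-side time and the pipeline clock back to back so
   * the measured offset between the two timelines carries as little
   * scheduling jitter as possible. */
  GstClock *clock = GST_ELEMENT_CLOCK (element);
  GstClockTime device_now, pipeline_now;

  device_now = g_get_monotonic_time () * GST_USECOND;  /* stand-in for the device clock */
  pipeline_now = gst_clock_get_time (clock) -
      gst_element_get_base_time (GST_ELEMENT (element));

  /* With usable device timestamps the buffer timestamp is derived from
   * them directly, without subtracting one frame duration. */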
Query the number of available buffers when doing set_config(). This allows us to
configure the parent bufferpool with the number of buffers to preallocate.
Keep track of the provided allocator and use it when we need to allocate a
buffer in RW mode.
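A hedged sketch of such a set_config() (the pool members allocator, params and
num_allocated are illustrative; the config helpers are the real GstBufferPool
API):

  static gboolean
  v4l2_pool_set_config (GstBufferPool * bpool, GstStructure * config)
  {
    GstV4l2BufferPool *pool = (GstV4l2BufferPool *) bpool;
    GstCaps *caps;
    guint size, min_buffers, max_buffers;
    GstAllocator *allocator;
    GstAllocationParams params;

    gst_buffer_pool_config_get_params (config, &caps, &size,
        &min_buffers, &max_buffers);

    /* Remember the provided allocator so we can allocate with it later
     * when a buffer is needed in RW mode. */
    gst_buffer_pool_config_get_allocator (config, &allocator, &params);
    if (pool->allocator)
      gst_object_unref (pool->allocator);
    pool->allocator = allocator ? gst_object_ref (allocator) : NULL;
    pool->params = params;

    /* Ask the device how many buffers it really provides (e.g. the
     * count returned by VIDIOC_REQBUFS) and let the parent pool
     * preallocate exactly that many. */
    gst_buffer_pool_config_set_params (config, caps, size,
        pool->num_allocated, pool->num_allocated);

    return GST_BUFFER_POOL_CLASS (parent_class)->set_config (bpool, config);
  }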
When we cannot allocate the requested max_buffers number of buffers, make
sure we keep 2 buffers around in the pool and copy them into an output buffer.
This makes sure that we always have a buffer to capture into. We also need to
detect those copied buffers and unref them when they return to the pool.
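A hedged sketch of one way to mark and later recognise the copies (the qdata
tagging shown here is an illustrative assumption, not necessarily what the
pool really does):

  static GQuark copied_quark;  /* g_quark_from_static_string ("v4l2-copied") */

  /* Hand out a copy so the driver-owned buffer stays available for
   * capture, and tag the copy so it can be told apart later. */
  static GstBuffer *
  copy_for_output (GstBuffer * capture_buf)
  {
    GstBuffer *copy = gst_buffer_copy (capture_buf);

    gst_mini_object_set_qdata (GST_MINI_OBJECT_CAST (copy), copied_quark,
        GINT_TO_POINTER (TRUE), NULL);
    return copy;
  }

  /* In release_buffer(): copies were never queued on the device, so
   * they must be dropped instead of being recycled into the pool. */
  static gboolean
  is_copied_output (GstBuffer * buf)
  {
    return gst_mini_object_get_qdata (GST_MINI_OBJECT_CAST (buf),
        copied_quark) != NULL;
  }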
Only free the queued buffers that we keep track of in our buffer array. For
RW io-mode we do allocate buffers, but we don't keep track of them in the
buffer array.
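A hedged sketch of that free path (the array fields are illustrative):

  /* Only buffers that were queued on the device are tracked in the
   * array; RW-mode buffers are allocated on demand and released through
   * normal ref-counting, so they are skipped here. */
  for (i = 0; i < pool->num_buffers; i++) {
    if (pool->buffers[i] != NULL) {
      gst_buffer_unref (pool->buffers[i]);
      pool->buffers[i] = NULL;
    }
  }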
This is not enough to properly support H264 cameras, but it will
allow an H264 stream to be generated by v4l2src using the default
settings of the camera. If used with the pre-set-format signal, the
H264 encoder can be fully configured.
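A hedged sketch of what this enables from an application (the pipeline string
and muxer choice are illustrative):

  /* Request H264 straight from the camera with its current defaults. */
  GError *error = NULL;
  GstElement *pipeline = gst_parse_launch (
      "v4l2src ! video/x-h264 ! h264parse ! matroskamux ! "
      "filesink location=/tmp/capture.mkv", &error);

  if (pipeline != NULL)
    gst_element_set_state (pipeline, GST_STATE_PLAYING);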
Conflicts:
sys/v4l2/gstv4l2object.c
In order to support UVC H264 encoding cameras, an H264 Probe&Commit
must happen before the normal v4l2 set-format. This new signal is
meant to allow an external application or bin to do it.
It also serves to expose the file descriptor used by v4l2src in case
some custom ioctls need to be called.
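A hedged usage sketch; the signal name follows the description above, but its
exact parameter list is an assumption made for illustration:

  /* Assumed callback shape: v4l2src hands over its file descriptor (and
   * the caps it is about to set) before doing the normal set-format, so
   * the UVC H264 Probe & Commit or other custom ioctls can run on fd. */
  static void
  on_pre_set_format (GstElement * v4l2src, gint fd, GstCaps * caps,
      gpointer user_data)
  {
    /* issue UVC extension-unit ioctls on fd here */
  }

  /* during pipeline setup: */
  g_signal_connect (v4l2src, "pre-set-format",
      G_CALLBACK (on_pre_set_format), NULL);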
Conflicts:
sys/v4l2/Makefile.am
sys/v4l2/gstv4l2src.c
sys/v4l2/v4l2src_calls.c