We need to handle the case where a flush occurs while the streaming
thread is being brought up. In this case, the flushing state of the poll
object is cleared. To solve this, we simply set the capture poll to
flushing again; this way we know the thread will exit. The decoder
stream lock is used to synchronize with handle_frame().
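
A minimal sketch of the idea (not the actual gstv4l2videodec code),
assuming the decoder keeps a flushing flag and a GstPoll for the capture
queue; only GST_VIDEO_DECODER_STREAM_LOCK and gst_poll_set_flushing are
existing API here:

    #include <gst/gst.h>
    #include <gst/video/gstvideodecoder.h>

    static void
    mark_capture_flushing_if_needed (GstVideoDecoder * decoder,
        GstPoll * capture_poll, gboolean flushing)
    {
      /* Taken while bringing up the capture (streaming) thread; the
       * same lock is held in handle_frame, which is what serializes
       * the two paths. */
      GST_VIDEO_DECODER_STREAM_LOCK (decoder);

      if (flushing) {
        /* A flush raced with the thread start-up and cleared the
         * poll's flushing state, so set it again to guarantee the
         * streaming thread wakes up and exits. */
        gst_poll_set_flushing (capture_poll, TRUE);
      }

      GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
    }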
M2M devices were sharing the same properties as src and sink. Most of
these made no sense. This patch reduces the number of properties and
makes io-mode clearer by having capture-io-mode and output-io-mode. This
also accidentally fixed a bug in the gstv4l2transform io-mode code,
where the capture io-mode could not be set.
https://bugzilla.gnome.org/show_bug.cgi?id=729591
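
A hedged usage sketch of the split properties; the element name
"v4l2video0dec" depends on the actual device, and the "mmap"/"dmabuf"
nicks are assumptions about the io-mode enum, while capture-io-mode and
output-io-mode are the two properties described above:

    #include <gst/gst.h>

    int
    main (int argc, char **argv)
    {
      GstElement *dec;

      gst_init (&argc, &argv);

      dec = gst_element_factory_make ("v4l2video0dec", NULL);
      if (dec == NULL)
        return 1;

      /* output-io-mode configures the V4L2 OUTPUT queue (encoded
       * input), capture-io-mode the CAPTURE queue (decoded output). */
      gst_util_set_object_arg (G_OBJECT (dec), "output-io-mode", "mmap");
      gst_util_set_object_arg (G_OBJECT (dec), "capture-io-mode", "dmabuf");

      gst_object_unref (dec);
      return 0;
    }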
If the driver needs more buffers than requested by the config, update
the pool min/max values. The minimum value for the pool can be provided
either by the driver or by the pool. This is best effort for drivers
that don't support the V4L2_CID_MIN_BUFFERS_FOR_CAPTURE control.
https://bugzilla.gnome.org/show_bug.cgi?id=730200
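
Roughly, the best-effort query could look like the sketch below; the
fallback policy and the config_min parameter are assumptions, only
VIDIOC_G_CTRL and V4L2_CID_MIN_BUFFERS_FOR_CAPTURE are standard V4L2:

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    static unsigned int
    query_driver_min_buffers (int fd, unsigned int config_min)
    {
      struct v4l2_control ctrl;

      memset (&ctrl, 0, sizeof (ctrl));
      ctrl.id = V4L2_CID_MIN_BUFFERS_FOR_CAPTURE;

      /* Not all drivers implement this control; fall back to the
       * configured minimum when the query fails. */
      if (ioctl (fd, VIDIOC_G_CTRL, &ctrl) < 0)
        return config_min;

      /* The driver may need more buffers than the config asked for,
       * in which case the pool min/max should be raised to match. */
      if ((unsigned int) ctrl.value > config_min)
        return (unsigned int) ctrl.value;

      return config_min;
    }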
This allows calling start streaming later for the capture device.
Currently it breaks in dmabuf-import mode because downstream is holding
a buffer that will only be released after stream-start.
https://bugzilla.gnome.org/show_bug.cgi?id=730207
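
A rough illustration, assuming that "start streaming later" means
deferring VIDIOC_STREAMON until enough buffers have been queued back by
downstream; the threshold and the bookkeeping are made up, the ioctl and
buffer type are standard V4L2:

    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    static int
    maybe_start_streaming (int fd, unsigned int queued,
        unsigned int min_queued, int *streaming)
    {
      int type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

      /* Wait until downstream has released enough buffers back to
       * the pool before starting the capture queue. */
      if (*streaming || queued < min_queued)
        return 0;

      if (ioctl (fd, VIDIOC_STREAMON, &type) < 0)
        return -1;

      *streaming = 1;
      return 0;
    }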
1) sources that have sent BYE in the past cannot be senders, since
they would have timed out to become receivers in the meantime...
2) sources that have sent BYE are now being removed earlier inside
this function
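
A loose sketch of the two points above; the real GstRTPSource and
session cleanup code look different, so the struct and the timeout
handling below are purely illustrative:

    typedef struct
    {
      int sent_bye;        /* source has sent a BYE packet          */
      int is_sender;       /* source currently counted as a sender  */
      long last_activity;  /* time of the last RTP packet           */
    } ExampleSource;

    /* Returns non-zero if the source should be removed now. */
    static int
    cleanup_source (ExampleSource * src, long now, long bye_timeout,
        long sender_timeout)
    {
      /* 2) BYE sources are handled first, i.e. removed earlier in
       *    the cleanup pass, so they never reach the sender checks
       *    below. */
      if (src->sent_bye)
        return (now - src->last_activity) > bye_timeout;

      /* 1) only non-BYE sources can still be senders; a source that
       *    sent BYE would have timed out back to a receiver by now. */
      if (src->is_sender && (now - src->last_activity) > sender_timeout)
        src->is_sender = 0;

      return 0;
    }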
When inserting a packet into the jitter queue, we need to keep looping
through the items until the right position is found. Currently, the
code stops as soon as an event is found in the queue.
Regarding events, we should only move packets ahead of an event if
there is another packet before the event that has a larger seqnum.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=730078
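
A simplified model of that rule, assuming queue items are either packets
or events (not the real rtpjitterbuffer structures): keep walking past
events and insert before the first packet with a larger seqnum, so the
new packet only ends up ahead of an event when a larger-seqnum packet
already sits before that event:

    #include <glib.h>

    typedef struct
    {
      gboolean is_event;
      guint16 seqnum;             /* only meaningful when !is_event */
    } Item;

    /* TRUE if seqnum a is later than b, with 16-bit wraparound. */
    static gboolean
    seqnum_gt (guint16 a, guint16 b)
    {
      return (gint16) (a - b) > 0;
    }

    static void
    insert_packet (GQueue * queue, Item * packet)
    {
      GList *walk;

      for (walk = queue->head; walk != NULL; walk = walk->next) {
        Item *item = walk->data;

        /* An event is not a stopping point: keep looping. */
        if (item->is_event)
          continue;

        if (seqnum_gt (item->seqnum, packet->seqnum))
          break;
      }

      if (walk != NULL)
        g_queue_insert_before (queue, walk, packet);
      else
        g_queue_push_tail (queue, packet);
    }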
When extrapolating the offset, we need to use the extrapolated stride
rather than the base stride. This should fix support for formats with
more than two planes (I420, Y42B, etc.).
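
A toy calculation for I420 showing the difference: the plane offsets
have to be accumulated from the extrapolated per-plane strides (what
the hardware actually uses), not from the base strides of the format.
The padded height input and the helper itself are illustrative:

    #include <stddef.h>

    static void
    i420_offsets_from_extrapolated_stride (size_t ext_stride_y,
        size_t padded_height, size_t stride[3], size_t offset[3])
    {
      stride[0] = ext_stride_y;       /* Y plane                   */
      stride[1] = ext_stride_y / 2;   /* U plane, half the width   */
      stride[2] = ext_stride_y / 2;   /* V plane, half the width   */

      offset[0] = 0;
      /* Each offset advances by the previous plane's extrapolated
       * size, i.e. extrapolated stride times (sub-sampled) height. */
      offset[1] = offset[0] + stride[0] * padded_height;
      offset[2] = offset[1] + stride[1] * (padded_height / 2);
    }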
When doing frame operations, we need to use the default VideoInfo and
let the frame API read the video meta in order to get the stride and
offset right. Currently we were using the specialized VideoInfo, which
reflects what the HW is set up for.
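
A minimal sketch of the intended pattern, assuming a simple copy helper:
map both buffers with the default GstVideoInfo and let
gst_video_frame_map() pick up stride and offset from the GstVideoMeta
attached to each buffer:

    #include <gst/video/video.h>

    static gboolean
    copy_frame (GstVideoInfo * default_info, GstBuffer * src,
        GstBuffer * dst)
    {
      GstVideoFrame src_frame, dst_frame;
      gboolean ret;

      /* The video meta on each buffer (if any) overrides the plane
       * offsets and strides from default_info during mapping. */
      if (!gst_video_frame_map (&src_frame, default_info, src,
              GST_MAP_READ))
        return FALSE;

      if (!gst_video_frame_map (&dst_frame, default_info, dst,
              GST_MAP_WRITE)) {
        gst_video_frame_unmap (&src_frame);
        return FALSE;
      }

      ret = gst_video_frame_copy (&dst_frame, &src_frame);

      gst_video_frame_unmap (&dst_frame);
      gst_video_frame_unmap (&src_frame);

      return ret;
    }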
Simplify framerate field if possible, so we don't end up with
e.g. framerate = (fraction) { 30/1 }. Maybe the helper function
should be moved to core, but we can do this later.
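
One possible shape for such a helper (its name and final home are open
questions, this is only an illustration): collapse a one-element list
like { 30/1 } into the plain value:

    #include <gst/gst.h>

    static void
    simplify_field (GstStructure * s, const gchar * field)
    {
      const GValue *val = gst_structure_get_value (s, field);

      if (val == NULL || !GST_VALUE_HOLDS_LIST (val))
        return;

      /* A list with exactly one entry carries no extra information,
       * so replace it with that entry: { 30/1 } becomes 30/1. */
      if (gst_value_list_get_size (val) == 1) {
        GValue single = G_VALUE_INIT;
        const GValue *first = gst_value_list_get_value (val, 0);

        g_value_init (&single, G_VALUE_TYPE (first));
        g_value_copy (first, &single);
        gst_structure_set_value (s, field, &single);
        g_value_unset (&single);
      }
    }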
This moves away from copying information and stores everything inside
the GstVideoInfo structure. The alignment exposed by the v4l2 API is
now handled using proper offsets.
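
The gist, not the real gstv4l2object code: fold the driver's padded
layout directly into the GstVideoInfo strides and offsets. The NV12
two-plane case and the parameter names below are illustrative:

    #include <gst/video/video.h>

    static void
    apply_driver_layout_nv12 (GstVideoInfo * info, guint bytesperline,
        guint padded_height)
    {
      /* The stride reported by the driver replaces the default one. */
      GST_VIDEO_INFO_PLANE_STRIDE (info, 0) = bytesperline;
      GST_VIDEO_INFO_PLANE_STRIDE (info, 1) = bytesperline;

      /* Vertical padding shows up as a larger offset of the second
       * plane rather than as a separate alignment value. */
      GST_VIDEO_INFO_PLANE_OFFSET (info, 0) = 0;
      GST_VIDEO_INFO_PLANE_OFFSET (info, 1) =
          (gsize) bytesperline * padded_height;

      GST_VIDEO_INFO_SIZE (info) =
          (gsize) bytesperline * padded_height * 3 / 2;
    }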