check_version(1.23.1) would return TRUE for a git development version
like 1.23.0.1, which is quite confusing and somewhat unexpected.
We already fixed this up in the version check macros in !2501, so this
updates the run-time check accordingly as well.
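As an illustration only (a hypothetical helper, not the actual macro or
run-time check), the intended comparison could look like this, assuming a
git development version is identified by a non-zero nano component:

    #include <gst/gst.h>

    /* Sketch: a development version such as 1.23.0.1 (nano > 0) must not
     * satisfy a requirement on the later micro release 1.23.1. */
    static gboolean
    check_version (guint req_major, guint req_minor, guint req_micro)
    {
      guint major, minor, micro, nano;

      gst_version (&major, &minor, &micro, &nano);

      if (major != req_major)
        return major > req_major;
      if (minor != req_minor)
        return minor > req_minor;
      /* Do not treat nano > 0 as an implicit micro bump. */
      return micro >= req_micro;
    }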
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4513>
Unfortunately streamoff does not flush the events, and this can cause all
sorts of issues, so flush the events on the capture queue. We also return
GST_V4L2_FLOW_RESOLUTION_CHANGE in case a resolution change was seen.
This allows skipping streamon(capture) on flush, which could otherwise
lead to a configuration mismatch, or to failures if the buffers aren't of
the right size.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4437>
Let the driver detect the change and reconfigure the capture side
transparently from there. This avoids reallocating the output buffers
and eliminates the need to stop and restart the capture task. This only
happens if the driver supports it; otherwise the old behaviour is
maintained.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4437>
Stop doing capture buffer allocation based on guesses
and instead wait for the source change event when available.
Unlike the stateless decoder, the stateful decoder is not aware of
the coded resolution, and guessing may lead to the wrong result
even when using TRY_FMT.
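For reference, the V4L2 mechanism relied on here is the source change
event; a minimal sketch of subscribing to it and checking for a
resolution change (error handling omitted, fd assumed to be the open
video device):

    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    /* Subscribe once, typically right after opening the device. */
    static int
    subscribe_src_change (int fd)
    {
      struct v4l2_event_subscription sub = { 0 };

      sub.type = V4L2_EVENT_SOURCE_CHANGE;
      return ioctl (fd, VIDIOC_SUBSCRIBE_EVENT, &sub);
    }

    /* Call after poll() reports POLLPRI; returns non-zero if the coded
     * resolution changed and the capture side needs renegotiating. */
    static int
    resolution_changed (int fd)
    {
      struct v4l2_event ev = { 0 };

      if (ioctl (fd, VIDIOC_DQEVENT, &ev) != 0)
        return 0;
      return ev.type == V4L2_EVENT_SOURCE_CHANGE &&
          (ev.u.src_change.changes & V4L2_EVENT_SRC_CH_RESOLUTION);
    }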
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4437>
In the previous implementation that job was split between handle_frame
and the processing loop, and it wasn't clear whether this mechanism was
race free. The capture setup would also be attempted for every buffer,
which was not necessary.
This also simplifies the handling of the SRC_CH event, dropping the
unneeded atomic boolean.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4437>
During a seek flush, the GStreamer v4l2 buffer pool flush is not atomic,
which can lead to buffers being enqueued (qbuf) twice, and the v4l2
buffer pool qbuf is not atomic either, which can lead to no free buffer
being found in the pool.
1. Add a lock around the enqueued-buffer count calculation in the
   streamon function.
2. Add a lock around streamoff of the v4l2 capture end in the pool
   flush function.
3. Lock the whole v4l2 buffer pool qbuf function, so that the buffer
   pool index bookkeeping and the qbuf operation are atomic (see the
   sketch below).
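As an illustration only, with hypothetical names rather than the actual
gstv4l2bufferpool code, the intent of point 3 is to keep the free-index
bookkeeping and the QBUF call under one lock:

    #include <gst/gst.h>

    /* Sketch: index bookkeeping and the actual queuing happen under the
     * same lock, so a concurrent flush cannot interleave and cause a
     * double qbuf. The lock is assumed initialized with g_mutex_init(). */
    typedef struct {
      GMutex lock;
      GstBuffer *buffers[32];   /* index -> queued buffer, NULL if free */
      guint num_queued;
    } MyPool;

    static gboolean
    my_pool_qbuf (MyPool * pool, GstBuffer * buf, guint index,
        gboolean (*qbuf_ioctl) (GstBuffer *, guint))
    {
      gboolean ok = FALSE;

      g_mutex_lock (&pool->lock);
      if (pool->buffers[index] == NULL && qbuf_ioctl (buf, index)) {
        pool->buffers[index] = buf;
        pool->num_queued++;
        ok = TRUE;
      }
      g_mutex_unlock (&pool->lock);

      return ok;
    }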
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4465>
When renegotiation happens, v4l2src checks whether it can reuse the
current caps, but it needs to check whether the current caps are a
subset of all the caps queried from downstream, instead of checking
them against each queried caps entry one by one.
If the current caps are not a subset of the first caps entry from the
query, v4l2src will go on to try the format; when TRY_FMT succeeds, it
sets pending_set_fmt to TRUE and goes on to reset.
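A minimal sketch of the intended check (illustrative names, not the
actual v4l2src code), comparing the current caps against the full set
of caps queried from downstream with gst_caps_is_subset():

    #include <gst/gst.h>

    /* Sketch: decide whether the currently configured caps can be reused. */
    static gboolean
    can_reuse_current_caps (GstPad * srcpad, GstCaps * current_caps)
    {
      GstCaps *allowed;
      gboolean reuse;

      /* All caps downstream can accept, not just the first entry. */
      allowed = gst_pad_peer_query_caps (srcpad, NULL);
      if (allowed == NULL)
        return FALSE;

      reuse = gst_caps_is_subset (current_caps, allowed);
      gst_caps_unref (allowed);

      return reuse;
    }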
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4500>
This allows better control over the way discovery happens and lets us
expose a proper API.
It also adds the potential for implementing more multi-threaded
discovery in a clean way in the future.
This allows us to cleanly expose the new
GstDiscoverer::load-serialize-info signal.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3911>
This reverts commit f29c19be58. If this is
called for the reference context then we would run into an infinite
loop, which is not really better than an assertion.
By fixing up the DTS to never be ahead of the PTS in the previous
commit, this situation should be impossible to hit now.
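For context, the fix-up referred to amounts to clamping the DTS so it
never exceeds the PTS; a minimal sketch of that kind of adjustment (not
the exact code from the previous commit):

    #include <gst/gst.h>

    /* Sketch: make sure a buffer's DTS is never ahead of its PTS. */
    static void
    clamp_dts (GstBuffer * buf)
    {
      if (GST_BUFFER_PTS_IS_VALID (buf) && GST_BUFFER_DTS_IS_VALID (buf) &&
          GST_BUFFER_DTS (buf) > GST_BUFFER_PTS (buf))
        GST_BUFFER_DTS (buf) = GST_BUFFER_PTS (buf);
    }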
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4498>
decodebin3 will do its best to figure out whether a parsebin is required to
process the incoming stream.
The problem is that for push-based streams it could happen that the stream
would not provide any caps, resulting in nothing being linked internally.
Furthermore, there is the possibility that a stream *with* caps would not be
using a TIME segment, which is required for multiqueue to work properly.
In order to fix those two issues, we force the usage of parsebin on push-based
streams:
* When the pad is linked, if upstream can't provide any caps
* When we get a non-TIME segment
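A minimal sketch of that decision (illustrative only, not the actual
decodebin3 code):

    #include <gst/gst.h>

    /* Sketch: on a push-based stream, force parsebin if upstream cannot
     * provide usable caps or the incoming segment is not in TIME format. */
    static gboolean
    needs_parsebin (GstPad * sinkpad, const GstSegment * segment)
    {
      GstCaps *caps;
      gboolean force = FALSE;

      caps = gst_pad_peer_query_caps (sinkpad, NULL);
      if (caps == NULL || gst_caps_is_empty (caps) || gst_caps_is_any (caps))
        force = TRUE;
      if (caps)
        gst_caps_unref (caps);

      if (segment != NULL && segment->format != GST_FORMAT_TIME)
        force = TRUE;

      return force;
    }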
Fixes #2521
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4492>
The current implementation can in some cases determine
that all data has been sent when in reality it has not,
leading to a push to an unlinked pad.
This is a race between the probe used to track the data sent and a
call to close.
This patch sends an EOS before starting the close procedure
and then waits for the EOS event to come through to the
src pad before commencing with tear-down.
This ensures that any data queued before the EOS is flushed.
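A minimal sketch of this sequence (hypothetical helper and names),
assuming the wait is implemented with a downstream event probe on the
src pad:

    #include <gst/gst.h>

    typedef struct {
      GMutex lock;
      GCond cond;
      gboolean eos_seen;
    } EosWaiter;

    static GstPadProbeReturn
    eos_probe (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
    {
      EosWaiter *w = user_data;

      if (GST_EVENT_TYPE (GST_PAD_PROBE_INFO_EVENT (info)) == GST_EVENT_EOS) {
        g_mutex_lock (&w->lock);
        w->eos_seen = TRUE;
        g_cond_signal (&w->cond);
        g_mutex_unlock (&w->lock);
      }
      return GST_PAD_PROBE_OK;
    }

    /* Send EOS into the element, then wait until it has travelled through
     * to the src pad before tearing the element down. */
    static void
    drain_before_close (GstElement * element, GstPad * srcpad)
    {
      EosWaiter w = { 0, };
      gulong id;

      g_mutex_init (&w.lock);
      g_cond_init (&w.cond);

      id = gst_pad_add_probe (srcpad, GST_PAD_PROBE_TYPE_EVENT_DOWNSTREAM,
          eos_probe, &w, NULL);

      gst_element_send_event (element, gst_event_new_eos ());

      g_mutex_lock (&w.lock);
      while (!w.eos_seen)
        g_cond_wait (&w.cond, &w.lock);
      g_mutex_unlock (&w.lock);

      gst_pad_remove_probe (srcpad, id);
      g_mutex_clear (&w.lock);
      g_cond_clear (&w.cond);
    }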
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4462>