We want to ensure the stream-collection is present on the pad (as a sticky
event) before we expose the pad.
This is more reliable since it guarantees the collection is present before any
other event is pushed through.
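A minimal sketch of that idea (illustrative names and stream-id, not the actual
element code): activate the pad, store the initial sticky events, then expose it:

  #include <gst/gst.h>

  static void
  expose_pad_with_collection (GstElement * element, GstPad * srcpad,
      GstStreamCollection * collection)
  {
    /* the pad must be active before sticky events can be stored on it */
    gst_pad_set_active (srcpad, TRUE);

    /* usual initial sticky events: stream-start first, then the collection;
     * pushing on a source pad stores them even if it is not linked yet */
    gst_pad_push_event (srcpad, gst_event_new_stream_start ("example-stream-id"));
    gst_pad_push_event (srcpad, gst_event_new_stream_collection (collection));

    /* only now make the pad visible to downstream / the application */
    gst_element_add_pad (element, srcpad);
  }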
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7682>
There were two main issues:
- The mix matrix was not protected with the object lock.
- The code mistakenly assumed that, after updating the mix matrix, a
  reconfigure event sent upstream would be enough to cause upstream to
  send caps again, as the converter was only reconstructed in ->set_caps.
  That was not actually enough: if the new matrix didn't affect the
  number of input / output channels, there was no reason for upstream to
  do anything after getting the unchanged caps.
The fix for this was to have ->transform also recreate the converter
when needed, with the added subtlety that, depending on the mix matrix,
the element could be set to passthrough. This means that when setting
the mix matrix the converter also had to be recreated immediately, to
check whether the element had to be switched back to non-passthrough.
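For context, a hedged sketch of how an application might set such a mix matrix,
assuming the element involved is audioconvert (which exposes a "mix-matrix"
property as an array of rows of floats); the 1x2 stereo-to-mono values are only
an example:

  #include <gst/gst.h>

  /* build a 1x2 matrix: one output channel mixed from two inputs at 0.5 each */
  static void
  set_mono_downmix (GstElement * audioconvert)
  {
    GValue matrix = G_VALUE_INIT, row = G_VALUE_INIT, coef = G_VALUE_INIT;

    g_value_init (&matrix, GST_TYPE_ARRAY);
    g_value_init (&row, GST_TYPE_ARRAY);
    g_value_init (&coef, G_TYPE_FLOAT);

    g_value_set_float (&coef, 0.5f);
    gst_value_array_append_value (&row, &coef);
    gst_value_array_append_value (&row, &coef);
    gst_value_array_append_value (&matrix, &row);

    /* with the fix described above, the converter is rebuilt immediately,
     * even when the channel counts (and thus the caps) do not change */
    g_object_set_property (G_OBJECT (audioconvert), "mix-matrix", &matrix);

    g_value_unset (&coef);
    g_value_unset (&row);
    g_value_unset (&matrix);
  }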
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7399>
This commit fixes two issues:
- The event must be posted *after* calling stop, otherwise a race condition can occur and the app never stops
- isFinishedLaunching and applicationDidFinishLaunching are not always synchronized,
  sometimes causing a deadlock where the g_cond_wait() never catches the g_cond_signal()
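For reference, a minimal sketch (not the actual macOS helper code) of the
predicate-guarded pattern that avoids losing a g_cond_signal() fired before the
waiter reaches g_cond_wait():

  #include <glib.h>

  static GMutex lock;
  static GCond cond;
  static gboolean launched = FALSE;   /* the predicate */

  static void
  wait_until_launched (void)
  {
    g_mutex_lock (&lock);
    while (!launched)            /* re-check: wakeups may be spurious or early */
      g_cond_wait (&cond, &lock);
    g_mutex_unlock (&lock);
  }

  static void
  mark_launched (void)
  {
    g_mutex_lock (&lock);
    launched = TRUE;             /* the signal is never lost: the flag persists */
    g_cond_signal (&cond);
    g_mutex_unlock (&lock);
  }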
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7608>
In order to ensure all initial events (stream-start, caps, ...) are present on
pads that we expose, those various sticky events are propagated (from parsebin
to multiqueue output, from multiqueue output to exposed pads).
The problem was that the "hack" in `urisourcebin` to inform downstream elements
that the stream is parsed data and a collection will be present was only done in
one place: a probe on the output of parsebin. But the stream-start could
potentially have already been propagated to the output pads before that.
In order to fix that, we make sure any pending sticky stream-start event is
updated before being propagated.
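A rough sketch of that update, under the assumption that the "parsed data"
marker is a custom field on the stream-start event structure (the field name
below is hypothetical):

  #include <gst/gst.h>

  static void
  update_pending_stream_start (GstPad * pad)
  {
    GstEvent *event =
        gst_pad_get_sticky_event (pad, GST_EVENT_STREAM_START, 0);

    if (event == NULL)
      return;

    event = gst_event_make_writable (event);
    /* hypothetical field name marking the stream as already parsed */
    gst_structure_set (gst_event_writable_structure (event),
        "urisourcebin-parsed-data", G_TYPE_BOOLEAN, TRUE, NULL);

    /* replace the stored sticky event so whatever propagates it later
     * already carries the updated information */
    gst_pad_store_sticky_event (pad, event);
    gst_event_unref (event);
  }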
Fixes #3788
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7604>
Previously, urisourcebin only allowed stream-collection messages from adaptive
demuxers or sources to be posted.
This commit also allows the case where they come from a single parsebin. We
still want to prevent it in the case where there are multiple parsebins, since
that would require some form of aggregation to produce a single/unified collection.
In order to avoid a regression in uridecodebin3 behavior, we also implement
support for GST_QUERY_SELECTABLE, so that uridecodebin3 can figure out whether
it should let GST_MESSAGE_STREAM_COLLECTION flow upwards (because the app/user
could react to it) or drop it so that decodebin3 can do the collection
aggregation and posting.
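A hedged sketch of what such a check could look like with the selectable query
API (gst_query_new_selectable() and friends); this is not the literal
uridecodebin3 code:

  #include <gst/gst.h>

  static gboolean
  upstream_handles_selection (GstPad * sinkpad)
  {
    GstQuery *query = gst_query_new_selectable ();
    gboolean selectable = FALSE;

    /* if upstream (e.g. urisourcebin with a single parsebin) answers TRUE,
     * its collection messages can be let through; otherwise decodebin3
     * aggregates and posts the collection itself */
    if (gst_pad_peer_query (sinkpad, query))
      gst_query_parse_selectable (query, &selectable);

    gst_query_unref (query);

    return selectable;
  }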
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7603>
The presence (or not) of a collection on an input determines whether events
will be throttled so that they are only forwarded when that input gets a valid
collection.
Therefore the input lock should be used.
In addition to that, we want to ensure that the application/user has a chance to
reliably (i.e. synchronously) specify which streams it is interested in by
sending a GST_EVENT_SELECT_STREAMS.
But we cannot allow anything to go forward until that message posting has
returned, otherwise we run into various races.
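For illustration, a sketch of what the synchronous application-side reaction can
look like (assumes msg is a GST_MESSAGE_STREAM_COLLECTION; keeping only audio
streams is an arbitrary choice):

  #include <gst/gst.h>

  static void
  on_stream_collection (GstElement * pipeline, GstMessage * msg)
  {
    GstStreamCollection *collection = NULL;
    GList *selected = NULL;
    guint i;

    gst_message_parse_stream_collection (msg, &collection);

    for (i = 0; i < gst_stream_collection_get_size (collection); i++) {
      GstStream *stream = gst_stream_collection_get_stream (collection, i);

      /* example policy: select only the audio streams */
      if (gst_stream_get_stream_type (stream) & GST_STREAM_TYPE_AUDIO)
        selected = g_list_append (selected,
            (gchar *) gst_stream_get_stream_id (stream));
    }

    if (selected != NULL)
      gst_element_send_event (pipeline,
          gst_event_new_select_streams (selected));

    g_list_free (selected);
    gst_object_unref (collection);
  }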
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3872
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7602>
P010 uses 16 bits per pixel, with the least significant bits being padding.
This code worked with the Intel display driver since it rounds up that value,
but it does not work with the generic DRM helpers, which also support NV15,
a format without any padding.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7596>
A simple gst_gl_sync_meta_wait() is not sufficient to ensure GL commands
are executed before dma-buf devices get to see the buffer.
This is a first step that should make the code behave correctly for
everybody, although there may be a performance penalty. In the future we
should introduce a more general sync meta that would allow moving the
waiting from gldownload (the producer) to the sink elements (the
consumers).
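A minimal sketch of the general idea using the existing GstGLSyncMeta API (the
actual gldownload change may differ):

  #include <gst/gl/gl.h>

  static void
  ensure_gl_work_finished (GstBuffer * buffer, GstGLContext * context)
  {
    GstGLSyncMeta *sync = gst_buffer_get_gl_sync_meta (buffer);

    if (sync != NULL) {
      /* gst_gl_sync_meta_wait() only inserts a GPU-side wait; wait_cpu()
       * blocks until the commands have actually executed, which is what a
       * non-GL (dma-buf) consumer needs before touching the memory */
      gst_gl_sync_meta_wait_cpu (sync, context);
    }
  }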
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7595>
Temporarily release the video decoder stream lock so that other
threads can continue decoding (e.g. call get_frame()) while data
is being pushed downstream.
At this point the lock is held twice: we release one level, and then the base
class releases the last one just before pushing the data.
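Roughly, under the assumption that the (recursive) stream lock is held twice at
that point, the pattern looks like this sketch (not the actual element code):

  #include <gst/video/gstvideodecoder.h>

  static GstFlowReturn
  finish_frame_keeping_one_lock (GstVideoDecoder * decoder,
      GstVideoCodecFrame * frame)
  {
    GstFlowReturn ret;

    /* drop one recursion level; the base class drops the last one right
     * before pushing downstream, letting other threads (e.g. callers of
     * gst_video_decoder_get_frame()) run during the push */
    GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
    ret = gst_video_decoder_finish_frame (decoder, frame);
    GST_VIDEO_DECODER_STREAM_LOCK (decoder);

    return ret;
  }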
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7584>
This notably follows the way we order the template caps and keeps the
format:Interlaced caps at the end. This change also fixes
an early skip check that would skip if a driver only supports
alternate interlacing for a specific format. It also fixes
a bug where only the last resolution of a discrete frame size
was allowed to use format:Interlaced. Finally, similar to the template
caps code, simplify the caps for each feature, making the debug output
manageable and (marginally) improving negotiation speed.
This change will make it easier to introduce memory:DMABuf.
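As a reference for the simplification step mentioned above, a minimal sketch
using the core gst_caps_simplify() helper (illustrative, not the actual v4l2
code):

  #include <gst/gst.h>

  static GstCaps *
  finish_feature_caps (GstCaps * accumulated)
  {
    /* gst_caps_simplify() takes ownership, merges equivalent/overlapping
     * structures and returns smaller caps describing the same formats */
    return gst_caps_simplify (accumulated);
  }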
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7563>
If the stream has a special colorimetry that is not in the colorimetry
list, it will cause negotiation to fail. We should allow passing any
colorimetry, so add an extra structure without the colorimetry field.
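A minimal sketch of that approach, assuming the first caps structure carries the
colorimetry field (not the exact element code):

  #include <gst/gst.h>

  static GstCaps *
  append_structure_without_colorimetry (GstCaps * caps)
  {
    GstStructure *copy;

    caps = gst_caps_make_writable (caps);
    copy = gst_structure_copy (gst_caps_get_structure (caps, 0));

    /* the extra structure matches any colorimetry, so special values that
     * are not in the advertised list can still negotiate */
    gst_structure_remove_field (copy, "colorimetry");
    gst_caps_append_structure (caps, copy);

    return caps;
  }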
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7570>
A zero-sized box is not really a problem and can be skipped to look at any
possibly following ones.
BMD ATEM devices specifically write a zero-sized bmdc box in the sample
description, followed by the avcC box in the case of H.264. Previously the avcC
box would simply not be read at all and the file would be unplayable.
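A hedged sketch of the parsing idea as a generic box walk, assuming "zero-sized"
means a size field of 0 covering only the 8-byte header (this is not the actual
qtdemux code):

  #include <gst/gst.h>

  static void
  walk_child_boxes (const guint8 * data, gsize length)
  {
    gsize offset = 0;

    while (offset + 8 <= length) {
      guint32 size = GST_READ_UINT32_BE (data + offset);

      if (size == 0) {
        /* previously parsing stopped here; skipping just the header lets
         * following boxes (e.g. avcC after a zero-sized bmdc) be found */
        offset += 8;
        continue;
      }

      if (size < 8 || offset + size > length)
        break;                  /* malformed or truncated box */

      /* ... inspect the fourcc at data + offset + 4 and parse the box ... */
      offset += size;
    }
  }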
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7565>
In qml6glsrc, we capture the application output by copying the back buffer into
our own FBO. The afterRendering() signal is too early: at that point, according
to apitrace, the application has only been rendered into a Qt-internal buffer,
to be used as a cache for refresh.
Use the afterFrameEnd() signal instead. This works with no delay on GLES. With
GL it seems to reduce the delay from 2 frames to 1 (this may be platform
specific). A different recording technique would be needed to completely remove
this delay.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7562>
Be smarter when allocating the sink and source memory pools in order to reduce
the memory footprint. Use gst_v4l2_decoder_get_render_delay() to know the
number of buffers needed by the downstream element.
Also handle errors in case of memory allocation failures.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7546>
This is not truly supported in V4L2, but we can emulate it similarly to what
other elements do. In this patch we ensure that 0/1 is accepted by encoders
(in the caps query), and use a default of 30 fps whenever we need to set a
framerate in the driver.
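A trivial sketch of the fallback described; the helper name is illustrative:

  #include <glib.h>

  /* when the negotiated framerate is 0/1 (variable), the driver still needs
   * a nominal value; fall back to 30/1 */
  static void
  choose_driver_framerate (gint fps_n, gint fps_d, gint * out_n, gint * out_d)
  {
    if (fps_n == 0) {
      *out_n = 30;
      *out_d = 1;
    } else {
      *out_n = fps_n;
      *out_d = fps_d;
    }
  }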
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7545>