Temporarily release the video decoder stream lock so that other
threads can continue decoding (e.g. call get_frame()) while data
is being pushed downstream.
At this point it is locked twice; we release one, and the base class
releases the last one just before pushing the data.
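A minimal sketch of the intended pattern, assuming the standard
GST_VIDEO_DECODER_STREAM_LOCK/UNLOCK macros (the exact call site in the
decoder is not shown here):
```
#include <gst/video/gstvideodecoder.h>

/* Sketch: the stream lock is held twice here (it is recursive); drop one
 * before handing the frame to the base class so other threads can keep
 * decoding, then take it back once the frame has been pushed. */
static GstFlowReturn
finish_frame_with_single_lock (GstVideoDecoder * decoder,
    GstVideoCodecFrame * frame)
{
  GstFlowReturn ret;

  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
  ret = gst_video_decoder_finish_frame (decoder, frame);
  GST_VIDEO_DECODER_STREAM_LOCK (decoder);

  return ret;
}
```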
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7584>
This notably follows the way we order the template caps and keeps the
format:Interlaced caps at the end. This change also fixes
an early skip check that would skip if a driver only supports
alternate interlacing for a specific format. It also fixes
a bug where only the last resolution of a discrete frame size
was allowed to use format:Interlaced. Finally, similar to the template
caps code, simplify the caps for each feature, making the debug output
manageable and (marginally) improving negotiation speed.
This change will make it easier to introduce memory:DMABuf.
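A rough sketch of the per-feature simplification mentioned above; the
function name and the way caps are accumulated are illustrative:
```
#include <gst/gst.h>

/* Sketch: simplify the caps probed for a single caps feature before merging
 * them into the result, keeping debug output readable and negotiation a
 * little cheaper.  Takes ownership of both caps, like gst_caps_merge(). */
static GstCaps *
merge_feature_caps (GstCaps * result, GstCaps * feature_caps)
{
  feature_caps = gst_caps_simplify (feature_caps);
  return gst_caps_merge (result, feature_caps);
}
```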
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7563>
If the stream has a special colorimetry that is not in the colorimetry
list, it will cause negotiation to fail. We should allow passing any
colorimetry, so add an extra structure without the colorimetry field.
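A minimal sketch of the idea; the helper name is illustrative and only the
first structure is duplicated here for brevity:
```
#include <gst/gst.h>

/* Sketch: append a copy of the structure with the colorimetry field removed,
 * so streams with a colorimetry outside the advertised list still negotiate. */
static GstCaps *
append_colorimetry_fallback (GstCaps * caps)
{
  GstStructure *s;

  caps = gst_caps_make_writable (caps);
  s = gst_structure_copy (gst_caps_get_structure (caps, 0));
  gst_structure_remove_field (s, "colorimetry");
  gst_caps_append_structure (caps, s);

  return caps;
}
```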
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7570>
A zero-sized box is not really a problem and can be skipped so that any
boxes possibly following it are still looked at.
BMD ATEM devices specifically write a zero-sized bmdc box in the sample
description, followed by the avcC box in the case of H.264. Previously the
avcC box would simply not be read at all and the file would be unplayable.
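A sketch of the parsing behaviour, assuming boxes are walked as 32-bit
size + fourcc entries (handle_box() is a placeholder, not the actual
qtdemux code):
```
#include <gst/gst.h>

/* Sketch: a zero-sized box carries no payload, so skip past its header and
 * keep scanning; this way an avcC box following a zero-sized bmdc box (as
 * written by BMD ATEM devices) is still found. */
static void
scan_boxes (const guint8 * data, gsize length)
{
  gsize offset = 0;

  while (offset + 8 <= length) {
    guint32 size = GST_READ_UINT32_BE (data + offset);
    guint32 fourcc = GST_READ_UINT32_BE (data + offset + 4);

    if (size == 0) {
      offset += 8;              /* empty box: look at the next one */
      continue;
    }
    if (size < 8 || offset + size > length)
      break;                    /* malformed or truncated box */
    handle_box (fourcc, data + offset, size);
    offset += size;
  }
}
```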
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7565>
In qml6glsrc, we capture the application output by copying the back buffer
into our own FBO. The afterRendering() signal is too early: according to an
apitrace, at that point the application has only been rendered into a
Qt-internal buffer, to be used as a cache for refresh.
Use the afterFrameEnd() signal instead. This works with no delay on GLES.
With GL it seems to reduce the delay from 2 frames to 1 (this may be
platform specific). A different recording technique would need to be used to
completely remove this delay.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7562>
Be smarter when allocating sink and source memory pools to reduce the
memory footprint. Use gst_v4l2_decoder_get_render_delay() to know the
number of buffers needed by the downstream element.
Handle errors in case of memory allocation failures.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7546>
This is not truly supported in V4L2, but we can emulate it similarly to
what other elements do. In this patch we ensure that 0/1 is supported by
encoders (caps query), and use a default of 30fps whenever we need to
set a framerate in the driver.
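A rough sketch of the driver-facing side, assuming the standard
VIDIOC_S_PARM ioctl on the encoder's OUTPUT queue (the caps-query side
advertising 0/1 is not shown):
```
#include <linux/videodev2.h>
#include <sys/ioctl.h>

/* Sketch: V4L2 cannot express a variable/unknown framerate, so fall back to
 * 30 fps when the negotiated rate is 0/1 before programming the driver. */
static void
set_encoder_framerate (int fd, int fps_n, int fps_d)
{
  struct v4l2_streamparm parm = { 0 };

  if (fps_n == 0) {
    fps_n = 30;
    fps_d = 1;
  }

  parm.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
  parm.parm.output.timeperframe.numerator = fps_d;
  parm.parm.output.timeperframe.denominator = fps_n;
  ioctl (fd, VIDIOC_S_PARM, &parm);
}
```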
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7545>
Clarify the fact that `encodebasebin->src_pad` is set when using a static
source pad (`encodebin`), and that when it is not set, source pads are added
dynamically (`encodebin2`).
Fixes the usage of encodebin2 when profiles are updated.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7523>
It has to be included in the block duration but in GStreamer we're not
including it in the buffer duration, so it has to be added again here.
Not including it in the block duration can lead to fatal errors when playing
back with Firefox if there are more padding samples than actual samples, e.g.
> D/MediaDemuxer WebMDemuxer[7f6a0808b900] ::GetNextPacket: Padding frames larger
> than packet size, flagging the packet for error (padding: {13500000,1000000000},
> duration: {6000,1000000}, already processed: false)
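A minimal sketch of the adjustment, under the assumption that the value in
question is the track's codec delay (function and variable names are
illustrative):
```
#include <gst/gst.h>

/* Sketch: GStreamer buffer durations do not include the codec delay, so it
 * has to be added back when writing the block duration. */
static GstClockTime
get_block_duration (GstBuffer * buf, GstClockTime codec_delay)
{
  return GST_BUFFER_DURATION (buf) + codec_delay;
}
```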
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7517>
This makes sure that if upstream streams have different latencies, we still
output buffers with increasing timestamps across the different streams,
unless buffers arrive after the latency deadline.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7516>
By setting the earliest time to timestamp + 2 * diff there would be a
difference of 1 * diff between the current clock time and the earliest time
the element would let through in the future. If e.g. a frame arrives 30s
late at the sink, then not just all frames up to that point would be
dropped, but also 30s worth of frames after the current clock time.
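A short sketch of the corrected bookkeeping, assuming `diff` is the
(positive) jitter reported in the QoS event; the function and variable names
are illustrative:
```
#include <gst/gst.h>

/* Sketch: a frame that is `diff` late means timestamp + diff is "now" at the
 * sink; using timestamp + 2 * diff as the earliest acceptable time would
 * additionally drop `diff` worth of frames that are not late yet. */
static GstClockTime
compute_earliest_time (GstClockTime timestamp, GstClockTimeDiff diff)
{
  if (diff > 0)
    return timestamp + diff;

  return timestamp;
}
```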
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7518>
splitmuxsink can't possibly know how much latency it will introduce as it always
keeps one GOP around before outputting something. This breaks the latency
configuration of the pipeline and we're better off just pretending that
everything downstream of the sinkpads is not live.
Especially muxers that are based on aggregator and time out on the latency
deadline can easily misbehave otherwise, as the deadline will usually be
exceeded.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7515>
While transforming the internals of waylandsink into a library, the
context type name was accidentally changed, causing an ABI break. Change
it back to the original (as used by libgstgl), and add support for
the misnamed version as a backward-compatibility measure.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7494>
With the earlier version, the version number was incorrectly encoded in the
opj_config.h header, which caused a compilation warning:
```
../subprojects/gst-plugins-bad/ext/openjpeg/gstopenjpegenc.c:943:5: warning: ‘bpp’ is deprecated:
Use prec instead [-Wdeprecated-declarations]
943 | comps[i].bpp = GST_VIDEO_FRAME_COMP_DEPTH (frame, i);
| ^~~~~
```
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7490>
When no ports are given, gst_jack_get_ports() is called to get all the
(physical) output ports but then the result is ignored, triggering the
"No physical output ports found..." error.
Instead, move the queried ports to the variable we're going to use
later.
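A minimal sketch of the fix; the exact signature of the internal
gst_jack_get_ports() helper and the surrounding element are assumed:
```
#include <gst/gst.h>
#include <jack/jack.h>

/* Sketch: keep the queried physical ports instead of discarding them, so
 * the error below only triggers when the server really has none. */
static const char **
ensure_ports (GstElement * element, jack_client_t * client, const char ** ports)
{
  if (ports == NULL)
    ports = gst_jack_get_ports (client);

  if (ports == NULL)
    GST_ELEMENT_ERROR (element, RESOURCE, NOT_FOUND,
        ("No physical output ports found and no port names given"), (NULL));

  return ports;
}
```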
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7488>
If the UVC gadget announces multiple formats in its descriptors, the uvcsink
doesn't select the actual format but lets the UVC host select it.
If the GStreamer pipeline is started before a UVC host has selected the
format, upstream decides on a format until the UVC host has decided. In this
case, the current format needs to be set from the caps in the caps event, to
be able to detect whether the format selected by the UVC host requires a
format change in the GStreamer pipeline.
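A rough sketch of the idea; the state structure and field names are
illustrative, not the actual uvcsink code:
```
#include <gst/gst.h>

typedef struct {
  GstCaps *cur_caps;            /* format currently in use on the GStreamer side */
} UvcSinkState;

/* Sketch: remember the format from the caps event as the current format, so
 * a later format selection by the UVC host can be compared against it and a
 * pipeline format change is only requested when it actually differs. */
static void
handle_caps_event (UvcSinkState * state, GstEvent * event)
{
  GstCaps *caps;

  gst_event_parse_caps (event, &caps);
  gst_caps_replace (&state->cur_caps, caps);
}
```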
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7484>
The uvcsink may be put into the READY state to start listening for UVC
requests. Therefore, the UVC host may set a streaming format before the
GStreamer pipeline is started and the uvcsink has received a caps event. In
this case, prev_caps will be NULL.
If the CAPS event has not been received, skip the check of whether the
format needs to be changed, since the sink will be started with the format
selected by the UVC host anyway.
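A minimal sketch of the check being skipped; the helper name is illustrative:
```
#include <gst/gst.h>

/* Sketch: without a previously received caps event there is nothing to
 * compare against, so no format change can be required; the sink simply
 * starts with the format the UVC host selected. */
static gboolean
format_change_required (GstCaps * prev_caps, GstCaps * host_caps)
{
  if (prev_caps == NULL)
    return FALSE;

  return !gst_caps_is_equal (prev_caps, host_caps);
}
```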
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7484>
Sometimes under certain loads, VT can error out with kVTVideoEncoderMalfunctionErr or kVTVideoEncoderNotAvailableNowErr.
These have been reported to happen more often than usual if CopyProperty/SetProperty() is used close to the encode call.
Both can be worked around by restarting the encoding session.
These errors can be returned either directly from VTCompressionSessionEncodeFrame() or later in the encoding callback.
This patch handles both scenarios the same way - a session restart is attempted on the next encode_frame() call.
If the error is returned immediately by the encode call, it's possible that some correct frames will still be given to
the output callback, but for simplicity (+ because I wasn't able to verify this scenario) let's just discard those.
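A short sketch of the error handling; the restart flag and the helper are
illustrative, while the two error constants are the ones named above:
```
#include <VideoToolbox/VideoToolbox.h>
#include <gst/gst.h>

/* Sketch: flag the session for a restart on the next encode_frame() call
 * when VideoToolbox reports one of the two transient errors, whether it
 * came from VTCompressionSessionEncodeFrame() or from the output callback. */
static void
check_restart_needed (GstElement * self, OSStatus status, gboolean * restart)
{
  if (status == kVTVideoEncoderMalfunctionErr ||
      status == kVTVideoEncoderNotAvailableNowErr) {
    GST_WARNING_OBJECT (self, "VT returned %d, will restart encoding session",
        (int) status);
    *restart = TRUE;
  }
}
```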
In addition, this commit also simplifies the beach/drop logic in enqueue_buffer.
Related bug reports in other projects:
http://www.openradar.me/45889262
https://github.com/aws/amazon-chime-sdk-ios/issues/170#issuecomment-741908622
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7449>
Until now we were overriding pad functions while forgetting about the
function data (which is set using the _full variant of the function setters),
meaning that the data was lost and any user of that feature would get empty
data when the wrapped functions were called.
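A rough sketch of preserving the function data when overriding a pad
function; the wrapper structure and the direct field access are illustrative
only:
```
#include <gst/gst.h>

typedef struct {
  GstPadChainFunction orig_chain;       /* the function being wrapped */
  gpointer orig_chain_data;             /* its data, set via the _full setter */
} ChainWrap;

/* Sketch: keep both the original chain function and its user data around so
 * the wrapper can forward to it with the right data instead of NULL. */
static void
wrap_chain_function (GstPad * pad, GstPadChainFunction wrapper)
{
  ChainWrap *wrap = g_new0 (ChainWrap, 1);

  wrap->orig_chain = GST_PAD_CHAINFUNC (pad);
  wrap->orig_chain_data = pad->chaindata;       /* illustrative field access */
  gst_pad_set_chain_function_full (pad, wrapper, wrap, g_free);
}
```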
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7470>