The scenario is this:
1. libav receives enough data to want a buffer via get_buffer2(), which
wants a buffer pool with a certain format, say Y42B, but does not
negotiate, and therefore GstVideoDecoder does not have any output
state configured.
2. A gap event is received which GstVideoDecoder wants to forward. It
needs caps to forward the gap event, so it attempts to negotiate with
some default information, which chooses e.g. I420 and overwrites the
buffer pool previously decided on by get_buffer2().
3. There is now a mismatch between what ensure_internal_pool() checks
for consistency and what decide_allocation() sets when overriding the
internal pool with the downstream pool.
4. FFmpeg then requests a Y42B buffer from an I420 pool and predictably
crashes writing past the end of the buffer.
This is fixed by keeping track of the internal pool state correctly.
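The fix itself is state bookkeeping, but the invariant it restores can
be sketched like this: whenever get_buffer2() pulls from the internal
pool, the pool's configured format must match the requested one. The
helper below is illustrative only (its name is hypothetical, not part
of the patch):

  #include <gst/video/video.h>

  /* Hypothetical helper: TRUE if the internal pool was configured for
   * the video format that get_buffer2() is asking for. */
  static gboolean
  internal_pool_matches_format (GstBufferPool * pool, GstVideoFormat format)
  {
    GstStructure *config = gst_buffer_pool_get_config (pool);
    GstCaps *caps = NULL;
    GstVideoInfo info;
    gboolean match = FALSE;

    gst_buffer_pool_config_get_params (config, &caps, NULL, NULL, NULL);
    if (caps != NULL && gst_video_info_from_caps (&info, caps))
      match = (GST_VIDEO_INFO_FORMAT (&info) == format);

    gst_structure_free (config);
    return match;
  }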
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-libav/-/merge_requests/116>
... and call finish_frame() so that the baseclass can reset its
internal state. Otherwise the baseclass will keep holding the state of
the frame whose decoding failed, which will result in outputting a
buffer with a wrong timestamp.
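A minimal sketch of the pattern, assuming a GstAudioDecoder-based
element; decode_one_frame() is a hypothetical placeholder for the
actual ffmpeg decode call:

  #include <gst/audio/gstaudiodecoder.h>

  /* Hypothetical decode step, returns FALSE on failure. */
  static gboolean decode_one_frame (GstAudioDecoder * dec,
      GstBuffer * inbuf, GstBuffer ** outbuf);

  static GstFlowReturn
  handle_frame (GstAudioDecoder * dec, GstBuffer * inbuf)
  {
    GstBuffer *outbuf = NULL;

    if (!decode_one_frame (dec, inbuf, &outbuf)) {
      /* Decoding failed: still report the input frame as consumed
       * (NULL buffer, 1 frame) so the baseclass drops the pending
       * timestamp/state instead of attaching it to the next
       * successfully decoded frame. */
      return gst_audio_decoder_finish_frame (dec, NULL, 1);
    }

    return gst_audio_decoder_finish_frame (dec, outbuf, 1);
  }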
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-libav/-/merge_requests/112>
The check seems to be present to verify that the decoded frame
matches the format we expect. The actual check for the layout of the
frame was being performed against the context instead.
The check fails at least for avdec_aptx_hd, where the AVCodecContext has
the sample format set to AV_SAMPLE_FMT_NONE.
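A hedged illustration of the direction of the fix (not the literal
diff): the decoded AVFrame, not the AVCodecContext, should be treated
as the authoritative source of the sample format here:

  #include <libavutil/frame.h>
  #include <libavutil/samplefmt.h>

  /* Hypothetical helper illustrating the idea: compare against what
   * the decoded frame actually carries; the context may still report
   * AV_SAMPLE_FMT_NONE (as it does for avdec_aptx_hd). */
  static int
  frame_has_expected_sample_fmt (const AVFrame * frame,
      enum AVSampleFormat expected)
  {
    return (enum AVSampleFormat) frame->format == expected;
  }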
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-libav/-/merge_requests/107>
Same behaviour as for avviddec now. FFmpeg will return AVERROR_EOF when
it's completely drained, but we should not return that here, otherwise
upstream will receive EOS and stop forwarding us more data.
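A sketch of the intended mapping, assuming the send/receive decoding
API; only the translation of the FFmpeg return codes is shown, a real
implementation would also signal whether a frame was produced:

  #include <libavcodec/avcodec.h>
  #include <gst/gst.h>

  /* Translate FFmpeg's drain signalling into a GstFlowReturn without
   * leaking AVERROR_EOF upstream as GST_FLOW_EOS. */
  static GstFlowReturn
  receive_frame_flow (AVCodecContext * ctx, AVFrame * frame)
  {
    int res = avcodec_receive_frame (ctx, frame);

    if (res == 0)
      return GST_FLOW_OK;       /* got a decoded frame */
    if (res == AVERROR (EAGAIN))
      return GST_FLOW_OK;       /* decoder needs more input */
    if (res == AVERROR_EOF)
      return GST_FLOW_OK;       /* fully drained: NOT GST_FLOW_EOS */
    return GST_FLOW_ERROR;
  }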
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-libav/-/merge_requests/97>
When successfully finishing frames (or finishing configuration), we
must make sure we don't override any failing GstFlowReturn that might
have been detected previously.
Failure to do this would result in not propagating not-linked,
flushing, etc.
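A minimal sketch of the combination rule (the helper is hypothetical):

  #include <gst/gst.h>

  /* A later success must not mask an earlier failure such as
   * GST_FLOW_NOT_LINKED or GST_FLOW_FLUSHING: keep the first non-OK
   * result so it gets propagated. */
  static GstFlowReturn
  combine_flow_return (GstFlowReturn earlier, GstFlowReturn later)
  {
    return (earlier != GST_FLOW_OK) ? earlier : later;
  }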
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-libav/-/merge_requests/90>
Without this fix, running the command below randomly fails with an error.
Example:
gst-launch-1.0 videotestsrc ! vp9enc ! avmux_ivf ! fakesink
ERROR: pipeline doesn't want to preroll.
0:00:02.388528491 30148 0x5601b424a370 ERROR libav :0::
Tag [1]V[0][0] incompatible with output codec id '167' (VP90)
When the `max-threads` property is not specified, GStreamer defaults to
the number of CPU threads in the system.
The number of threads used in avdec has a direct impact on the latency
of the decoder, which is as many frames as there are threads.
Therefore, a large number of threads can lead to latency levels that
are problematic in some applications.
For this reason, ffmpeg emits a warning when more than 16 threads are
requested.
This patch limits the default number of threads to 16. This affects only
computers with more than 16 CPU threads when using avviddec without
setting `max-threads`.
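A sketch of the resulting default, assuming GLib's
g_get_num_processors() as the source of the CPU-thread count (the
constant and function names are illustrative):

  #include <glib.h>

  #define MAX_AUTO_THREADS 16   /* illustrative name for the 16-thread cap */

  static guint
  resolve_thread_count (guint max_threads_property)
  {
    if (max_threads_property != 0)
      return max_threads_property;      /* explicit user setting wins */

    /* Default: number of CPU threads, capped so the frame-threading
     * latency (one frame per thread) stays bounded and ffmpeg does
     * not warn about an excessive thread count. */
    return MIN (g_get_num_processors (), MAX_AUTO_THREADS);
  }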
Some plugins (like libcdio) register an empty long_name field. Calling
strncmp() on this field leads to a segmentation fault.
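A sketch of the kind of guard needed, assuming long_name can be NULL
for such entries (the helper is hypothetical):

  #include <string.h>

  /* Prefix check that tolerates codecs or formats registered without
   * a long_name, instead of crashing in strncmp(). */
  static int
  long_name_has_prefix (const char *long_name, const char *prefix)
  {
    if (long_name == NULL)
      return 0;
    return strncmp (long_name, prefix, strlen (prefix)) == 0;
  }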
Signed-off-by: Kevin Joly <joly.kevin25@gmail.com>
AVBufferRef -> GstFFMpegVideoDecVideoFrame -> GstVideoCodecFrame -> AVBufferRef
Instead of holding an additional ref there, mark the buffer read-only,
so that it will not be reused by ff_reget_buffer().
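A sketch of the mechanism, assuming the AVBuffer is the one created by
our get_buffer2() implementation (function and parameter names are
placeholders):

  #include <libavutil/buffer.h>
  #include <libavutil/frame.h>

  static void
  attach_readonly_buffer (AVFrame * picture, uint8_t * data, size_t size,
      void (*free_cb) (void *opaque, uint8_t * data), void *opaque)
  {
    /* AV_BUFFER_FLAG_READONLY makes av_buffer_is_writable() report 0
     * even with a single reference, so ff_reget_buffer() will not
     * reuse this buffer and no extra (cycle-creating) reference is
     * needed on our side. */
    picture->buf[0] = av_buffer_create (data, size, free_cb, opaque,
        AV_BUFFER_FLAG_READONLY);
  }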
Fixes: https://gitlab.freedesktop.org/gstreamer/gst-libav/issues/63
... if ffmpeg would reuse the allocated AVBuffer. An AVFrame reused by
ffmpeg seems to break our decoding flow, since the reused AVFrame holds
the initial opaque data (the GstVideoCodecFrame in this case), so we
cannot trace our in/out frames.
To enforce a get_buffer() call per output frame, hold another reference
to the AVBuffer in order to mark it as not writable.
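A sketch of the mechanism used here (the struct and its extra_ref
field are hypothetical stand-ins for the decoder-side frame):

  #include <libavutil/buffer.h>
  #include <libavutil/frame.h>

  typedef struct
  {
    AVBufferRef *extra_ref;     /* hypothetical field */
  } DecVideoFrame;

  /* Holding a second reference means av_buffer_is_writable() returns
   * 0, so ff_reget_buffer() must go through get_buffer2() again
   * instead of silently reusing the buffer and its stale opaque data. */
  static void
  pin_av_buffer (DecVideoFrame * dframe, AVFrame * picture)
  {
    dframe->extra_ref = av_buffer_ref (picture->buf[0]);
  }

  /* Released again when the decoder-side frame is freed. */
  static void
  unpin_av_buffer (DecVideoFrame * dframe)
  {
    av_buffer_unref (&dframe->extra_ref);
  }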
Fixes: https://gitlab.freedesktop.org/gstreamer/gst-libav/issues/62
When thread_type is set to FF_THREAD_FRAME, per the documentation
a latency of one frame per thread is introduced:
<https://ffmpeg.org/ffmpeg-codecs.html>, search for thread_type.
Additionally, in that case we need to calculate the automatic number of
threads ourselves, so as to report the latency accurately.
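A sketch of the resulting latency calculation, assuming the frame rate
comes from the negotiated GstVideoInfo; the result would typically be
reported via gst_video_decoder_set_latency():

  #include <gst/video/video.h>

  /* With FF_THREAD_FRAME each decoding thread buffers one frame, so
   * the added latency is "threads" frame durations. */
  static GstClockTime
  frame_threading_latency (guint threads, const GstVideoInfo * info)
  {
    if (GST_VIDEO_INFO_FPS_N (info) <= 0)
      return 0;

    return gst_util_uint64_scale_ceil ((guint64) threads * GST_SECOND,
        GST_VIDEO_INFO_FPS_D (info), GST_VIDEO_INFO_FPS_N (info));
  }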
The thread-type property allows the user to specify the preferred
multithreading method. Note that FF_THREAD_FRAME may introduce
additional latency, especially in non-filesrc use cases, since it
introduces a decoding delay of (number of threads) frames.
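A sketch of how such a property value maps onto the FFmpeg context
fields; the GStreamer-side enum below is hypothetical, while
FF_THREAD_FRAME / FF_THREAD_SLICE and AVCodecContext::thread_type are
FFmpeg API:

  #include <libavcodec/avcodec.h>

  /* Hypothetical property enum on the GStreamer side */
  enum
  { THREAD_TYPE_AUTO, THREAD_TYPE_FRAME, THREAD_TYPE_SLICE };

  static void
  apply_thread_type (AVCodecContext * context, int thread_type_prop)
  {
    switch (thread_type_prop) {
      case THREAD_TYPE_FRAME:
        /* adds one frame of delay per decoding thread */
        context->thread_type = FF_THREAD_FRAME;
        break;
      case THREAD_TYPE_SLICE:
        /* no added frame delay; only for codecs with slice threading */
        context->thread_type = FF_THREAD_SLICE;
        break;
      default:
        /* let ffmpeg pick whichever method the codec supports */
        context->thread_type = FF_THREAD_FRAME | FF_THREAD_SLICE;
        break;
    }
  }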
https://bugzilla.gnome.org/show_bug.cgi?id=797254