We use have_data (which comes from libav), instead of only trying 10 times,
to know whether more frames are available. The old code was machine
dependent, as different numbers of frames could be decoded on different
(more powerful) machines, and 10 attempts was not always sufficient.
https://bugzilla.gnome.org/show_bug.cgi?id=736515
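For illustration, a minimal sketch of the resulting drain loop, assuming
avcodec_decode_video2() and an empty packet for draining; drain_frames()
and the pushing step are placeholders, not the actual gst-libav code:

#include <libavcodec/avcodec.h>

/* Keep decoding empty packets until libav reports that no more output is
 * available (have_data == 0), instead of trying a fixed 10 times. */
static void
drain_frames (AVCodecContext *context, AVFrame *picture)
{
  AVPacket packet = { 0 };      /* data = NULL, size = 0 signals draining */
  int have_data = 1;

  while (have_data) {
    if (avcodec_decode_video2 (context, picture, &have_data, &packet) < 0)
      break;                    /* decoding error, stop draining */

    if (have_data) {
      /* ... push the decoded picture downstream ... */
    }
  }
}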
have_data is not propagated from gst_ffmpegviddec_video_frame to
gst_ffmpegviddec_frame: have_data is only set to 1 in
gst_ffmpegviddec_frame if a frame pointer is passed. No frame pointer is
passed while draining, however, which means that have_data from libav is
ignored there.
https://bugzilla.gnome.org/show_bug.cgi?id=734608
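A sketch of the kind of change this implies, with hypothetical names
standing in for gst_ffmpegviddec_video_frame and gst_ffmpegviddec_frame:
the inner function reports libav's have_data through an out parameter, so
the caller no longer infers it from the presence of a frame pointer.

#include <glib.h>
#include <libavcodec/avcodec.h>

/* hypothetical stand-in for gst_ffmpegviddec_video_frame */
static int
video_frame_sketch (AVCodecContext *context, AVFrame *picture,
    AVPacket *packet, gint *have_data)
{
  /* *have_data always carries libav's answer, even when no
   * GstVideoCodecFrame is associated (i.e. while draining) */
  return avcodec_decode_video2 (context, picture, have_data, packet);
}

/* hypothetical stand-in for the caller, gst_ffmpegviddec_frame */
static gboolean
keep_draining (AVCodecContext *context, AVFrame *picture, AVPacket *packet)
{
  gint have_data = 0;

  if (video_frame_sketch (context, picture, packet, &have_data) < 0)
    return FALSE;

  /* whether to continue is decided by libav's have_data, not by
   * whether a frame pointer was passed */
  return have_data != 0;
}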
gst_video_decoder_get_max_decode_time doesn't return a GstClockTime
but a GstClockTimeDiff, and thus one needs to compare it against
G_MAXINT64.
Returning a boolean and the extra subsequent code in _video_frame
was needlessly complicated.
The previous behaviour led to artefacts when the decoder tried to
hurry up.
https://bugzilla.gnome.org/show_bug.cgi?id=730075
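A sketch of the corrected check, assuming the usual QoS pattern around
gst_video_decoder_get_max_decode_time(); decode_deadline_missed() is an
illustrative helper, not the actual gst-libav code:

#include <gst/video/gstvideodecoder.h>

static gboolean
decode_deadline_missed (GstVideoDecoder *decoder, GstVideoCodecFrame *frame)
{
  GstClockTimeDiff deadline;

  deadline = gst_video_decoder_get_max_decode_time (decoder, frame);

  if (deadline == G_MAXINT64)
    return FALSE;               /* no QoS information, never hurry up */

  return deadline < 0;          /* negative: we are already too late */
}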
As we don't know how many output buffers we need to operate, we need to
avoid pools that can't grow. Otherwise the pipeline may stall waiting
for buffers. For now, we require the pool to be able to grow to at least
32 buffers, which I think is a fair amount for decoders.
https://bugzilla.gnome.org/show_bug.cgi?id=726299
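A sketch of the corresponding pool check, assuming a pool proposed in an
ALLOCATION query; pool_can_grow_enough() and MAX_BUFFERS are illustrative
names:

#include <gst/gst.h>

#define MAX_BUFFERS 32

static gboolean
pool_can_grow_enough (GstBufferPool *pool)
{
  GstStructure *config;
  GstCaps *caps;
  guint size, min_buffers, max_buffers;
  gboolean ok;

  config = gst_buffer_pool_get_config (pool);
  gst_buffer_pool_config_get_params (config, &caps, &size,
      &min_buffers, &max_buffers);

  /* max_buffers == 0 means "unlimited", which is fine; otherwise the
   * pool must be able to hold at least MAX_BUFFERS buffers */
  ok = (max_buffers == 0 || max_buffers >= MAX_BUFFERS);

  gst_structure_free (config);
  return ok;
}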
Fixes a crash on EOS when no buffers have been received for some
reason, e.g. because the parser didn't output any:
fakesrc num-buffers=0 format=time ! avdec_h264 ! fakesink
The output-corrupt property will set the CODEC_FLAG_OUTPUT_CORRUPT flag
in the codec context. The user can now decide whether libav outputs
corrupt frames or not.
Previous libav versions had this flag always set.
https://bugzilla.gnome.org/show_bug.cgi?id=722453
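A sketch of how the property maps onto the context flag;
apply_output_corrupt() is an illustrative helper and must run before the
codec is opened:

#include <glib.h>
#include <libavcodec/avcodec.h>

static void
apply_output_corrupt (AVCodecContext *context, gboolean output_corrupt)
{
  if (output_corrupt)
    context->flags |= CODEC_FLAG_OUTPUT_CORRUPT;
  else
    context->flags &= ~CODEC_FLAG_OUTPUT_CORRUPT;
}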
An AVCodecContext needs cleaning up before being freed.
Go through all of the allocations/setups to ensure none of them
can leak a context or its contents.
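A sketch of the cleanup order, assuming a context allocated with
avcodec_alloc_context3() and extradata we may have allocated ourselves;
free_codec_context() is an illustrative helper:

#include <libavcodec/avcodec.h>
#include <libavutil/mem.h>

static void
free_codec_context (AVCodecContext *context)
{
  if (context == NULL)
    return;

  avcodec_close (context);      /* releases codec-internal allocations */
  av_free (context->extradata); /* extradata is the caller's to free */
  av_free (context);
}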
New libav will not call the release_buffer callback anymore when
avcodec_default_get_buffer() is called from get_buffer. Releasing the
memory in a picture should now be done by registering a callback on the
avbuffer objects in the picture. There is some compatibility code that
wraps the memory we provide in get_buffer in an avbuffer with a callback
to release_buffer, but that is not done when avcodec_default_get_buffer()
is called.
Work around this by adding a dummy avbuffer object to the picture that
will release the frame.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=721077
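A sketch of the workaround: a zero-sized avbuffer whose free callback
drops our reference to the frame. The exact opaque type in gstavviddec.c
may differ; a plain GstVideoCodecFrame is used here for illustration.

#include <gst/video/gstvideoutils.h>
#include <libavutil/buffer.h>
#include <libavcodec/avcodec.h>

static void
dummy_free_buffer (void *opaque, uint8_t *data)
{
  gst_video_codec_frame_unref ((GstVideoCodecFrame *) opaque);
}

static int
attach_dummy_buffer (AVFrame *picture, GstVideoCodecFrame *frame)
{
  /* zero-sized buffer: its only purpose is the free callback */
  picture->buf[0] = av_buffer_create (NULL, 0, dummy_free_buffer,
      gst_video_codec_frame_ref (frame), 0);

  return picture->buf[0] ? 0 : AVERROR (ENOMEM);
}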
... so as to focus on providing *a* buffer rather than one (too) tied
to a frame, in particular allowing multiple allocations related to a frame.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=697806
... by also removing it from the pending list of frames,
where it may still be if it was never submitted to _finish.
This can happen if it is a decode-only frame, or in a skipped-decoding
situation, ...
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=693772
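For reference, one way to release such a frame is
gst_video_decoder_release_frame(); a minimal sketch with an illustrative
helper name:

#include <gst/video/gstvideodecoder.h>

static void
release_unfinished_frame (GstVideoDecoder *decoder, GstVideoCodecFrame *frame)
{
  /* takes ownership of the passed reference and also removes the frame
   * from the pending list, unlike a plain gst_video_codec_frame_unref() */
  gst_video_decoder_release_frame (decoder, frame);
}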
It can happen that negotiation fails during get_buffer(), but then
we don't retry later and never return NOT_NEGOTIATED upstream...
and instead run into assertions.
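A sketch of the kind of handling this calls for, assuming the decoder
retries negotiation on the next handle_frame(); handle_frame_sketch() is
illustrative, not the actual gst-libav code:

#include <gst/video/gstvideodecoder.h>

static GstFlowReturn
handle_frame_sketch (GstVideoDecoder *decoder)
{
  if (!gst_video_decoder_negotiate (decoder)) {
    GST_ERROR_OBJECT (decoder, "output format could not be negotiated");
    return GST_FLOW_NOT_NEGOTIATED;   /* propagate instead of asserting */
  }

  /* ... decode and push ... */
  return GST_FLOW_OK;
}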
We need to reload the defaults for the codec after closing it,
otherwise we can't access codec information like the supported
sample rates and can crash.
https://bugzilla.gnome.org/show_bug.cgi?id=707040
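A sketch of the close path, assuming the AVCodec pointer is still
available; close_and_reset() is an illustrative helper:

#include <libavcodec/avcodec.h>

static void
close_and_reset (AVCodecContext *context, const AVCodec *codec)
{
  avcodec_close (context);

  /* restore the default field values for this codec so later queries,
   * e.g. for the supported sample rates, work on valid data again */
  avcodec_get_context_defaults3 (context, codec);
}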
libav can write slightly past the end of a plane in some SIMD-optimized
functions. The extra padding needs to be at least 16+stride_align
for each plane, so just increase the bottom padding value of the output
frame.
https://bugzilla.gnome.org/show_bug.cgi?id=694299
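A sketch of how such padding can be requested through GstVideoAlignment,
assuming a per-plane stride_align value; add_output_padding() is
illustrative and the exact calculation in gstavviddec.c may differ:

#include <gst/video/video.h>

static void
add_output_padding (GstVideoAlignment *align, guint stride_align)
{
  gst_video_alignment_reset (align);

  /* extra room at the bottom of each plane so libav's SIMD code can
   * safely write a little past the plane end */
  align->padding_bottom = 16 + stride_align;
}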
If coded_width/_height is supplied, the codec might use it as the
width/height, and if it is wrong this can lead to segfaults or video
corruption.
This is especially harmful in renegotiation scenarios where the
resolution changed. There seems to be no specific function for resetting
the AVCodecContext in libav, so just set it directly.
https://bugzilla.gnome.org/show_bug.cgi?id=702003
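A sketch of the direct reset, done before reconfiguring the context with
the new caps; reset_coded_size() is an illustrative helper:

#include <libavcodec/avcodec.h>

static void
reset_coded_size (AVCodecContext *context)
{
  /* no dedicated libav call for this, so clear the fields directly and
   * let the codec pick the dimensions up from the new stream again */
  context->coded_width = 0;
  context->coded_height = 0;
}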
This reverts commit 47647e1cac.
Breaks playback when direct rendering is disabled.
The reason is that we set the opaque value to NULL and then try to use
that NULL value when a frame has been decoded.
Add support for codecs that use reget_buffer. In this mode the picture is
reused, and we need to attach the corresponding input frame to it or else
we get the timestamps wrong.
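A sketch of attaching the input frame to a reused picture, e.g. from a
reget_buffer-style callback; attach_input_frame() is illustrative and
reference handling is simplified:

#include <gst/video/gstvideoutils.h>
#include <libavcodec/avcodec.h>

static void
attach_input_frame (AVFrame *picture, GstVideoCodecFrame *frame)
{
  /* drop the frame attached for the previous use of this picture */
  if (picture->opaque)
    gst_video_codec_frame_unref ((GstVideoCodecFrame *) picture->opaque);

  /* attach the current input frame; its timestamps are used when the
   * decoded picture is eventually pushed */
  picture->opaque = gst_video_codec_frame_ref (frame);
}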