gstavvidenc.c: In function 'gst_ffmpegvidenc_flush_buffers':
gstavvidenc.c:733:7: error: ISO C90 forbids mixed declarations and code [-Werror=declaration-after-statement]
GstFFMpegVidEncClass *oclass =
^
cc1: all warnings being treated as errors
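The fix is mechanical: hoist the declaration above the first statement of the block. A minimal before/after sketch (the surrounding statement is a placeholder, not the actual gstavvidenc.c code):

/* before: rejected under -Werror=declaration-after-statement because the
 * declaration follows a statement in the same block */
{
  some_statement ();                              /* placeholder */
  GstFFMpegVidEncClass *oclass =
      (GstFFMpegVidEncClass *) G_OBJECT_GET_CLASS (ffmpegenc);
}

/* after: declaration at the top of the block, assignment where the
 * declaration used to be */
{
  GstFFMpegVidEncClass *oclass;

  some_statement ();                              /* placeholder */
  oclass = (GstFFMpegVidEncClass *) G_OBJECT_GET_CLASS (ffmpegenc);
}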
libav always uses planar audio formats nowadays, so there is not much use in
trying to allocate anything here until we add support for planar,
i.e. non-interleaved, audio formats at least in audioconvert.
The context contains the information from the latest input frame; we are,
however, interested in the information from the latest output frame, as we
have to negotiate for the buffer that is about to come next.
This should fix some crashes that happened when the two pieces of information
got out of sync. If that happens now, we will do fallback allocation until the
output is renegotiated too.
https://bugzilla.gnome.org/show_bug.cgi?id=750865
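A rough sketch of the resulting behaviour, with hypothetical names (this is not the actual gstavviddec.c code): the decoded frame is compared against the negotiated output state, and a plain fallback buffer is allocated whenever they disagree.

static GstBuffer *
get_output_buffer_sketch (GstVideoDecoder * decoder, AVFrame * picture)
{
  GstVideoCodecState *state = gst_video_decoder_get_output_state (decoder);
  GstBuffer *buf;

  if (state == NULL
      || GST_VIDEO_INFO_WIDTH (&state->info) != picture->width
      || GST_VIDEO_INFO_HEIGHT (&state->info) != picture->height) {
    /* output caps lag behind the input: fallback allocation until the
     * output is renegotiated (fallback_size() is a made-up helper) */
    buf = gst_buffer_new_allocate (NULL, fallback_size (picture), NULL);
  } else {
    buf = gst_video_decoder_allocate_output_buffer (decoder);
  }

  if (state)
    gst_video_codec_state_unref (state);
  return buf;
}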
There exist a few formats (deprecated, though) used by the mjpeg decoder
and encoder that map to the same GStreamer format. To pick the right
format, also look up each codec's format list before accepting
the format. This fixes an error when trying to use the mjpeg encoder.
Note that this may result in faded colors. In fact, these special
formats are meant to specify that this is full-range YUV. Colorimetry
is not yet implemented in gst-libav and hence is ignored in general, so
I think it's fine to first fix the issue before addressing the missing
feature.
https://bugzilla.gnome.org/show_bug.cgi?id=750398
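The lookup this describes boils down to checking the codec's pix_fmts list before trusting a format mapping. A small sketch of that check using the libav API (the helper name is made up):

static gboolean
codec_supports_pix_fmt (const AVCodec * codec, enum AVPixelFormat fmt)
{
  const enum AVPixelFormat *p;

  if (codec->pix_fmts == NULL)
    return TRUE;                /* codec does not advertise a restriction */

  for (p = codec->pix_fmts; *p != AV_PIX_FMT_NONE; p++)
    if (*p == fmt)
      return TRUE;

  return FALSE;
}

With such a check, a deprecated full-range format like AV_PIX_FMT_YUVJ420P is only picked for codecs (such as mjpeg) that actually list it, even though it maps to the same GStreamer format as AV_PIX_FMT_YUV420P.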
finish() is called at EOS, while drain() is called at all other times
when the decoder should be drained out. The gst-libav decoder behaviour
is the same in both cases, so use the same implementation.
See https://bugzilla.gnome.org/show_bug.cgi?id=734617
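In practice that just means pointing both vfuncs at one function; a minimal sketch, assuming a shared drain implementation (names are illustrative):

static GstFlowReturn
gst_ffmpegviddec_drain_sketch (GstVideoDecoder * decoder)
{
  /* feed libav a flush packet and push out any pending frames (omitted) */
  return GST_FLOW_OK;
}

static void
class_init_sketch (GstVideoDecoderClass * viddec_class)
{
  viddec_class->drain = gst_ffmpegviddec_drain_sketch;   /* mid-stream drain */
  viddec_class->finish = gst_ffmpegviddec_drain_sketch;  /* drain at EOS */
}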
We use have_data (which comes from libav), instead of only trying 10
times, to know whether there are more samples available. The old code was
machine dependent, as a different number of samples could be decoded by
different types of (more powerful) machines, and 10 tries were not always
sufficient.
https://bugzilla.gnome.org/show_bug.cgi?id=737144
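The resulting drain loop, sketched against the libav API of that time (avcodec_decode_audio4; the push helper is hypothetical): keep feeding the empty flush packet until libav reports that no more samples came out.

AVPacket pkt;
int got_frame;

av_init_packet (&pkt);
pkt.data = NULL;                /* empty packet: tells libav we are draining */
pkt.size = 0;

do {
  got_frame = 0;
  if (avcodec_decode_audio4 (context, frame, &got_frame, &pkt) < 0)
    break;
  if (got_frame)
    push_decoded_samples (frame);       /* hypothetical helper */
} while (got_frame);                    /* have_data from libav, not a counter */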
We use have_data (which comes from libav), instead of only trying 10 times,
to know whether there are more frames available. The old code was machine
dependent, as a different number of frames could be decoded by different
types of (more powerful) machines, and 10 tries were not always sufficient.
https://bugzilla.gnome.org/show_bug.cgi?id=736515
have_data is not propagated from gst_ffmpegviddec_video_frame to
gst_ffmpegviddec_frame. In gst_ffmpegviddec_frame, have_data is only set
to 1 if a frame pointer is passed; however, no frame is passed while
draining, which means that the have_data reported by libav will be
ignored.
https://bugzilla.gnome.org/show_bug.cgi?id=734608
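A simplified sketch of the propagation that was missing (signatures reduced to what matters; push_decoded_picture is a made-up helper): the inner function has to hand libav's got_picture back to its caller even when no GstVideoCodecFrame is passed, i.e. while draining.

static GstFlowReturn
video_frame_sketch (AVCodecContext * context, AVFrame * picture,
    GstVideoCodecFrame * frame, AVPacket * pkt, gint * have_data)
{
  int got_picture = 0;

  /* error handling omitted for brevity */
  avcodec_decode_video2 (context, picture, &got_picture, pkt);

  /* report libav's answer to the caller in all cases, not only when a
   * frame pointer was passed */
  *have_data = got_picture;

  if (got_picture && frame != NULL)
    return push_decoded_picture (picture, frame);

  return GST_FLOW_OK;
}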
After the recent addition of negotiation support for MPEG4 part 2
profiles via caps, it can happen that the generated caps at this
point still contain multiple profiles, for example if downstream
does not care. Just fixate the caps here and use them.
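Fixating here is just the stock caps fixation; a short sketch (the caps variable stands for the negotiated srcpad caps):

/* e.g. profile: { simple, advanced-simple }  ->  profile: simple */
caps = gst_caps_truncate (caps);
caps = gst_caps_fixate (caps);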
Remove x-xvid and x-3ivx. The last place where they were used was
in the srcpad caps of the decoder, but since the decoder will never
actually output those caps we can safely remove them.
If audio_in is NULL, we'll send a NULL frame to libav to flush
the codec. In that case, we won't know how many samples the codec
will have used, so we use -1 (for "don't know") when letting the
base class know about the buffer.
Coverity 1195177
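A sketch of that flushing path, assuming a hypothetical helper to fill the AVFrame; the -1 handling is the point here:

AVFrame *frame;
gint samples;

if (audio_in != NULL) {
  frame = fill_frame_from_buffer (audio_in);    /* hypothetical helper */
  samples = frame->nb_samples;
} else {
  frame = NULL;                 /* NULL frame: flush the codec */
  samples = -1;                 /* we don't know how many samples were used */
}

/* encode with libav, obtain outbuf (omitted) */

ret = gst_audio_encoder_finish_frame (GST_AUDIO_ENCODER (ffmpegenc),
    outbuf, samples);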