Add function to reset the timestamp tracking.
Check for reordered timestamps on the input buffers and assume PTS input
timestamps when we see reordered timestamps.
Recover from an occasionally wrong input timestamp by also tracking the output
timestamps. When we detect a reordered output timestamp, assume DTS input
timestamps again.
Fixes #611500
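A rough sketch of the detection idea (variable and field names illustrative,
not the actual gstffmpegdec code):

    /* if an incoming timestamp is smaller than the previous one, the input
     * timestamps are reordered, so treat them as PTS instead of DTS */
    if (GST_CLOCK_TIME_IS_VALID (prev_in_ts) && GST_CLOCK_TIME_IS_VALID (in_ts)
        && in_ts < prev_in_ts)
      ffmpegdec->reordered_in = TRUE;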
Don't use studlyCaps; gboolean != GstFlowReturn; use gst_caps_set_simple()
instead of creating a GValue just to set a boolean field on a caps structure.
See #622736.
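For reference, setting a boolean field with gst_caps_set_simple() looks like
this (the field name here is only an example):

    gst_caps_set_simple (caps, "parsed", G_TYPE_BOOLEAN, TRUE, NULL);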
Properly convert bytes into time using the sample size, sample rate
and number of channels, instead of the sample rate only.
Converting with the sample rate only can cause huge timestamp
discontinuities (even though the durations remain correct) and might
cause problems for muxers.
Fixes #623388
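A minimal sketch of the corrected conversion, assuming raw interleaved audio
and illustrative variable names:

    guint bpf = channels * (depth / 8);   /* bytes per audio frame */
    GstClockTime time =
        gst_util_uint64_scale (bytes / bpf, GST_SECOND, sample_rate);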
The buffer durations were not being reordered along with the timestamps
and offsets of the buffers, resulting in buffers carrying the duration of
the latest incoming frame instead of the duration of their original frame.
Fixes #611398
When we have an input width/height that should be used for clipping, only
perform the clipping if the rectangle is smaller than the actual picture size.
Fixes #330681
Don't escape the codec_data, it breaks some streams (but likely also fixes
others). It's better to leave it as is, like most other players do.
See #608332
Make check for vdpau decoders more generic. There might be vdpau
decoders we don't expect when using an external ffmpeg version,
and we want those blacklisted as well (e.g. ffdec_mpeg4_vdpau).
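The generic check could look roughly like this (sketch; the loop variable is
assumed to be called in_plugin):

    /* skip any decoder whose name ends in "_vdpau" instead of
     * blacklisting known names one by one */
    if (g_str_has_suffix (in_plugin->name, "_vdpau"))
      goto next;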
Use format field in the pad caps to differentiate VC-1 from WMV3.
Fix a typo in the caps creation and parsing - the field is called
'format' - not 'fourcc'
Add a dodgy hack to populate the extradata size field
(first byte) when it is 0 - as it seems to be for some (Matroska)
test files.
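A rough illustration of such a hack (not necessarily the exact patch):

    /* some (Matroska) files leave the extradata size byte at 0;
     * patch it up so the decoder can parse the extradata */
    if (context->extradata_size > 0 && context->extradata[0] == 0)
      context->extradata[0] = context->extradata_size;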
When the video caps aren't fixed yet, make sure we return the most
precise set of caps. It seems a regression was introduced in cc082f,
causing restricted caps to never be used if the context == NULL.
None of the restricted caps generation uses the context, so there is no
need to check whether the context is NULL.
Fixes bug #578160.
Resetting default values is currently very complex in libavcodec, so
we only call it when needed (i.e. when a context was previously used).
Shaves off 10% of the setup of a decoder.
This reverts commit 98439aacc7.
Instead of printing a warning when trying to set the property,
it should do nothing, as before, and the property description
should contain a note that setting it has no effect.
...instead of just doing nothing when setting it. This makes sure
that people notice that they shouldn't set the property, because
setting it now produces a warning.
For audio always add the minimum ffmpeg buffer size, for video
use the same weird buffer size as they use in ffmpeg.c:
width*height*6 + 200
Also make setting of the buffer-size property a no-op.
Fixes bug #593651.
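In code form, the sizes described above amount to roughly this (sketch,
assuming ffmpeg's FF_MIN_BUFFER_SIZE constant and illustrative variables):

    /* audio: always add the minimum ffmpeg buffer size,
     * video: use the same formula as ffmpeg.c */
    if (is_audio)
      buf_size += FF_MIN_BUFFER_SIZE;
    else
      buf_size = width * height * 6 + 200;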
When we are dropping frames because of QoS, disable the DTS interpolation
because we won't be able to update the timestamps and would end up setting
the wrong timestamps. Instead, simply use the timestamps from ffmpeg.
ffmpeg typefinders don't do bounds checking for small chunks of
data, so just skip them if we don't have a lot of data, to avoid
invalid memory access and/or crashes.
This now uses ffmpeg functionality to keep random metadata next to
the buffers and to get the correct offset for a frame, similar to how
timestamps are handled.
Fixes bug #578278.
The PixFmts declared in AVCodec.pix_fmts are the ones that are 'officially'
declared as being supported. We should therefore not have to create an
AVCodecContext and open an encoder to know whether a format is supported or not.
Also, doing it this way allows us to better pick up the configuration overrides
we have in gstffmpegcodecmap for some codecs (like restrictions on width,
height and framerate, as is the case for dnxhd).
Fixes #575545
Take the codec frame delay into account (roughly the same way it is done for
the timestamps of reordered frames) to produce frames with correct offsets.
Add a special hack to allow a trailing frame with timestamp == segment.stop
to be displayed.
Fixes bug #578278.
Cache any events we get from upstream before we're open, especially
tag events we may be getting from apedemux/id3demux or the like, and
push them downstream later when we've added our pads instead of just
dropping them silently. Fixes transcoding tags for Monkey's Audio
files with preceding APE or ID3v2 tags (#586957). Add a minimal unit
test for this.
Also push stream tags later after the global tags and the newsegment
event rather than right after creating the pad.
After a DISCONT, mark the next frame with DISCONT but don't wait for a new
keyframe. This greatly improves performance on lossy networks or corrupted
frames as the decoder can usually continue and conceal errors up to the next
keyframe.
Avoid an infinite loop consuming buffer timestamp info when
the video frames contain only GST_CLOCK_TIME_NONE timestamps.
Add some debug logging in the timestamp tracking paths.
Fixes #585845
If the same instance of the plugin is asked to be initialised more than once,
instances after the first one do not register the elements properly and the
elements become unusable.
For example, if you call gst_update_registry (), it is not possible to create
elements after the call, since the plugin is asked to be initialised again and
does not register the elements.
Fixes #584291
The patch from bug #580796 hacked around the existing infrastructure to handle
timestamps as DTS (as in all AVI files), causing the logic to be disabled.
Properly hook the timestamp handling into the existing infrastructure to handle
these cases too, partially reverting a26b94d92c
and moving some stuff around.
Refixes #580796.
If the set of caps for either audio or video is completely empty, skip
adding that pad template to the class. Some muxers only support audio-only
or video-only and otherwise end up with EMPTY caps in the pad template.
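A sketch of the check, using 0.10-style request pad templates (variable
names illustrative):

    if (audio_caps != NULL && !gst_caps_is_empty (audio_caps)) {
      audiosinktempl = gst_pad_template_new ("audio_%d",
          GST_PAD_SINK, GST_PAD_REQUEST, audio_caps);
      gst_element_class_add_pad_template (klass, audiosinktempl);
    }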
Rewrite the audio encoders to use the right API functions of ffmpeg. Also get
rid of the hand-rolled cache and use a GstAdapter instead for formats that
require a fixed frame_size as input.
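A sketch of the adapter-based handling for formats with a fixed frame_size
(variable names illustrative, assuming S16 input samples):

    guint frame_bytes = context->frame_size * channels * 2;   /* S16 */

    gst_adapter_push (adapter, inbuf);
    while (gst_adapter_available (adapter) >= frame_bytes) {
      const guint8 *data = gst_adapter_peek (adapter, frame_bytes);

      /* encode exactly one fixed-size frame worth of input samples */
      ret = avcodec_encode_audio (context, outdata, outsize,
          (const short *) data);
      gst_adapter_flush (adapter, frame_bytes);
    }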
We don't need to set a default frame_size, ffmpeg has set this value to 0 to
inform us that there is a fixed relation between the amount of input samples
and output samples. Now we only need to handle that fact.
ffmpeg only tells us on a per-decoded-buffer basis if the stream is
interlaced or not. When we see a change, we force negotiation.
We can't detect that in our get_buffer() (when doing downstream allocation),
because at that point the interlaced flags aren't set on the outgoing
buffer.
Add a new function new_aligned_buffer() which creates a GstBuffer of
the requested size/caps, with the memory being allocated/freed by ffmpeg's
av_malloc/av_free, which guarantees properly aligned memory.
Also add a can_allocate_aligned internal property which we use to figure out
whether downstream can provide us with 128-bit aligned buffers.
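A sketch of what new_aligned_buffer() amounts to in 0.10 terms (the body is
illustrative):

    static GstBuffer *
    new_aligned_buffer (gint size, GstCaps * caps)
    {
      GstBuffer *buf = gst_buffer_new ();

      /* let ffmpeg allocate the memory so it is suitably aligned for its
       * SIMD code, and have it freed with av_free() again */
      GST_BUFFER_DATA (buf) = GST_BUFFER_MALLOCDATA (buf) = av_malloc (size);
      GST_BUFFER_SIZE (buf) = size;
      GST_BUFFER_FREE_FUNC (buf) = av_free;
      if (caps)
        gst_buffer_set_caps (buf, caps);

      return buf;
    }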
Without a frame_size configured in the context, the ffmpeg encoders do nothing.
Since G726 does not configure a size itself, we set a frame_size ourselves
that corresponds to 20ms of audio, which is a reasonable default.
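In practice that comes down to something like (sketch):

    /* 20ms of audio: e.g. 8000 Hz / 50 = 160 samples per frame */
    context->frame_size = context->sample_rate / 50;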
We do this because the demuxer is initialized in the loop function. If it's not
initialized yet, that means the loop hasn't been entered... and therefore the
PIPE GCond will never be signalled.
We simply allocate the memory using ffmpeg's av_malloc which provides us
with properly memalign'ed data.
This avoids write-outside-of-bounds when sse/altivec code is being used.
The internal resampling functions seem to require a slightly bigger output
buffer than the size we calculate. Therefore we give them an extra 64 bytes
(although 16 should have been enough).
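Sketch of the allocation with the extra slack (sizes illustrative):

    /* av_malloc gives us suitably aligned memory; add some slack because
     * the resampler may write slightly more than we calculate */
    out_size = out_samples * out_channels * sizeof (gint16) + 64;
    out_data = av_malloc (out_size);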
We should post a STREAM DECODE error message on the bus when we return
GST_FLOW_ERROR, otherwise the user ends up seeing an ugly internal flow
error message, which isn't very nice.
The problem is that the ffmpeg aac decoder fails... but still accepts
the following buffers as if nothing happened. But because some things
were not properly set in the internal code, all hell breaks loose.
AVOutputFormat does *NOT* contain the full list of codecs a muxer can handle,
but does contain the recommended audio and video codecs. Therefore we use that
information to expose more muxers, until AVOutputFormat contains a list of
*ALL* compatible codecs.
For a given AVCodec, when the sample_fmts field is non-NULL, it means that
the codec can only handle a specific set of SampleFormats.
With this patch, we now look for its presence and create the proper pad
template caps.
Fixes #569441
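A sketch of the template-caps generation (smpfmt_to_caps() stands in for the
real mapping helper here):

    if (codec->sample_fmts != NULL) {
      gint i;

      caps = gst_caps_new_empty ();
      for (i = 0; codec->sample_fmts[i] != SAMPLE_FMT_NONE; i++) {
        /* map each supported SampleFormat to audio caps and append it */
        gst_caps_append (caps, smpfmt_to_caps (codec->sample_fmts[i]));
      }
    }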
When no more data is available, av_read_frame just returns an error code
instead of distinguishing between "I am not returning anything because
we finished reading" and "I am not returning anything because the underlying
read failed".
We differentiate between the two by looking at whether we previously
output any data or not.
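A sketch of that distinction (the flag name is illustrative):

    res = av_read_frame (demux->context, &pkt);
    if (res < 0) {
      /* if we already pushed data downstream, a failing read most likely
       * just means end-of-stream; otherwise treat it as a real error */
      if (demux->pushed_data)
        goto eos;
      else
        goto read_failed;
    }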
Original commit message from CVS:
Patch by: Dejan Sakelšak <sakdean at gmail dot com>
* ext/ffmpeg/gstffmpegcodecmap.c: (gst_ff_aud_caps_new):
Narrow down the allowed channels and sample rates for AMR.
Fixes #566647.
Original commit message from CVS:
* ffmpegrev:
Updating ffmpeg SVN revision to r16304 and update to the corresponding
swscale snapshot.
* ext/ffmpeg/gstffmpegcodecmap.c: (gst_ffmpeg_caps_to_codecid):
Enable the Real Video 3.0 decoder.
Original commit message from CVS:
* ext/ffmpeg/gstffmpegcodecmap.c: (gst_ff_aud_caps_new),
(gst_ffmpeg_codecid_to_caps), (gst_ffmpeg_smpfmt_to_caps),
(gst_ffmpeg_codectype_to_caps), (gst_ffmpeg_caps_to_smpfmt),
(gst_ffmpeg_caps_to_codecid), (av_smp_format_depth):
* ext/ffmpeg/gstffmpegcodecmap.h:
Add mapping for EAC3 and QCELP audio codecs.
Add conversion functions for all available audio SampleFormats.
* ext/ffmpeg/gstffmpegdec.c: (gst_ffmpegdec_open),
(gst_ffmpegdec_setcaps), (gst_ffmpegdec_negotiate),
(clip_audio_buffer), (gst_ffmpegdec_audio_frame):
Remove assumptions that we can only handle stereo 16bit signed integer
audio, and store the depth locally.