The memory leak occurs when a buffer has been added to the
fragment_buffers array of the current pad but is never sent,
because pushing one of the preceding buffers (the moof or mdat
header, or an earlier fragmented buffer) failed.
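A minimal sketch of the cleanup idea (field and variable names here are
illustrative, not the element's actual ones): once a push has failed, the
remaining queued fragment buffers can never be sent, so drop our references
instead of leaking them.

  if (ret != GST_FLOW_OK) {
    guint i;

    /* these buffers will never be pushed anymore, release our refs */
    for (i = next_unsent; i < n_fragment_buffers; i++) {
      gst_buffer_unref (fragment_buffers[i]);
      fragment_buffers[i] = NULL;
    }
  }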
There are (mp4) streams in the wild that contain no tracks at all
but do carry redirect info [0]. In that case we won't be able
to expose any pad (there are no tracks), so we can't post anything but
an error on the bus, as:
- we can't send EOS downstream, since there is no pad,
- posting an EOS message would be useless, since PAUSED state can't be
  reached and there is no sink in the pipeline, meaning GstBin will
  simply ignore it.
The approach here is to add details to the ERROR message with a
`redirect-location` field, which elements like playbin handle and use right
away.
[0]: http://movietrailers.apple.com/movies/paramount/terminator-dark-fate/terminator-dark-fate-trailer-2_480p.mov
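A sketch of what posting such an error with details can look like, using
core's GST_ELEMENT_ERROR_WITH_DETAILS macro (the strings and the
redirect_uri variable here are illustrative):

  GST_ELEMENT_ERROR_WITH_DETAILS (qtdemux, STREAM, DEMUX,
      ("This file contains no playable streams."),
      ("no known streams found, a redirect message has been posted"),
      ("redirect-location", G_TYPE_STRING, redirect_uri, NULL));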
In file included from ../../../../dist/linux_x86_64/include/gstreamer-1.0/gst/gst.h:55,
from ../../../../dist/linux_x86_64/include/gstreamer-1.0/gst/tag/tag.h:25,
from ../gst/isomp4/qtdemux.c:56:
In function ‘qtdemux_inspect_transformation_matrix’,
inlined from ‘qtdemux_parse_trak’ at ../gst/isomp4/qtdemux.c:10676:5,
inlined from ‘qtdemux_parse_tree’ at ../gst/isomp4/qtdemux.c:14210:5:
../../../../dist/linux_x86_64/include/gstreamer-1.0/gst/gstinfo.h:645:5: error: ‘%s’ directive argument is null [-Werror=format-overflow=]
645 | gst_debug_log ((cat), (level), __FILE__, GST_FUNCTION, __LINE__, \
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
646 | (GObject *) (object), __VA_ARGS__); \
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../../../../dist/linux_x86_64/include/gstreamer-1.0/gst/gstinfo.h:1062:35: note: in expansion of macro ‘GST_CAT_LEVEL_LOG’
1062 | #define GST_DEBUG_OBJECT(obj,...) GST_CAT_LEVEL_LOG (GST_CAT_DEFAULT, GST_LEVEL_DEBUG, obj, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
../gst/isomp4/qtdemux.c:10294:5: note: in expansion of macro ‘GST_DEBUG_OBJECT’
10294 | GST_DEBUG_OBJECT (qtdemux, "Transformation matrix rotation %s",
| ^~~~~~~~~~~~~~~~
../gst/isomp4/qtdemux.c: In function ‘qtdemux_parse_tree’:
../gst/isomp4/qtdemux.c:10294:64: note: format string is defined here
10294 | GST_DEBUG_OBJECT (qtdemux, "Transformation matrix rotation %s",
| ^~
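One way to silence this warning (a sketch, not necessarily the exact fix
that was applied) is to make sure the %s argument can never be NULL, e.g.
with the GST_STR_NULL helper from gstinfo.h; rotation_tag stands in for
whatever string variable is being logged:

  GST_DEBUG_OBJECT (qtdemux, "Transformation matrix rotation %s",
      GST_STR_NULL (rotation_tag));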
In reverse playback, we don't want to rely on the position of the current
keyframe to decide a stream is EOS: the last GOP we push will start with
a keyframe, whose position is likely to be outside of the segment.
Instead, let the normal seek_to_previous_keyframe mechanism do its job;
it works just fine.
If a key unit seek is performed with a time position that matches
the offset of a keyframe, but not its actual PTS, we need to
adjust the segment nevertheless.
For example consider the following case:
* stream starts with a keyframe at 0 nanosecond, lasting 40 milliseconds
* user does a key unit seek at 20 milliseconds
* we don't adjust the segment as the time position is "over" a keyframe
* we push a segment that starts at 20 milliseconds
* we push a buffer with PTS == 0
* an element downstream (e.g. rtponviftimestamp) tries to calculate the
stream time of the buffer, fails to do so and drops it
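A rough sketch of the adjustment (names are illustrative): if the keyframe
we actually start from has a PTS before the requested key unit seek
position, pull the segment back to that PTS so downstream can compute a
valid stream time for the first buffer.

  if (key_unit_seek && GST_CLOCK_TIME_IS_VALID (keyframe_pts) &&
      keyframe_pts < segment->start) {
    segment->start = keyframe_pts;
    segment->time = keyframe_pts;
  }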
When the seek event contains a (newly-added) trickmode interval,
and TRICKMODE_KEY_UNITS was requested, only let through keyframes
separated by at least the required interval
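A sketch of the intended filtering, assuming the interval was read from the
seek event with gst_event_parse_seek_trickmode_interval() (variable names
are illustrative):

  if ((demux->segment.flags & GST_SEGMENT_FLAG_TRICKMODE_KEY_UNITS) &&
      trickmode_interval > 0) {
    if (GST_CLOCK_TIME_IS_VALID (last_keyframe_pts) &&
        pts < last_keyframe_pts + trickmode_interval)
      return TRUE;            /* too close to the previous keyframe, skip */
    last_keyframe_pts = pts;
  }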
The time_position field of the stream is offset by the media_start
of its QtDemuxSegment compared to the start of the GstSegment of
the demuxer; take that offset into account when making comparisons.
mpegaudioparse suggests MP3 needs 10 or 30 frames of lead-in (depending on
mpegaudioversion, which we don't know here), thus provide at least 30 frames
lead-in for such cases as a followup to commit cbfa4531ee.
AAC and various other audio codecs need a couple frames of lead-in to
decode it properly. The parser elements like aacparse take care of it
via gst_base_parse_set_frame_rate, but when inside a container, the
demuxer is doing the seek segment handling and never gives lead-in
data downstream.
Handle this similarly to going back to a keyframe with video, in the
same place. Without a lead-in, the start of the segment is silence when
it shouldn't be, which becomes especially evident in NLE use cases.
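A sketch of the idea, with illustrative names rather than the demuxer's
actual code: move the effective start of the key-unit-adjusted segment a
few audio frames earlier so the decoder gets its lead-in.

  guint lead_in_frames = 30;    /* worst case suggested for MP3; AAC needs fewer */
  GstClockTime frame_dur = gst_util_uint64_scale_int (GST_SECOND,
      samples_per_frame, sample_rate);
  GstClockTime lead_in = lead_in_frames * frame_dur;

  start = (start > lead_in) ? start - lead_in : 0;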
It must be accurate for all samples to work in Final Cut properly, so
the best we can do is to assume that all samples are the same as the
first. Bigger samples are truncated, smaller samples are padded.
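Illustrative sketch only (not the muxer's actual code): every buffer is
forced to the size of the first sample, truncating bigger ones and
zero-padding smaller ones.

  gsize size = gst_buffer_get_size (buf);

  if (size > first_sample_size) {
    GstBuffer *trunc =
        gst_buffer_copy_region (buf, GST_BUFFER_COPY_ALL, 0, first_sample_size);
    gst_buffer_unref (buf);
    buf = trunc;
  } else if (size < first_sample_size) {
    buf = gst_buffer_make_writable (buf);
    gst_buffer_append_memory (buf,
        gst_allocator_alloc (NULL, first_sample_size - size, NULL));
    /* zero out the padding we just appended */
    gst_buffer_memset (buf, size, 0, first_sample_size - size);
  }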
Need to respect the return value of gst_video_guess_framerate() to ensure
a non-zero denominator.
This patch fixes the error below, seen with an abnormal file (which does
contain a valid frame).
(gst-play-1.0:17940): GStreamer-CRITICAL **: passed '0' as denominator for `GstFraction'
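A sketch of the check, as an illustration of the idea rather than
qtdemux's actual code: only use the guessed rate when the guess succeeded,
so a 0 denominator is never passed on.

  gint fps_n, fps_d;

  if (gst_video_guess_framerate (avg_frame_duration, &fps_n, &fps_d)) {
    gst_caps_set_simple (caps, "framerate", GST_TYPE_FRACTION,
        fps_n, fps_d, NULL);
  }
  /* otherwise leave the framerate out rather than setting 0/x */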
This problem was found in Test 2 of the YouTube 2018 EME
tests[1]. The code was accidentally not finding an mp4a's esds atom in
the sample description table when the stream was encrypted. It assumed
that if the stream is protected, then only an enca atom will be found
here. What happens with YouTube is they often provide protected
content with a few seconds of clear content, and then switch to the
encrypted stream.
The failure case here was an incorrect codec_data field being sent
into aacparse. The advertisement of stereo audio @ 44.1kHz for the
mp4a (unprotected) stream was incorrect. As usual, the esds contained
the real values here which were mono at 22050 Hz.
Here's what the MP4 tree looks like for these types of files,
demonstrating why the code was making a wrong assumption (or maybe
YouTube is being unusual):
[ftyp] size=8+16
...
[moov] size=8+1571
  ...
  [trak] size=8+559
    ...
    [stsd] size=12+234
      entry-count = 2
      [enca] size=8+147
        channel_count = 2
        sample_size = 16
        sample_rate = 44100
        [esds] size=12+27
          ...
        ...
      [mp4a] size=8+67
        channel_count = 2
        sample_size = 16
        sample_rate = 44100
        [esds] size=12+27
          ...
In addition to fixing this, the checks for esds atoms in mp4a and mp4v
have been made symmetrical. While I haven't seen a test case for video
with the same problem, it seemed better to make the same checks. This
also fixes a crash reported by another user[2], who also noted the
asymmetry between mp4v and mp4a.
[1] https://yt-dash-mse-test.commondatastorage.googleapis.com/unit-tests/2018.html?test_type=encryptedmedia-test
[2] https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/issues/398
Recent changes in ccextractor were attaching timecode meta to the closed
caption track. We shouldn't write timecode information for the closed
caption trak.
CEA608 closed caption tracks are a bit special in that each sample
can contain CCs for multiple frames, and CCs can be omitted, in which
case they have to be inferred from the duration of the sample.
As such we take the framerate from the (first) video track here for
CEA608 as there must be one CC byte pair for every video frame
according to the spec.
For CEA708 all is fine and there is one sample per frame.
The duration field, being a uint64, is stored in 8 bytes, not 4. So the offset of
the following field, the language code, needs to be updated accordingly so that the
parsed language code is not garbage.
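For reference, the layout of a version 1 mdhd payload per ISO/IEC 14496-12
(after the 1-byte version and 3 flag bytes), which is why the language code
sits 4 bytes later than a parser assuming a 32-bit duration would expect:

  creation_time      8 bytes
  modification_time  8 bytes
  timescale          4 bytes
  duration           8 bytes   (4 bytes in a version 0 box)
  language           2 bytes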
stream.segment should be updated with the values of the current edit
list, also when a new `moov` is received. Unfortunately this was not
being the case because of an early return.
As a consequence of this bug, no end of movie clipping was being
performed on the new moov and no segment event was being emitted.
When performing stream switching (e.g. in MSE) the new moov may have a
different edit list. This is often the case when switching between
baseline H.264 (which lacks B-frames) and more demanding profiles. For
this reason it's important to emit a new segment in order to be able
to get matching stream times.
This patch moves the initialization of QtDemuxStream.segment from
gst_qtdemux_add_stream() to _create_stream(). This ensures the segment
is always initialized when the stream is created.
Otherwise the segment format is left as GST_FORMAT_UNDEFINED in the case
where a track is reparsed and qtdemux_reuse_and_configure_stream() is
called instead of gst_qtdemux_add_stream(). (See
qtdemux_expose_streams() in the non streams-aware case.)
If ctts (CompositionOffsetBox) has larger sample_offset
(offset between PTS and DTS) than (2 * duration) of the stream,
assume the ctts box to be corrupted and ignore the box.
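A rough sketch of the sanity check (field names are illustrative, not
qtdemux's actual ones):

  if (sample_offset > 2 * stream_duration) {
    GST_WARNING_OBJECT (qtdemux,
        "ctts sample_offset looks corrupted, ignoring the whole ctts box");
    have_ctts = FALSE;
  }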
https://bugzilla.gnome.org/show_bug.cgi?id=797262
This fixes a bug where in some files mehd.fragment_duration is one unit
less than the actual duration of the fragmented movie, as explained below:
mehd.fragment_duration is computed by scaling the end timestamp of
the last frame of the movie in (in nanoseconds) by the movie timescale.
In some situations, the end timestamp is inaccurate due to lossy conversion to
fixed point required by GstBuffer upstream.
Take for instance a movie with 3 frames at exactly 3 fps.
$ gst-launch-1.0 -v videotestsrc num-buffers=3 \
! video/x-raw, framerate="(fraction)3/1" \
! x264enc \
! fakesink silent=false
dts: 999:59:59.333333334, pts: 1000:00:00.000000000, duration: 0:00:00.333333333
dts: 999:59:59.666666667, pts: 1000:00:00.666666666, duration: 0:00:00.333333334
dts: 1000:00:00.000000000, pts: 1000:00:00.333333333, duration: 0:00:00.333333333
The end timestamp is calculated by qtmux in this way:
end timestamp = last frame DTS + last frame DUR - first frame DTS =
= 1000:00:00.000000000 + 0:00:00.333333333 - 999:59:59.333333334 =
= 0:00:00.999999999
qtmux needs to round this timestamp to the declared movie timescale, which can
ameliorate this distortion, but it's important that round-nearest is used;
otherwise it would backfire badly.
Take for example a movie with a timescale of 30 units/s.
0.999999999 s * 30 units/s = 29.999999970 units
A round-floor (as was done before this patch) would set fragment_duration to
29 units, amplifying the original distortion from 1 nanosecond up to 33
milliseconds less than the correct value. The greatest distortion would occur
in the case where timescale = framerate, where an entire frame duration would
be subtracted.
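A sketch of the round-to-nearest scaling with GStreamer utilities (variable
names illustrative): 0.999999999 s * 30 units/s scales to 29.99999997
units, which rounds up to the expected 30.

  guint64 fragment_duration =
      gst_util_uint64_scale_round (end_timestamp, timescale, GST_SECOND);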
Also, rounding is added to tkhd duration computation too, which
potentially has the same problem.
https://bugzilla.gnome.org/show_bug.cgi?id=793959
... before the old streams have been exposed, for MSS streams.
In the case of DASH, newly configured streams are exposed without delay
whenever the demuxer receives a moov.
Meanwhile, since there is no moov box in an MSS stream,
the caps act like the moov, so there is a delay in exposing the new pads
until the demuxer receives the first moof.
So the following scenario is possible for MSS but not for DASH:
STREAM-START -> CAPS -> (configure streams but NOT EXPOSED YET)
-> STREAM-START -> CAPS (configure streams again).
In the above scenario, we can reuse the old streams without any stream reconfiguration.
https://bugzilla.gnome.org/show_bug.cgi?id=797239