On the one hand, it does not sufficiently check whether it is only updating a
dummy segment. On the other hand, only doing this at the time the last sample
is prepared (and sent downstream) is too little, too late.
That is, parse each moof in one pass (considering all contained streams'
metadata), and do so incrementally as needed for playback rather than doing
an initial complete scan of all moof boxes (though all moov sample metadata
is fully parsed at startup).
... as some bogus files may indicate streams of 0 duration in moov,
while indicating the complete movie duration in mvhd (the latter should
be in mehd).
Avoid an extra allocation in _parse_trun, add more checks for parsing errors,
add or adjust some debug statements, fix comments, and sprinkle in some branch
prediction hints.
The allocation of the samples can be moved out of the loop. This makes the
code clearer.
Also avoid relying on traf information, as it is placed at the end of the
file and might not be accessible in push mode.
The fragmented mp4 format stores the track and sample information in
'moof' boxes, which are prepended to each fragment (fragment = 'moof' + 'mdat').
The 'mfra' box stores the offset of each 'moof' box and its presentation
time. The location of this box can be retrieved from the 'mfro' box, which is
located at the end of the file.
The 'mfra' box is parsed to get the offset of each 'moof' box and its
presentation time.
Each 'moof' box can contain information for one or more tracks inside
'tfhd' boxes. For each track in a 'moof', there is a 'trun' box, which
contains information about each sample (offset and duration) used to build
the samples table.
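For illustration, a minimal sketch (not the actual qtdemux code; the Sample
type and add_trun_samples helper are hypothetical) of how one 'trun' box's
entries could be folded into a flat sample table:

    #include <glib.h>

    /* Box layout described above, roughly:
     *   moov                          (global headers)
     *   [ moof [ traf [ tfhd ][ trun ]]][ mdat ] ...   (one pair per fragment)
     *   mfra [ tfra ... ][ mfro ]      (moof offsets + times, at end of file) */

    typedef struct {
      guint64 offset;
      guint32 size;
      guint32 duration;
    } Sample;

    /* Fold one 'trun' box's per-sample entries into the sample table;
     * data_offset is where this fragment's sample data starts in 'mdat'. */
    static void
    add_trun_samples (Sample * table, guint * n_samples, guint64 data_offset,
        const guint32 * durations, const guint32 * sizes, guint count)
    {
      guint i;

      for (i = 0; i < count; i++) {
        table[*n_samples].offset = data_offset;
        table[*n_samples].size = sizes[i];
        table[*n_samples].duration = durations[i];
        data_offset += sizes[i];    /* samples are stored back to back in 'mdat' */
        (*n_samples)++;
      }
    }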
Based on patch by Marc-André Lureau <mlureau@flumotion.com>
https://bugzilla.gnome.org/show_bug.cgi?id=596321
Versions 0 and 1 of mvhd have different sizes for their values
(32 bits/64 bits). This patch makes qtdemux dump them correctly.
Also use the right node as the parameter and not the root node.
https://bugzilla.gnome.org/show_bug.cgi?id=596321
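A sketch (not the actual dump code) of the version-dependent mvhd layout,
assuming GStreamer's big-endian read macros; 'data' points just past the
4-byte version/flags field:

    #include <gst/gst.h>

    static void
    read_mvhd_fields (const guint8 * data, guint8 version,
        guint32 * timescale, guint64 * duration)
    {
      if (version == 1) {
        /* 64-bit creation/modification times and duration */
        *timescale = GST_READ_UINT32_BE (data + 16);
        *duration = GST_READ_UINT64_BE (data + 20);
      } else {
        /* version 0: all values are 32-bit */
        *timescale = GST_READ_UINT32_BE (data + 8);
        *duration = GST_READ_UINT32_BE (data + 12);
      }
    }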
The DTS typefinder may return a lower probability for frames that start
at non-zero offsets and where there's no second frame sync in the first
buffer. It's fairly unlikely that we'll accidentally identify PCM data
as DTS, so we don't do additional checks for now.
https://bugzilla.gnome.org/show_bug.cgi?id=636234
When parsing the bitstream, look for SOP markers because we are allowed to split
packets on those marker boundaries.
Rework the parsing code a little so that we can pack multiple packetization
units into one RTP packet.
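A sketch (hypothetical helper, not the actual payloader code) of locating the
next SOP marker (0xff91) in a JPEG 2000 codestream, which is a boundary where
the payload may be split:

    #include <glib.h>

    static gssize
    find_next_sop (const guint8 * data, gsize size, gsize from)
    {
      gsize i;

      for (i = from; i + 1 < size; i++) {
        if (data[i] == 0xff && data[i + 1] == 0x91)
          return (gssize) i;
      }
      return -1;    /* no further SOP marker in this buffer */
    }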
When handling newsegment, flush out the buffer history in the
existing segment, not the new one. Fixes playback in some DVD
cases.
Partially fixes#633294
In a number of cases it is necessary to flush the field history by
performing 'degraded' deinterlacing - that is, using the user-chosen
method for as many fields as possible, then using vfir for as long as
there are >= 2 fields remaining in the history, then using linear for
the last field.
This should avoid losing fields that are being kept in the history, for
example at EOS.
This may address part of #633294
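A hypothetical sketch of the degraded flush described above (names and the
one-field-per-iteration bookkeeping are simplifications, not the actual
deinterlace code): use the best method the remaining history still supports,
output a field, and repeat until the history is empty.

    #include <glib.h>

    typedef enum { METHOD_USER, METHOD_VFIR, METHOD_LINEAR } Method;

    static void
    flush_history (guint history_count, guint fields_required,
        void (*output_one_field) (Method method))
    {
      while (history_count > 0) {
        Method m;

        if (history_count >= fields_required)
          m = METHOD_USER;      /* the user-chosen method, full quality */
        else if (history_count >= 2)
          m = METHOD_VFIR;      /* vfir only needs two fields */
        else
          m = METHOD_LINEAR;    /* one field left, use linear */

        output_one_field (m);
        history_count--;
      }
    }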
Only set the delta flag when all of the units in the packet are delta units.
Based on patch from Olivier Crête <olivier.crete@collabora.co.uk>
Fixes #632945
If caps weren't negotiated, goom should return not-negotiated
from its chain functions instead of using bps uninitialized, which
leads to a division by 0.
https://bugzilla.gnome.org/show_bug.cgi?id=633212
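A sketch of the guard (the function and the 'bps' parameter stand in for
goom's own state, so the names are hypothetical): refuse buffers until caps
have been negotiated instead of dividing by zero later.

    #include <gst/gst.h>

    static GstFlowReturn
    chain_guard (gint bps, GstBuffer * buf)
    {
      if (bps == 0) {
        /* caps were never negotiated; using bps would divide by 0 */
        gst_buffer_unref (buf);
        return GST_FLOW_NOT_NEGOTIATED;
      }
      return GST_FLOW_OK;
    }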
GST_ELEMENT_ERROR must not be called with the object lock held,
since it will call gst_object_get_parent() internally, which
takes the object lock as well.
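A sketch of the safe pattern (hypothetical function, generic error message):
drop the object lock before raising the error, since GST_ELEMENT_ERROR calls
gst_object_get_parent(), which takes the lock again.

    #include <gst/gst.h>

    static void
    report_error (GstElement * element)
    {
      GST_OBJECT_LOCK (element);
      /* ... inspect or update state under the lock ... */
      GST_OBJECT_UNLOCK (element);

      /* only raise the error once the object lock has been released */
      GST_ELEMENT_ERROR (element, STREAM, FAILED,
          ("Internal data flow error."),
          ("error raised without the object lock held"));
    }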
Only send newsegment events with new positions downstream when actually
needed, instead of sending multiple newsegment events with new seek
positions in a row. Also set the discont flag on buffers after a
discontinuity.
Re-use the existing 'pos' field maintained by ebml writer to set
buffer offsets. This also makes sure that we set the right offsets
on buffers after a seek (e.g. when writing an index at the end).
The incoming buffer is only pushed onto the adapter at the end of the
handle_buffer function, but its duration/timestamp is already taken into
account for the current data in the adapter. This leads to wrong RTP
timestamps and extra latency.
Both history_count and fields_required count from 1. As per the while loop
condition that follows this code, to perform the deinterlacing method, we need
history_count >= fields_required fields in the history. Therefore if we have
history_count < fields_required (not fields_required + 1), we need more fields.
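The corrected check, in sketch form (hypothetical helper name):

    #include <glib.h>

    /* Both counters count from 1, so more input is needed only while
     * history_count < fields_required (not fields_required + 1). */
    static gboolean
    need_more_fields (guint history_count, guint fields_required)
    {
      return history_count < fields_required;
    }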
This fixes the assumption that DecoderSpecificInfo must be 2 bytes long
for AAC files. The specification allows HE-AAC to be explicitly
signalled in a backward compatible way. This is done by means of
additional information after the regular AAC header. It is expected that
decoders that can play AAC but not HE-AAC will parse the header normally
and ignore extended bits, much as they do for the HE-AAC specific payload
in the actual stream.
https://bugzilla.gnome.org/show_bug.cgi?id=612313
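A sketch (not the actual demuxer code; struct and helper names are
hypothetical) of the first two bytes of an AudioSpecificConfig; anything
beyond them may carry the explicit, backward compatible HE-AAC (SBR)
signalling, which plain AAC decoders simply ignore.

    #include <glib.h>

    typedef struct {
      guint object_type;
      guint rate_index;
      guint channels;
    } AacConfig;

    static gboolean
    parse_asc_header (const guint8 * data, gsize size, AacConfig * out)
    {
      if (size < 2)
        return FALSE;

      out->object_type = (data[0] & 0xf8) >> 3;                    /* 5 bits */
      out->rate_index = ((data[0] & 0x07) << 1) | (data[1] >> 7);  /* 4 bits */
      out->channels = (data[1] & 0x78) >> 3;                       /* 4 bits */
      /* bytes after this may hold the HE-AAC extension */
      return TRUE;
    }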
Implement a latency query and report how much latency we will add to the
stream.
Also set some default values for width/height/framerate.
Fixes #631303
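A sketch of a latency query handler in this style (hypothetical function;
'own_latency' is a placeholder for the latency the element adds): report
upstream's latency plus this element's own contribution.

    #include <gst/gst.h>

    static gboolean
    handle_latency_query (GstPad * sinkpad, GstQuery * query,
        GstClockTime own_latency)
    {
      GstClockTime min, max;
      gboolean live;

      /* ask upstream first, then add our own latency on top */
      if (!gst_pad_peer_query (sinkpad, query))
        return FALSE;

      gst_query_parse_latency (query, &live, &min, &max);
      min += own_latency;
      if (max != GST_CLOCK_TIME_NONE)
        max += own_latency;
      gst_query_set_latency (query, live, min, max);
      return TRUE;
    }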
This uses gstpbutils to extract the profile and level from the video
object sequence and adds them to the stream caps. This can be used as
metadata and for fine-grained decoder selection.
https://bugzilla.gnome.org/show_bug.cgi?id=616521
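A sketch of how this could look (not the parser's actual code; assuming the
pbutils codec-utils helper for MPEG-4 video): build the video/mpeg caps and
let pbutils derive profile and level from the visual object sequence header.

    #include <gst/gst.h>
    #include <gst/pbutils/codec-utils.h>

    static GstCaps *
    make_mpeg4_caps (const guint8 * vos_data, guint vos_len)
    {
      GstCaps *caps = gst_caps_new_simple ("video/mpeg",
          "mpegversion", G_TYPE_INT, 4,
          "systemstream", G_TYPE_BOOLEAN, FALSE, NULL);

      /* adds "profile" and "level" fields to the caps */
      gst_codec_utils_mpeg4video_caps_set_level_and_profile (caps,
          vos_data, vos_len);
      return caps;
    }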
This exports the AAC profile and level in caps for use as metadata and
(eventually) for more fine-grained selection of decoders at
caps-negotiation time. (Doesn't work for HE-AAC yet though.)
https://bugzilla.gnome.org/show_bug.cgi?id=612313
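A sketch along the same lines (hypothetical wrapper; assuming the pbutils AAC
codec-utils helper and 0.10-style buffer access): take the AudioSpecificConfig
from codec_data and let pbutils add the AAC profile and level to the caps.

    #include <gst/gst.h>
    #include <gst/pbutils/codec-utils.h>

    static void
    add_aac_profile_level (GstCaps * caps, GstBuffer * codec_data)
    {
      /* adds "profile" and "level" fields derived from the config */
      gst_codec_utils_aac_caps_set_level_and_profile (caps,
          GST_BUFFER_DATA (codec_data), GST_BUFFER_SIZE (codec_data));
    }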