We now give packets a chance to be collected before we send out the
newsegment event. If we're not doing an accurate seek (i.e. a keyunit seek),
the segment start/time is set to the keyframe's timestamp.
We now *always* seek to the keyframe just before our requested position.
When we encounter the first keyframe and we were not accurate (therefore doing
keyframe seeking), we update the segment start position to the keyframe timestamp.
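A rough sketch of that keyunit behaviour (hypothetical struct and function
names, not the actual asfdemux code):

  #include <gst/gst.h>

  typedef struct {
    GstSegment segment;
    gboolean accurate;          /* was the seek flagged as accurate? */
  } Demux;

  static void
  handle_first_keyframe (Demux * demux, GstClockTime keyframe_ts)
  {
    if (!demux->accurate && GST_CLOCK_TIME_IS_VALID (keyframe_ts)) {
      /* keyunit seek: snap the segment to the keyframe we landed on,
       * before the pending newsegment event is sent out */
      demux->segment.start = keyframe_ts;
      demux->segment.time = keyframe_ts;
    }
  }
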
This will still cause some timestamp jitter, but giving a hint as to the duration
rather than nothing seems to be a better idea.
Also, this allows some scenarios (like remuxing with asfmux) to estimate the total
duration using the accumulated packet duration (which will be correct).
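As an illustration, a muxer could keep a running total of those payload
durations (hypothetical names, not actual asfmux code):

  #include <gst/gst.h>

  typedef struct {
    GstClockTime accumulated_duration;
  } Stream;

  static void
  account_payload (Stream * stream, GstBuffer * payload)
  {
    GstClockTime dur = GST_BUFFER_DURATION (payload);

    /* every payload now carries a duration hint, so the running total is a
     * usable estimate of the stream's total duration */
    if (GST_CLOCK_TIME_IS_VALID (dur))
      stream->accumulated_duration += dur;
  }
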
The simple index entries also contain the number of packets one needs
to retrieve at a given position to get a full keyframe. We therefore
use that information to retrieve all those packets in one buffer when
working in pull-mode.
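Roughly, the pull-mode path can then do a single pull for the whole keyframe
(sketch with assumed names and layout):

  #include <gst/gst.h>

  static GstFlowReturn
  pull_keyframe_packets (GstPad * sinkpad, guint64 data_offset,
      guint packet_size, guint packet_number, guint packets_for_keyframe,
      GstBuffer ** buf)
  {
    guint64 offset = data_offset + (guint64) packet_number * packet_size;
    guint size = packets_for_keyframe * packet_size;

    /* one pull for all packets of the keyframe instead of one per packet */
    return gst_pad_pull_range (sinkpad, offset, size, buf);
  }
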
In gst_asf_demux_chain_headers, when 'goto wrong_type' was called
asfdemux tried to free a const pointer that had been cast to a
normal pointer variable.
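The problematic pattern looked roughly like this (simplified sketch, not the
literal code):

  #include <glib.h>

  static gboolean
  parse_headers (const guint8 * header_data, gsize size)
  {
    guint8 *data = (guint8 *) header_data;  /* cast alias of caller-owned memory */

    if (size < 16 || data[0] != 0x30)       /* e.g. not an ASF header object */
      goto wrong_type;

    /* ... parse the headers from 'data' ... */
    return TRUE;

  wrong_type:
    /* g_free (data);   <-- the bug: 'data' is not ours to free */
    return FALSE;
  }
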
We weren't taking the preroll into account previously, meaning that we
were always seeking preroll nanoseconds too early... resulting in a lot
of dropped packets (which are before the start time).
This brings us quite a bit closer to as-fast-as-possible seeking in ASF files.
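A minimal sketch of the fix, assuming the index is searched with ASF
presentation times that still include the preroll (assumed function name):

  #include <gst/gst.h>

  static GstClockTime
  get_seek_target (GstClockTime requested, GstClockTime preroll)
  {
    /* stored presentation times are shifted by the preroll, so add it back
     * before looking up the packet; otherwise we land 'preroll' ns too early */
    if (GST_CLOCK_TIME_IS_VALID (preroll))
      return requested + preroll;
    return requested;
  }
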
Post global tags only after we've added our source pads, so that
tag events get sent downstream in addition to tag messages posted
on the bus. This makes sure tags can be picked up automatically
when transcoding, but also by tagreadbin/playbin2. Fixes #519721.
While we're at it, also add a container-format tag.
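A minimal sketch of the idea using 0.10-era API (the helper name and pad list
are assumptions, not the actual asfdemux code):

  #include <gst/gst.h>

  static void
  emit_global_tags (GstElement * demux, GList * srcpads, GstTagList * tags)
  {
    GList *l;

    gst_tag_list_add (tags, GST_TAG_MERGE_REPLACE,
        GST_TAG_CONTAINER_FORMAT, "ASF", NULL);

    /* only possible once the source pads exist: push a tag event on each
     * pad so the tags travel downstream as well */
    for (l = srcpads; l != NULL; l = l->next)
      gst_pad_push_event (GST_PAD (l->data),
          gst_event_new_tag (gst_tag_list_copy (tags)));

    /* ... and post them on the bus for applications */
    gst_element_post_message (demux,
        gst_message_new_tag (GST_OBJECT (demux), tags));
  }
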
When we receive a DISCONT as input, don't clear our complete state but simply
mark a discont that will be put on the next buffer. The code will be able to
handle and throw away incomplete data.
Add some more debug info.
Remove an unused variable.
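A rough sketch of the DISCONT handling described above (hypothetical names,
not the actual code):

  #include <gst/gst.h>

  typedef struct {
    gboolean discont;           /* pending DISCONT for the next output buffer */
  } Stream;

  static void
  mark_discont (Stream * stream, GstBuffer * in)
  {
    /* don't reset the whole parsing state; the payload parser can detect
     * and throw away incomplete data on its own */
    if (GST_BUFFER_FLAG_IS_SET (in, GST_BUFFER_FLAG_DISCONT))
      stream->discont = TRUE;
  }

  static GstFlowReturn
  push_payload (Stream * stream, GstPad * srcpad, GstBuffer * out)
  {
    if (stream->discont) {
      GST_BUFFER_FLAG_SET (out, GST_BUFFER_FLAG_DISCONT);
      stream->discont = FALSE;
    }
    return gst_pad_push (srcpad, out);
  }
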
Let's not put every single mp3 frame in our index, a few frames per
second should be more than enough. For now use an index interval
of 100ms-500ms depending on the upstream size, to keep the index at
a reasonable size. Factor out the code that adds the index entry
into a separate function for better code readability.
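A rough sketch of that interval logic (the size threshold is an arbitrary
placeholder, and all names are assumptions):

  #include <gst/gst.h>

  typedef struct {
    GstClockTime idx_interval;  /* 100ms .. 500ms, or 0 for no index */
    GstClockTime last_idx_ts;   /* init to GST_CLOCK_TIME_NONE */
  } Mp3Parse;

  static void
  configure_index_interval (Mp3Parse * parse, gint64 upstream_size)
  {
    if (upstream_size <= 0)
      parse->idx_interval = 0;                  /* don't build an index */
    else if (upstream_size < 10 * 1024 * 1024)  /* placeholder threshold */
      parse->idx_interval = 100 * GST_MSECOND;
    else
      parse->idx_interval = 500 * GST_MSECOND;
  }

  static gboolean
  should_add_index_entry (Mp3Parse * parse, GstClockTime ts)
  {
    if (parse->idx_interval == 0)
      return FALSE;
    if (GST_CLOCK_TIME_IS_VALID (parse->last_idx_ts) &&
        ts < parse->last_idx_ts + parse->idx_interval)
      return FALSE;                             /* too close to the last entry */
    parse->last_idx_ts = ts;
    return TRUE;
  }
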
While technically upstream may be seekable even if it doesn't know
the exact size, I can't think of a use case where this distinction
is relevant in practice, so for now just assume we're not seekable
if upstream doesn't provide us with a size. Makes sure we don't
build a seek index when streaming internet radio with sources that
pretend to be seekable until you try to actually seek.
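In other words, something along these lines (sketch, assumed names):

  #include <gst/gst.h>

  static gboolean
  should_build_seek_index (gboolean upstream_seekable, gint64 upstream_size)
  {
    /* internet radio sources often claim to be seekable but report no size;
     * treat those as non-seekable and skip the index */
    return upstream_seekable && upstream_size > 0;
  }
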
Don't overwrite the original flow return with whatever flow return we get
when trying to push the remaining internally queued payloads.
We want to do our EOS logic, i.e. send an EOS event or segment-done
message in any case. Makes things EOS properly when an EOS event
is forced upon the pipeline so that the source returns
FLOW_UNEXPECTED to a pulling asfdemux. Should fix #582056.
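Roughly what the EOS path now looks like (simplified sketch with assumed
names):

  #include <gst/gst.h>

  static GstFlowReturn
  flush_queued_payloads (GstFlowReturn ret, GstPad * srcpad, GList * queued)
  {
    GList *l;

    for (l = queued; l != NULL; l = l->next) {
      /* the result of these pushes must not overwrite 'ret' */
      (void) gst_pad_push (srcpad, GST_BUFFER (l->data));
    }

    /* 'ret' still carries the reason we stopped pulling (e.g. FLOW_UNEXPECTED
     * when the source forced EOS), so the caller can run its normal EOS
     * logic: send an EOS event or post a segment-done message */
    return ret;
  }
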
Add a multichannel map to the output caps, and send at least a CODEC and
BITRATE tag. I'm not too sure about the 5.1 and 7.1 channel maps. I have
no samples and can't find info about the channel ordering, but this is
better than nothing.
Some Xing headers apparently start the TOC at byte 1 instead of 0. Don't
reject them because of it, just subtract the initial offset when reading
the table.
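Roughly (sketch, not the literal parsing code):

  #include <glib.h>

  static void
  read_xing_toc (const guint8 * toc_bytes, guint8 toc[100])
  {
    guint8 first = toc_bytes[0];      /* normally 0, sometimes 1 */
    gint i;

    for (i = 0; i < 100; i++)
      toc[i] = toc_bytes[i] - first;  /* subtract the initial offset */
  }
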
Be more lenient about which bits we allow to change from one header to the
next: basically, only require that the mp3 sync marker is present and that
the MPEG version, layer and samplerate stay the same.
Fixes: #581464
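A sketch of such a compatibility check, assuming the standard MPEG audio
header bit layout (the mask and names are mine, not the element's actual
code):

  #include <glib.h>

  /* sync (11 bits) | MPEG version (2) | layer (2) | samplerate (2) */
  #define HDR_SAME_MASK 0xFFFE0C00u

  static gboolean
  headers_compatible (guint32 prev_header, guint32 header)
  {
    /* bitrate, padding, channel mode etc. are allowed to change */
    return (prev_header & HDR_SAME_MASK) == (header & HDR_SAME_MASK);
  }
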
Some mp3 streams have an offset in timestamps, requiring us to push the
frame *AFTER* segment.stop in order for the decoder to be able to push
all data up to the segment.stop position.
This also makes timestamps (more) consistent before and after a possible
seek, and moreover makes for reasonable position reporting in live streams
(whose payload timestamps should not be taken for granted).
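One way to express that (sketch, assumed names; the actual condition in the
element may differ):

  #include <gst/gst.h>

  static gboolean
  should_push_frame (GstSegment * segment, GstClockTime ts,
      gboolean * pushed_past_stop)
  {
    if (!GST_CLOCK_TIME_IS_VALID (segment->stop) || ts < segment->stop)
      return TRUE;

    if (!*pushed_past_stop) {
      /* push exactly one frame at/after segment.stop so the decoder can
       * output everything up to the stop position */
      *pushed_past_stop = TRUE;
      return TRUE;
    }
    return FALSE;
  }
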
* Improve newsegment handling, e.g. upstream might live in TIME.
* Only send newsegment if we have needed info.
* Avoid reading past end of data section.
The problem that happens is the following:
* A packet with multiple payloads comes in
* Those payloads get handled one by one
* The first payload contains the first audio payload with timestamp A
* The second payload contains the first video (key)frame with timestamp V (where V < A)
With the previous code, the following would happen:
* the first payload gets processed, then passed to queue_for_stream
* queue_for_stream detects it's the first valid timestamp received and stores
first_ts = A
* the second payload gets processed, then passed to queue_for_stream
* queue_for_stream detects the timestamp is lower than first_ts... and
discards it... resulting in losing the first keyframe of the video stream
We've been having this issue for *ages*... it's just that nobody noticed it
that much with playbin. But with playbin2's aggressive multiqueue handling, this
will result in multiqueue not being able to preroll (because the video decoder will
be dropping a ton of buffers before (maybe) receiving the next keyframe).
Tested with over 200 asf files, and they all play the first frame correctly now,
even the most braindead ones.
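One way to avoid this, sketched below with assumed names: don't latch
first_ts from whichever payload happens to arrive first, but take the
smallest first timestamp across all streams before discarding anything:

  #include <gst/gst.h>

  typedef struct {
    GstClockTime first_payload_ts;  /* GST_CLOCK_TIME_NONE until seen */
  } Stream;

  static GstClockTime
  compute_first_ts (Stream * streams, guint n_streams)
  {
    GstClockTime first_ts = GST_CLOCK_TIME_NONE;
    guint i;

    for (i = 0; i < n_streams; i++) {
      GstClockTime ts = streams[i].first_payload_ts;

      if (GST_CLOCK_TIME_IS_VALID (ts) &&
          (!GST_CLOCK_TIME_IS_VALID (first_ts) || ts < first_ts))
        first_ts = ts;
    }

    /* with this, the video keyframe at V < A is kept instead of being
     * discarded for being "before" the audio stream's first timestamp */
    return first_ts;
  }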