Each period will start again with pts 0 + period presentation offset, which is
also going to be the presentation time inside the container stream, if any.
However, all periods together should form a continuous timeline with regard to
stream time and running time.
To make this possible we keep track of the "user requested segment", i.e. the
seek events, inside the demuxer without adjusting anything, and take this
demuxer segment only as orientation for the modified per-stream segments.
These per-stream segments will have their segment.start at the pts that would
be produced for this stream in this period, and their segment.base/time
adjusted so that this pts maps to the running and stream time this period
should have in the context of all other periods.
https://bugzilla.gnome.org/show_bug.cgi?id=754222
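A minimal sketch of that per-stream segment setup (period_pts,
period_running_time and period_stream_time are hypothetical inputs computed
elsewhere from the period list):

    #include <gst/gst.h>

    /* Sketch only: map the first pts a stream produces in a period to the
     * running/stream time this period should have after all previous
     * periods. */
    static void
    setup_stream_segment_for_period (GstSegment * stream_segment,
        const GstSegment * demux_segment, GstClockTime period_pts,
        GstClockTime period_running_time, GstClockTime period_stream_time)
    {
      /* Start from the unmodified "user requested" demuxer segment */
      gst_segment_copy_into (demux_segment, stream_segment);

      /* The pts this stream will produce for this period */
      stream_segment->start = period_pts;
      stream_segment->position = period_pts;

      /* Make that pts map to the accumulated running/stream time */
      stream_segment->base = period_running_time;
      stream_segment->time = period_stream_time;
    }
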
There are several cases where an HLS server could temporarily serve wrong
fragments, or reconfigure the playlist. In those cases, when we get
fragment download failures, we *really* want to wait a bit (for the next
playlist update) before retrying to get fragments.
Previously this method first checked whether there were next fragments
(according to the previous manifest update) before waiting for the next update.
The problem is that a temporary failure on the server is uncorrelated to
whether the manifest contains next fragments or not.
If a ContentProtection element is present in an AdaptationSet element,
send Protection events on the source pad, so that qtdemux can use this
information to correctly generate its source caps for DASH CENC
encrypted streams.
This allows qtdemux to support CENC encrypted DASH streams where the
content protection specific information is carried in the MPD file
rather than in pssh boxes in the initialisation segments.
This commit adds a new function to the adaptivedemux base class to allow
a GstEvent to be queued for a stream. The queued events are sent the
next time a buffer is pushed for that stream.
https://bugzilla.gnome.org/show_bug.cgi?id=705991
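As a rough illustration, a subclass could queue such a protection event like
this (gst_adaptive_demux_stream_queue_event() and the internal header are
assumed to be the new base class API mentioned above):

    #include "gstadaptivedemux.h"   /* internal base class header (assumed) */

    /* Sketch: wrap the ContentProtection payload in a GstBuffer and queue a
     * protection event so it is sent right before the next buffer. */
    static void
    queue_content_protection (GstAdaptiveDemuxStream * stream,
        const gchar * system_id, const guint8 * payload, gsize size)
    {
      GstBuffer *data = gst_buffer_new_allocate (NULL, size, NULL);
      GstEvent *event;

      gst_buffer_fill (data, 0, payload, size);

      /* The origin string tells downstream (e.g. qtdemux) where the
       * protection system specific data came from */
      event = gst_event_new_protection (system_id, data, "dash/mpd");
      gst_buffer_unref (data);

      gst_adaptive_demux_stream_queue_event (stream, event);
    }
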
Sometimes the last fragment does not exist because of rounding errors with the
durations. Just finish the stream gracefully instead of erroring out.
Segment start/time/position/base should only be modified if this is the first
time we send a segment, otherwise we will override values from the seek
segment if new streams have to be exposed as part of the seek.
Segment base should be calculated from the segment start based on the stream's
own segment, not the demuxer's segment. Both might differ slightly because of
the presentationTimeOffset.
Always add the presentationTimeOffset (relative to the period start, not
timestamp 0) to the segment start after resetting the stream's segment based
on the demuxer's segment (i.e. after seeks or stream restart). Also make sure
to keep the stream's segment up to date and not just send a new segment event
without storing the segment in the stream.
https://bugzilla.gnome.org/show_bug.cgi?id=745455
Also include the presentation offset in the last known position for each
stream, and, while at it, keep track of the latest known position inside the
demuxer segment as well.
It's going to return EOS if the period ended or there is otherwise just no
next fragment left. If we don't store the last return value, it will always
stay OK and gst_adaptive_demux_combine_flows() will always return OK instead
of EOS once all streams are done.
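The combining behaviour described here is roughly the following (a simplified
sketch, not the actual gst_adaptive_demux_combine_flows() code; the per-stream
struct is hypothetical):

    #include <gst/gst.h>

    /* Hypothetical per-stream record; only the stored last return matters */
    typedef struct
    {
      GstFlowReturn last_ret;
    } MyStream;

    static GstFlowReturn
    combine_flows (GList * streams)
    {
      gboolean all_eos = TRUE;
      GList *l;

      for (l = streams; l != NULL; l = l->next) {
        MyStream *stream = l->data;
        GstFlowReturn ret = stream->last_ret;

        if (ret == GST_FLOW_NOT_LINKED)
          continue;
        if (ret != GST_FLOW_EOS)
          all_eos = FALSE;
        /* errors and flushing are returned immediately */
        if (ret <= GST_FLOW_NOT_NEGOTIATED || ret == GST_FLOW_FLUSHING)
          return ret;
      }

      return all_eos ? GST_FLOW_EOS : GST_FLOW_OK;
    }
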
This partially fixes period changes in DASH by at least trying to switch
instead of just stopping. What is still missing is that after a period change
with DASH the times all start at 0 again instead of continuing.
It's true that we shouldn't consider errors fatal immediately, but if we
always ignore them we will loop infinitely on live streams with segments
that can't be downloaded at all.
Even for "live" streams we are not live in the GStreamer meaning of the word.
We don't produce buffers that are timestamped based on their "capture time"
and our clock, but just based on whatever timestamps the stream might contain.
Also even if we wanted to claim to be live, that wouldn't work well as we
would have to return GST_STATE_CHANGE_NO_PREROLL when going from READY to
PAUSED, which we can't. We first need data to know if we are "live" or not.
It would deadlock as we would then join() the update task from itself. Instead
just post an actual error message on the bus and only stop the update task.
The application is then responsible for shutting down the element, and thus
all the other tasks and everything, based on the error message it gets.
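A sketch of that approach, with a hypothetical update_task handle passed in:

    #include <gst/gst.h>

    static void
    handle_fatal_download_error (GstElement * demux, GstTask * update_task)
    {
      /* Post an actual error message on the bus ... */
      GST_ELEMENT_ERROR (demux, RESOURCE, READ,
          ("Could not update the playlist"), (NULL));

      /* ... and only stop the update task: gst_task_stop() does not wait for
       * the task function to return, so calling it from the task itself is
       * safe, unlike gst_task_join() which would deadlock here. */
      gst_task_stop (update_task);
    }
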
It might return OK from subclasses and it could cause a bitrate
renegotiation. For DASH and MSS that is ok as they won't expose
new pads as part of this, but it can cause issues for HLS as
it will expose new pads, leading to pads that will only get EOS,
which causes decodebin to fail.
https://bugzilla.gnome.org/show_bug.cgi?id=745905
Ask the subclass for a potential time offset to apply to each
separate stream; in DASH, streams can have "presentation time offsets",
which can be different for each stream.
https://bugzilla.gnome.org/show_bug.cgi?id=745455
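For illustration, a subclass could answer such an offset query roughly like
this (the vfunc name, its signature and the MyDemuxStream fields are
assumptions):

    #include "gstadaptivedemux.h"   /* internal base class header (assumed) */

    /* Hypothetical subclass stream struct extending the base stream */
    typedef struct
    {
      GstAdaptiveDemuxStream parent;
      GstClockTime presentation_time_offset;
    } MyDemuxStream;

    /* Sketch: return the presentation time offset for this particular
     * stream; in DASH each stream may carry a different offset. */
    static GstClockTime
    my_demux_get_presentation_offset (GstAdaptiveDemux * demux,
        GstAdaptiveDemuxStream * stream)
    {
      MyDemuxStream *my_stream = (MyDemuxStream *) stream;

      return my_stream->presentation_time_offset;
    }

    /* ... and in class_init:
     *   adaptivedemux_class->get_presentation_offset =
     *       my_demux_get_presentation_offset;
     */
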
Move the property from the subclasses to adaptivedemux; it allows
selecting the percentage of the measured bitrate to be used when
selecting stream bitrates.
Allow setting a bitrate directly instead of measuring it internally
based on the received chunks. The connection-speed property was removed
from mssdemux and hlsdemux as it is now in the base class.
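As a usage sketch (assuming the property names match the base class, with
connection-speed in kbps):

    #include <gst/gst.h>

    /* Sketch: force a fixed download bitrate instead of the internally
     * measured one, and only use 80% of the bitrate when selecting a
     * variant. */
    static void
    configure_demux_bitrate (GstElement * demux)
    {
      g_object_set (demux,
          "connection-speed", (guint) 3000,  /* kbps */
          "bitrate-limit", 0.8,              /* fraction of measured bitrate */
          NULL);
    }
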
And use the average to go up in resolution, and the last fragment
bitrate to go down.
This allows the demuxer to react rapidly to bitrate loss, and
be conservative for bitrate improvements.
+ Add a construct-only property to define the number of fragments
to consider when calculating the moving average bitrate.
https://bugzilla.gnome.org/show_bug.cgi?id=742979
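A simplified sketch of the moving-average bookkeeping described above (the
struct, names and default lookback are hypothetical):

    #include <glib.h>

    #define NUM_LOOKBACK_FRAGMENTS 3        /* hypothetical default */

    /* Remember the bitrate of the last N downloaded fragments */
    typedef struct
    {
      guint64 bitrates[NUM_LOOKBACK_FRAGMENTS];
      guint count;                          /* fragments recorded so far */
      guint pos;                            /* next slot to overwrite */
    } BitrateHistory;

    static void
    bitrate_history_add (BitrateHistory * h, guint64 fragment_bitrate)
    {
      h->bitrates[h->pos] = fragment_bitrate;
      h->pos = (h->pos + 1) % NUM_LOOKBACK_FRAGMENTS;
      if (h->count < NUM_LOOKBACK_FRAGMENTS)
        h->count++;
    }

    static guint64
    bitrate_history_average (const BitrateHistory * h)
    {
      guint64 sum = 0;
      guint i;

      if (h->count == 0)
        return 0;
      for (i = 0; i < h->count; i++)
        sum += h->bitrates[i];
      return sum / h->count;
    }

    /* One way to realise "average to go up, last fragment to go down":
     * select the variant against the minimum of both values, which reacts
     * quickly to drops and slowly to improvements. */
    static guint64
    bitrate_for_selection (const BitrateHistory * h,
        guint64 last_fragment_bitrate)
    {
      return MIN (bitrate_history_average (h), last_fragment_bitrate);
    }
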
Add more power to the chunk_received function (renamed to data_received)
and also to the fragment_finish function.
The data_received function must parse/decrypt the data if necessary and
also push it using the new push_buffer function that is exposed now. The
default implementation gets data from the stream adapter (all available)
and pushes it.
The fragment_finish function must also advance the fragment. The default
implementation only advances the fragment.
This allows the subsegment handling in dashdemux to continuously download
the same file from the server instead of stopping at every subsegment
boundary and starting a new request.
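Roughly, the default data_received behaviour corresponds to something like
this (the vfunc shape, the stream's adapter field and the push helper are
assumptions about the base class):

    #include <gst/base/gstadapter.h>
    #include "gstadaptivedemux.h"   /* internal base class header (assumed) */

    /* Sketch: take everything currently in the stream's adapter and push it
     * downstream with the exposed push_buffer helper. */
    static GstFlowReturn
    default_data_received (GstAdaptiveDemux * demux,
        GstAdaptiveDemuxStream * stream)
    {
      GstBuffer *buffer;
      gsize available = gst_adapter_available (stream->adapter);

      if (available == 0)
        return GST_FLOW_OK;

      buffer = gst_adapter_take_buffer (stream->adapter, available);
      return gst_adaptive_demux_stream_push_buffer (stream, buffer);
    }
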
If we say it is the first segment after a new period it will resync
the segment.start value and all buffers will be late for the new period
we are trying to play. Otherwise we want to keep segment.start at
the previous value to allow the running time to smoothly increase.
Check if there is a next fragment before advancing to avoid causing
a bitrate switch (and maybe exposing new pads) only to push EOS.
That would cause playback to stop with an error instead of properly
finishing with an EOS message.
The subsegment boundary return tells the adaptivedemux that it can
try to switch to another representation as the stream is at a suitable
position for starting from another bitrate.
In order to get some subsegment information, subclasses might want
to download only the headers to have enough data (the index)
to decide where to start downloading from the subsegment.
This allows the subclasses to know whether the downloaded chunks are
part of the header or of the index, so they can parse the parts they
are interested in.
Segment start only needs to be updated when starting the streams
or after a seek; doing it during bitrate changes will cause the
running time to become discontinuous (jump back to a previous ts)
and QoS will drop buffers.