For live streams, download_loop keeps downloading segments until it
gets a 404 error for a segment that has not yet been published. This
is a problem because the request for a segment that doesn't exist
propagates all the way back to the origin server(s), so dashdemux
causes extra load on the origin server(s) for segments that aren't
yet available.
This patch uses availabilityStartTime, period
and the host's idea of UTC to decide if a fragment is available to
be requested from an HTTP server and filter out requests for fragments
that are not yet available.
https://bugzilla.gnome.org/show_bug.cgi?id=701404
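A rough sketch of that availability check (GLib's GDateTime is used here for
the host's UTC clock; the helper name and arguments are illustrative, not the
actual dashdemux code):

    #include <gst/gst.h>

    /* 'avail_start' is the MPD availabilityStartTime; 'period_start' and
     * 'frag_end' are the period start and the end offset of the fragment
     * within that period, in nanoseconds. */
    static gboolean
    fragment_is_available (GDateTime * avail_start, GstClockTime period_start,
        GstClockTime frag_end)
    {
      GDateTime *now_utc = g_date_time_new_now_utc ();
      GTimeSpan elapsed_us = g_date_time_difference (now_utc, avail_start);

      g_date_time_unref (now_utc);

      if (elapsed_us <= 0)
        return FALSE;           /* availabilityStartTime is still in the future */

      /* The fragment may be requested only once the wall clock has passed
       * the point where the server has fully produced it. */
      return (GstClockTime) elapsed_us * GST_USECOND >= period_start + frag_end;
    }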
gstdashdemux.c:1753: warning: format '%llu' expects type 'long long unsigned int', but argument 8 has type 'long unsigned int'
gstdashdemux.c:2224: warning: format '%llu' expects type 'long long unsigned int', but argument 9 has type 'guint64'
gstdashdemux.c:2224: warning: format '%llu' expects type 'long long unsigned int', but argument 10 has type 'guint64'
This prevents deadlocks on startup with files that have only a very
large buffer for one stream: the queue fills up and then blocks on
the EOS event that is pushed after the buffer. As no buffers have yet
been pushed to the other streams, the pipeline gets stuck on preroll.
During a live stream it is possible for dashdemux to lag behind on a
slow connection, or to rush ahead if the connection is too fast.
In the first case it is necessary to jump some segments ahead to be able to
continue playback, as old segments are usually deleted from the server.
In the latter case, dashdemux should wait a little before attempting another
download, to give the server time to produce a new segment.
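A sketch of the two cases (the helper name, the enum and the three-segment
threshold are illustrative assumptions, not the real dashdemux logic); on
LIVE_WAIT the download loop would sleep for roughly one segment duration,
e.g. with g_usleep(), before retrying:

    #include <gst/gst.h>

    typedef enum
    {
      LIVE_OK,                  /* keep downloading normally */
      LIVE_SKIP_AHEAD,          /* lagging: jump segments towards the live edge */
      LIVE_WAIT                 /* ahead: sleep before trying the next download */
    } LiveAction;

    /* 'position' is the stream's current fragment timestamp and 'live_edge'
     * the newest fragment the server advertises, both in nanoseconds. */
    static LiveAction
    check_live_drift (GstClockTime position, GstClockTime live_edge,
        GstClockTime segment_duration)
    {
      if (live_edge > position && live_edge - position > 3 * segment_duration)
        return LIVE_SKIP_AHEAD; /* old segments are likely gone from the server */
      if (position >= live_edge)
        return LIVE_WAIT;       /* give the server time to produce a new segment */
      return LIVE_OK;
    }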
Replaces the two lists that are likely to grow largest with more appropriate
structures to improve performance.
Replaces the S nodes' GList with a GQueue; this reduces startup latency
because the list no longer has to be traversed just to append an element.
Replaces the processed media segments' GList with a GPtrArray, as it is
constantly accessed by index during playback.
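For illustration only (not the dashdemux code), the cost difference between
the structures:

    #include <glib.h>

    int
    main (void)
    {
      /* GList: g_list_append() walks the whole list each time -> O(n). */
      GList *list = NULL;
      list = g_list_append (list, "segment");

      /* GQueue: keeps head and tail pointers -> appending is O(1). */
      GQueue *queue = g_queue_new ();
      g_queue_push_tail (queue, "segment");

      /* GPtrArray: indexed access is O(1), handy during playback. */
      GPtrArray *segments = g_ptr_array_new ();
      g_ptr_array_add (segments, "segment");
      g_print ("%s\n", (const gchar *) g_ptr_array_index (segments, 0));

      g_list_free (list);
      g_queue_free (queue);
      g_ptr_array_free (segments, TRUE);
      return 0;
    }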
When dashdemux selects its first fragment, it always selects the
first fragment listed in the manifest. For on-demand content,
this is the correct behaviour. However for live content, this
behaviour is undesirable because the first fragment listed in the
manifest might be some considerable time behind "now".
The commit uses the host's idea of UTC and tries to find the
oldest fragment that contains samples for this time of day.
https://bugzilla.gnome.org/show_bug.cgi?id=701509
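A sketch of the idea under the simplifying assumption of fixed-duration
segments (the helper name is illustrative; the real code works from the
manifest's segment information):

    #include <gst/gst.h>

    /* Map the host's UTC time onto a segment index for a live stream. */
    static guint
    first_live_segment_index (GDateTime * avail_start,
        GstClockTime segment_duration)
    {
      GDateTime *now = g_date_time_new_now_utc ();
      GTimeSpan since_start_us = g_date_time_difference (now, avail_start);

      g_date_time_unref (now);

      if (since_start_us <= 0 || segment_duration == 0)
        return 0;

      /* Pick the segment whose interval covers "now" rather than the
       * first one listed in the manifest. */
      return (guint) (((GstClockTime) since_start_us * GST_USECOND) /
          segment_duration);
    }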
It was not properly divided by GST_SECOND. Also fix an issue where
max-buffering-time was multiplied by GST_SECOND every time the
property was retrieved.
https://bugzilla.gnome.org/show_bug.cgi?id=700487
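An illustration of the scaling fix with a hypothetical struct (not the actual
property code): the value is kept in nanoseconds internally and converted only
at the property boundary, so repeated reads no longer re-scale it:

    #include <gst/gst.h>

    typedef struct
    {
      GstClockTime max_buffering_time;  /* nanoseconds */
    } Demux;

    static void
    set_max_buffering_time (Demux * demux, guint seconds)
    {
      /* seconds from the application -> nanoseconds internally */
      demux->max_buffering_time = (GstClockTime) seconds * GST_SECOND;
    }

    static guint
    get_max_buffering_time (Demux * demux)
    {
      /* The broken version multiplied the stored field by GST_SECOND here,
       * so every read of the property grew it by another factor. */
      return (guint) (demux->max_buffering_time / GST_SECOND);
    }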
We only want to adjust the timestamps so that they start from 0 for live
streams. Non-live streams already start from 0 and after a seek we actually want
the timestamp to be the position we seek to.
Non-live streams should timestamp buffers with a running-time starting from
0. Since we already push a 0 -> -1 segment, bring the timestamps to 0
by subtracting the initial timestamp.
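A sketch of that adjustment (assuming the buffer is already writable and
'first_ts' is per-stream state initialised to GST_CLOCK_TIME_NONE; the names
are illustrative):

    #include <gst/gst.h>

    static void
    shift_to_zero_based_timestamp (GstBuffer * buffer, GstClockTime * first_ts)
    {
      GstClockTime ts = GST_BUFFER_PTS (buffer);

      if (!GST_CLOCK_TIME_IS_VALID (ts))
        return;

      /* Remember the very first timestamp seen on this stream... */
      if (!GST_CLOCK_TIME_IS_VALID (*first_ts))
        *first_ts = ts;

      /* ...and subtract it, so the running time starts from 0 to match the
       * 0 -> -1 segment that was already pushed. */
      GST_BUFFER_PTS (buffer) = ts - *first_ts;
    }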
Manifest updates should be done periodically for live streams.
This patch makes the demuxer create a new manifest client for
the new version and transfer the stream position to it,
discarding the old one afterwards.
A small struct that keeps a short history of fragment download bitrates,
to get an average over the last N fragments instead of using only
the last downloaded bitrate.
Do not use a global bitrate, as fragment sizes matter when calculating
the download rate: since the connection setup time is also counted in
the download duration, a smaller fragment will show a lower bitrate
than a larger one.
This avoids switching bitrates for streams too frequently because
of bitrate mismatches.
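A sketch of such a download-rate history (the struct, the names and the
3-fragment window are illustrative):

    #include <gst/gst.h>

    #define NUM_LOOKBACK_FRAGMENTS 3

    typedef struct
    {
      guint64 bitrates[NUM_LOOKBACK_FRAGMENTS];
      guint pos;
      guint count;
    } DownloadRate;

    /* Record the bitrate measured for the fragment just downloaded. */
    static void
    download_rate_add (DownloadRate * rate, guint64 bitrate)
    {
      rate->bitrates[rate->pos] = bitrate;
      rate->pos = (rate->pos + 1) % NUM_LOOKBACK_FRAGMENTS;
      if (rate->count < NUM_LOOKBACK_FRAGMENTS)
        rate->count++;
    }

    /* Average over the recorded fragments instead of only the last one. */
    static guint64
    download_rate_average (const DownloadRate * rate)
    {
      guint64 sum = 0;
      guint i;

      if (rate->count == 0)
        return 0;
      for (i = 0; i < rate->count; i++)
        sum += rate->bitrates[i];
      return sum / rate->count;
    }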
Instead of downloading 1 fragment per stream per download loop,
select the stream with the earliest timestamp and get a fragment
only for that one.
The old algorithm would lead to problems when the fragment durations
were too different between streams.
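A sketch of the selection (the array of per-stream next-fragment timestamps
is an illustrative stand-in for the real per-stream state):

    #include <gst/gst.h>

    /* Returns the index of the stream whose next fragment has the earliest
     * timestamp, or -1 if every stream is already at EOS. */
    static gint
    select_stream_to_download (GstClockTime * next_ts, guint n_streams)
    {
      gint selected = -1;
      GstClockTime earliest = GST_CLOCK_TIME_NONE;
      guint i;

      for (i = 0; i < n_streams; i++) {
        if (!GST_CLOCK_TIME_IS_VALID (next_ts[i]))
          continue;             /* stream already finished */
        if (selected == -1 || next_ts[i] < earliest) {
          earliest = next_ts[i];
          selected = i;
        }
      }
      return selected;
    }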
dashdemux shouldn't emit the buffering message as that can pause
the pipeline. It has no proper knowledge of the downstream buffering
status so it can pause the pipeline when it isn't necessary. It should
have an internal buffer for downloading the streams ahead of playback,
but that shouldn't make it able to stop the pipeline for buffering.
A particular case in which this is bad is when a pad switch happens
(when changing bitrates, for example): the new pads dashdemux creates
get linked to demuxers, new queues are created, and since these
queues are initially empty dashdemux quickly drains its own buffers
by pushing them to those queues. It would then have no more buffers
internally and would emit a buffering message with a low ratio,
pausing the pipeline when it isn't necessary.
Put EOS on the streams' queues after the last fragment of the
last period for each stream. This way we keep it serialized
with the buffers, and it works when streams have different
ending times.
The smallest queue should be used, to prevent the download thread
from blocking when one stream has too much data buffered, which would
leave the other streams starved of fragments.
Each stream has its own durations and timestamps, so the fragment number
when seeking is different for each stream; the seek has to be done
on all streams, rather than on a single stream and propagated to the
others.
GstDataQueue has proper locking and provides functions to limit the
size of the queue. It also has blocking calls that are useful in
our multithreaded DASH scenario.
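A minimal sketch of the GstDataQueue pattern (the header path, the 30-item
limit and all helper names are assumptions for illustration): the check-full
callback is what limits the queue size, and gst_data_queue_push() blocks
while it returns TRUE.

    #include <gst/gst.h>
    #include <gst/base/gstdataqueue.h>  /* header location in recent GStreamer */

    static gboolean
    queue_is_full (GstDataQueue * queue, guint visible, guint bytes,
        guint64 time, gpointer checkdata)
    {
      return visible > 30;      /* illustrative limit on queued items */
    }

    static void
    queue_full_cb (GstDataQueue * queue, gpointer checkdata)
    {
      /* the producer side could be notified here */
    }

    static void
    queue_empty_cb (GstDataQueue * queue, gpointer checkdata)
    {
      /* the consumer side could be notified here */
    }

    static GstDataQueue *
    create_stream_queue (void)
    {
      return gst_data_queue_new (queue_is_full, queue_full_cb, queue_empty_cb,
          NULL);
    }

    static void
    item_destroy (gpointer data)
    {
      GstDataQueueItem *item = data;

      gst_mini_object_unref (item->object);
      g_slice_free (GstDataQueueItem, item);
    }

    static void
    enqueue_buffer (GstDataQueue * queue, GstBuffer * buffer)
    {
      GstDataQueueItem *item = g_slice_new0 (GstDataQueueItem);

      item->object = GST_MINI_OBJECT_CAST (buffer);
      item->size = gst_buffer_get_size (buffer);
      item->duration = GST_BUFFER_DURATION (buffer);
      item->visible = TRUE;
      item->destroy = item_destroy;

      /* Blocks while queue_is_full() returns TRUE; returns FALSE if the
       * queue was set to flushing (e.g. on shutdown). */
      if (!gst_data_queue_push (queue, item))
        item->destroy (item);
    }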
Store the buffers separately for each stream; this is clearer than
having one queue with a list of buffers. It also allows easier selection
of which buffers to push in later refactors.
Fragments should be pushed ASAP as downstream should be responsible for
doing the synchronization and proper buffering.
This has the great side effect of fixing most of the seeking A/V sync issues.
- the MPD file is updated in the download loop (only if we have a "dynamic" MPD and minimumUpdatePeriod is valid);
- properly LOCK/UNLOCK the GstMpdClient;
This fixes the build, which had been broken by commit
fb9aeac6552021b176a4c4bd07265e02a0b70e0f.
gst_mpd_client_get_target_duration has been removed, and
gst_mpd_client_get_next_fragment_duration should be used instead.
This was necessary to support variable-duration Fragments.
In the new API:
- gst_mpd_client_get_current_position returns the timestamp of the NEXT fragment to download;
- gst_mpd_client_get_next_fragment_duration returns the duration of the next fragment to download;
- gst_mpd_client_get_media_presentation_duration returns the mediaPresentationDuration from the MPD file;
There is also a new internal parser function:
- gst_mpd_client_get_segment_duration extracts the constant segment duration from the MPD file
(only used when there is no SegmentTimeline syntax element in the current representation)
In gst_mpd_client_get_next_fragment, we set the timestamp/duration of the fragment just downloaded,
copying the values from the corresponding GstMediaSegment.
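A sketch of how the download loop might combine these calls (the argument
lists, the header path and the end-of-stream checks are assumptions; only the
function names and their return semantics come from the notes above):

    #include <gst/gst.h>
    #include "gstmpdparser.h"

    /* Returns FALSE when the next fragment would lie past the end of the
     * media presentation, i.e. when buffering should stop. */
    static gboolean
    have_next_fragment (GstMpdClient * client)
    {
      GstClockTime next_ts = gst_mpd_client_get_current_position (client);
      GstClockTime next_dur = gst_mpd_client_get_next_fragment_duration (client);
      GstClockTime total = gst_mpd_client_get_media_presentation_duration (client);

      if (next_dur == 0)
        return FALSE;           /* no more fragments in this Period */
      if (GST_CLOCK_TIME_IS_VALID (total) && next_ts >= total)
        return FALSE;           /* reached mediaPresentationDuration */
      return TRUE;
    }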
TODO: rework SEEKING to support seeking across different Periods.
- Periods are played in sequence, from PeriodStart to PeriodEnd
- seamless switching from one Period to the next one works fine;
- the 'new-segment' generation is broken, so if we need to switch pads for a new Period there is a crash;
- build a list of the available Periods with their start and duration time
- add the list of GstStreamPeriod in the GstMpdClient data struct
- remove cur_period from GstMpdClient and introduce an API to get the current GstStreamPeriod
- several API clean-ups
other fixes:
- fixed a buffering bug: now we stop buffering when we reach the end of the manifest
- now gst_mpd_client_get_target_duration() always returns a valid duration
(in case of single-segment streams, we return either the Period duration or the mediaPresentationDuration)
TODO: support SegmentTimeline
SegmentList nodes are allowed in Period, AdaptationSet or Representation nodes,
and there is at most one such element, so there is no need to keep a list;
Period nodes cannot have any Representation elements directly, as AdaptationSet nodes are mandatory;
this breaks compatibility with some legacy DASH test sequences.