If the presentationTimeOffset attribute of a DASH manifest contains
a value that is larger than 2^32, gstmpdparser incorrectly calculates
the stream's presentation time offset. This is due to two bugs:
1: Using gst_mpdparser_get_xml_prop_unsigned_integer rather than
gst_mpdparser_get_xml_prop_unsigned_integer_64 to parse the
attribute
2: gst_mpd_client_setup_representation multiplying the value by
GST_SECOND before dividing by the timescale, which can overflow
for large offsets (see the sketch below)
https://bugzilla.gnome.org/show_bug.cgi?id=750804
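A minimal sketch of the corrected conversion, assuming the value has
already been parsed with the 64-bit helper; gst_util_uint64_scale()
avoids the intermediate overflow (the helper name below is
illustrative):

    #include <gst/gst.h>

    /* Illustrative helper: convert a presentationTimeOffset expressed in
     * timescale units into GStreamer time without overflowing, by scaling
     * instead of multiplying by GST_SECOND first and dividing afterwards. */
    static GstClockTime
    pto_to_gst_time (guint64 presentation_time_offset, guint64 timescale)
    {
      return gst_util_uint64_scale (presentation_time_offset, GST_SECOND,
          timescale);
    }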
When all fragments of a live stream have already been downloaded,
dashdemux would busy-loop because the default implementation of
has_next_fragment returns TRUE. Implement it so it correctly
signals whether adaptivedemux should wait for the manifest update
before trying to get new fragments.
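A rough illustration of the idea, with made-up types and field names
(not the actual upstream structures):

    #include <glib.h>

    typedef struct {
      GList *segments;      /* parsed media segments (assumed field) */
      guint  segment_index; /* next segment to download (assumed field) */
    } DashStream;

    static gboolean
    dash_stream_has_next_fragment (DashStream * stream)
    {
      /* Returning FALSE makes adaptivedemux wait for the next manifest
       * update on live streams instead of busy-looping on an exhausted
       * segment list. */
      return stream->segment_index < g_list_length (stream->segments);
    }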
When updating the manifest, the timestamps in it might have changed a little
due to rounding and timescale conversions. If the change pushes the timestamp
of the current segment up, dashdemux repositions to the previous
one, causing one extra unnecessary download.
So when repositioning, add an extra 10 microseconds to cover for such rounding
issues and increase the chance of landing in the same segment.
Additionally, improve the time used when the client is already past the
last segment. Instead of using the start timestamp of the last segment, use its
end timestamp so that we reposition to the next segment and not to the one that
has already been downloaded.
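A small sketch of the repositioning tweak with hypothetical names; the
only points illustrated are the 10 microsecond slack and using the end
of the last segment when the client is already past it:

    #include <gst/gst.h>

    static GstClockTime
    reposition_target (GstClockTime ts, GstClockTime last_start,
        GstClockTime last_duration)
    {
      if (ts >= last_start + last_duration)
        /* Already past the last segment: aim at its end so the next
         * published segment is picked up, not one already downloaded. */
        return last_start + last_duration;

      /* Small slack to absorb rounding/timescale drift between manifest
       * updates, so we land in the same segment and avoid a re-download. */
      return ts + 10 * GST_USECOND;
    }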
The functions for directly getting and setting segment indexes
are no longer useful, as we now need two indexes: the repeat index
and the segment index.
The only operations needed are advancing to the next segment, going back
to the first one, and seeking to a timestamp.
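The remaining entry points then look roughly like this (prototypes are
illustrative; the exact names and arguments live in gstmpdparser.h):

    GstFlowReturn gst_mpd_client_advance_segment       (GstMpdClient *client,
                                                        GstActiveStream *stream,
                                                        gboolean forward);
    void          gst_mpd_client_seek_to_first_segment (GstMpdClient *client);
    gboolean      gst_mpd_client_stream_seek            (GstMpdClient *client,
                                                        GstActiveStream *stream,
                                                        GstClockTime ts);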
Segments are now stored with their repeat counts instead of expanding
them into multiple segments. With a single index, this made advancing
to the next segment iterate over the whole list every time.
This commit addresses that by storing both the segment index and
the repeat index, so advancing to the next segment is just an
increment of the repeat or the segment index.
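A condensed sketch of the advance logic with the two indexes (types
and field names are assumptions):

    #include <glib.h>

    typedef struct {
      guint64 start;     /* segment start (timescale units) */
      guint64 duration;  /* duration of one repetition */
      guint   repeat;    /* number of additional repetitions (S@r) */
    } MediaSegment;

    typedef struct {
      GPtrArray *segments;      /* array of MediaSegment */
      guint      segment_index; /* which S entry we are in */
      guint      repeat_index;  /* which repetition inside that entry */
    } StreamPosition;

    /* Advance to the next fragment in O(1): bump the repeat index, and
     * only move to the next entry when its repeat count is exhausted. */
    static gboolean
    advance_segment (StreamPosition * pos)
    {
      MediaSegment *seg;

      if (pos->segment_index >= pos->segments->len)
        return FALSE;               /* already past the end */

      seg = g_ptr_array_index (pos->segments, pos->segment_index);
      if (pos->repeat_index < seg->repeat) {
        pos->repeat_index++;
      } else {
        pos->repeat_index = 0;
        pos->segment_index++;
      }
      return pos->segment_index < pos->segments->len;
    }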
Use a single segment to represent it internally to avoid using too
much memory. This has the drawback of requiring a linear search to
find the correct segment to play, but that can be fixed by using
binary searches or by caching the current position and just looking
for the next one.
https://bugzilla.gnome.org/show_bug.cgi?id=748369
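Locating the fragment that contains a given timestamp then becomes a
linear walk over the repeat counts; a sketch reusing the MediaSegment
type assumed above (a binary search over cumulative start times would
be the obvious optimisation):

    /* Find the (segment, repeat) pair containing 'ts'; linear for now,
     * as described above. */
    static gboolean
    find_segment_for_time (GPtrArray * segments, guint64 ts,
        guint * segment_index, guint * repeat_index)
    {
      guint i;

      for (i = 0; i < segments->len; i++) {
        MediaSegment *seg = g_ptr_array_index (segments, i);
        guint64 span = (guint64) (seg->repeat + 1) * seg->duration;

        if (ts < seg->start + span) {
          *segment_index = i;
          *repeat_index = ts <= seg->start ? 0 :
              (guint) ((ts - seg->start) / seg->duration);
          return TRUE;
        }
      }
      return FALSE;
    }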
When a seek with a negative rate is requested, find the target
segment that gstsegment.stop falls in and then download from
this segment backwards until the first segment.
This allows proper reverse playback.
Always expose all streams instead of only exposing one of each type.
This is more aligned with GStreamer's way of working. It allows the user
to select the streams it wants to use by linking their pads and leaving
the unused ones unlinked.
There is an issue for live streams where download_loop will keep
downloading segments until it gets a 404 error for a segment
that has not yet been published. This is a problem because such
requests for segments that don't exist propagate all the way back
to the origin server(s), meaning dashdemux causes extra load on the
origin server(s) for segments that aren't yet available.
This patch uses availabilityStartTime, the period start,
and the host's idea of UTC to decide if a fragment is available to
be requested from an HTTP server, and filters out requests for fragments
that are not yet available.
https://bugzilla.gnome.org/show_bug.cgi?id=701404
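One way to express that check, as a sketch (the helper name and the
use of GDateTime are assumptions):

    #include <glib.h>

    /* Illustrative: only request a fragment once its whole duration is
     * behind "now" on the host clock, measured from the MPD's
     * availabilityStartTime plus the period start. */
    static gboolean
    fragment_is_available (GDateTime * availability_start_time,
        gdouble period_start_secs, gdouble fragment_end_secs)
    {
      GDateTime *now, *available_at;
      gboolean available;

      now = g_date_time_new_now_utc ();
      available_at = g_date_time_add_seconds (availability_start_time,
          period_start_secs + fragment_end_secs);
      available = g_date_time_compare (available_at, now) <= 0;

      g_date_time_unref (available_at);
      g_date_time_unref (now);
      return available;
    }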
gstmpdparser.h:530: warning: type qualifiers ignored on function return type
gstmpdparser.c:4177: warning: type qualifiers ignored on function return type
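The warning is caused by qualifying a by-value return type; a generic
illustration (not necessarily the exact declarations at those lines):

    /* Before: the 'const' on a value returned by copy is meaningless
     * and triggers the warning. */
    const gboolean foo_client_is_live (void);

    /* After: */
    gboolean foo_client_is_live (void);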
During a live stream it is possible for dashdemux to lag behind on a
slow connection or to rush ahead if the connection is too fast.
For the first case it is necessary to jump some segments ahead to be able to
continue playback, as old segments are usually deleted from the server.
For the latter, dashdemux should wait a little before attempting another
download to give the server time to produce a new segment.
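Roughly, and only as a sketch with made-up names and thresholds:

    #include <gst/gst.h>

    /* Decide how to react to the live edge: if we lag behind, skip
     * whole fragments forward; if we run ahead, return how long to
     * wait before the next download attempt. */
    static GstClockTime
    live_adjust (GstClockTime position, GstClockTime live_edge,
        GstClockTime fragment_duration, guint * fragments_to_skip)
    {
      *fragments_to_skip = 0;

      if (position + fragment_duration < live_edge) {
        /* Lagging: older fragments may already be gone from the server. */
        *fragments_to_skip = (live_edge - position) / fragment_duration;
        return 0;
      }

      if (position > live_edge)
        /* Rushing: give the server time to produce the next segment. */
        return position - live_edge;

      return 0;
    }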
When using a template-based segment list, do not try to
construct a finite segment list for the limits of the available periods.
We might not know when the period ends (for live streams), and we can
always create the segment on demand when requested by dashdemux,
saving some memory and CPU that would be spent re-creating this list.
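With a template and a constant duration, a segment's properties can be
derived on demand from its index instead of being stored; roughly:

    #include <gst/gst.h>

    /* Illustrative: segment 'index' of a SegmentTemplate with a fixed
     * duration is fully described by index * duration. */
    static void
    segment_from_template (guint64 index, guint64 duration,
        guint64 timescale, GstClockTime * start, GstClockTime * dur)
    {
      *start = gst_util_uint64_scale (index * duration, GST_SECOND, timescale);
      *dur = gst_util_uint64_scale (duration, GST_SECOND, timescale);
    }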
Replaces the two lists that are likely the largest with more appropriate
structures to improve performance.
Replaces the S nodes GList with a GQueue; this reduces startup latency
because appending an element no longer traverses the whole list.
Replaces the processed media segments GList with a GPtrArray, as it is
constantly accessed by index during playback.
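In short (illustrative only), both operations become O(1):

    #include <glib.h>

    static void
    example (gpointer s_node, gpointer media_segment)
    {
      /* S nodes: GQueue keeps a tail pointer, so appending while parsing
       * the manifest no longer walks a GList on every append. */
      GQueue s_nodes = G_QUEUE_INIT;
      g_queue_push_tail (&s_nodes, s_node);

      /* Processed media segments: GPtrArray gives O(1) access by index
       * during playback. */
      GPtrArray *segments = g_ptr_array_new ();
      g_ptr_array_add (segments, media_segment);
      gpointer seg = g_ptr_array_index (segments, 0);
      (void) seg;

      g_queue_clear (&s_nodes);
      g_ptr_array_unref (segments);
    }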
When dashdemux selects its first fragment, it always selects the
first fragment listed in the manifest. For on-demand content,
this is the correct behaviour. However, for live content, this
behaviour is undesirable because the first fragment listed in the
manifest might be some considerable time behind "now".
The commit uses the host's idea of UTC and tries to find the
oldest fragment that contains samples for this time of day.
https://bugzilla.gnome.org/show_bug.cgi?id=701509
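As a sketch, the starting index can be derived from the host clock
(constant segment duration assumed; names are made up):

    #include <glib.h>

    /* Illustrative: index of the newest fragment whose start is already
     * behind "now", for a constant segment duration in milliseconds. */
    static guint64
    first_live_segment_index (GDateTime * availability_start_time,
        guint64 segment_duration_ms)
    {
      GDateTime *now = g_date_time_new_now_utc ();
      GTimeSpan elapsed = g_date_time_difference (now, availability_start_time);
      guint64 index;

      if (elapsed < 0)
        elapsed = 0;
      /* GTimeSpan is in microseconds. */
      index = (guint64) elapsed / (segment_duration_ms * 1000);

      g_date_time_unref (now);
      return index;
    }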
Manifest updates should be done periodically for live streams.
This patch makes the demuxer create a new manifest client for
the new version and transfer the stream position to the new
one, discarding the old one afterwards.
Instead of downloading one fragment per stream per download loop,
select the stream with the earliest timestamp and get a fragment
only for that one.
The old algorithm would lead to problems when the fragment durations
were too different between streams.
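Conceptually (with assumed types), each download loop iteration now
picks the stream whose next fragment starts earliest:

    #include <gst/gst.h>

    typedef struct {
      GstClockTime next_fragment_ts;  /* assumed per-stream bookkeeping */
    } Stream;

    /* Return the stream with the smallest next-fragment timestamp so
     * all streams progress together even when fragment durations differ. */
    static Stream *
    select_stream_for_download (GList * streams)
    {
      Stream *best = NULL;
      GList *l;

      for (l = streams; l != NULL; l = l->next) {
        Stream *s = l->data;
        if (best == NULL || s->next_fragment_ts < best->next_fragment_ts)
          best = s;
      }
      return best;
    }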
Put EOS on the streams' queues after the last fragment from the
last period for each stream. This way we keep it serialized
with the buffers, and it will work when streams have different
ending times.
Each stream has its own durations and timestamps, so the fragment number
is different for each stream when seeking; the seek has to be done
on all streams, rather than on a single stream and propagated to
the others.
This was necessary to support variable-duration Fragments.
In the new API:
- gst_mpd_client_get_current_position returns the timestamp of the NEXT fragment to download;
- gst_mpd_client_get_next_fragment_duration returns the duration of the next fragment to download;
- gst_mpd_client_get_media_presentation_duration returns the mediaPresentationDuration from the MPD file;
There is also a new internal parser function:
- gst_mpd_client_get_segment_duration extracts the constant segment duration from the MPD file
(only used when there is no SegmentTimeline syntax element in the current representation)
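Illustrative prototypes for the calls listed above (argument lists are
assumptions; see gstmpdparser.h for the real declarations):

    GstClockTime gst_mpd_client_get_current_position            (GstMpdClient *client,
                                                                 GstActiveStream *stream);
    GstClockTime gst_mpd_client_get_next_fragment_duration      (GstMpdClient *client,
                                                                 GstActiveStream *stream);
    GstClockTime gst_mpd_client_get_media_presentation_duration (GstMpdClient *client);
    GstClockTime gst_mpd_client_get_segment_duration            (GstMpdClient *client,
                                                                 GstActiveStream *stream);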
In gst_mpd_client_get_next_fragment, we set the timestamp/duration of the fragment just downloaded,
copying the values from the corresponding GstMediaSegment.
TODO: rework SEEKING to support seeking across different Periods.
- Periods are played in sequence, from PeriodStart to PeriodEnd
- seamless switching from one Period to the next one works fine;
- the 'new-segment' generation is broken, so if we need to switch pads for a new Period there is a crash;
- build a list of the available Periods with their start and duration time
- add the list of GstStreamPeriod in the GstMpdClient data struct
- remove cur_period from GstMpdClient and introduce an API to get the current GstStreamPeriod
- several API clean-ups
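The per-period bookkeeping could look roughly like this (field names
are assumptions):

    #include <gst/gst.h>

    /* One entry per Period in the MPD, kept in a list in GstMpdClient. */
    typedef struct {
      gpointer     period;    /* the parsed Period node */
      guint        number;    /* position of the Period in the MPD */
      GstClockTime start;     /* PeriodStart in GStreamer time */
      GstClockTime duration;  /* PeriodEnd - PeriodStart */
    } GstStreamPeriod;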
Build the list of segments to be played using the SegmentTimeline syntax, if present.
Bugfixes:
- for dynamic MPD files, when mediaPresentationDuration is not present use minimumUpdatePeriod instead
- do not add a spurious '$' when building a URL from a template like "$Bandwidth$/init.mp4v"
- introduce gst_mpd_client_add_media_segment() to avoid code duplication
SegmentList nodes are allowed inside Period, AdaptationSet or Representation nodes
and there is at most one element, so there is no need to keep a list;
Period nodes cannot have any Representation elements, as AdaptationSet nodes are mandatory;
this breaks compatibility with some legacy DASH test sequences.