This prevents deadlocks on startup with files that have only a very
large buffer for one stream: the queue fills up and blocks on the EOS
event that is pushed after the buffer. As no buffers have yet been
pushed to the other streams, the pipeline blocks on preroll
During a live stream it is possible for dashdemux to lag behind on a
slow connection, or to rush ahead of the stream if the connection is
too fast. In the first case it is necessary to jump some segments
ahead to be able to continue playback, as old segments are usually
deleted from the server. In the latter case, dashdemux should wait a
little before attempting another download, to give the server time
to produce a new segment
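A minimal sketch of that policy, assuming the server reports "gone"
for deleted segments and "not found" for segments it has not produced
yet (the DownloadResult values and helper are illustrative, not
dashdemux's actual API):

    #include <glib.h>

    typedef enum {
      DOWNLOAD_OK,
      DOWNLOAD_NOT_FOUND,  /* too early: not produced by the server yet */
      DOWNLOAD_GONE        /* too late: already deleted from the server */
    } DownloadResult;

    /* Jump ahead when lagging; wait before retrying when rushing. */
    static void
    handle_result (DownloadResult res, guint * segment, guint oldest_available)
    {
      if (res == DOWNLOAD_GONE && *segment < oldest_available)
        *segment = oldest_available;    /* jump some segments ahead */
      else if (res == DOWNLOAD_NOT_FOUND)
        g_usleep (G_USEC_PER_SEC);      /* give the server time */
    }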
When using a template-based segment list, do not try to construct a
finite segment list for the limits of the available periods. We might
not know when the period ends (for live streams), and we can always
create the segment on demand when requested by dashdemux, avoiding
the memory and CPU spent on re-creating this list.
Replaces the two lists most likely to grow large with more
appropriate structures to improve performance.
Replaces the S nodes' GList with a GQueue; this reduces startup
latency because appending an element no longer requires traversing
the list.
Replaces the processed media segments' GList with a GPtrArray, as it
is constantly accessed by index during playback.
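A minimal sketch of the two replacements (Segment stands in for the
real node types):

    #include <glib.h>

    typedef struct { guint number; } Segment;   /* illustrative */

    static void
    example (Segment * segment, guint idx)
    {
      /* S nodes: appended while parsing, consumed in order. GQueue keeps
       * a tail pointer, so appending is O(1); g_list_append() is O(n). */
      GQueue *s_nodes = g_queue_new ();
      g_queue_push_tail (s_nodes, segment);

      /* Processed media segments: accessed by index during playback.
       * GPtrArray indexing is O(1); g_list_nth_data() is O(n). */
      GPtrArray *media_segments = g_ptr_array_new ();
      g_ptr_array_add (media_segments, segment);
      Segment *seg = g_ptr_array_index (media_segments, idx);
      (void) seg;
    }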
The duration of a segment being unknown is an issue with the MPD, not
a programming error, so the assert isn't useful here. Instead, check
and return an error code so the caller can fall back to alternatives
When dashdemux selects its first fragment, it always selects the
first fragment listed in the manifest. For on-demand content,
this is the correct behaviour. However for live content, this
behaviour is undesirable because the first fragment listed in the
manifest might be some considerable time behind "now".
This commit uses the host's idea of UTC and tries to find the
oldest fragment that contains samples for this time of day.
https://bugzilla.gnome.org/show_bug.cgi?id=701509
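A minimal sketch of that selection, assuming a constant segment
duration and the MPD's availabilityStartTime (the helper name is
hypothetical):

    #include <gst/gst.h>

    /* Index of the fragment that covers the host's UTC "now". */
    static guint
    live_start_fragment (GDateTime * availability_start,
        GstClockTime segment_duration)
    {
      GDateTime *now = g_date_time_new_now_utc ();
      GTimeSpan elapsed = g_date_time_difference (now, availability_start);
      guint index = 0;

      if (elapsed > 0)  /* elapsed is in microseconds */
        index = (guint) gst_util_uint64_scale (elapsed, GST_USECOND,
            segment_duration);
      g_date_time_unref (now);
      return index;
    }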
According to the MPEG-DASH spec, certain elements (i.e.
SegmentBase, SegmentTemplate, and SegmentList) should inherit
attributes from the same elements in the containing AdaptationSet
or Period.
Updated the SegmentBase, SegmentTemplate, and SegmentList parsers
to properly inherit attributes from the corresponding elements in
AdaptationSet and/or Period.
https://bugzilla.gnome.org/show_bug.cgi?id=702677
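A minimal sketch of the inheritance rule for one such element (the
struct and field names are illustrative, not the actual parser
types):

    #include <glib.h>

    typedef struct {
      gchar *media;             /* e.g. a "$Number$" template string */
      guint64 timescale;
      gboolean timescale_set;   /* was the attribute present? */
    } SegmentTemplateNode;

    /* Copy attributes the child did not set from the AdaptationSet's
     * (or Period's) element of the same name. */
    static void
    inherit_segment_template (SegmentTemplateNode * child,
        const SegmentTemplateNode * parent)
    {
      if (parent == NULL)
        return;
      if (child->media == NULL && parent->media != NULL)
        child->media = g_strdup (parent->media);
      if (!child->timescale_set && parent->timescale_set) {
        child->timescale = parent->timescale;
        child->timescale_set = TRUE;
      }
    }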
Convert all xml attribute/content parsing functions to return a
boolean value indicating whether or not the attribute/content was
present. We need this finer-grained control in order to properly
implement the inheritance policies described in the spec
Also fixed several memory leak conditions when handling errors in
the xml attribute/content parsing functions.
https://bugzilla.gnome.org/show_bug.cgi?id=702677
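A minimal sketch of the boolean-returning parser shape (the function
is illustrative, not one of the actual parsers):

    #include <glib.h>
    #include <libxml/tree.h>
    #include <libxml/xmlmemory.h>

    /* Returns whether the attribute was present, so the caller can tell
     * "absent, inherit from the parent" apart from "present". */
    static gboolean
    parse_attr_uint (xmlNode * node, const gchar * name, guint * out)
    {
      xmlChar *prop = xmlGetProp (node, (const xmlChar *) name);

      if (prop == NULL)
        return FALSE;           /* absent: caller may inherit */
      *out = (guint) g_ascii_strtoull ((gchar *) prop, NULL, 10);
      xmlFree (prop);
      return TRUE;
    }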
Ensure that g_free/xmlFree is used correctly based on how the
memory was allocated.
When deallocating GLists, there were many places that were using
g_list_foreach and g_list_free. Converted these occurrences to
call g_list_free_full.
Add NULL checks to all xmlFree calls since the documentation does
not guarantee that passing NULL is safe
In places where we are strdup'ing memory allocated by libxml2,
changed those calls to use xmlMemStrdup().
There were several places where we were missing g_slice_free when
deallocating a top-level node structure.
https://bugzilla.gnome.org/show_bug.cgi?id=702837
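A minimal sketch of the allocator-matching rules (the helpers are
illustrative):

    #include <glib.h>
    #include <libxml/xmlmemory.h>

    static void
    free_examples (gchar * glib_str, xmlChar * xml_str, GList * nodes)
    {
      g_free (glib_str);        /* allocated by GLib (g_strdup etc.) */

      if (xml_str)              /* xmlFree(NULL) is not documented safe */
        xmlFree (xml_str);      /* allocated by libxml2 */

      /* g_list_foreach + g_list_free in one call */
      g_list_free_full (nodes, g_free);
    }

    /* Duplicate a libxml2 string with libxml2's allocator so a later
     * xmlFree() matches. */
    static xmlChar *
    dup_xml_string (const xmlChar * s)
    {
      return (xmlChar *) xmlMemStrdup ((const char *) s);
    }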
It was not properly divided by GST_SECOND. Also fix an issue with
max-buffering-time being multiplied by GST_SECOND every time the
property is retrieved.
https://bugzilla.gnome.org/show_bug.cgi?id=700487
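A minimal sketch of the property handling, assuming the value is
stored internally in nanoseconds (helper names are illustrative):

    #include <gst/gst.h>

    /* set_property: scale to nanoseconds exactly once */
    static void
    set_max_buffering_time (guint64 * stored_ns, guint seconds)
    {
      *stored_ns = (guint64) seconds * GST_SECOND;
    }

    /* get_property: convert back without touching the stored value, so
     * repeated reads don't re-scale it */
    static guint
    get_max_buffering_time (guint64 stored_ns)
    {
      return (guint) (stored_ns / GST_SECOND);
    }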
We only want to adjust the timestamps so that they start from 0 for
live streams. Non-live streams already start from 0, and after a seek
we actually want the timestamps to be the position we seek to.
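A minimal sketch of that live-only adjustment (the helper is
illustrative):

    #include <gst/gst.h>

    /* Shift timestamps to start at 0 only for live streams; non-live
     * buffers keep their timestamps so a seek position is preserved. */
    static void
    adjust_timestamp (GstBuffer * buffer, GstClockTime first_ts,
        gboolean is_live)
    {
      if (is_live && GST_BUFFER_PTS_IS_VALID (buffer)
          && GST_BUFFER_PTS (buffer) >= first_ts)
        GST_BUFFER_PTS (buffer) -= first_ts;
    }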
Non-live streams should timestamp buffers with a running-time starting from
0. Since we already push a 0 -> -1 segment, bring the timestamps to 0
by subtracting the initial timestamp.
The xmlCleanupParser function seems to clean up all statically
allocated libxml variables, making the library unusable afterwards.
We can't guarantee that dashdemux won't need it anymore, so better
not to call it.
Manifest updates should be done periodically for live streams. This
patch makes the demuxer create a new manifest client for the new
version and transfer the stream position to it, discarding the old
one afterwards.
A small struct that keeps a short history of fragment download
bitrates, to provide an average measure over the last N fragments
instead of using only the bitrate of the last download.
Do not use a global bitrate, as the sizes of the fragments matter
when calculating the download rate: the connection setup time is
included in the download duration, so a smaller fragment will show a
lower bitrate than a larger one.
This avoids frequently switching the bitrates of streams because of
bitrate mismatches
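A minimal sketch of such a history (the length and names are
illustrative):

    #include <glib.h>

    #define NUM_LOOKBACK_FRAGMENTS 3    /* illustrative history length */

    typedef struct {
      guint64 bitrates[NUM_LOOKBACK_FRAGMENTS];
      guint count;      /* how many slots are filled */
      guint pos;        /* next slot to overwrite */
    } BitrateHistory;

    static void
    bitrate_history_add (BitrateHistory * h, guint64 bitrate)
    {
      h->bitrates[h->pos] = bitrate;
      h->pos = (h->pos + 1) % NUM_LOOKBACK_FRAGMENTS;
      if (h->count < NUM_LOOKBACK_FRAGMENTS)
        h->count++;
    }

    static guint64
    bitrate_history_average (const BitrateHistory * h)
    {
      guint64 sum = 0;
      guint i;

      if (h->count == 0)
        return 0;
      for (i = 0; i < h->count; i++)
        sum += h->bitrates[i];
      return sum / h->count;
    }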
Instead of downloading one fragment per stream per download loop,
select the stream with the earliest timestamp and get a fragment
only for that one.
The old algorithm would lead to problems when the fragment durations
of the streams differed too much.
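A minimal sketch of that selection (the Stream struct is a stand-in
for dashdemux's internal stream type):

    #include <gst/gst.h>

    typedef struct {
      GstClockTime next_ts;     /* timestamp of the next fragment */
    } Stream;

    /* Pick the single stream whose next fragment is earliest. */
    static Stream *
    select_stream_to_download (GSList * streams)
    {
      Stream *best = NULL;
      GSList *iter;

      for (iter = streams; iter; iter = iter->next) {
        Stream *s = iter->data;

        if (s->next_ts == GST_CLOCK_TIME_NONE)
          continue;             /* stream already finished */
        if (best == NULL || s->next_ts < best->next_ts)
          best = s;
      }
      return best;
    }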
dashdemux shouldn't emit the buffering message as that can pause
the pipeline. It has no proper knowledge of the downstream buffering
status so it can pause the pipeline when it isn't necessary. It should
have an internal buffer for downloading the streams ahead of playback,
but that shouldn't make it able to stop the pipeline for buffering.
A particular case in which this is bad is when a pad switch happens
(changing bitrates, for example): the new pads dashdemux creates get
linked to demuxers and new queues are created; these queues are
initially empty, and dashdemux quickly drains its buffers by pushing
them to those queues. It would then have no more buffers internally
and would emit a buffering message with a low ratio, causing the
pipeline to pause when it isn't necessary.
Put EOS on the stream queues after the last fragment from the last
period of each stream. This way it stays serialized with the buffers
and it works when streams have different ending times
The smallest queue should be used to prevent blocking the download
thread when one stream has too much data buffered, which would leave
the other streams starving for fragments
Each stream has its own durations and timestamps, so the fragment
number differs between streams when seeking; the seek has to be done
on all streams, rather than on a single stream and then propagated
to the others
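A minimal sketch, assuming a constant fragment duration per stream
(the Stream fields are illustrative):

    #include <gst/gst.h>

    typedef struct {
      GstClockTime fragment_duration;   /* differs per stream */
      guint current_fragment;
    } Stream;

    /* Each stream computes its own target fragment for the seek. */
    static void
    seek_all_streams (GSList * streams, GstClockTime target)
    {
      GSList *iter;

      for (iter = streams; iter; iter = iter->next) {
        Stream *s = iter->data;

        s->current_fragment = (guint)
            gst_util_uint64_scale (target, 1, s->fragment_duration);
      }
    }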
GstDataQueue has proper locking and provides functions to limit the
size of the queue. It also has blocking calls that are useful in our
multithreaded DASH scenario.
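A minimal sketch of a per-stream queue, assuming GStreamer's
(unstable-API) GstDataQueue; the 30-second cap and the wiring are
illustrative:

    #define GST_USE_UNSTABLE_API        /* GstDataQueue is unstable API */
    #include <gst/gst.h>
    #include <gst/base/gstdataqueue.h>

    /* Check-full callback: limit the queue by buffered duration. Push
     * and pop then block on this limit, which suits the separate
     * download and push threads. */
    static gboolean
    queue_is_full (GstDataQueue * queue, guint visible, guint bytes,
        guint64 time, gpointer checkdata)
    {
      return time >= 30 * GST_SECOND;   /* example limit */
    }

    static GstDataQueue *
    new_stream_queue (gpointer stream)
    {
      return gst_data_queue_new (queue_is_full, NULL, NULL, stream);
    }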
Store the buffers separately for each stream; this is clearer than
having a single queue with lists of buffers. It also allows easier
selection of which buffers to push in later refactorings.
Fragments should be pushed ASAP, as downstream should be responsible
for doing the synchronization and proper buffering.
This has the great side effect of fixing most of the seeking A/V
sync issues.
- the MPD file is updated in the download loop (only if we have a "dynamic" MPD and minimumUpdatePeriod is valid);
- properly LOCK/UNLOCK the GstMpdClient;