Create src pads for Representations that contain timed-text subtitles,
both when the subtitles are encapsulated in ISO BMFF (i.e., the
Representation has mimeType "application/mp4") and when they are
unencapsulated (i.e., the Representation has mimeType
"application/ttml+xml").
https://bugzilla.gnome.org/show_bug.cgi?id=747774
Unless the DASH client can compensate for the difference between its
clock and the clock used by the server, the client might request
fragments that are either not yet on the server or that have already
expired from the server. This is an issue because these requests can
propagate all the way back to the origin.
ISO/IEC 23009-1:2014/Amd 1 [PDAM1] defines a new UTCTiming element to allow
a DASH client to track the clock used by the server generating the
DASH stream. Multiple UTCTiming elements might be present, to indicate
support for multiple methods of UTC time gathering. Each element can
contain a whitespace-separated list of URLs that can be contacted
to discover the UTC time from the server's perspective.
This commit provides parsing of UTCTiming elements, unit tests of this
parsing, and a function to poll a time server. This function
supports the following methods:
urn:mpeg:dash:utc:ntp:2014
urn:mpeg:dash:utc:http-xsdate:2014
urn:mpeg:dash:utc:http-iso:2014
urn:mpeg:dash:utc:http-ntp:2014
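As a rough illustration (not the actual dashdemux code), the offset
for the http-xsdate method could be computed by parsing the response
body with GLib's ISO 8601 parser (g_date_time_new_from_iso8601,
GLib >= 2.56) and diffing against the local clock; the function name
is hypothetical:

    #include <glib.h>

    /* Derive the server/client clock offset from an
     * urn:mpeg:dash:utc:http-xsdate:2014 response body (an xs:dateTime
     * string such as "2015-06-25T10:30:00Z").  A positive result means
     * the server's clock is ahead of ours. */
    static gboolean
    compute_clock_offset (const gchar * xsdate_body, GTimeSpan * offset_us)
    {
      GDateTime *server_time, *local_time;

      server_time = g_date_time_new_from_iso8601 (xsdate_body, NULL);
      if (server_time == NULL)
        return FALSE;

      local_time = g_date_time_new_now_utc ();
      /* microseconds between the server's "now" and our "now" */
      *offset_us = g_date_time_difference (server_time, local_time);

      g_date_time_unref (server_time);
      g_date_time_unref (local_time);
      return TRUE;
    }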
The manifest update task is used to poll the clock time server,
to avoid having to create a new thread.
When choosing the starting fragment number and when waiting for a
fragment to become available, the difference between the server's idea
of UTC and the client's idea of UTC is taken into account. For example,
if the server's time is behind the client's idea of UTC, we wait
longer before requesting a fragment.
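A minimal sketch of how the offset might be applied when waiting
(variable and function names are illustrative):

    #include <glib.h>

    static void
    wait_for_fragment (gint64 fragment_available_us, gint64 clock_offset_us)
    {
      /* local wall clock shifted by the measured server offset */
      gint64 server_now_us = g_get_real_time () + clock_offset_us;

      if (fragment_available_us > server_now_us) {
        /* the server's clock is behind ours: wait until the fragment
         * should exist on the server before requesting it */
        g_usleep (fragment_available_us - server_now_us);
      }
    }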
[PDAM1]: http://www.iso.org/iso/home/store/catalogue_tc/catalogue_detail.htm?csnumber=66068
dashdemux: support NTP time servers in UTCTiming elements
Use gst_ntp_clock to support the use of an NTP server.
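A minimal sketch of the idea, with a placeholder server address;
gst_ntp_clock_new and gst_clock_wait_for_sync are the real GstNet and
GstClock API:

    #include <gst/net/gstnet.h>

    static void
    sync_to_ntp_server (void)
    {
      GstClock *ntp_clock;

      /* "time.example.com" stands in for a host taken from an
       * urn:mpeg:dash:utc:ntp:2014 UTCTiming element */
      ntp_clock = gst_ntp_clock_new ("dash-ntp", "time.example.com",
          123, 0);

      /* only trust the clock once it has synchronized to the server */
      if (gst_clock_wait_for_sync (ntp_clock, 5 * GST_SECOND)) {
        GstClockTime ntp_time = gst_clock_get_time (ntp_clock);
        GST_INFO ("NTP time: %" GST_TIME_FORMAT, GST_TIME_ARGS (ntp_time));
      }

      gst_object_unref (ntp_clock);
    }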
https://bugzilla.gnome.org/show_bug.cgi?id=752413
Bitrate-limit is already available in the baseclass and, even though
the bandwidth-usage name is better, hls and mss already used
bitrate-limit. This patch deprecates bandwidth-usage and maps it to
the baseclass bitrate-limit.
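A sketch of such a mapping in set_property (the property id enum is
illustrative, not the exact dashdemux symbol):

    #include <glib-object.h>

    enum { PROP_0, PROP_BANDWIDTH_USAGE };

    static void
    gst_dash_demux_set_property (GObject * object, guint prop_id,
        const GValue * value, GParamSpec * pspec)
    {
      switch (prop_id) {
        case PROP_BANDWIDTH_USAGE:
          /* deprecated: forward to the base class "bitrate-limit" */
          g_object_set (object, "bitrate-limit",
              g_value_get_float (value), NULL);
          break;
        default:
          G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec);
          break;
      }
    }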
Add more power to the chunk_received function (renamed to data_received)
and also to the fragment_finish function.
The data_received function must parse/decrypt the data if necessary and
also push it using the new push_buffer function that is now exposed. The
default implementation gets data from the stream adapter (all available)
and pushes it.
The fragment_finish function must also advance the fragment. The default
implementation only advances the fragment.
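A sketch of the described default data_received behaviour (the
signatures approximate the ones in gst-plugins-bad's
gstadaptivedemux.h):

    /* types come from gstadaptivedemux.h; take everything buffered in
     * the stream's adapter and push it downstream through push_buffer */
    static GstFlowReturn
    data_received_default (GstAdaptiveDemux * demux,
        GstAdaptiveDemuxStream * stream)
    {
      GstBuffer *buffer;
      gsize available = gst_adapter_available (stream->adapter);

      if (available == 0)
        return GST_FLOW_OK;

      buffer = gst_adapter_take_buffer (stream->adapter, available);
      return gst_adaptive_demux_stream_push_buffer (stream, buffer);
    }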
This allows the subsegment handling in dashdemux to continuously download
the same file from the server instead of stopping at every subsegment
boundary and starting a new request.
The ISOBMFF profile allows defining subsegments in a segment. At those
subsegment boundaries the client can switch from one representation to
another as they have aligned indexes.
To handle those, the 'sidx' index is parsed from the stream; its
entries point to the pts/offset of the samples in the stream. Since
the entries are aligned across the different representations, the
client can switch mid-fragment. In this profile a single fragment is
used per representation and the subsegments are contained in this
fragment.
To notify the superclass about the subsegment boundary, the
chunk_received function returns a special flow return indicating it.
In this case, the superclass will check whether a more suitable
bitrate is available and will switch to the same subsegment in this
new representation.
It also requires special handling of the position in the stream, as
fragment advancing is now done by incrementing the subsegment index;
dashdemux only advances to the next fragment once all subsegments
have been downloaded.
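A sketch of the flow-return handshake (the macro, the helper names
and the subsegment_index field are hypothetical):

    /* custom, non-error flow value meaning "subsegment boundary hit" */
    #define GST_DASH_FLOW_SUBSEGMENT_END (GST_FLOW_CUSTOM_SUCCESS + 1)

    static GstFlowReturn
    handle_data_received (GstAdaptiveDemuxClass * klass,
        GstAdaptiveDemux * demux, GstAdaptiveDemuxStream * stream)
    {
      GstFlowReturn ret = klass->data_received (demux, stream);

      if (ret == GST_DASH_FLOW_SUBSEGMENT_END) {
        /* re-evaluate the measured bitrate; if another representation
         * fits better, continue from the same subsegment index there */
        if (better_representation_available (stream))
          switch_representation (stream, stream->subsegment_index);
        ret = GST_FLOW_OK;
      }
      return ret;
    }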
https://bugzilla.gnome.org/show_bug.cgi?id=741248
If EOS or ERROR happens before the download loop thread has reached
its g_cond_wait() call, then the g_cond_signal() has no effect and
the download loop thread gets stuck later.
https://bugzilla.gnome.org/show_bug.cgi?id=735663
Set up a message handling function to be able to catch errors
from the source element and signal the cond to allow the download
loop to retry the download.
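The classic shape of the fix is to guard the wait with a predicate so
a signal raised before the thread reaches g_cond_wait() is never
lost; a sketch with illustrative field names:

    /* download loop side: never block if the flag is already set */
    g_mutex_lock (&stream->download_lock);
    while (!stream->download_finished)
      g_cond_wait (&stream->download_cond, &stream->download_lock);
    g_mutex_unlock (&stream->download_lock);

    /* EOS/ERROR handler side: set the flag, then signal */
    g_mutex_lock (&stream->download_lock);
    stream->download_finished = TRUE;
    g_cond_signal (&stream->download_cond);
    g_mutex_unlock (&stream->download_lock);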
Instead, use a source element linked to a ghostpad to provide
smaller buffers and more granular control for downstream
buffering elements while also reducing startup latency.
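A sketch of wiring an internal source to an exposed ghost pad (error
handling omitted):

    #include <gst/gst.h>

    static void
    expose_internal_source (GstBin * demux, const gchar * uri)
    {
      GstElement *src;
      GstPad *srcpad, *ghostpad;

      /* create a source element suitable for the fragment URI */
      src = gst_element_make_from_uri (GST_URI_SRC, uri, NULL, NULL);
      gst_bin_add (demux, src);

      /* expose the source's src pad through a ghost pad, so buffers
       * leave the demuxer in the source's own (smaller) chunks */
      srcpad = gst_element_get_static_pad (src, "src");
      ghostpad = gst_ghost_pad_new ("src", srcpad);
      gst_object_unref (srcpad);

      gst_pad_set_active (ghostpad, TRUE);
      gst_element_add_pad (GST_ELEMENT (demux), ghostpad);
    }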
Downloading and pushing from the same task makes the code a lot
simpler to maintain. Pushing from separate threads per stream also
avoids deadlocks when gst_pad_push() blocks because downstream queues
are full.
Use a single lock for all streams instead of having separate locks.
This makes maintenance easier, and at most points we would need to
hold a single lock anyway before iterating over all streams' data, so
not much is gained from individual locks.
Handle multiple languages by using the not-linked flow return to stop
the download task for that stream. It can be reactivated when a
reconfigure event is received. Stopping the unused streams matters
for saving network bandwidth.
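A sketch of the park/reactivate cycle (the Stream type and the use of
pad private data are illustrative):

    #include <gst/gst.h>

    typedef struct
    {
      GstPad *pad;
      GstTask *download_task;
    } Stream;

    static GstFlowReturn
    push_one (Stream * stream, GstBuffer * buffer)
    {
      GstFlowReturn ret = gst_pad_push (stream->pad, buffer);

      /* nobody downstream wants this stream: park its download task */
      if (ret == GST_FLOW_NOT_LINKED)
        gst_task_pause (stream->download_task);
      return ret;
    }

    /* a reconfigure event on the pad reactivates the stream */
    static gboolean
    src_event (GstPad * pad, GstObject * parent, GstEvent * event)
    {
      if (GST_EVENT_TYPE (event) == GST_EVENT_RECONFIGURE) {
        Stream *stream = gst_pad_get_element_private (pad);

        gst_task_start (stream->download_task);
        gst_event_unref (event);
        return TRUE;
      }
      return gst_pad_event_default (pad, parent, event);
    }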
Instead of having a single download task for all streams, this
commit makes each stream have its own download loop, allowing
parallel download of fragments.
During a live stream it is possible for dashdemux to lag behind on a
slow connection, or to rush ahead if the connection is too fast.
In the first case it is necessary to jump some segments ahead to be
able to continue playback, as old segments are usually deleted from
the server. In the latter case, dashdemux should wait a little before
attempting another download, to give the server time to produce a new
segment.
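The policy, sketched with hypothetical helpers:

    static void
    live_clock_compensation (Stream * stream)
    {
      if (fragment_is_gone (stream)) {
        /* lagging behind the live window: jump ahead to the oldest
         * fragment still available on the server */
        advance_to_live_edge (stream);
      } else if (!fragment_available_yet (stream)) {
        /* running ahead: sleep until the server should have produced
         * the next segment, then retry the download */
        g_usleep (usec_until_next_fragment (stream));
      }
    }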
Non-live streams should timestamp buffers with a running-time starting from
0. Since we already push a 0 -> -1 segment, bring the timestamps to 0
by subtracting the initial timestamp.
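A minimal sketch of the shift (keeping the first PTS in a field is an
assumption for illustration):

    #include <gst/gst.h>

    static void
    rebase_to_zero (GstBuffer * buffer, GstClockTime * start_ts)
    {
      /* remember the first valid PTS, then rebase every buffer on it */
      if (!GST_CLOCK_TIME_IS_VALID (*start_ts))
        *start_ts = GST_BUFFER_PTS (buffer);

      if (GST_BUFFER_PTS_IS_VALID (buffer))
        GST_BUFFER_PTS (buffer) -= *start_ts;
    }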
A small struct that keeps a short history of fragment download
bitrates to have an average measure over the last N fragments instead
of using only the last downloaded bitrate.
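A sketch of such a helper (the real code may differ in history size
and weighting):

    #include <glib.h>

    #define NUM_LOOKBACK_FRAGMENTS 3

    typedef struct
    {
      guint64 bitrates[NUM_LOOKBACK_FRAGMENTS];
      gint pos;
      gint filled;
    } BitrateHistory;

    static void
    bitrate_history_add (BitrateHistory * h, guint64 bitrate)
    {
      h->bitrates[h->pos] = bitrate;
      h->pos = (h->pos + 1) % NUM_LOOKBACK_FRAGMENTS;
      if (h->filled < NUM_LOOKBACK_FRAGMENTS)
        h->filled++;
    }

    static guint64
    bitrate_history_average (BitrateHistory * h)
    {
      guint64 sum = 0;
      gint i;

      if (h->filled == 0)
        return 0;
      for (i = 0; i < h->filled; i++)
        sum += h->bitrates[i];
      return sum / h->filled;
    }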
Do not use a global bitrate, as the sizes of the fragments matter
when calculating the download rate: the connection setup time is
counted in the download duration, so a smaller fragment will show a
lower bitrate than a larger one. This avoids switching bitrates for
streams frequently because of such mismatches.
Instead of downloading 1 fragment per stream per download loop,
select the stream with the earliest timestamp and get a fragment
only for that one.
The old algorithm would lead to problems when the fragment durations
differed too much between streams.
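A sketch of the selection (the Stream type is illustrative):

    #include <gst/gst.h>

    typedef struct
    {
      GstClockTime position;    /* timestamp of the next fragment */
    } Stream;

    /* pick the stream that is furthest behind and download for it */
    static Stream *
    select_next_stream (GList * streams)
    {
      Stream *best = NULL;
      GList *iter;

      for (iter = streams; iter; iter = iter->next) {
        Stream *s = iter->data;

        if (best == NULL || s->position < best->position)
          best = s;
      }
      return best;
    }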
dashdemux shouldn't emit the buffering message as that can pause
the pipeline. It has no proper knowledge of the downstream buffering
status so it can pause the pipeline when it isn't necessary. It should
have an internal buffer for downloading the streams ahead of playback,
but that shouldn't let it stop the pipeline for buffering.
A particular case in which this is bad is when a pad switch happens
(changing bitrates, for example): the new pads dashdemux creates get
linked to demuxers, and new queues are created. These queues are
initially empty, and dashdemux quickly drains its buffers by pushing
them to those queues. It would then have no more buffers internally
and would emit a buffering message with a low ratio, causing the
pipeline to pause when that isn't necessary.
Put EOS on the streams' queues after the last fragment from the last
period for each stream. This way we keep it serialized with the
buffers and it works when streams have different ending times.
GstDataQueue has proper locking and provides functions to limit the
size of the queue. It also has blocking calls that are useful in our
multithreaded DASH scenario.
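A sketch of a per-stream queue limited by queued bytes (the 4 MB
threshold is illustrative); gst_data_queue_new and its check-full
callback are the real libgstbase API:

    #include <gst/base/gstdataqueue.h>

    /* consider the queue full past ~4 MB of downloaded data; pushes
     * then block until the reader drains it below the limit */
    static gboolean
    queue_check_full (GstDataQueue * queue, guint visible, guint bytes,
        guint64 time, gpointer checkdata)
    {
      return bytes > 4 * 1024 * 1024;
    }

    static GstDataQueue *
    create_stream_queue (gpointer stream)
    {
      return gst_data_queue_new (queue_check_full, NULL, NULL, stream);
    }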
Store the buffers separately for each stream; this is clearer than
having a single queue with a list of buffers. It also allows easier
selection of buffers to push in later refactors.
Fragments should be pushed ASAP as downstream should be responsible for
doing the synchronization and proper buffering.
This has the great side effect of fixing most of the seeking A/V sync issues.
- the MPD file is updated in the download loop (only if we have a "dynamic" MPD and minimumUpdatePeriod is valid);
- properly LOCK/UNLOCK the GstMpdClient;
- Periods are played in sequence, from PeriodStart to PeriodEnd;
- seamless switching from one Period to the next one works fine;
- the 'new-segment' generation is broken, so if we need to switch pads for a new Period there is a crash.