When seeking to the last second of an MPD, the seek would be rejected
because the comparison used < instead of <=.
This breaks the important use case of seeking to the end of a file
to play it back in reverse from the end.
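A minimal sketch of the corrected check (variable and function names are illustrative, not the actual dashdemux code):

  /* a seek target exactly at the end of the MPD must be accepted */
  if (seek_time <= mpd_end_time)   /* previously '<', which rejected the last second */
    accept_seek ();
  else
    reject_seek ();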
The urn:mpeg:dash:utc:http-head:2014 method of time synchronisation
uses an HTTP HEAD request to a specified URL and then parses the
Date: HTTP response header.
This commit adds support to dashdemux for this method of time
synchronisation by making a HEAD request and then parsing the Date:
response.
This commit adds support to gstfragment to return the HTTP headers
and to uridownloader to support HEAD requests. To avoid creating a
new API, the RANGE get function is re-used (abused?) with start=-1
and end=-1 to indicate a HEAD request.
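A rough sketch of how such a HEAD request could be issued through the range-based download call; the uridownloader signature shown here is an approximation, not a verified API:

  GError *error = NULL;

  /* start = -1 and end = -1 signal a HEAD request instead of a ranged GET */
  GstFragment *download = gst_uri_downloader_fetch_uri_with_range (downloader,
      clock_url, NULL /* referer */, FALSE /* compress */, TRUE /* refresh */,
      TRUE /* allow_cache */, -1, -1, &error);
  /* the Date: response header is then read back from the returned fragment */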
https://bugzilla.gnome.org/show_bug.cgi?id=752413
This commit addresses the following items from the code review:
use a portable way to define NTP_TO_UNIX_EPOCH,
fix memory leak on error, and
add documentation to UTCTiming parse functions
Using LL is not portable, so G_GUINT64_CONSTANT needs to be used instead.
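For example, the portable definition would look roughly like this (the value shown is the well-known 1900-to-1970 epoch offset, included here for illustration):

  /* seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01) */
  #define NTP_TO_UNIX_EPOCH G_GUINT64_CONSTANT (2208988800)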
If an error occurs during DNS resolution, the GError was not being
released, causing a memory leak.
https://bugzilla.gnome.org/show_bug.cgi?id=752413
Unless the DASH client can compensate for the difference between its
clock and the clock used by the server, the client might request
fragments that are either not yet on the server or that have
already expired from the server. This is an issue because these
requests can propagate all the way back to the origin.
ISO/IEC 23009-1:2014/Amd 1 [PDAM1] defines a new UTCTiming element to allow
a DASH client to track the clock used by the server generating the
DASH stream. Multiple UTCTiming elements might be present, to indicate
support for multiple methods of UTC time gathering. Each element can
contain a whitespace-separated list of URLs that can be contacted
to discover the UTC time from the server's perspective.
This commit provides parsing of UTCTiming elements, unit tests of this
parsing and a function to poll a time server. This function
supports the following methods:
urn:mpeg:dash:utc:ntp:2014
urn:mpeg:dash:utc:http-xsdate:2014
urn:mpeg:dash:utc:http-iso:2014
urn:mpeg:dash:utc:http-ntp:2014
The manifest update task is used to poll the clock time server,
to save having to create a new thread.
When choosing the starting fragment number and when waiting for a
fragment to become available, the difference between the server's idea
of UTC and the client's idea of UTC is taken into account. For example,
if the server's time is behind the client's idea of UTC, we wait for
longer before requesting a fragment.
[PDAM1]: http://www.iso.org/iso/home/store/catalogue_tc/catalogue_detail.htm?csnumber=66068
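In rough terms the compensation just shifts the client's idea of "now" by the measured offset before any availability calculation; a minimal sketch, assuming the offset has already been measured (names and the example value are illustrative):

  gdouble clock_offset = 0.25;  /* example: server clock is 250 ms ahead of ours */

  GDateTime *client_now = g_date_time_new_now_utc ();
  GDateTime *server_now = g_date_time_add_seconds (client_now, clock_offset);

  /* availability checks and fragment waits then use server_now, not client_now */
  g_date_time_unref (client_now);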
dashdemux: support NTP time servers in UTCTiming elements
Use the gst_ntp_clock to support the use of an NTP server.
https://bugzilla.gnome.org/show_bug.cgi?id=752413
The gst_dash_demux_stream_update_fragment_info function could call the
gst_dash_demux_stream_update_headers_info function twice. The
gst_dash_demux_stream_update_headers_info function will set header_uri and
index_uri to some newly allocated strings. The values set by the first call of
gst_dash_demux_stream_update_headers_info will leak when the function is
called for a second time.
The solution is to call gst_adaptive_demux_stream_fragment_clear before the
second call of gst_dash_demux_stream_update_headers_info.
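A minimal sketch of the fix (the surrounding stream structure is shown only for illustration):

  /* release the URIs allocated by the first call before they are overwritten */
  gst_adaptive_demux_stream_fragment_clear (&stream->fragment);
  gst_dash_demux_stream_update_headers_info (stream);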
https://bugzilla.gnome.org/show_bug.cgi?id=753188
Only copy the values from the parent if the current node doesn't
already have that value. They were being copied from the parent and
then overwritten by the child node, leaking the parent's copy.
If a ContentProtection element is present in an AdaptationSet element,
send Protection events on the source pad, so that qtdemux can use this
information to correctly generate its source caps for DASH CENC
encrypted streams.
This allows qtdemux to support CENC encrypted DASH streams where the
content protection specific information is carried in the MPD file
rather than in pssh boxes in the initialisation segments.
This commit adds a new function to the adaptivedemux base class to allow
a GstEvent to be queued for a stream. The queue of events is sent the
next time a buffer is pushed for that stream.
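A sketch of how dashdemux can queue such a protection event for a stream; the queueing function is the one this commit adds (name assumed) and the event payload variables are illustrative:

  /* build a protection event from the MPD's ContentProtection data and queue it;
   * it will be pushed just before the next buffer on this stream */
  GstEvent *event = gst_event_new_protection (system_id, pssi_buffer, "dash/mpd");

  gst_adaptive_demux_stream_queue_event (stream, event);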
https://bugzilla.gnome.org/show_bug.cgi?id=705991
Moved gst_mpd_client_get_next_segment_availability_end_time and
gst_mpd_client_add_time_difference functions to be grouped with
functions from the same category.
https://bugzilla.gnome.org/show_bug.cgi?id=752027
Corrected the initialisation of mimeType in
gst_mpdparser_get_list_and_nb_of_audio_language: the variable is used
in a loop, so it must be set to NULL at the beginning of each iteration.
https://bugzilla.gnome.org/show_bug.cgi?id=751911
Before returning the next fragment duration value, the
gst_mpd_client_get_next_fragment_duration function tries to validate it.
But the condition was incorrect.
https://bugzilla.gnome.org/show_bug.cgi?id=751539
We're interested in the offset between the period start timestamp and the
actual media timestamp so that we can properly correct for it. The absolute
presentation offset to timestamp 0 is useless as the only thing we really
care about is the offset between the current fragment timestamp and the
media timestamp.
Otherwise we would usually look for segments after the period. The seek
timestamp is relative to the start of the first period and we have to
select a segment relative to the current period's start.
We didn't do this for fragments that are generated on demand from a template,
only for the other cases when they were all generated upfront. This caused
fragment timestamps to start from 0 again for each new period.
If not set, the timeShiftBufferDepth has a default value of -1.
The standard says that this should be interpreted as infinite.
The gst_mpd_client_check_time_position function incorrectly compares
timeShiftBufferDepth with 0 instead of -1 to determine if it was set.
https://bugzilla.gnome.org/show_bug.cgi?id=751500
The last parameter of the gst_mpd_client_add_media_segment function is a
duration. But when called from gst_mpd_client_setup_representation, the
last argument was wrongly set to PeriodEnd.
https://bugzilla.gnome.org/show_bug.cgi?id=751449
The period start information, calculated in the gst_mpd_client_setup_media_presentation
function, is stored in stream_period->start. The information read from the
XML file and stored in stream_period->period->start is not changed.
If the XML file does not contain the period start information,
stream_period->period->start will be -1.
The gst_mpd_client_get_next_segment_availability_end_time function wants to
use the period start time, but incorrectly uses stream_period->period->start
(the value from the XML file, which could be -1) instead of stream_period->start
(the computed value).
https://bugzilla.gnome.org/show_bug.cgi?id=751465
According to ISO/IEC 23009-1:2014(E), chapter 5.3.2.1
"The Period extends until the PeriodStart of the next Period, or until
the end of the Media Presentation in the case of the last Period."
This means that a configured value for the optional period duration attribute
should be ignored if the next period contains a start attribute, or if it is
the last period and the MPD contains a mediaPresentationDuration attribute.
https://bugzilla.gnome.org/show_bug.cgi?id=750797
Added some warning messages in gst_mpd_client_setup_streaming to help
debug situations when the function will return FALSE.
Renamed a wrongly spelled variable.
https://bugzilla.gnome.org/show_bug.cgi?id=751149
Corrected some comments in gstmpdparser.h file.
Moved the gst_mpd_client_get_adaptation_sets function to be grouped with
other functions from the AdaptationSet group.
https://bugzilla.gnome.org/show_bug.cgi?id=751149
The gst_mpdparser_get_rep_idx_with_max_bandwidth function assumes
representations are ordered by bandwidth and incorrectly returns the
first one when the one with the minimum bandwidth is wanted.
Corrected the gst_mpdparser_get_rep_idx_with_max_bandwidth function to get the
correct representation when the max_bandwidth parameter is 0.
https://bugzilla.gnome.org/show_bug.cgi?id=751153
Added a check for a_node->ns before accessing a_node->ns->href in
gst_mpdparser_get_xml_node_namespace. This could happen if the XML
is missing the default namespace.
https://bugzilla.gnome.org/show_bug.cgi?id=750866
If the presentationTimeOffset attribute of a DASH manifest contains
a value that is larger than 2^32, gstmpdparser incorrectly calculates
the stream's presentation time offset. This is due to two bugs:
1: Using gst_mpdparser_get_xml_prop_unsigned_integer rather than
gst_mpdparser_get_xml_prop_unsigned_integer_64 to parse the
attribute
2: gst_mpd_client_setup_representation multiplying the value by
GST_SECOND and then dividing by timescale
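One overflow-safe way to do that scaling, shown for illustration rather than as the exact fix applied, is gst_util_uint64_scale, which avoids overflowing the intermediate product:

  guint64 pto;        /* presentationTimeOffset parsed as a 64-bit value */
  guint64 timescale;  /* timescale from the MPD */
  GstClockTime offset;

  offset = gst_util_uint64_scale (pto, GST_SECOND, timescale);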
https://bugzilla.gnome.org/show_bug.cgi?id=750804
This reverts commit 37011e5198.
This change was actually completely unnecessary, the streams in question are
marked as static and are not considered live anyway.
Otherwise we'll only get half of its bits printed on 32 bit architectures.
For this, promote the %d-style format strings to something that accepts
64 bit integers with G_GINT64_MODIFIER.
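For example, printing a 64-bit value would then look roughly like this (the variable is illustrative):

  GST_DEBUG ("timestamp: %" G_GINT64_MODIFIER "d", (gint64) timestamp);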
Using format strings from an untrusted source without validation is
asking for problems, and at the very least allows your application to be
crashed remotely, if not worse.
The functions to get the next fragment, next fragment timestamp and to advance
to the next fragment need to work differently when stream->segments is NULL.
Use logic similar to that introduced by commit 2105a310 to perform these
functions.
https://bugzilla.gnome.org/show_bug.cgi?id=749684
When all fragments have already been downloaded on a live stream
dashdemux would busy loop as the default implementation of
has_next_fragment would return TRUE. Implement it to correctly
signal if adaptivedemux should wait for the manifest update before
trying to get new fragments.
When updating the manifest, the timestamps in it might have changed a little
due to rounding and timescale conversions. If the change makes the timestamp
of the current segment go up, it makes dashdemux reposition to the previous
one, causing one extra unnecessary download.
So when repositioning, add an extra 10 microseconds to cover for those rounding
issues and increase the chance of landing in the same segment.
Additionally, also improve the time used when the client is already past the
last segment. Instead of using the last segment's starting timestamp, use its
final timestamp so that the client repositions to the next segment and not to
the one that has already been downloaded.
These functions for directly getting and setting segment indexes
are no longer useful, as we now need 2 indexes: the repeat and segment
index.
The only operations needed are advance_segment, going back to the
first one or seeking for a timestamp.
Segments are now stored with their repeat counts instead of expanding
them into multiple segments. This caused advancing to the next segment
using a single index to have to iterate over the whole list every time.
This commit addresses this by storing both the segment index as well
as the repeat index, making advancing to the next segment just an
increment of the repeat index or the segment index.
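A minimal sketch of that advance step (field and type names are illustrative, not the actual dashdemux structures):

  /* advance one position: first exhaust the repeats of the current S node,
   * then move on to the next stored segment */
  GstMediaSegment *segment =
      g_ptr_array_index (stream->segments, stream->segment_index);

  if (stream->repeat_index < segment->repeat) {
    stream->repeat_index++;
  } else {
    stream->repeat_index = 0;
    stream->segment_index++;
  }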
Use a single segment to represent it internally to avoid using too
much memory. This has the drawback of issuing a linear search to
find the correct segment to play but this can be fixed by using
binary searches or caching the current position and just looking
for the next one.
https://bugzilla.gnome.org/show_bug.cgi?id=748369
There is a playback error when trying to play content that
has the 'application' mimeType. This commit prevents an error when
setting up text streams.
https://bugzilla.gnome.org/show_bug.cgi?id=747525
Bitrate-limit is already available in the baseclass and, even though
the bandwidth-usage name is better, hls and mss already used
bitrate-limit. This patch deprecates bandwidth-usage and maps
it to the baseclass bitrate-limit.
By implementing get_live_seek_range.
As shown by:
gst-validate-1.0 playbin \
uri=http://dev-iplatforms.kw.bbc.co.uk/dash/news24-avc3/news24.php
This patch handles live seeking by setting a live seek range
between now - timeShiftBufferDepth and now.
The interesting thing with this stream is that one can actually
request fragments up to availabilityStartTime, but it seems quite clear
in the spec that content is only guaranteed to exist up to
timeShiftBufferDepth.
One can test live seeking this way:
gst-validate-1.0 playbin \
uri=http://dev-iplatforms.kw.bbc.co.uk/dash/news24-avc3/news24.php \
--set-scenario seek_back.scenario
with scenario being:
description, seek=true
seek, playback-time=position+5.0, start="position-600.0",
flags=accurate+flush
This example will play the stream, wait for five seconds, then seek back
to a position 10 minutes earlier.
https://bugzilla.gnome.org/show_bug.cgi?id=744362
Add more power to the chunk_received function (renamed to data_received)
and also to the fragment_finish function.
The data_received function must parse/decrypt the data if necessary and
also push it using the new push_buffer function that is exposed now. The
default implementation gets data from the stream adapter (all available)
and pushes it.
The fragment_finish function must also advance the fragment. The default
implementation only advances the fragment.
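A rough sketch of what the default data_received implementation looks like under this design, assuming the stream buffers incoming data in an adapter (the signature and field names are assumptions):

  static GstFlowReturn
  default_data_received (GstAdaptiveDemux * demux, GstAdaptiveDemuxStream * stream)
  {
    /* take everything buffered so far and push it downstream */
    gsize available = gst_adapter_available (stream->adapter);
    GstBuffer *buffer;

    if (available == 0)
      return GST_FLOW_OK;

    buffer = gst_adapter_take_buffer (stream->adapter, available);
    return gst_adaptive_demux_stream_push_buffer (stream, buffer);
  }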
This allows the subsegment handling in dashdemux to continuously download
the same file from the server instead of stopping at every subsegment
boundary and starting a new request.
gstdashdemux.c:1330:13: error: implicit conversion from enumeration type 'enum _GstAdaptiveDemuxFlowReturn' to different enumeration type
'GstFlowReturn' [-Werror,-Wenum-conversion]
ret = GST_ADAPTIVE_DEMUX_FLOW_SUBSEGMENT_END;
~ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The segment start time is calculated as the offset into the current segment.
The old condition to detect the end of period (i.e. segment start time >
period start + period duration) failed when the period start was not 0 since
the segment start time does not take the period start time into account.
Fix this detection by only comparing the segment start to the period duration.
https://bugzilla.gnome.org/show_bug.cgi?id=733369
The ISOBMFF profile allows defining subsegments in a segment. At those
subsegment boundaries the client can switch from one representation to
another as they have aligned indexes.
To handle those the 'sidx' index is parsed from the stream and the
entries point to pts/offset of the samples in the stream. Knowing that
the entries are aligned in the different representation allows the client
to switch mid fragment. In this profile a single fragment is used per
representation and the subsegments are contained in this fragment.
To notify the superclass about the subsegment boundary, the chunk_received
function returns a special flow return that indicates that. In this case,
the superclass will check if a more suitable bitrate is available and will
change to the same subsegment in this new representation.
It also requires special handling of the position in the stream as the
fragment advancing is now done by incrementing the index of the subsegment.
It will only advance to the next fragment once all subsegments have been
downloaded.
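A minimal sketch of signalling that boundary from the data handler; the predicate is hypothetical, and the flow value is the one referenced in the enum-conversion build fix quoted earlier in this log:

  /* when the current sidx entry has been fully consumed, tell the superclass
   * so it can re-evaluate the bitrate before the next subsegment */
  if (dash_stream_subsegment_finished (dashstream))
    return (GstFlowReturn) GST_ADAPTIVE_DEMUX_FLOW_SUBSEGMENT_END;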
https://bugzilla.gnome.org/show_bug.cgi?id=741248
If EOS or ERROR happens before the download loop thread has reached its
g_cond_wait() call, then the g_cond_signal doesn't have any effect and
the download loop thread gets stuck later.
https://bugzilla.gnome.org/show_bug.cgi?id=735663
The internal pad still keeps its EOS flag and event as it can be assigned
after the flush-start/stop pair is sent. The EOS is assigned from the streaming
thread so this is racy.
To be sure to clear it, it has to be done after setting the source to READY to
be sure that its streaming thread isn't running.
https://bugzilla.gnome.org/show_bug.cgi?id=736012
If the language is not specified in the AdaptationSet, use the ContentComponent
node to get it. We only get it if there is only a single ContentComponent, as
it is not clear what to do if there are multiple entries.
https://bugzilla.gnome.org/show_bug.cgi?id=732237
When a seek with a negative rate is requested, find the target
segment where gstsegment.stop belongs in and then download from
this segment backwards until the first segment.
This allows proper reverse playback.
When flushing, this will prevent dashdemux from trying to download more
fragments or more chunks of the same fragment before stopping.
Also improves the error handling to not transform everything non-ok into
an error.
https://bugzilla.gnome.org/show_bug.cgi?id=734014
The parsing function already frees the old value (if any). Avoid a double
free by not freeing it before calling the function, since the pointer was
not being set to NULL after that free.
Coverity ID: 1212178
The _parse_url function already frees the previous pointer. Avoid
freeing it beforehand without setting it to NULL, or we get a double free.
Coverity ID: 1212181
Coverity ID: 1212180
Coverity ID: 1212179
Set up a message handling function to be able to catch errors
from the source element and signal the cond to allow the download
loop to retry the download.
Instead, use a source element linked to a ghostpad to provide
smaller buffers and more granular control for downstream
buffering elements while also reducing startup latency.
Incorrect time scaling in gst_dash_demux_wait_for_fragment_to_be_available()
means that media segments are fetched before their availability time. This
patch fixes this.
https://bugzilla.gnome.org/show_bug.cgi?id=724875
demux->last_manifest_update is not initialised at startup, with the
effect that live manifests are reloaded immediately after the download
loop begins. This patch fixes this.
https://bugzilla.gnome.org/show_bug.cgi?id=724790
Remove the dashdemux seeking function to use the one implemented
in mpdparser as it is more complete. This also makes dashdemux not
crash when seeking on streams that use segment templates.
Downloading and pushing from the same task makes the code a lot simpler
to maintain. Also, pushing from separate threads avoids deadlocking
when gst_pad_push blocks due to downstream queues being full.
Use a single lock for all streams instead of having separate locks.
This makes maintenance easier, and at most points we would need
a single lock before iterating over all streams' data anyway, so not
much is gained from individual locks.
Make dash playlists with multiple periods work again by waiting
to switch the periods when all streams have reached the end of
the current period. The stream_loop is responsible for advancing
the period, but the download loops will already start downloading
data for the next period as soon as possible.
Handle multiple languages by using the not-linked return to stop
the download task for that stream. It can be reactivated when
a reconfigure event is received. Stopping the unused streams is
relevant to save network bandwidth.
Instead of having a single download task for all streams, this
commit makes each stream have its own download loop, allowing
parallel download of fragments.
Always expose all streams instead of only exposing one of each type.
This is more aligned with GStreamer's way of working. It allows the user
to select the streams they want to use by linking their pads and leaving
the unused ones unlinked.
Fixed up the error-handling code when downloading fragments.
Modified the error-handling code to use positive logic when
testing for cancellation of the download loop.
https://bugzilla.gnome.org/show_bug.cgi?id=701404
There is an issue for live streams where download_loop will keep
downloading segments until it gets a 404 error for a segment
that has not yet been published. This is a problem because this
request for a segment that doesn't exist will propagate all the
way back to the origin server(s). This means that dashdemux causes
extra load on the origin server(s) for segments that aren't yet
available.
This patch uses availabilityStartTime, period
and the host's idea of UTC to decide if a fragment is available to
be requested from an HTTP server and filter out requests for fragments
that are not yet available.
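A rough sketch of such an availability check; the helper names and the arithmetic are illustrative (real code also accounts for segment durations, timescales and timeShiftBufferDepth):

  /* a segment is only requested once the host's idea of UTC has passed the
   * point at which the server is expected to have published it */
  GDateTime *now = g_date_time_new_now_utc ();
  GDateTime *avail = g_date_time_add_seconds (availability_start_time,
      period_start + (segment_number + 1) * segment_duration);

  if (g_date_time_compare (now, avail) < 0)
    wait_for_availability ();  /* hypothetical: don't hit the origin yet */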
https://bugzilla.gnome.org/show_bug.cgi?id=701404
gstdashdemux.c:1753: warning: format '%llu' expects type 'long long unsigned int', but argument 8 has type 'long unsigned int'
gstdashdemux.c:2224: warning: format '%llu' expects type 'long long unsigned int', but argument 9 has type 'guint64'
gstdashdemux.c:2224: warning: format '%llu' expects type 'long long unsigned int', but argument 10 has type 'guint64'
gstmpdparser.h:530: warning: type qualifiers ignored on function return type
gstmpdparser.c:4177: warning: type qualifiers ignored on function return type
For SegmentTemplate elements containing a startNumber attribute, the
`number' member of GstMediaSegments should be offset by the value of
startNumber; however, this is not currently the case. As a result, the
first URI(s) requested by the download loop will be wrong.
This commit ensures that segment numbers will be offset by startNumber
when one is present in a SegmentTemplate element.
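In essence the fix is a one-line offset when each segment is built (variable names are illustrative):

  /* the URL template's $Number$ must start counting from startNumber */
  media_segment->number = segment_index + start_number;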
https://bugzilla.gnome.org/show_bug.cgi?id=705661
When using a SegmentTemplate element, the timestamps of the buffers
output by dashdemux are incorrect, causing problems downstream.
The reason is that GstMediaSegment start times are calculated (in
gst_mpdparser_get_chunk_by_index) by multiplying segment index by
segment duration and then scaling the result according to the `timebase'
attribute from the MPD. However, the segment duration is already a
GstClockTime (i.e., it has already been scaled according to the timebase
from the MPD and converted to a nanosecond value), so multiplying it by
the segment index will give the correct timestamp without the need for
any further scaling.
https://bugzilla.gnome.org/show_bug.cgi?id=705679
This prevents deadlocks on startup with files that have only a very
large buffer for a stream: the queue fills up and blocks on
the EOS event that is pushed after the buffer. As no buffers have yet
been pushed to the other streams, the pipeline locks up on preroll.
During a live stream it is possible for dashdemux to lag behind on a
slow connection or to rush ahead if the connection is too fast.
In the first case it is necessary to jump some segments ahead to be able to
continue playback, as old segments are usually deleted from the server.
In the latter case, dashdemux should wait a little before attempting another
download to give the server time to produce a new segment.
When using a template-based segment list, do not try to
construct a finite segment list for the limits of the available periods.
We might not know when the period ends (for live streams) and we can
always create the segment on demand when requested by dashdemux,
avoiding the use of some memory and CPU to re-create this list.
Replaces the two likely large lists with more appropriate structures
to improve performance.
Replaces the S nodes GList with a GQueue; this reduces startup latency,
because the list no longer has to be traversed just to append an element.
Replaces the processed media segments GList with a GPtrArray, as it is
constantly accessed by index during playback.
A segment with an unknown duration is an issue with the MPD and not
a programming error, so the assert isn't useful here. Instead, check
and return an error code so the caller can fall back to alternatives.
When dashdemux selects its first fragment, it always selects the
first fragment listed in the manifest. For on-demand content,
this is the correct behaviour. However for live content, this
behaviour is undesirable because the first fragment listed in the
manifest might be some considerable time behind "now".
The commit uses the host's idea of UTC and tries to find the
oldest fragment that contains samples for this time of day.
https://bugzilla.gnome.org/show_bug.cgi?id=701509
According to the MPEG-DASH spec, certain elements (i.e.
SegmentBase, SegmentTemplate, and SegmentList) should inherit
attributes from the same elements in the containing AdaptationSet
or Period.
Updated the SegmentBase, SegmentTemplate, and SegmentList parsers
to properly inherit attributes from the corresponding elements in
AdaptationSet and/or Period.
https://bugzilla.gnome.org/show_bug.cgi?id=702677
Convert all xml attribute/content parsing functions to return a
boolean value indicating whether or not the attribute/content was
present. We need this finer-grained control in order to properly
implement the inheritance policies described in the spec.
Also fixed several memory leak conditions when handling errors in
the xml attribute/content parsing functions.
https://bugzilla.gnome.org/show_bug.cgi?id=702677
Ensure that g_free/xmlFree is used correctly based on how the
memory was allocated.
When deallocating GLists, there were many places that were using
g_list_foreach and g_list_free. Converted these occurrences to
call g_list_free_full.
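For instance, that conversion looks roughly like this (the free function name is hypothetical):

  /* before:
   *   g_list_foreach (list, (GFunc) gst_mpdparser_free_some_node, NULL);
   *   g_list_free (list);
   * after: */
  g_list_free_full (list, (GDestroyNotify) gst_mpdparser_free_some_node);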
Add NULL checks to all xmlFree calls since the documentation does
not guarantee that passing NULL is safe.
In places where we are strdup'ing memory allocated by libxml2,
changed those calls to use xmlMemStrdup().
There were several places where we were missing g_slice_free when
deallocating a top-level node structure.
https://bugzilla.gnome.org/show_bug.cgi?id=702837
It was not properly divided by GST_SECOND. Also fix an issue with
max-buffering-time being multiplied by GST_SECOND every time the
property is retrieved.
https://bugzilla.gnome.org/show_bug.cgi?id=700487
We only want to adjust the timestamps so that they start from 0 for live
streams. Non-live streams already start from 0, and after a seek we actually
want the timestamp to be the position we seek to.
Non-live streams should timestamp buffers with a running-time starting from
0. Since we already push a 0 -> -1 segment, bring the timestamps to 0
by subtracting the initial timestamp.
The xmlCleanupParser function seems to clean up all statically
allocated libxml variables, making libxml unusable afterwards. We can't
guarantee that dashdemux won't need it anymore, so better not call it.
Manifest updates should be done periodically for live streams,
this patch makes the demuxer create a new manifest client for
the new version and transfers the stream position to the new
one, discarding the old one afterwards.
A small struct that keeps a short history of fragment download bitrates
to have an average measure over the last N fragments instead of using only
the last downloaded bitrate.
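A minimal sketch of such a history, assuming a small fixed-size ring buffer (the size, names and layout are illustrative):

  #define NUM_LOOKBACK_FRAGMENTS 3

  typedef struct {
    guint64 bitrates[NUM_LOOKBACK_FRAGMENTS];  /* most recent download bitrates */
    guint next;                                /* next slot to overwrite */
    guint count;                               /* number of valid entries */
  } BitrateHistory;

  static void
  bitrate_history_add (BitrateHistory * h, guint64 bitrate)
  {
    h->bitrates[h->next] = bitrate;
    h->next = (h->next + 1) % NUM_LOOKBACK_FRAGMENTS;
    if (h->count < NUM_LOOKBACK_FRAGMENTS)
      h->count++;
  }

  static guint64
  bitrate_history_average (const BitrateHistory * h)
  {
    guint64 sum = 0;
    guint i;

    for (i = 0; i < h->count; i++)
      sum += h->bitrates[i];
    return h->count ? sum / h->count : 0;
  }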
Do not use a global bitrate, as the sizes of the fragments matter
when calculating the download rate: the connection setup time is
also counted in the download duration, so a smaller fragment
will show a lower bitrate than a larger one.
This avoids switching the bitrates for streams frequently because
of bitrate mismatches.
Instead of downloading 1 fragment per stream per download loop,
select the stream with the earliest timestamp and get a fragment
only for that one.
The old algorithm would lead to problems when the fragment durations
were too different between streams.
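A sketch of that selection step (the stream list and field names are illustrative):

  /* pick the stream whose next fragment starts earliest */
  GstDashDemuxStream *selected = NULL;
  GSList *iter;

  for (iter = demux->streams; iter; iter = iter->next) {
    GstDashDemuxStream *stream = iter->data;

    if (selected == NULL || stream->next_timestamp < selected->next_timestamp)
      selected = stream;
  }
  /* only the selected stream downloads a fragment in this loop iteration */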
dashdemux shouldn't emit the buffering message as that can pause
the pipeline. It has no proper knowledge of the downstream buffering
status so it can pause the pipeline when it isn't necessary. It should
have an internal buffer for downloading the streams ahead of playback,
but that shouldn't make it able to stop the pipeline for buffering.
A particular case in which this is bad is when a pad switch happens
(changing bitrates, for example): the new pads dashdemux creates
will get linked to demuxers and new queues will be created.
These queues are initially empty and dashdemux will quickly
drain its buffers by pushing them to those queues. So it
would have no more buffers internally and would emit a
buffering message with a low ratio, causing the pipeline
to pause when it wouldn't be necessary.
Put EOS on the streams' queues after the last fragment from the
last period for each stream. This way we keep it serialized
with the buffers and it will work when streams have different
ending times.
The smallest queue should be used to prevent blocking the download
thread when a stream has too much data buffered, leaving the other
streams starving for fragments.